Using AI in cultural experiences: what works, what doesn't
Artificial intelligence offers real potential for museums and galleries, from personalisation to accessibility. But it also carries risks. A practical guide to what works, what doesn't, and how to keep humans in control when deploying AI in cultural spaces.

Thanos Kokkiniotis
CEO and Co-Founder
6 min read • 28 Jan 2026
Artificial intelligence has become impossible to ignore in the cultural sector. Every conference panel, every funding application, every strategy document seems to mention it. The promise is compelling: personalised experiences at scale, multilingual access without translation budgets, and content that adapts to individual visitor needs. But beneath the enthusiasm sits genuine uncertainty about what AI can actually deliver—and what might be lost in the process.
Cultural institutions are right to approach AI with both curiosity and caution. The technology offers real potential to solve longstanding problems around access, capacity, and reach. But it also carries risks: the flattening of curatorial voice, the erosion of trust, and the possibility of deploying tools that frustrate rather than help visitors. The question isn't whether to use AI, but how to use it thoughtfully.
Why AI feels inevitable — and risky
The hype around AI makes it feel like institutions have no choice but to adopt it or be left behind. Tech companies promise transformation. Funders ask about digital innovation. Visitors increasingly expect intelligent, responsive digital experiences. The pressure to "do something with AI" is real—even when it's not clear what that something should be.
But hype and reality rarely align neatly. Many AI tools are impressive in demos and frustrating in practice. They work brilliantly in controlled environments and produce baffling results in the wild. The gap between what's marketed and what's actually useful can be enormous, and cultural institutions—often working with limited technical capacity—don't always have the expertise to tell the difference.
Fear of losing curatorial voice is perhaps the most significant concern. Curators spend years developing expertise, refining interpretations, and crafting narratives. The idea of handing that over to an algorithm—even partially—feels like an abdication of responsibility. What happens to scholarly rigour when a chatbot starts answering visitor questions? Who's accountable when AI generates something inaccurate or misleading?
Ethical and trust concerns run deep, too. AI systems can perpetuate bias, produce confident-sounding nonsense, and operate in ways that even their creators don't fully understand. For institutions built on trust and authority, deploying technology that might undermine both is a genuine risk. Visitors need to trust that the information they're receiving is accurate, considered, and safe.
Where AI genuinely adds value
Despite the risks, there are areas where AI demonstrably improves visitor experiences—particularly when it's used to solve specific, well-defined problems rather than as a blanket solution.
Discovery and navigation benefit enormously from AI. Large collections are often overwhelming. Visitors don't know where to start, what to prioritise, or how to find objects that match their interests. AI can analyse preferences, suggest routes, and surface content that might otherwise remain buried. This isn't about replacing human curation—it's about helping people find the curated content that's most relevant to them.
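As a concrete illustration, interest-based discovery can be as simple as ranking objects by the overlap between curator-assigned tags and a visitor's stated interests. The sketch below is a minimal, hypothetical Python example; the object schema and tags are assumptions for illustration, not a real collection API.

```python
from dataclasses import dataclass

@dataclass
class CollectionObject:
    title: str
    tags: set  # curator-assigned themes, e.g. {"textiles", "japan"}

def rank_by_interest(objects, interests, limit=5):
    """Surface the objects whose curated tags best match the visitor's stated interests."""
    scored = [(len(obj.tags & interests), obj) for obj in objects]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [obj for score, obj in scored[:limit] if score > 0]

collection = [
    CollectionObject("Silk kimono", {"textiles", "japan", "19th-century"}),
    CollectionObject("Ming vase", {"ceramics", "china"}),
    CollectionObject("Tapestry fragment", {"textiles", "medieval"}),
]
print([o.title for o in rank_by_interest(collection, {"textiles"})])
# ['Silk kimono', 'Tapestry fragment']
```

The ranking logic is deliberately transparent: the curated metadata does the heavy lifting, and the algorithm only decides what to show first.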
Personalisation, done well, makes experiences feel more human, not less. A visitor interested in textiles doesn't need to wade through information about ceramics. Someone with 30 minutes wants a different experience than someone with three hours. AI can tailor content dynamically, adjusting to context, behaviour, and stated preferences in ways that static guides simply can't.
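One way to picture dynamic tailoring: prioritise tour stops that match a visitor's interests, then fit them into the time they actually have. This is a hedged sketch; the stop schema (`title`, `minutes`, `themes`) is invented for illustration.

```python
def build_tour(stops, interests, minutes_available):
    """Pick stops that match stated interests first, then fill the remaining time budget."""
    ordered = sorted(stops, key=lambda s: (-len(s["themes"] & interests), s["minutes"]))
    tour, used = [], 0
    for stop in ordered:
        if used + stop["minutes"] <= minutes_available:
            tour.append(stop)
            used += stop["minutes"]
    return tour

stops = [
    {"title": "Textile gallery", "minutes": 20, "themes": {"textiles"}},
    {"title": "Ceramics hall", "minutes": 25, "themes": {"ceramics"}},
    {"title": "Highlights room", "minutes": 10, "themes": {"textiles", "ceramics"}},
]
# A 30-minute visitor interested in textiles gets a short, focused route.
print([s["title"] for s in build_tour(stops, {"textiles"}, 30)])
# ['Highlights room', 'Textile gallery']
```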
Language and access are perhaps AI's most transformative applications. Real-time translation, automatic captioning, and text-to-speech all rely on AI—and all open cultural content to audiences who would otherwise be excluded. These aren't experimental features; they're proven, mature technologies that work reliably and make a measurable difference to inclusion.
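For a sense of how accessible this tooling has become, the sketch below batch-translates label text with an open-source model via Hugging Face's transformers library. The model choice is illustrative only, and in practice translated labels would still go through human review.

```python
# Requires: pip install transformers sentencepiece
from transformers import pipeline

# Example model; any English-to-Spanish translation model would do.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

labels = [
    "Oil on canvas, 1887.",
    "Gallery 3: Textiles of the Silk Road.",
]
for label in labels:
    print(translator(label)[0]["translation_text"])
```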
Content scaling, when approached carefully, allows small teams to do more. AI can help draft initial interpretations, suggest alternative phrasings, or generate variations of text for different reading levels. The key word is "carefully"—AI should accelerate human work, not replace it. It's a drafting tool, not a publishing tool.
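A drafting-only workflow can be made explicit in code. In this sketch, `call_llm` is a placeholder for whichever model the institution uses, and everything it returns is marked as a draft awaiting human review rather than publishable text.

```python
READING_LEVELS = ["standard", "easy-read", "young visitors"]

def draft_variants(approved_label, call_llm):
    """Return reading-level drafts of an approved label; nothing here is published directly."""
    drafts = {}
    for level in READING_LEVELS:
        prompt = (
            f"Rewrite the following museum label for a '{level}' audience. "
            "Keep every fact unchanged and do not add new claims.\n\n"
            + approved_label
        )
        drafts[level] = {"text": call_llm(prompt), "status": "awaiting_review"}
    return drafts
```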
Where AI falls down
For all its potential, AI has clear limitations—and institutions that ignore them risk damaging visitor trust and their own reputations.
Over-automation is a common trap. Just because something can be automated doesn't mean it should be. Visitor-facing chatbots that can't handle basic questions create frustration. Auto-generated labels that lack nuance diminish scholarship. Automated recommendations that feel random undermine confidence in the system. Automation should be invisible to visitors when it works, and fail gracefully when it doesn't.
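One graceful-degradation pattern is to answer only from vetted content and hand off to a human channel otherwise, rather than guessing. The FAQ entries and contact details below are placeholders, not real data.

```python
VETTED_FAQ = {
    "what are your opening hours": "We're open 10:00 to 17:30, Tuesday to Sunday.",
    "is the museum step-free": "Yes, all galleries are step-free via the north entrance.",
}

def answer(question):
    key = question.lower().strip(" ?!.")
    if key in VETTED_FAQ:
        return VETTED_FAQ[key]
    # No confident guess: degrade gracefully with a clear human handover.
    return ("I'm not sure about that one. Please ask our team at the "
            "information desk, or email visit@example.org.")

print(answer("What are your opening hours?"))
print(answer("Who painted the ceiling in Gallery 5?"))
```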
Generic outputs are AI's signature weakness. Language models trained on vast datasets tend toward bland, middle-of-the-road text. They smooth out the distinctive voice, the surprising detail, the curatorial perspective that makes content worth reading. An AI-generated museum label sounds like every other AI-generated museum label. That's a problem when distinctiveness is part of your value.
Loss of context happens when AI systems don't understand the broader narrative or conceptual framework surrounding an object. A painting isn't just a visual composition—it's part of a movement, a response to a historical moment, a dialogue with other works. AI can describe what's in the image, but it struggles with why it matters. Context is where meaning lives, and meaning is what visitors come for.
Visitor confusion arises when AI behaves unpredictably or opaquely. If a recommendation system suddenly suggests something completely unrelated, visitors lose trust. If a chatbot gives a wrong answer confidently, it's worse than giving no answer at all. Transparency about what the system can and can't do is essential—but often missing.
Human-in-the-loop design
The most successful AI implementations in cultural settings share a common characteristic: humans remain firmly in control.
AI as assistant, not author, is the crucial framing. AI can suggest, draft, translate, or analyse—but final decisions rest with people who understand the content, the audience, and the institutional mission. This isn't about distrust of technology; it's about recognising that AI lacks judgment, accountability, and the ability to understand what's at stake.
Guardrails prevent AI from going off-script. This might mean constraining outputs to vetted content, setting clear boundaries on what questions a chatbot can attempt to answer, or requiring human review before anything reaches visitors. Guardrails aren't limitations—they're the conditions that make AI safe to use.
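In code, guardrails often reduce to a few explicit checks that run before any answer reaches a visitor. The topics, blocked terms, and rules here are assumptions for illustration, not a complete policy.

```python
ALLOWED_TOPICS = {"collection", "exhibitions", "tickets", "accessibility"}
BLOCKED_TERMS = {"valuation", "authenticate", "medical", "legal"}

def passes_guardrails(topic, question, sources):
    """Allow an answer only if it stays on-topic, avoids blocked areas,
    and is backed by at least one piece of vetted source content."""
    if topic not in ALLOWED_TOPICS:
        return False  # off-script: route to a human
    if any(term in question.lower() for term in BLOCKED_TERMS):
        return False  # e.g. "what is my painting worth?"
    if not sources:
        return False  # no vetted content behind the proposed answer
    return True
```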
Editorial control must be retained at every stage. Even when AI generates content, humans decide whether it's accurate, appropriate, and aligned with institutional values. This requires workflow changes, clear responsibilities, and sometimes, the willingness to discard AI outputs that don't meet standards—no matter how much time was saved generating them.
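Editorial control can also be enforced in the workflow itself: AI output only ever enters the pipeline as a draft, and only a named human reviewer can promote it towards publication. The states and fields below are one possible shape, not a prescribed system.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    text: str
    source: str                 # "ai_draft" or "human"
    status: str = "draft"       # draft -> approved -> published
    reviewed_by: str | None = None

def approve(item, reviewer, edited_text=None):
    """Only a human reviewer moves content past draft; edits (or outright rejection) stay possible."""
    item.text = edited_text if edited_text is not None else item.text
    item.reviewed_by = reviewer
    item.status = "approved"
    return item

def publish(item):
    if item.status != "approved" or item.reviewed_by is None:
        raise ValueError("Cannot publish content without human approval.")
    item.status = "published"
    return item
```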
A pragmatic framework for adoption
Institutions don't need a comprehensive AI strategy to start using AI effectively. They need a pragmatic, iterative approach.
Start small with low-risk, high-value applications. Automatic captions for videos. Multilingual text generation for basic wayfinding. Recommendation engines for collection browsing. These are bounded problems with clear success criteria and limited downside if they don't work perfectly. They're also opportunities to learn how AI behaves in your specific context.
Measure impact honestly. Does the AI feature actually improve the visitor experience, or does it just feel innovative? Are people using it? Are they satisfied with the results? Would they recommend it? Hard data beats assumptions. If something isn't working, stop doing it—even if it was expensive or prestigious to build.
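Honest measurement starts with a handful of unglamorous numbers. This sketch computes adoption and average satisfaction from an assumed event log; the event schema is illustrative.

```python
def summarise(events):
    """events: dicts with 'session_id', 'type', and optionally 'rating' (assumed schema)."""
    sessions = {e["session_id"] for e in events}
    used = {e["session_id"] for e in events if e["type"] == "ai_feature_used"}
    ratings = [e["rating"] for e in events if e["type"] == "feedback"]
    return {
        "sessions": len(sessions),
        "adoption_rate": len(used) / len(sessions) if sessions else 0.0,
        "avg_rating": sum(ratings) / len(ratings) if ratings else None,
    }

events = [
    {"session_id": "a", "type": "ai_feature_used"},
    {"session_id": "a", "type": "feedback", "rating": 4},
    {"session_id": "b", "type": "page_view"},
]
print(summarise(events))  # {'sessions': 2, 'adoption_rate': 0.5, 'avg_rating': 4.0}
```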
Be transparent with audiences about what's AI-generated and what's not. Visitors deserve to know when they're interacting with automated systems. Transparency builds trust; obscuring AI use erodes it. Some institutions worry that disclosure will make visitors trust content less. In practice, the opposite is often true—honesty about limitations makes people more forgiving when AI stumbles.
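Disclosure is easier to keep consistent if provenance travels with the content itself. A minimal sketch, assuming three illustrative categories:

```python
from dataclasses import dataclass
from enum import Enum

class Provenance(Enum):
    HUMAN_WRITTEN = "Written by our curators"
    AI_ASSISTED = "Drafted with AI assistance and reviewed by our curators"
    AI_TRANSLATED = "Translated automatically from curator-written text"

@dataclass
class VisitorText:
    body: str
    provenance: Provenance

    def display(self):
        # Every piece of visitor-facing text carries its own disclosure line.
        return f"{self.body}\n\n{self.provenance.value}"

label = VisitorText("This kimono was woven in Kyoto around 1870.",
                    Provenance.AI_TRANSLATED)
print(label.display())
```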