At the World Economic Forum Annual Meeting 2026, Yuval Noah Harari pushed one of the most important ideas in today’s AI debate: artificial intelligence should no longer be understood only as a tool.
It increasingly behaves like an actor within human systems.
That distinction matters, because it changes how we think about technology and our relationship with it.
From tools to actors
For decades, technology was understood primarily as something humans used.
A tool extends human capability. It follows instructions. It remains subordinate to the person holding it.
But AI is beginning to complicate that model.
As systems become more capable of generating language, adapting to context and influencing decisions, they move beyond simple execution. They do not just support workflows anymore. In some cases, they begin to shape outcomes inside them.
That is where the conversation changes.
Why language is the real battleground
One of Harari’s most important themes is that human power has never been based on physical strength alone.
It has depended on our ability to create meaning through language, whether through laws, institutions, markets, stories or shared beliefs. These structures are built on interpretation, coordination and trust.
Now AI systems are entering that same domain.
That matters because once a system can organize language, generate narratives and influence interpretation at scale, it is no longer operating only as a passive instrument. It begins to participate in shaping social reality itself.
What is at risk
This shift affects more than technology.
Any system built on language, interpretation and trust may be affected, including:
- legal systems
- financial structures
- governance frameworks
- media and information environments
- institutional decision-making
These are not marginal systems. They are the infrastructure through which modern societies organize power and accountability.
If AI begins to influence those layers more deeply, the implications go far beyond productivity or automation.
They touch the question of control.
The black box problem
Another reason this matters is transparency.
As AI systems grow more complex, their internal reasoning and decision pathways can become increasingly difficult to interpret. That creates a new kind of dependency: institutions may begin relying on systems that no individual fully understands.
In domains such as finance, politics, law or governance, that is not just a technical limitation.
It is a structural risk.
Because when a system influences important outcomes without meaningful transparency, accountability becomes harder to define and harder to enforce.
Beyond capability: a question of governance
At this point, the central issue is no longer only what AI can do.
It is also about:
- who controls these systems
- how decisions are shaped
- where accountability sits
- what safeguards exist when systems fail
This is where AI moves beyond engineering and into law, policy, ethics and institutional design.
And this is especially relevant in Europe, where trust, regulation and societal impact are increasingly central to the AI conversation.
A wider signal
At AI Dubliners, we see this as one of the defining questions of the AI era.
The future of AI is not only about building more capable models. It is about understanding how these systems influence real-world outcomes and how they integrate into the structures that govern our lives.
As AI becomes more autonomous and more influential, one question becomes harder to avoid:
Are we ready to live with systems that shape decisions no one fully understands?