In 1984, Stewart Brand, referring to information, famously declared, “Information wants to be free.” The remark sparked debates that lasted well beyond the end of the first Hackers’ Conference, a pivotal moment in computing history.
Today, we stand at a similar crossroads, but this time the conversation isn’t about hackers; it’s about AI. Amid debates over which AI platform will dominate 2025 and over the geopolitics of AI, one undeniable truth emerges: AI is no longer just a tool.
AI has seamlessly integrated into our daily lives, businesses, and even our decision-making processes. It’s a companion—one that, when trained on high-quality data, can analyse problems, propose solutions, and even operate with a degree of objectivity that humans often struggle to achieve.
Research shows that AI is already being used for mental health support, career guidance, and strategic decision-making. We trust AI in ways we never imagined. But this leads to a bigger question:
Can AI lead?
For years, discussions on AI ethics have positioned humans as the ultimate controllers of technological development, with much debate centred around bias.
Yet, despite this necessary oversight, we’ve seen global struggles over the ownership and sale of microchips, the persistence of biased data systems, and a growing reliance on decentralised, AI-driven projects that can operate autonomously. Some initiatives, like OpenCog, SingularityNET, and decentralised LLMs, may be moving in this direction, but we’re still in the early stages, and they, too, could turn out to be sci-fi dreams that never materialise.
Without slipping into dystopian fears, we must ask: Can AI become not just a tool, but a leader, a decision-maker—even a CEO?

An AI CEO: A Future Within Reach or Sci-Fi?
The idea of an AI CEO might sound like science fiction, but it may be closer than we think: technology is evolving at an extraordinary pace.
Could AI-driven companies outperform their human-led counterparts? Would AI leadership bring about greater efficiency, sustainability, and fairness? As AI’s reasoning capabilities evolve, it’s no longer just assisting decision-makers—it’s making decisions.
Collaboration, Not Takeover
The future of AI isn’t about domination but collaboration. We, as humans, have limitations. We have biases and egos (big egos!); we may dislike the people we work with, and that can cloud our judgement. AI excels in logic, pattern recognition, and objective analysis. Yet it is inherently flawed in its own ways: it lacks creativity, common sense, morality, ethical judgement, and the list could go on!
The real power, therefore, lies in synergy—AI enhancing human decision-making rather than replacing it.
But what happens when AI begins to develop its own reasoning? Could it form preferences, set goals, and act in ways that go beyond human programming? If so, how should we respond?
Society would need to be proactive in setting up regulations and frameworks for AI development, and AI’s autonomy would need to be carefully monitored and managed to prevent harm. The rivalries currently bouncing back and forth may therefore need to be set aside in order to establish collaboration between the various agentic systems.
By 2030, we may see the first AI-driven company reach a trillion-dollar valuation—led by an AI CEO. This milestone would redefine leadership and challenge our perceptions of corporate governance.
The Real Question: How Far Will We Let AI Go?
This article isn’t about which AI platform will win. Instead, it asks: How much more are we willing to integrate AI into our businesses, institutions, and lives? SMEs and corporations alike will have to decide how to maximise AI’s potential while maintaining control.
Perhaps AI will be regulated, limited, or even suppressed. But for now, one thing is certain: the lines between human and machine intelligence are blurring faster than we anticipated.
The question isn’t whether AI will take over—it’s how we will evolve alongside it.