I woke up to the news that my interview on AI governance had been published in the People’s Daily, China’s largest official national daily. Beyond the personal surprise, what struck me most is what this says about the moment we are in.
The AI debate is no longer confined to technology circles, policymaking rooms or a handful of innovation hubs. It has become unmistakably global. And at the same time, it has become deeply human.
Across regions, the same questions are surfacing. How do we scale AI responsibly? How do we ensure that speed does not come at the expense of trust? And how do we prevent a future in which value, access and opportunity are concentrated in too few hands?
From national ambition to international coordination
In my contribution, I argued that stronger international co-regulation is becoming essential. AI is already expanding rapidly across government, energy, logistics, finance and healthcare. In that context, governance can no longer be treated as an afterthought. It has to be ethical, transparent and practical enough to support adoption at scale. The UAE’s AI Charter is one example of how business and government can begin to align strategy and ethics more concretely.
Three principles are becoming harder to ignore
What is also emerging more clearly is a shared direction around three principles. First, AI must remain human-centred. Second, its value should be more broadly shared. And third, access to the ecosystem — from data and compute to talent and opportunity — should not be limited to only a few markets. These principles matter because AI is no longer just a technological issue. It is becoming an economic, societal and geopolitical one.
No one can govern AI alone
Above all, one message stands out: no single country or organisation can govern AI alone. Compatible frameworks, shared standards and practical coordination are becoming part of the infrastructure AI now requires. That is not a constraint on innovation. It is one of the conditions for scaling it responsibly.