The recent AI Summit attempted to establish the basis for multilateral coordination to regulate this emerging technology.
Although many participants echoed the desire for 'open', 'inclusive' and 'ethical' AI, the effectiveness of this attempt at governance remains unclear.
AI carries serious risks: it can be dangerously energy-intensive, manipulate public opinion, threaten civil liberties and the labour market, and even be weaponised.
The UK and the US, leading players in the field of AI, refused to sign the summit's final declaration, revealing significant divergences among national stances.
Regulating AI does not necessarily hinder innovation; on the contrary, it can help prevent monopolistic practices.
Dependence on any single country, such as the US or China, for AI development should be avoided: this is a question of sovereignty and of how we conceive of AI.
The EU and France have announced major investments in AI, although investment alone may not be enough.
Conclusion: while the summit sketches out a promising path, further work is needed to realise the vision of AI as an accessible, equitable, and safe technology.