This blog post is from Marek Havrda, our AI Policy & Social Impact Director, who was invited to the World Government Summit to chair the sub-committee on gradual economic and social changes due to AI.
I was honored to participate in the second Global Governance of AI Roundtable (GGAR) in Dubai this month, along with 250 global AI experts. The GGAR is a leading multinational forum on AI and its governance. It initiates a crucial dialogue on the benefits, risks, and pathways to developing effective, yet culturally adaptable, norms that will ensure the safe deployment of AI for the betterment of all humanity.
The GGAR was organized as a special track of the 2019 World Government Summit, a forum where world leaders, representatives of international organizations, thinkers, and experts from some 140 countries come together to shape the next generation of government, focusing on how to harness innovation and technology to solve the universal challenges facing humanity.
The GGAR has three chief objectives:
- Gather information about the state of AI technologies, their socio-economic impact and the state of AI governance policies around the world.
- Synthesize this information into a governance framework, actionable public policy options, and implementation-level guidelines.
- Serve as the world’s authoritative forum for AI governance.
The experts were divided into 14 sub-committees in order to discuss key AI governance issues in detail and come up with innovative ideas, including:
- Interpretable and explainable AI
- Geopolitics of AI
- Governance of development of AGI
- AI adoption in developing countries and AI for SDGs
- AI narratives
- Managing the economic & social impacts of AI
Key takeaways from GGAR
Understanding mid-term risks
We need to understand the mid-term risks of AI and its deployment much better in order to develop pre-emptive measures and policies today. We have a relatively good understanding of the most immediate risks posed by AI (including risks to privacy and cybersecurity) and of the so-called existential risks. However, visualizing, and in turn tackling, the mid-term risks seems crucial, in particular because of the pace mismatch between technological development and regulation. Governments will play a crucial role in setting up pre-emptive regulation to reduce these risks and their potential impacts, so a better understanding of the mid-term impacts should help us identify crucial decision points where regulatory action may be needed.
Collaboration needed on AGI
We need to support a much more collaborative effort when it comes to building AGI, such as a “CERN for AI” or a United AI. This would enable the best minds to cooperate while also ensuring the highest possible security. Such a concentrated effort needs clear measures and incentives in order to be inclusive of the various private efforts towards AGI.
In parallel, we need to work towards agreements on the distribution of the benefits of AGI. This may be done through a separate entity with representation from governments, the public, NGOs, and businesses.
I was invited to chair the sub-committee on gradual economic and social changes due to AI. Among the main conclusions of the sub-committee was a new view of the role of regulation. In particular, we discussed the potential of regulation in areas other than the labor market itself to help smooth the transition towards more automated production of goods and services. These may include mandating human-in-the-loop requirements in various areas, e.g. in interpersonal services as a consumer safety requirement.
Another measure discussed was a “tax on attention,” under which companies would be taxed depending on how much attention users give to their products and services online. The idea builds on the fact that our attention, i.e. time spent online, is a very limited resource (normally no more than 16 hours per day). Such a tax, if well constructed, would drive service providers to deliver more added value to users, using both technology and human workers.
Another example related to investment planning in companies: allowing investments to be included in tax-deductible expenses immediately (rather than as they depreciate over many years), provided the investment planning includes a clearly defined people plan. However, any new regulation should not hamper the overall competitiveness of businesses.
Although it is more than clear that we need profound changes in our education systems, carrying out such systemic change is proving extremely difficult. We may therefore need to start with regulatory pressure; an analogy was drawn with governments legally mandating the use of seatbelts in cars, which changed attitudes to transport safety. Other areas of regulation which may positively impact the transition towards more automated economies include requirements on privacy and support for the green economy.
Overall, we should focus much more energy on the quality of jobs and the meaning of human activity in general, and we must remain vigilant about any reduction of human autonomy due to the deployment of AI.
I would like to thank everyone for the great contributions during the session, in particular Valentino Pacifici of Sana Labs, Mina J. Hanna of IEEE, Christina J. Colclough of UNI global union, Brent Barron of CIFAR, Rob McCargow of PWC and Katryna Dow of MEECO.