
Report from the Expert Workshop on International Governance of AI

February 19, 2020

Wendell Wallach joins the workshop via video link.

Introduction

Last week Marek Havrda of GoodAI and Wendell Wallach of Yale University, together with the City of Prague, organized a preparatory workshop in Brussels for the 1st International Congress for the Governance of AI. The congress will take place in Prague from 16–18 April 2020 at the National Museum and will be run in partnership with the Carnegie Council for Ethics in International Affairs, the World Technology Network, and prg.ai, among others. Other Congress preparatory meetings took place, or will take place, at Stanford University, London, New Delhi, and New York.

In the current climate, there have been continual calls for agile and adaptive governance mechanisms at the international level. This is particularly critical for the governance of emerging technologies such as AI, whose rapid development and deployment outpace traditional approaches to ethical and legal oversight.

Several different international mechanisms have been proposed for the governance of AI, ranging from recommendations by the UN Secretary-General's High-level Panel on Digital Cooperation and the European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG) to the IEEE Ethically Aligned Design initiative. The OECD is about to announce its AI Policy Observatory, and scholars have proposed other vehicles for monitoring and managing the development of AI.


Prague National Museum, the venue for the International Congress for the Governance of AI.

Participants

The main speakers at the workshop, connected via video link, were:

  • Roman V. Yampolskiy — Expert on behavioral biometrics, the security of cyberworlds, and artificial intelligence safety, University of Louisville
  • Yolanda Lannquist — Head of Research and Advisory, The Future Society
  • John C. Havens — Executive Director of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems

Workshop participants included experts from various departments of the European Commission, including Communications Networks, Content and Technology (DG CONNECT), Economic and Financial Affairs (DG ECFIN), the Joint Research Centre (DG JRC), Justice and Consumers (DG JUST), and Regional and Urban Policy (DG REGIO), as well as experts from Member States and civil society.

The workshop was held under the Chatham House Rule.

The congress tracks

The workshop began with a short introduction to the International Congress for the Governance of AI, after which the experts discussed four of the tracks that will run at the congress:

  • Agile, Cooperative, and Comprehensive International Governance Mechanisms
  • Hard Law and Soft Law in the Governance of AI
  • AI and International Security
  • Corporate Self-Governance and Accountability

The format of the workshop was designed to stimulate dialogue, and some of the key topics included:

  • Measures to efficiently protect fundamental rights and freedoms in AI applications
  • Regulation specific to biometric identification
  • Measures to assess the impact of AI deployment on well-being and democracy

Key takeaways from the workshop

From ethics to laws

Much of the current work remains at the level of general ethical principles; moving from those principles to concrete hard or soft regulation will be a daunting task.

With regard to hard and soft law, participants suggested experimenting with approaches that combine the advantages of hard law (enforcement) and soft law (agility). While soft law works in some countries, it can be less effective in others; its effectiveness tends to be culturally and country-specific. Participants pointed to external standardization bodies, such as the International Organization for Standardization (ISO) or the Institute of Electrical and Electronics Engineers (IEEE), as a possible way forward, rather than industry self-regulation, which often does not deliver the desired results.

Measuring impacts

There was also a focus on the impacts of AI. Participants suggested that we cannot rely only on risk-based management, but need to move towards more system-based thinking about impacts. Computational Sustainability was given as an example of a field that attempts to optimize the use of societal, economic, and environmental resources using methods from mathematics and computer science.

To assess the impacts of AI deployment, as well as the effectiveness of regulation, we need to develop functional metrics. The crucial first step is to operationalize the measurement of impacts, both intended and unintended. This should make it possible to set clear regulatory goals and metrics against which the effectiveness and efficiency of regulation can be checked. Cities may serve as experimental sites (labs) not only for deploying new AI-based solutions but also as test-beds for new approaches to regulating AI, environments in which both the impacts of AI and the impacts of regulation could be tested.

Other relevant topics

The workshop opened up questions about many other topics. Beyond AI, other new technologies will have an increasing influence, including virtual and augmented reality as well as emerging technologies we do not yet know about. These technologies will also need to be regulated in some way as their impact grows.

Issues regarding privacy and data were also discussed, with a particular focus on protecting children and their data.

The International Congress for the Governance of AI aims to bring together stakeholders from across society: governments, industry, international organizations, universities, research centers, leaders of underserved nations and communities, and others in the AI space. Another key question that was brought up is how to get more scientists among the decision-makers, so that decisions can be informed by an understanding of cutting-edge technology.

Suggested Readings

Secretary-General’s High-level Panel on Digital Cooperation
https://www.un.org/en/digital-cooperation-panel/

IEEE Ethically Aligned Design, V. 2.
https://standards.ieee.org/news/2017/ead_v2.html

OECD Principles on Artificial Intelligence
https://www.oecd.org/going-digital/ai/principles/

The Ethics Guidelines for Trustworthy Artificial Intelligence (AI) prepared by the High-Level Expert Group on Artificial Intelligence (AI HLEG)
https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines#Top

Global Solutions Network
http://gsnetworks.org/

Artificial Intelligence and Human Rights, Eileen Donahoe and Megan MacDuffee Metzger
https://www.journalofdemocracy.org/articles/artificial-intelligence-and-human-rights/

Toward the Agile and Comprehensive Governance of AI and Robotics, Wendell Wallach and Gary Marchant
https://ieeexplore.ieee.org/document/8662741
