Governance of AI in the years to come

October 10, 2018

This blog post is based on discussions during a workshop with members of the GoodAI team and with Frank and Virginia Dignum.

Summary

  • Regulation is vital for AI, especially considering the potential power of the technology
  • Current software regulation may not be robust enough; transparency is key to good practice
  • Two routes towards better governance are binding regulations and incentivisation
  • Better education is needed for the general public and key stakeholders

With narrow AI becoming ever more powerful, and the possibility of general AI on the horizon, it is vital that adequate governance models are put in place to ensure safety in the development process today and beyond. The power of AI technology, and its potential to disrupt economic and political structures, make the need even greater.

Building “good” bridges

Software today is often released with flaws and patched after deployment. However, when the stakes get higher and the technology more powerful, we will no longer have the luxury of making these mistakes. If we look at engineers building bridges, no one would commission a bridge that was delivered before all of its safety requirements had been met. We should take the same attitude to software and raise the bar of AI development. Such high standards are already in place in certain industries, for example in medical or aviation software. Below we discuss two routes towards a more transparent approach: binding regulation and incentivisation.

The requirements should cover not only the robustness, flexibility, and efficiency of the system's main purpose, i.e. its objective function, but also its ethical and social dimensions.

Transparency and binding regulations

Take the example of catalytic converters in cars: there was little uptake of them in the USA when they were first invented. However, in 1975 the U.S. Environmental Protection Agency released stricter emissions regulations, and almost all cars from then on were fitted with converters. Car manufacturers could not sell their products without complying, which led to a significant reduction in environmental pollution.

In the case of AI, there would need to be some kind of top-down regulation that values transparency and safety over profit. This transparency would need to be defined, and it does not have to mean opening up the intellectual property of the product. For example, with medicine we do not know exactly what is in each pill; we trust third-party organisations to check that pills are safe for human consumption. Standards for software are often fuzzy and difficult to impose, so a set of product-by-product regulations for AI technology could be extremely useful.

An example could be an EU law requiring companies to meet certain prerequisites before they may sell their algorithms to governments (and requiring governments to buy only such algorithms). This could have a global impact beyond the EU: by some estimates, US companies spent more money complying with the EU's GDPR than European companies did, because they had to comply in order to continue doing business there. A further difficulty, however, is that checking algorithms and processes is not as easy as checking whether a catalytic converter is fitted.

Certifications and incentives

The second route is incentivisation: voluntary certifications and guidelines that reward companies for developing AI transparently and safely. However, these guidelines need to be robust and not simply become a set of targets for businesses to tick off. There needs to be a balance between creating a framework to comply with and maintaining high quality, to avoid what is known as Goodhart's Law. To quote Marilyn Strathern: "When a measure becomes a target, it ceases to be a good measure." Such frameworks will also need continuous updating, given the fast pace of development in software.

Education

Companies also need to be clearer about the limitations of their products, educating consumers on what a product can and cannot do. For example, the "self-drive" mode in some modern cars does not mean that the car will fully drive itself.

Formal technical and ethical education should also be improved, from primary school to university level. Programmers need adequate computing skills to scrutinise their own work and make sure it meets the highest possible standards. Regulators, in turn, need well-trained supervisors who can thoroughly assess the work of programmers.
