Report from Future with AI Workshop, Berlin

June 27, 2018

Organized by: GoodAI, the Aspen Institute Germany and the Estonian Embassy

Attended by: Jeff Bullwinkel (Microsoft), Nicola Jentzsch (Stiftung Neue Verantwortung), Toby Walsh (UNSW and TU Berlin), Andrea Shalal (Reuters), Olaf Theiler (Planungsamt der Bundeswehr), Lennart Wetzel (Microsoft), Tyson Barker (Aspen), Ruediger Lentz (Aspen), Mari Aru (Estonian Embassy, Germany), Olga Afanasjeva (GoodAI), Marek Havrda (GoodAI), Marek Rosa (GoodAI)

Photo credit: NASA

The Future with AI Workshop was organized to foster interdisciplinary discussion on how to avoid the pitfalls of a global race for AI, and how to ensure that AI-incurred transitions in society lead to the wellbeing of humanity as a whole. The workshop focused on three areas of discussion:

  • Incentives to cooperate on AI research and development
  • AI and the future of jobs
  • Changing the doomsday discourse

Incentives to cooperate

Cooperation is vital to moving forward with the development of safe AI. Although competition is not inherently bad, a race to reach truly transformative AI could lead to pitfalls where:

  • Key stakeholders, including developers, may ignore or underestimate safety procedures or agreements in favor of faster deployment
  • The fruits of the technology won’t be shared by the majority of people to benefit humanity, but only by a select few

Cooperation, and robust methods to ensure cooperation, are therefore key to making sure that AI is safe and that its benefits are shared with as many people as possible. Incentives to cooperate are likely to differ between actors and to change over time as the technology evolves.

Many actors are realizing the transformative potential of AI and starting to see it as a strategic priority. However, incentives to cooperate will change depending on the actors. For example, companies, state actors and individuals will all have different incentives to cooperate.

Individuals: individual actors might be incentivized by gaining (a feeling of) control over AI which will affect them directly. They could sign up to a kind of digital “bill of rights” outlining rights and privileges they would receive from cooperating.

Businesses and states: both businesses and states will most likely be incentivized by the joint wealth and knowledge that a transformative AI has the potential to generate. Neither will want to be left behind in terms of technology and wealth. A transformative AI would allow businesses to maximize their earning potential and allow states to increase their wealth in order to improve conditions for their people.

Incentives will also change over time as AI technology (dramatically) improves. At first, with improvements in narrow AI, the main incentive is likely to be access to larger markets. However, once AGI is reached, or AI becomes more transformative, actors will gain access to a wide range of resources, going beyond purely monetary incentives.

AI and the Future of Jobs

The impact of AI on jobs is rising to the top of the agenda for many actors, in particular the issue of job automation and what humans will do in a world of automated employment. Furthermore, education will have to adapt substantially to prepare people to take part in the future economy and society.

Firstly, there are many myths surrounding automation: about what will be automated and how long it will take. For example, full automation of something like bicycle repair is a long way off, as it will take time to automate fine motor skills. Yet narrow AI is already automating many manual jobs and beginning to automate intellectual ones as well. Therefore, although some changes may seem far away, we need to start preparing for the possibility of a large proportion of jobs across the world being automated, or even for a kind of jobless society, as the need for “existential” jobs might be significantly reduced. It is important to approach this issue now to ensure a smooth transition to an AI economy.

Some key issues we need to think about, when it comes to automation, are:

  • Emotional intelligence: when it comes to automation, we need to distinguish between what is technically possible and what is socially acceptable
  • Shortening of supply chain: with automation and other new technologies (3D printing for example) it will make sense to relocate physical production closer to the end user to cut down on costs of transport and to shorten the supply chain in terms of space and time. This will disrupt many industries and may particularly impact developing countries.
  • Impacts on developing countries: new methods for international development will most likely have to be implemented, such as cash to individuals, or some sort of universal basic income. This might potentially be funded through a Global AI Fund.

Increasing automation will likely give rise to new jobs that place a premium on the “human touch.” For example, once automated cars are the norm, something like “Uber Escorts” may thrive, where a person helps you with your bags or assists with hotel check-in.

Other jobs may also focus on the human aspects. For example, nurses in hospitals will have time to spend with their patients as they will need to spend less time doing manual tasks. We can also expect a redistribution of current paid jobs, for example new teachers focusing on social and emotional skills development, as it is in these areas that we can outperform machines.

With the advent of new jobs and labour markets, or non-labour markets, will come many challenges for the education system, which risks becoming outdated. We need more research into how humans learn in order to develop effective strategies for re-skilling. Other areas that may need more focus are parenting skills education and methods for developing and sustaining grit, in the sense of a long-term focus on achieving goals.

With a radical change in the economy, we will also need to see a change in policy, governance and responsibilities. Governments will need to step in to ensure that their citizens are not losing out, and the private sector should assist in the re-skilling of the workforce, as it is likely to be the one benefiting from automation.

With the advent of general AI, and a potential abundance of wealth, there could also be the potential to work less. However, countries will still need to maintain a level of international competitiveness, just as rising work safety standards have historically needed to be accompanied by increases in productivity.

Changing the doomsday discourse

Such massive changes in society can bring about panic and fear in the population. It is imperative to discuss the risks involved with the development of AI, but at the same time we should not get carried away in a doomsday discourse. We need to continue to foster positive narratives in order to shape a positive future.

It is useful to imagine these scenarios, maybe as part of a roadmap or a simulation game, in order to help think of strategies to avoid them. However, it is vital that these scenarios are not painted as the only possible outcome. AI is effectively a tool that humans will use to augment our own abilities. Through effective policy and governance (including at global level), AI can be used as a tool for good rather than evil.

To understand the doomsday scenarios it is important to understand why people are unhappy or scared. With such big changes likely to occur in the future it is unsurprising that people are scared of things that they do not feel in control of. Many of the discussions about the future are left to policy makers and governments who are setting the agenda. There may be a feeling that these groups are ill-informed, or not fully prepared. A lack of transparency from these groups could also lead to further fear and agitation.

Unfortunately, the doomsday scenario is one that newspapers like to cover, as it makes a good story. There is a lack of leadership advancing the positive narratives of AI in society. Advocates are needed to lead the movement and to raise the profile of positive AI futures. There needs to be an educated public debate that can lead to tangible results for a better future, rather than fear mongering, which leads to public hysteria and could have negative impacts on AI development as a whole, and in democratic countries in particular.

Overall, changing the doomsday discourse will be very difficult, and there is unlikely to be a single solution. But demonstrating alternative futures and working on ways to manage a smooth transition to an AI economy will be a good starting point.

AI developers and implementers must keep public perception in mind, and do their best to develop trust and give people a feeling of empowerment. Education will be key to this process, and campaigns focusing on the promotion of positive futures might also be effective. Both education and such campaigns must focus on helping people accept change, drawing on lessons from effective public perception campaigns.

GoodAI team members: (L-R) Marek Havrda, Olga Afanasjeva and Marek Rosa at the Aspen conference on AI.

Join GoodAI

Are you keen on making a meaningful impact? Interested in joining the GoodAI team?

View open positions