Round results

The Roadmap Institute organized, and took part in, events both within and outside the scope of the institute’s interests.

Round I

Gradual Learning: Learning like a Human

About Gradual Learning: Learning like a human

This video demonstrates three examples of the tasks that the agents needed to solve in the first round of the General AI Challenge.

The Challenge’s first round asked participants to program and train an AI agent to engage in a dialogue with the CommAI-Env environment. The agent and the environment exchanged bytes of information, and the environment also gave feedback signals to guide the agent’s behavior.

The agent was expected to demonstrate gradual learning: the ability to use previously learned skills to learn new skills more readily (in this case, to answer questions generated by the environment).

The goal was not to optimize the agent’s performance on existing skills (how well it solves problems it already knows), but its efficiency at learning to solve new, unseen problems.

Participants were provided with a platform-independent (Windows, Linux, Mac) set of training tasks, implemented in CommAI-Env, that could be used as a reference. The tasks are based on the CommAI-mini set proposed by Baroni et al., 2017 (https://arxiv.org/abs/1701.08954).
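The byte-level loop described above can be sketched in a few lines of Python. Everything here is invented for illustration — `ToyEnv`, its arithmetic prompts, and the `TabularAgent` are stand-ins, not the actual CommAI-Env API — but the shape of the interaction (agent and environment exchanging bytes, with a reward signal guiding behavior) follows the description:

```python
import random
from collections import defaultdict

random.seed(0)  # fixed seed so the toy run is reproducible

class ToyEnv:
    """Stand-in environment: poses a byte prompt, rewards the correct reply byte."""
    TASKS = [(b"2+2=", b"4"), (b"3+1=", b"4"), (b"1+1=", b"2")]

    def __init__(self):
        self.prompt, self.answer = random.choice(self.TASKS)

    def step(self, reply):
        """Score the agent's reply to the current prompt, then pick the next task."""
        reward = 1 if reply == self.answer else -1
        self.prompt, self.answer = random.choice(self.TASKS)
        return self.prompt, reward

class TabularAgent:
    """Remembers, per prompt, the reply byte that last earned a positive reward."""
    def __init__(self):
        self.best = defaultdict(lambda: b"0")  # initial guess for unseen prompts

    def act(self, prompt):
        return self.best[prompt]

    def learn(self, prompt, reply, reward):
        if reward < 0:  # wrong answer: deterministically try the next digit next time
            self.best[prompt] = bytes([reply[0] + 1 if reply[0] < ord("9") else ord("0")])

env, agent = ToyEnv(), TabularAgent()
prompt = env.prompt
for _ in range(300):
    reply = agent.act(prompt)
    next_prompt, reward = env.step(reply)
    agent.learn(prompt, reply, reward)  # learn on the prompt that was answered
    prompt = next_prompt
```

After a few hundred exchanges this agent has simply memorized the correct reply for each prompt; a gradually learning agent, in the sense the round asked for, would go further and reuse what it learned on earlier tasks to solve new ones without searching from scratch.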

Results and prize winners

The jury selected no winner but, to encourage further work on gradual learning and to reward the participants for their considerable efforts, decided to split the second prize ($7,000) among the four finalists.

Details about the evaluation can be found here.

The recipients of the joint award (in alphabetical order):

Dan Barry – a former NASA astronaut and a veteran of three space flights, four spacewalks and two trips to the International Space Station. He retired from NASA in 2005 and started his own company, Denbar Robotics, which focuses on smart robots and artificial intelligence interfaces, concentrating on assistive devices for people with disabilities. In 2011 he co-founded Fellow Robots, a company that provides robots for retail settings. He holds ten patents, has published over 50 articles in scientific journals, and has served on two scientific journal editorial boards.

Andrés del Campo Novales – an AI hobbyist passionate about the idea of general AI. He is a software engineer with 15 years of professional experience and has been working for Microsoft in Denmark for the last 11 years on business applications. Andrés studied computer science and engineering at Córdoba and Málaga. He created a chatbot that could learn conversation patterns, context, and numerical systems.

Andreas Ipp – research fellow at TU Wien, where he obtained his habilitation in theoretical physics. His current research focuses on simulating the production of the quark-gluon plasma in heavy-ion colliders such as the LHC at CERN. After obtaining his PhD, he held postdoctoral fellow positions in Italy and at the Max Planck Institute in Germany. Since his return to TU Wien, he has been involved in teaching activities, including lecturing on quantum electrodynamics. Apart from his scientific achievements, he founded the choir of TU Wien a few years ago, which successfully competes in international choir competitions.

Susumu Katayama – assistant professor at the University of Miyazaki in Japan and inventor of the MagicHaskeller inductive functional programming system. He has been working on inductive functional programming (IFP) for fifteen years. His research goal is to realize a human-level AI based on IFP.


Round II

Solving the AI Race

About Solving the AI Race

The Challenge’s second round ran from 18 January to 18 May 2018 and asked participants to propose solutions to the problems associated with the AI race, in which:

  • Key stakeholders, including developers, may ignore or underestimate safety procedures or agreements in favor of faster utilization
  • The fruits of the technology will not be shared by the majority of people to benefit humanity, but only by a select few

The panel judged each entry on five criteria, scoring each from 0 to 3. The criteria were:

  • Impact: The potential the solution shows to maximize the chances of a positive future for humanity
  • Feasibility: How practical it would be to implement or apply
  • Acceptance: How likely it is that the actors involved would accept the idea. For example, in the case of an actionable strategy, what is the chance that actors would publicly pledge to it? In the case of a framework, how easily could it be adopted?
  • Integrity: How ethical the solution is (ideally, solutions should not disadvantage any actors and should take into account the diversity of values)
  • Novelty: Whether the idea has been suggested before
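Since each of the five criteria is scored from 0 to 3, an entry's maximum total is 15. A minimal sketch of the aggregation (the equal, unweighted sum is an assumption for illustration; the source does not say how the panel combined criterion scores):

```python
CRITERIA = ("impact", "feasibility", "acceptance", "integrity", "novelty")

def total_score(scores):
    """Sum per-criterion scores (each 0-3) into a total out of 15."""
    assert set(scores) == set(CRITERIA), "every criterion must be scored"
    assert all(0 <= s <= 3 for s in scores.values()), "scores must be 0-3"
    return sum(scores.values())

entry = {"impact": 3, "feasibility": 2, "acceptance": 2, "integrity": 3, "novelty": 1}
print(total_score(entry))  # 11
```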

Results and prize winners

Top solutions ($3,000 each)
Kesavan Athimoolam, Solving the Artificial Intelligence Race: Mitigating the problems associated with the AI Race

Kesavan is a freelancer working on the creation of biologically inspired, safe, and inspectable AGI that has provisions for emotions, motor actions, behaviour shaping, and creative thought. He believes that anyone attempting to create AGI should start by learning about neuroscience, the human brain, and behaviour, and that contemporary deep-learning techniques will not lead to AGI.

Alexey Turchin and David Denkenberger, Classification of Global Solutions for the AI Safety Problem

Alexey (b. 1973) is the author of several books and articles on existential risks and life extension, published in the journals “Futures”, “Acta Astronautica”, “Informatica”, and “AI & Society”. He graduated from Moscow State University, where he studied Physics and Art History (1997). He has been an expert on global risks for the Russian Transhumanist Movement since 2007. He has translated into Russian around 20 key articles on existential risks by Bostrom, Yudkowsky, Ćirković, Kent, and Hanson.

David Denkenberger received his B.S. from Penn State in Engineering Science, his M.S.E. from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. He is an assistant professor of mechanical engineering at the University of Alaska Fairbanks and an associate at the Global Catastrophic Risk Institute. He co-founded and directs the Alliance to Feed the Earth in Disasters (ALLFED).

Ehrik L. Aldana, A Theory of International AI Coordination: Strategic implications of perceived benefits, harms, capacities, and distribution in AI development

Ehrik Aldana is a Research Fellow at University of California Hastings College of the Law’s Institute for Innovation Law. His recent research focuses on the political implications of artificial intelligence, both domestically and internationally. Previously, he has worked as a remote intern for the University of Oxford’s Future of Humanity Institute, as well as various public interest organizations in the United States. He received his B.A. in Political Science from Yale University.

Runners-up ($2,000 each)
David Klimek, Framework for managing risks related to emergence of AI/AGI

David is a business consultant and AI enthusiast. He graduated in Computer Science from Charles University in Prague and has spent most of his professional career in various consulting and managerial roles, helping clients by analyzing complex systems, discovering stakeholder needs, and finding effective solutions to their problems. Throughout his life, David has been trying to understand how the human brain works and, with the latest advances in artificial intelligence, how AI impacts business and society.

Gordon Worley, Avoiding AGI Races Through Self-Regulation

Gordon is a researcher working to address the existential risks of AI. He was introduced to the issue in the early 2000s by Nick Bostrom and Eliezer Yudkowsky and has made it a priority ever since. His current work focuses on using phenomenological methods to perform philosophical investigations into fundamental issues relevant to AI alignment research.

Morris Stuttard & Anastasia Slabukho, The AI Engineers’ Guild: proposal for an AI risk mitigation strategy

Morris is a Writers’ Guild of Great Britain writer from the UK. A graduate in Archaeology and Prehistory and a former Head of English, Morris is currently based in Prague, where he writes screenplays, novels, and video games internationally. One of his primary genres is science fiction, and his work in this field inspired both him and a sci-fi writing student to contribute what they could to the AI race problem.

Rules & Submissions

Submissions could be:

  • Proposals for policies, or other strategies, that can be acted upon in the nearest future (today)
  • Solutions to AI race-related questions
  • Meta-solutions, e.g. a submission which proposes a better way to approach and deal with AI race problems
  • Frameworks for analyzing AI race questions
  • Aids to understand the problem better: convergent or open-ended roadmaps with various levels of detail

Submissions could have included:

  • Social and economic analysis of AI impacts or any other quantitative analysis
  • Analysis of landscape of actors
  • Analysis of socio-economic frameworks such as economy, legal institutions, ethics, democracy, with respect to the expected new ecosystem, where people and AI are increasingly interwoven
  • Proposals for doctorate/post-doctorate studies, to be passed onto university/industry for sponsorship
  • Modelling the race dynamics, both current and future. For example, identifying likely unknown futures and suggesting what (meta)framework we should follow in order to be prepared for unknown risks

Format

  • Submissions were expected in text format; charts and/or visualisations were welcome
  • Submissions needed to include a summary of at most two pages and, if needed, a longer submission of unlimited length
  • White papers, or essays, with suggested solutions or next steps
  • Language: English

By sending in a submission, the author agreed to give GoodAI permission to publish it online so others can learn from, or build on it.
