GoodAI Research

“GoodAI is building towards my lifelong dream to create general artificial intelligence. I've been focused on this goal since I was 15 years old.”

Marek Rosa, CEO, CTO, & Founder 

Our long-term goal at GoodAI is to build a general artificial intelligence software program that will automate cognitive processes in science, technology, business, and other fields. Our future artificial intelligences will perceive stimuli in the same way that humans do – by seeing, feeling, interacting, and learning – and use this data to generate behavior, perform tasks, and respond to motivations given by human mentors.

 

Marek Rosa, CEO/CTO of GoodAI, has set the long-term vision for the company. He directs the focus of the company, leading both the technical research and business sides of GoodAI and pushing the teams to work in tandem to achieve our common goal. He takes a hands-on approach to our daily research and development as a researcher and programmer himself.

 

Our work takes inspiration from a number of sources across a variety of disciplines, from AI Safety to Machine Learning and more. Read more about our research inspirations here.

GoodAI’s research has several areas of concentration

 

Framework - describes how we understand intelligence and provides tools for studying, measuring and testing various skills/abilities.

 

Roadmap - an ordered list of skills/abilities (research milestones) our general AI needs to accumulate in order to achieve human-level intelligence.

 

School for AI - an optimized set of learning tasks which we will use to teach the AI new skills in a gradual and guided way.

 

Growing Topology Architectures - our implementation of the first prototypes of neural network architectures that support the gradual accumulation of skills - currently achieved by growing the network topology, modular networks, the reuse of skills, and more.

 

AI Roadmap Institute - a new initiative to compare and study various AI and general AI roadmaps proposed by those working in the field.

Brain Simulator - one of GoodAI's in-house software platforms that we use for our experiments.

 

Arnold Simulator - a software platform designed for rapid prototyping of AI systems with highly dynamic neural network topologies.

 

General AI Challenge - a multi-round citizen science project designed to tackle crucial research problems in human-level AI development, with $5 million in prize money over the coming years.

 

Soundtrack - to accompany your AI and general AI research!

Progress on each topic
 
Framework
Download framework document
 
We view intelligence as a tool for searching for solutions to problems. The guiding principles of our AI research revolve around an AI which can accumulate skills gradually and in a self-improving manner (where each new skill can be reused and improved in the accumulation of further skills). Each new skill works like a heuristic that helps to guide and narrow the search for problem solutions. Some heuristics even increase the efficiency of the search for additional heuristics.
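
To make this concrete, the toy sketch below (our illustration only, with a hypothetical problem and hypothetical "skills") treats each acquired skill as a filter that prunes the candidate space before anything is examined, so every additional skill narrows the remaining search.

```python
# Illustrative sketch only: skills as heuristics that prune a search space.
# The toy problem and the "skills" below are hypothetical examples.

def search(candidates, is_solution, skills):
    """Filter the candidates through every acquired skill, then scan what remains."""
    for skill in skills:                  # each skill narrows the search
        candidates = [c for c in candidates if skill(c)]
    for c in candidates:                  # brute force only over the remaining candidates
        if is_solution(c):
            return c
    return None

# Toy problem: find an even, three-digit multiple of 7 that ends in 4.
skills = [
    lambda c: c % 2 == 0,                 # "skill" 1: recognize even numbers
    lambda c: 100 <= c <= 999,            # "skill" 2: restrict to three-digit numbers
]
print(search(range(10_000), lambda c: c % 7 == 0 and c % 10 == 4, skills))  # 154
```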

 

These principles have inspired our framework document, which describes how we understand intelligence and provides tools for studying, measuring, and testing various skills and abilities. The framework itself will aim to be as implementation-agnostic as possible, without regard to particular learning methods or environments. It will provide an analytic, systematic, and scalable way to generate hypotheses that are possibly relevant in the search for general AI.

Roadmap
 

Download the roadmap here

The research roadmap is an ordered list of skills and abilities (research milestones) which our AI will need to be able to acquire in order to achieve human-level intelligence. Each skill or ability represents an open research problem and these problems can be distributed among different research groups either internally at GoodAI, or among external researchers and hobbyists.

 

New skills very often depend on (build on) previously acquired skills, so the research milestones exhibit some intrinsic dependencies. We cannot simply skip to an ability in the middle of the roadmap and start implementing it. Instead, each skill is a stepping stone to the following skill.
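
As an illustration of these dependencies (with hypothetical milestone names, not our actual roadmap), the milestones can be treated as a dependency graph, and any valid learning order is a topological ordering of that graph:

```python
# Illustrative sketch: roadmap milestones as a dependency graph.
# Milestone names and dependencies are hypothetical, not GoodAI's actual roadmap.
from graphlib import TopologicalSorter  # Python 3.9+

depends_on = {
    "detect patterns":        set(),
    "follow simple commands": {"detect patterns"},
    "simple language":        {"follow simple commands"},
    "compositional language": {"simple language"},
}

# A valid ordering never schedules a skill before the skills it builds on.
print(list(TopologicalSorter(depends_on).static_order()))
```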

 

There are two parts to the roadmap:

  • a map of open problems

  • a map of known and proposed solutions (where each problem may have multiple or branching solutions)

 

The roadmap is a living document which will be updated as we work towards the milestones and evaluate them within the framework document. The current version of the documents is early-stage and a work in progress. We anticipate that more milestones and research directions will be added to the roadmap as our understanding matures.

 

The first version of the roadmap and framework can be found in the links above. There may still be parts missing, but we feel that it is better to engage with the community as soon as possible.

High-level overview of milestones for the development and education of general AI


In each of the following stages we have an environment, a teacher, and an AI system. The environment and teacher work in tandem to teach the AI a set of useful skills.
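
A minimal sketch of how these three roles might be wired together is shown below; the class names and method signatures are assumptions for illustration, not our actual interfaces.

```python
# Illustrative sketch of the environment / teacher / AI loop used in the stages below.
# All names and interfaces are assumed for the example.

class Environment:
    def observe(self): ...            # stimuli currently presented to the AI
    def apply(self, action): ...      # the AI's action changes the environment

class Teacher:
    def feedback(self, observation, action): ...  # error/reward signal for the AI

class Agent:
    def act(self, observation): ...
    def learn(self, observation, action, feedback): ...

def training_step(env, teacher, agent):
    obs = env.observe()
    action = agent.act(obs)
    env.apply(action)
    agent.learn(obs, action, teacher.feedback(obs, action))
```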

Stage 0

This is the stage before the AI gets a chance to start learning.


In this stage, our researchers and engineers program (hardcode) useful skills (problem-solving heuristics) into the AI.

Stage 1

In the first stage, the AI starts with zero learned skills.

 

But it already has some hardcoded skills, for example, the capacity to acquire skills in a gradual manner, the reuse of skills, a rudimentary type of recursive self-improvement, the capacity to learn through gradual and guided learning, etc.


In other words, it has the potential to learn new skills and use these new skills to improve its learning capabilities.

Stage 2

The AI learns basic and universal skills through guided and gradual learning, which can be compared to a mix of puppet / supervised / apprenticeship learning, or even multi-objective reinforcement learning.

The AI uses an error signal from a teacher to change its behaviour in desired directions.

 

The goal is to teach the AI a set of general skills that will be useful in follow-up learning. The AI just needs to learn to emulate these skills (behaviors, biases, heuristics, strategies, etc.).

 

Special attention is given to teaching how to communicate via a (simple) language, how the world works, etc., so that all skills the AI may learn in the future can already build on top of these skills, making all follow-up learning more efficient.

 

At this point, the AI does not need to learn any specific knowledge (e.g. the capital of the Czech Republic, the name of the president, etc.). It learns only general skills.


The AI does only a little self-exploration, as the majority of learning is about how to emulate the skill-set provided by the teacher.

Stage 3

At this stage, the AI is fully capable of communicating with the teacher, and a hardwired error feedback signal is no longer needed.

 

The AI has associated positive and negative feedback with messages received through language.

 

The AI keeps learning additional complex skills - thinking, reasoning, communication, etc., and also additional useful knowledge.

 

The AI also learns how to efficiently explore the world on its own (this skill is not hard coded but learned).


The AI continues in self-exploration (which is principally guided/biased by the skills/behaviours acquired in previous stages).

Stage 4

We have a fully developed, human-level, general-purpose AI that has all the skills it needs and can be directed toward any goal.

 

In its free time (when the AI is not working on any particular goal from humans), the AI continues self-learning - which is, in fact, preparation for anticipated future goals from humans.

 

The AI continues in recursive self-improvement.


The next step is inevitable: super-human general AI ☺

AI Roadmap Institute

https://www.roadmapinstitute.org/

 

The AI Roadmap Institute was founded in 2016 to compare and study various AI and general AI roadmaps proposed by those working in the field. It maps the space of AI skills and abilities (research topics, open problems, and proposed solutions). The institute uses architecture-agnostic common terminology provided by the framework to compare the roadmaps, allowing research groups with different internal terminologies to communicate effectively.

 

The amount of research into AI has exploded over the last few years, with many papers appearing daily. The institute's major output will be consolidating this research into an (ideally single) visual comparison of roadmaps which outlines the similarities and differences among roadmaps, where roadmaps branch and converge, stages of roadmaps which need to be addressed by new research, and examples of skills and testable milestones. This summary will be constantly updated and available for all who are interested, regardless of technical expertise.

 

There are currently two categories of roadmaps:

  • Research and development - how to get to general AI

  • Safety/Futuristic - how to keep humanity safe in the years after general AI is reached

 

These roadmaps will be described by the institute using the framework in an implementation-agnostic manner. The roadmaps will show the problems and any proposed solutions, and the implementations of others will be mapped out in a similar manner.

 

The institute is concerned with ‘big picture’ thinking, without focusing on many local problems in the search for general AI. With a point of comparison among different roadmaps and with links to relevant research, the institute can highlight aspects of AI development where solutions exist or are needed. This means that other research groups can take inspiration from or suggest new milestones for the roadmaps.

 

Finally, the institute is for the scientific community and everyone is invited to contribute. It phrases higher level concepts in an accessible and architecture-agnostic language, with more technical expressions made available to those who are interested.


You can find more details on workshops and roadmap comparisons on the AI Roadmap Institute blog.

 

School for AI

Besides having hard-coded skills, we expect the AI to be able to learn. We will teach the AI new skills in a gradual and guided way in the School for AI which we are now developing.

In the School for AI, we first design an optimized set of learning tasks, or a "curriculum." The curriculum teaches the AI useful skills and abilities, so it doesn't have to discover them on its own. Next, we subject the AI to training. We use the performance of the AI on the learning tasks of the curriculum to improve both the curriculum and hard-coded AI skills.
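
A minimal sketch of this loop, with an assumed task interface (train/evaluate), is below; it illustrates the idea rather than our implementation.

```python
# Illustrative sketch of the School for AI loop. The task interface is assumed.

def run_school(agent, curriculum, threshold=0.9):
    """curriculum: ordered tasks, each exposing train(agent) and evaluate(agent) -> score in [0, 1]."""
    scores = {}
    for task in curriculum:                 # gradual, guided learning: tasks taken in order
        task.train(agent)
        scores[task.name] = task.evaluate(agent)
    # Tasks the agent failed feed back into curriculum design:
    # they can be decomposed into easier stepping-stone tasks or reordered.
    failed = [task.name for task in curriculum if scores[task.name] < threshold]
    return scores, failed
```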

Main principles
Gradual learning means learning skills one by one, where complex skills are based on previously learned skills.
 
Guided learning means that there is someone (a mentor or society) who has already discovered many skills for us, and we can learn these skills from them. Guided learning is extremely important, because without it, the AI would waste time exploring areas that evolution and society have already explored, or that we know are not useful or perhaps even dangerous.
 
Curriculum requirements
A good curriculum:

  • Minimizes the time needed for getting the AI into a target state. When the AI is in the target state, it can learn and evolve on its own;

  • Minimizes the effort required for its own creation;

  • Minimizes the number of skills that need to be hard-coded into the AI.

Finding the optimal curriculum for the AI is a multi-objective optimization problem. The better the curriculum, the faster the learning. However, it isn't possible to design a universally optimal curriculum (recall the no free lunch theorem). We are limited by the level of our current knowledge and by the eventual architecture of the general AI.
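
One simple (and purely illustrative) way to compare candidate curricula against the requirements above is to scalarize the three objectives with weights; the weights, numbers, and method below are arbitrary assumptions.

```python
# Illustrative sketch: scoring candidate curricula on the three requirements listed above.
# The weights, numbers, and scalarization are assumptions, not a prescribed method.

def curriculum_cost(time_to_target, design_effort, hardcoded_skills, weights=(1.0, 0.3, 0.5)):
    """Lower is better; the weights express how the objectives are traded off."""
    w_time, w_effort, w_hardcoded = weights
    return w_time * time_to_target + w_effort * design_effort + w_hardcoded * hardcoded_skills

candidates = {
    "curriculum A": curriculum_cost(time_to_target=120, design_effort=30, hardcoded_skills=12),
    "curriculum B": curriculum_cost(time_to_target=90, design_effort=80, hardcoded_skills=20),
}
print(min(candidates, key=candidates.get))   # the cheaper candidate under these weights
```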
 
However, we believe that a high-quality curriculum can optimize the learning process and enable faster progress in AI than purely algorithmic advances alone.
 
Other methods of machine learning (e.g. reinforcement learning) can be combined with curriculum learning for improved performance.  


Artificial learning environment
For teaching the AI, we have created a simulated visual toy world with simplified physical laws. We are designing our curriculum to teach the AI from the most basic rules of the world to the most complex ones, up to the point where it can start learning on its own.
 
The goal is not to teach the AI any arbitrary and specific facts about the world, but the opposite: to teach it useful and general skills for a more efficient understanding and exploration of the world, and for better and more general problem solving.

During development of the School for AI, we encountered an interesting question - how should we specify tasks for the AI? When there is no, or very little, common language, it is very challenging and time-consuming to explain tasks to the AI. For this reason, we are focusing on early language acquisition. To cut down on AI development time, we want to be able to communicate efficiently with the AI as soon as possible.

Brain Simulator

Brain Simulator is one of GoodAI's in-house software platforms that we use for our experiments. It is designed to simplify collaboration, testing, and the implementation of new theories, and to easily visualize experiments and data.

 

On this platform, a researcher can either experiment with existing AI modules (e.g. image recognition, working memory, prediction, motion behavior generator, etc.), or create new ones and link them together. The resulting AI agent can observe, interact with, and modify the simulated environment.
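
The sketch below only illustrates the idea of chaining existing modules into an agent; it is not Brain Simulator's actual interface, and the module names are hypothetical.

```python
# Conceptual sketch only - not Brain Simulator's API. It illustrates chaining modules into an agent.

class Agent:
    def __init__(self, modules):
        self.modules = modules        # e.g. [ImageRecognizer(), WorkingMemory(), MotionGenerator()]

    def step(self, observation):
        signal = observation
        for module in self.modules:   # each module transforms the previous module's output
            signal = module.process(signal)
        return signal                 # the final output is the agent's action on the environment
```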

Arnold Simulator

Arnold Simulator is a software platform designed for rapid prototyping of AI systems with highly dynamic neural network topologies. The software will provide tools for our research and development, but it is also designed for high performance and is transparently scalable to large computer clusters.

An alpha version of Arnold Simulator is available on GitHub: https://github.com/GoodAI/ArnoldSimulator

Arnold Simulator is the next generation of GoodAI in-house prototyping software. It follows in the steps of GoodAI's Brain Simulator, which focuses more on the standard machine learning algorithms. We’re designing it for large, highly dynamic, heterogeneous and heterarchical networks of lightweight actors, and with a focus on concurrency, parallelism and low-latency messaging.

 

Arnold Simulator consists of a simulation core and a user interface (UI) client. The core is targeted to run on a network of computers with GNU/Linux or MS Windows operating systems. For now, the UI is targeted only for MS Windows. For concurrency, we're using the actor model, where independent actors communicate via messages. The simulation runs in discrete time-steps, during which the individual actors are processed in parallel. In between simulation steps, the system can interact with any virtual or real environment via sensors and actuators. The design of Arnold Simulator will allow us to effectively implement the growing general AI architectures that we are focusing on.
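
A single-process sketch of the simulation loop described above is given below; the real system is built in C++ on top of Charm++, so the Python names and interfaces here are purely illustrative.

```python
# Minimal single-process sketch of the stepped, message-passing simulation described above.
# The real Arnold Simulator runs on Charm++/C++; names and interfaces here are illustrative.
from collections import defaultdict

class Actor:
    def __init__(self, name, targets=()):
        self.name, self.targets = name, list(targets)

    def on_messages(self, inbox):
        """Consume this step's inbox; emit (target, payload) messages for the next step."""
        return [(t, f"{self.name} saw {len(inbox)} messages") for t in self.targets]

def simulate(actors, steps, sense, act):
    """sense() -> list of (actor_name, payload) from sensors; act(messages) drives actuators."""
    inboxes = defaultdict(list)
    for _ in range(steps):
        for name, payload in sense():                # sensors inject input between steps
            inboxes[name].append(payload)
        outboxes = defaultdict(list)
        for actor in actors:                         # in the real system, actors are processed in parallel
            for target, payload in actor.on_messages(inboxes[actor.name]):
                outboxes[target].append(payload)
        act(outboxes.pop("environment", []))         # actuators act on the environment between steps
        inboxes = outboxes
```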

 

Arnold Simulator is based on Charm++, a parallel computation library which implements the actor model and has built-in load balancing that migrates actors across processing units. We intend to try coupling this technology with the latest manycore processors, which we hope will allow us to scale the simulation up to millions of actors and billions of connections per machine. We’re planning to release a pre-alpha version of our software for non-commercial use in the near future.

 

Arnold Simulator is designed with the following assumptions in mind: the network is composed of heterogeneous elements, it can be manipulated to the granularity of individual elements, it can be growing or shrinking all the time on the granularity of individual elements, and it can be so large that it will not fit the memory of a single machine. This set of assumptions naturally leads to dynamic load balancing of migratable actors communicating via asynchronous messages.

General AI Challenge 

https://www.general-ai-challenge.org/

In 2017 GoodAI launched the General AI Challenge, a worldwide citizen science project which aims to help tackle crucial research problems in human-level AI development. The General AI Challenge will offer $5 million in prizes over the coming years across different rounds of the competition. The rounds focus not only on technical problems, but also aim to address social and political issues which may arise with the emergence of AGI. You can find more about the General AI Challenge here.

 

What is general artificial intelligence and how can it be useful?

 

AI (Artificial Intelligence) is a software program that is able to learn, adapt, be creative and solve problems. While narrow AI is usually able to solve only one specific problem and unable to transfer skills from domain to domain, general AI (AGI) aims for a human-level skill set.


No one has developed general AI yet. With general AI we will be able to do so many things we simply cannot do with our current level of technology. We will automate science, engineering, production, manufacturing, robots, entertainment, anything you can think of, and more. General AI will help us become better people, augment our own intelligence, and recursively self-improve.

 

 

How can we build and educate AGI? How can we do it fast?

 

General AI is complicated to design from scratch, especially if we want to teach it everything at once (so-called ‘end-to-end’). It is more feasible if the whole problem of learning and designing is deconstructed into several (less complicated) “sub-problems” which we know how to tackle. For example, it is clear that we want the AI to understand and remember images, so it needs the ability to analyze them and a memory to store data. We want it to be able to communicate with humans, so it will need to write, read, and understand language. It will also need to learn and adapt to new things, and much more. We call the solutions to each sub-problem skills.

 

A skill can be seen as an ability or heuristic which helps the AI solve a particular problem. Importantly, each skill can also be used for learning other skills, significantly reducing the search space for solving other sub-problems.

 

Skills can range from simple and concrete (like the ability to recognize faces, add numbers, open doors, etc.) to more abstract ones (like the ability to build a model of the world, to compress temporal / spatial data, to receive an error signal and adapt accordingly, to acquire new knowledge without forgetting older knowledge, etc.). Skills also provide a simple way to measure how the system works, as it is clear how to measure which system is better at understanding speech, classification, or game playing. However, how to evaluate general AI as a whole is still unclear.

 

General AI will essentially be a system that exhibits a very large set of skills. Some of those skills might be hard coded by programmers, but most will be learned. Take, for example, the evolution of humans. Evolution provided us with some hardcoded skills or predispositions, but most of what we know we need to learn during our lifetimes - from our parents, the environment, or society. Those skills cannot be hardcoded because humans, just like an AI, need to be able to adapt to unknown future situations. Sometimes it is also easier to teach the desired skill than to add it as a part of the design. On the other hand, letting the AI discover all skills by itself would be slow and inefficient. This means that our job is to identify essential skills and find the most efficient ways to transfer them to a general AI system – by hardcoding them or by teaching them. It is not necessary to find the best skills. Any skills which have the desired properties and which enable the AI to further learn and improve itself can move us closer to general AI.

 

Just like an AI has to use efficient methods when searching for problem solutions, AI researchers must also look for efficient shortcuts to narrow the search for the general AI architecture, the optimal curriculum, etc., as we can’t effectively explore the entire space of potential solutions. We can, for example, draw inspiration from evolution, animal brains, or other system designs. Part of the problem is also which general AI architecture and skill set is easier for us to attain now, with our current knowledge and resources. The framework, the roadmap, and the Institute are all part of a method for narrowing the search for the architecture and the curriculum to teach it. Individual sub-problems can also be outsourced to other researchers and institutions.


We can ask questions like, “What is the minimal skill set that is sufficient for human-level AGI?” If we can optimize the process by cutting out all unnecessary skills, we can get to our goal faster. On the other hand, the learning algorithm alone wouldn’t be sufficient; we also need the thousands or millions of learned skills for the particular domain. Without them, the AI wouldn’t be able to start solving the problems we need. For example, driving a car is not a crucial skill for a researcher AI living only in the world of the internet and scientific publications, but a skill such as the ability to generalize to similar, but previously unseen, situations is universal and falls into the category of necessary skills for every general AI.

 

 

How do we understand intelligence? Our angle…

 

Intelligence is a problem-solving tool that searches for solutions to problems in dynamic, complex and uncertain environments. From a computational point of view, all problems can be viewed as search and optimization problems and the goal of intelligence (or an intelligent agent) is to narrow the search space in order to find the best available solution with as few resources as possible.

 

Intelligence achieves this by discovering skills (heuristics, shortcuts, tricks) that narrow the search, diversify it, and help steer it towards areas that are potentially more promising.

 

One of the most useful skills is the capacity to gradually acquire new skills - which helps in exploiting accumulated knowledge in order to speed up the acquisition of additional skills, the reuse of existing skills, and recursive self-improvement. This way, the intelligent agent slowly creates a repertoire of skills that are essentially building blocks for new, more complex skills.

 

An intelligent agent operates with limited resources (time, memory, atoms, computation cycles, energy, etc.), which is another constraint put on intelligence, favoring skills that use fewer resources.


Gradual and guided learning also helps narrow the search, because at each step, an intelligent agent has to search for a new solution only within a small and useful area, decreasing the number of candidate solutions, thereby reducing the complexity of the search space. On the other hand, if there was no gradual or guided learning and the agent were expected to find a solution to a complex problem too far from its current capabilities, it might never find the solution.
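
A toy illustration of this point: guessing a target word character by character with a teacher's feedback examines at most len(alphabet) * len(word) candidates, whereas hitting the whole word in one unguided step would mean searching the full combinatorial space. The example is arbitrary and only meant to show the scale of the reduction.

```python
# Toy illustration of the point above: gradual, guided search vs. one giant unguided search.
import string

target, alphabet = "gradual", string.ascii_lowercase

one_shot_candidates = len(alphabet) ** len(target)   # every string of this length: 26**7, ~8 billion

# Gradual + guided: the teacher confirms one character at a time, so each step
# only searches the small area "current solution plus one more character".
gradual_candidates, solution = 0, ""
for ch in target:
    for guess in alphabet:
        gradual_candidates += 1
        if guess == ch:                              # the teacher's feedback signal
            solution += guess
            break

print(one_shot_candidates, gradual_candidates)       # ~8e9 vs. at most 26 * 7 = 182
```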

 

 

What are your objectives and goals in terms of AI safety?

 

Teaching the AI through gradual and guided learning, where we fine-tune individual learning tasks in order to teach the AI desired skills (behaviors), will allow us to have more control over the behaviors it will use later to solve novel problems. The AI’s behavior will therefore be more predictable. 

 

In this way, we can imprint positive human biases into the AI, which will be useful for future value alignment (between AI and humans) - one of the important aspects of AI safety.

 

 

 

Why is "gradual" good?

 

 

If we have a hard task, a good way to solve it is to break it down into smaller problems which are easier to solve. The same is true for learning. It is much faster to learn things gradually than to try to learn a complex skill from scratch. One example of this is a hierarchical decomposition of a task and gradual learning of skills from the bottom of the hierarchy to the top.

 

For instance, if you have a newborn child and you give it the task of learning how to get to the airport, the chance that it will learn to do it is really small, because the space of possible states and actions is just too large to explore in a reasonable amount of time. But if you teach it gradually with small tasks, for instance how to crawl and then walk, you increase the chances of success, as it can use these skills to try to get to the airport.

We want to build systems which learn gradually. Furthermore, we want to guide their learning in the right way. Guided learning means showing the system what things make sense to learn and in what order. This reduces the necessity for exploration even further.

 

Basically, you show the child that it makes sense to learn how to walk and open doors, and only then to try to get to the airport.

 

Another benefit of gradual learning is that it can be more general. We do not have to specify a single global objective function (the main goal of the AI) at the beginning, because we are instead teaching universal skills which can be used later for solving new tasks.

 

In the case of the child, we basically start teaching it to walk and open the door, even if we don’t know it will need to get to the airport later, or to become a dentist, etc.

 

If we teach skills gradually, we have better control over the knowledge which is learned by the system. Later, if we specify a goal for the system, it is more likely that, in order to fulfill it, the system will try to use these already learned skills rather than invent new behavior from scratch. In this way, we reduce the chances that the system will invent an unwanted or harmful strategy.

 

This is similar to teaching the child how to walk and open the door, and then to go to the airport. It is more likely that it will try to solve the task by walking and opening doors, rather than trying to learn a completely new skill (like flying) from scratch, because that would simply be more difficult.

 

Performance benefits:

 

  • Optimizing a model that has few parameters and gradually building up to a model with many parameters is more efficient than starting with a model that has many parameters from the beginning. At each step, you only need to optimize/learn a small number of new parameters (see the sketch after this list).

  • There is no need to know the size of the network a priori

  • Network size can correspond to the complexity of given problems (there are no neurons or weights to prune)

  • Starting with a small network is faster (than the other way around)

  • Reuse of existing skills is made possible
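
The sketch below illustrates the growth idea referenced in the list above: previously learned modules are kept frozen and reused, and only a small new module is optimized at each step. It is a structural illustration with assumed hooks (train_module and the task objects), not GoodAI's architecture.

```python
# Structural sketch of gradually growing a model, as referenced in the list above.
# Not GoodAI's architecture; train_module and the task objects are assumed for illustration.

class GrowingModel:
    def __init__(self):
        self.modules = []                       # frozen, previously learned skills

    def grow(self, task, train_module):
        """Add one small module for the new task; existing modules are reused as fixed context."""
        new_module = train_module(task, frozen_context=self.modules)
        self.modules.append(new_module)         # the network only grows; nothing is pruned or re-trained
        return new_module

# At each step only the new module's (few) parameters are optimized,
# instead of re-optimizing one large network with all parameters from the start.
```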

 

 

What is a skill / heuristic?

 

A skill or a heuristic is any assumption about a problem that narrows and diversifies the search for a solution and points the search towards more promising areas. It is not guaranteed to be optimal or perfect, but sufficient to meet immediate goals.

 

Other names for a “skill” or “heuristic” are: behavior, strategy, ability, solution, algorithm, shortcut, trick, approximation, exploiting structure in data, and more.

 

Skills can also be considered biases which restrict behavior.

 

Some skills are simple (e.g. detecting a simple pattern such as a line or an edge), while others are complex (e.g. navigating through an environment).


One way to compare the intelligence of various intelligent agents is to measure and compare their generality, the complexity of the problems they can solve, and the efficiency of their heuristics.

 

FAQs regarding general AI and GoodAI

What does the GoodAI Research structure look like?

The GoodAI Research team is made up of a few “architecture groups”, each working on its own general AI prototype. Each group is designing its own curriculum (School for AI). However, we are aiming to align all of the teams to focus on problems similar to the training and evaluation tasks from the Gradual Learning round of the General AI Challenge.

 

Our AI Safety team is studying how we can advance safely with our technology, how to mitigate threats to our team and humankind as a whole, and how we can create an alliance of AI researchers committed to the safe development of general AI. The team is also developing our futuristic roadmap, among other work.


The General AI Challenge team formulates the problems for each round of the General AI Challenge and manages the day-to-day activities of the Challenge.

What is the Futuristic Roadmap?

 

GoodAI’s Futuristic Roadmap is our vision for the future and the specific step-by-step plan we will take to get there. The roadmap outlines the challenges we expect to come across in the course of general AI development, our efforts to keep AI safe, and how we will mitigate the risks and difficulties we will face along the way.


Our futuristic roadmap is a statement of openness and transparency from GoodAI, and aims to increase cooperation and build trust within the AI community by inspiring conversation and critical thought about human-level AI technology and the future of humankind. While our R&D roadmap is focused on the technical side of general AI development, this futuristic roadmap is focused on safety, society, the economy, freedom, the universe, ethics, people, and more.

You can find one of the most recent roadmaps here.

 

How is GoodAI different? 

 

GoodAI stands apart from other AI companies because of our roadmap, framework, and big picture view. We pursue general AI with a long-term, 10+ year vision, and remain dedicated to this goal. We will not be distracted by narrow AI approaches or short-term commercialization, though we are certain to find useful applications for our general AI technology along the way.

 

Our roadmap, framework, and experimental implementations are in a very early stage and should be taken as works in progress.  We are focused on the gradual accumulation of skills and recursive self-improvement. We do research in growing network topologies and modular networks, and train and teach our AI in our School for AI.


We are optimizing the process of building and educating general AI.

 

 

How can we compete against bigger companies?

 

 

Our mission is to build general AI as fast as possible, but this is not a race.

 

It’s not about competition, and not about making money.

 

At GoodAI, we want to create a positive future for everyone. Developing general AI will be the most helpful thing in human history, and we want to help make this dream come true.

 

How does our work contribute to the fields of AI and general AI research?

 

There is a significant lack of unified approaches to building general-purpose intelligent machines. As in the biological sciences, most researchers, universities, and institutes still operate within a very narrow field of focus, frequently without consideration for the 'big picture'.

 

We believe that our approach is a way to step out of this cycle and provide a fresh, unified perspective on building machines that learn to think. We hope to achieve this in a number of ways, each of which is equally relevant and essential for tackling different aspects of the building process:

 

Our framework provides a unified collection of principles, ideas, definitions, and formalizations of our thoughts on the process of developing general AI. This allows us to amalgamate all that we believe is important to define as a basis on which we and others can build. It will act as a common language that everyone can understand, and provide a starting point for further discussion and evolution of our ideas.

 

Our roadmap is a principled approach to clearly outlining and defining a step-by-step guide for obtaining all abilities and skills that a human level intelligent machine needs to possess. This includes their definitions, as well as the gradual order and way in which to achieve them through curricula of our ‘School for AI’.

 

Our School for AI provides learning curricula -- a principled, gradual, and guided way of teaching a machine. This approach differs significantly from current approaches based on narrowly focused, fixed datasets. We believe that gradual and guided learning are essential parts of data-efficient learning and are paramount to quick convergence towards a level of intelligence above current standards.

 

To compare and contrast existing approaches and roadmaps and foster more effective distillation of knowledge about the process of building intelligent machines, our AI Roadmap Institute is a step towards an impartial research organization advancing the search for an optimal protocol for achieving general artificial intelligence.

 

Last but not least, our software infrastructure comprises our large-scale and highly parallel Arnold Simulator, able to handle extremely dynamic network topologies, as well as various learning environments. It was developed specifically for the numerous curricula of our School for AI, and serves as an ideal platform for transforming our conceptual ideas into practical implementations with tangible results.


Using the language of our principles, the above are simply a set of heuristics for steering our search for general AI that we believe are important and will help us achieve significantly faster convergence towards developing truly intelligent machines.

We’re committed to the idea that solving a general problem will, in the end, offer better outcomes than trying to solve a set of specific problems – even if the narrower problems seem easier to tackle at first.

 

"There lies the inventor's paradox, that it is often significantly easier to find a general solution than a more specific one, since the general solution may naturally have a simpler algorithm and cleaner design, and typically can take less time to solve in comparison with a particular problem."

- Bruce Tate

 

We aim for general AI, not narrow AI use cases. This approach allows us to restrict the search for the right solution and focus more resources on our desired long term goal.

Long-Term Development Plans

 

In the long term, we believe our general AI will fill roles as diverse as:

 

  • AI scientists

  • AI engineers

  • AI programmers

  • AI doctors

  • and many others

 

We never lose sight of our end goal, which is to build general artificial intelligence that can think, learn, and interact in the world. We want to create an AI that is flexible in a changeable environment, just like human beings. We aim to build general artificial intelligence that can find cures for diseases, invent things for people that would take much longer to invent without the cooperation of AI, and teach us much more than we currently know about the universe.

At GoodAI, we are committed to working together with outside AI institutions, researchers, brain designers, and module programmers.

 

We stand firmly behind our belief that cooperation is better than competition.

 

We collaborate with leading thinkers on the safe pursuit of intelligence that may one day surpass that of humans. At GoodAI, we are candid about the progress of our research and the ways we expect general artificial intelligence to impact human society.