
A Conversation with Ted Chiang

April 11, 2022

Ted Chiang shares his thoughts with GoodAI’s Olga Afanasjeva on the limits of framing learning in terms of optimization and how we can draw inspiration from biological models of adaptation. An acclaimed author whose reach transcends the science fiction world, Chiang blends a broad view of science and technology with a deep philosophical inquiry into the human condition.

His upcoming talk at the 2022 ICLR workshop on collective learning will focus on his novella, The Lifecycle of Software Objects, a narrative tracing the lives of Ana and Derek as they care for primitive artificial intelligences called digients. At its heart, the tale speaks to the complex relationship between society, individuals, and emerging technology. Poignant and compelling, the story raises important questions around responsibility, consent, and choices on the path of AI development. 

This interview has been edited for clarity.


Transcript:

Olga Afanasjeva:

Why do you think we’re not close to human-like AI at the moment? Could you expand a little on the problem of optimization and society’s fixation on it? You refer to it in some of your interviews – what is the problem with optimization, not just in society as a whole but in AI development? How do you see it?

Ted Chiang:  

There are a couple of informal laws that people have coined. One is Goodhart’s Law, which states that once a measure becomes a target, it ceases to be a good measure. Other laws express similar sentiments, like Campbell’s Law, which states that once you use quantitative social indicators for decision making, they become subject to corruption pressures and become poor indicators of the social processes they are meant to measure.

One commonly discussed example of this phenomenon is standardized testing: it’s intended to measure how good an education a school provides, but it loses its usefulness once it becomes possible for teachers to teach to the test. I think there’s an analogy here to a lot of what has happened in the history of AI.

A lot of AI programs are essentially standardized-test-taking machines; their entire development was a form of teaching to the test. In the history of AI, people have often said that people keep moving the goalposts for what qualifies as AI, that as soon as AI solves a problem, critics say, well, that wasn’t really AI, real AI means doing this other thing. I don’t think that’s moving the goalposts. I’d say a more accurate way to think about it is that people are recognizing the ways in which AI programmers have been teaching to the test. People initially thought a certain task would be a good way to measure an AI’s ability and proposed it as a standardized test, but then AI programmers found a way to teach to the test. And so in this sense, we aren’t moving the goalposts; we are just trying to identify a test that the programmers have not already studied extensively. When people criticize the role of standardized testing in education, they’re not saying that tests are useless or that tests have no inherent value. What they’re saying is that our current tests are too easy to game.

A lot of AI research is built around teaching to the test — whenever you define an objective function, whenever you define a loss function, you are basically establishing a single, extremely well-specified test, and you are building a machine that will score high on that test. This insistence on optimization is a misguided focus on a test. The question is, how do you actually gauge whether a school is providing a good education? You need some sort of testing, but it has to be a test that is hard to teach to, and one of the ways that you might do that is to create a test which is very unpredictable. 
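
As a rough, hypothetical illustration of this “teaching to the test” dynamic (everything here – the task, the data, and the model – is invented for the sake of the example), consider a model chosen so that it scores almost perfectly on one fixed, fully specified test while doing much worse on fresh questions from the same underlying task:

# Minimal sketch: optimizing against a single, well-specified test
# ("teaching to the test") versus performing on unseen questions.
# The task, data, and model below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def true_task(x):
    # The real ability we care about, unknown to the optimizer.
    return np.sin(x)

# The fixed, well-specified test: ten known questions with known answers.
test_x = np.linspace(0, np.pi, 10)
test_y = true_task(test_x) + rng.normal(0, 0.1, size=test_x.shape)

# "Teaching to the test": choose the degree-9 polynomial that minimizes
# the loss on exactly those ten questions.
model = np.polynomial.Polynomial.fit(test_x, test_y, deg=9)

# Fresh, unanticipated questions drawn from the same task.
new_x = rng.uniform(0, np.pi, size=200)
print("loss on the fixed test:   ", np.mean((model(test_x) - test_y) ** 2))
print("loss on unseen questions: ", np.mean((model(new_x) - true_task(new_x)) ** 2))

The first number comes out near zero while the second is typically much larger: the model has mastered the test rather than the task.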

What would that look like in AI? The researcher Francois Chollet has proposed something he calls the ARC, the Abstraction and Reasoning Corpus. It’s a test where the developers know the format of the test, because they have to be able to code their AI to understand the inputs and deliver an output, but the developers have no access to the questions that the AI will be tested on. And that’s a really interesting idea; it seems like a type of test which traditional optimization techniques are not applicable to. I think it’s going to be very hard to define an objective function because all the developers will ever get is a final score back; they won’t know what the specific questions were that their AI either answered correctly or incorrectly. What they’ll have to do is come up with some more general problem-solving mechanism and hope that it works. And developers will probably try to optimize around the format of the inputs and the outputs, so it would be interesting if a lot of people started offering tests of this sort with very different input and output formats, because that would force the developers to generalize what sorts of inputs their program could accept and what sort of outputs it could generate. If you had a lot of these tests, you might have a really good indicator of an AI’s general problem-solving ability. You would have to, by analogy, provide your AI with a really good education, the kind we want schools to provide, and then hope they have the skills to accomplish whatever task is thrown at them. 
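
As a toy sketch of the kind of setup described here (not the actual ARC format – the task encoding, the helper names, and the example tasks below are all invented), the evaluator holds the hidden questions and only ever returns an aggregate score, while the developer knows just the input and output format:

# Toy harness in the spirit of a hidden-question benchmark: the solver
# knows the I/O format (small integer grids) but never sees the hidden
# tasks, and the developer only gets back a single score.
# All names and tasks here are hypothetical.
from typing import Callable, List, Tuple

Grid = List[List[int]]
# A task: a few demonstration (input, output) pairs, a test input, a test output.
Task = Tuple[List[Tuple[Grid, Grid]], Grid, Grid]

def evaluate(solver: Callable[[List[Tuple[Grid, Grid]], Grid], Grid],
             hidden_tasks: List[Task]) -> float:
    # Run the solver on tasks it has never seen; report only the aggregate score.
    correct = sum(
        1 for demos, test_in, test_out in hidden_tasks
        if solver(demos, test_in) == test_out
    )
    return correct / len(hidden_tasks)

# Developer's side: a solver that must generalize from a few demonstrations,
# with no access to the evaluator's question bank. (Trivial baseline: guess
# that the transformation is the identity.)
def solve(demos: List[Tuple[Grid, Grid]], test_in: Grid) -> Grid:
    return test_in

# Evaluator's side, kept private in a real benchmark (here: "reverse each row").
hidden = [
    ([([[1, 0]], [[0, 1]])], [[2, 3]], [[3, 2]]),
    ([([[0, 5]], [[5, 0]])], [[7, 0]], [[0, 7]]),
]
print("score:", evaluate(solve, hidden))  # identity baseline scores 0.0

Because the developer only ever sees the final score, there is no per-question loss to optimize against; the pressure shifts toward building a more general problem-solving mechanism.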

More broadly, there is the question of viewing things in terms of an objective function or a loss function, which leads to a kind of all-consuming or totalizing way of thinking. It can be very tempting to think of the world as just an optimization problem to be solved: if we could just define the appropriate objective function, we could solve everything in the world. I am super skeptical about that. I don’t think that most aspects of life can be accurately characterized as an optimization problem. 

Right now, we as a society have already collectively arrived at a certain objective function, which is profit. A lot of people have internalized the idea that profit is the ultimate good: if something generates the maximum profit, that is by definition the best outcome. I think that’s why AI and capitalism fit together really well – they share this underlying worldview. 

The goal of profit is so pervasive that it’s hard for us to shake, and the people who resist that idea face an immense amount of pressure to conform. I’m not saying that everyone in AI is working to maximize profit, but they are following the same underlying worldview, the idea that once you have defined your objective function correctly, which often means pricing things appropriately, you will know how to obtain the best result. However, I would maintain that most of the outcomes that we want are not the result of maximizing an objective function. What is a good school? What is a good healthcare system? What is a good public transit system? What is a good society? What is a good life? The idea that these can be achieved by maximizing an objective function is not a healthy worldview.

Olga Afanasjeva:

Right. One thing I discussed with my colleague, related to that, was about defining the goal as a function you can optimize – let’s say it’s happiness. As humans, we adapt to whatever our best possible state is: we feel happy about something, but after a while it becomes the baseline, right? He said something that I really liked – that it’s probably not about reaching the objective, but about moving along the gradient, that change of state from feeling miserable to feeling better. This is what’s actually going to make us truly happy. What are your thoughts on that – moving along the gradient rather than optimizing for, or reaching, the objective?

Ted Chiang:  

That’s an interesting idea; I’ll have to think about that. It seems like it could be formulated as its own kind of objective function that you would then optimize for. It would be interesting because it clearly wouldn’t be maximizing a conventional objective function like profit; before long, you would have to abandon profit and move in a different direction.

So there would be this constant shifting of the goal, or of the quantity you’re trying to optimize. Would that make people happier? I think that’s something that deserves experimental investigation. We could learn a lot just through empirical testing. Are people actually reliably happier over the long term if they keep seeking out this gradient, this shift? That is definitely an interesting idea that warrants further study.

Olga Afanasjeva:

Okay. Coming back to the objective function: as Chollet says in his paper, the test becomes meaningful when the developer isn’t asking a specific question and doesn’t know the specific question the AI is supposed to answer. If the developer doesn’t know the question the AI should answer, doesn’t know the goal, or presumably doesn’t know all the possible future goals we would want the AI to solve – what questions do we as developers need to ask ourselves when we develop AI? What kind of features should we seek in AI? Where would you start? Where would you look for inspiration?

Ted Chiang:  

My personal preference has always been to look to biological models and the ways in which animals demonstrate intelligence. Animals are not like humans, but they are constantly solving problems, oftentimes problems which they have never encountered before. That sort of general problem-solving ability – even if it’s not on the same level as human problem-solving ability – seems like a good place to start. 

There were recent experimental results published where researchers taught mice how to drive – did you see this? They put the mice in little carts, and inside each cart were contacts; when a mouse put its paws on them, the cart would go forward or turn left or right. The mice could see a food dispenser and tried to steer their carts toward it, and the scientists found that the mice became quite good at driving.

The mice were trained three times a week for eight weeks, and after that they were proficient, so that’s twenty-four trials. Twenty-four trials and they learned a skill which no mouse has ever encountered before in the evolutionary history of the species. I thought that was really impressive. And more recently someone has apparently taught fish to do something similar; I was really surprised that fish are capable of that. Obviously these are not radically unfamiliar problems for animals, because they are still navigating physical space and seeking food as a reward, so it’s not as if they’re proving the Pythagorean theorem, but they were able to apply their general learning skills to a situation unlike anything seen in their natural environments.

Do we have any program that could do anything like that, that could learn a new skill after twenty-four trials? Most AI programs need more like twenty-four million trials. Personally, I would be much more impressed if AI researchers built a program that could learn, within twenty-four trials, a skill the developers had never thought of. That would impress me more than AlphaGo or AlphaZero. I feel like it would involve a major breakthrough in our ability to implement a generalized learning mechanism.

Olga Afanasjeva:

Yes, for us, we look at it from the perspective of self-improving AI. What you just described, a lot of researchers would call learning to learn. And I think that is basically a form of self-improvement that we find in humans and in human intelligence. We can learn how to learn better; we obviously have intrinsic, or innate, capabilities that allow us to learn in the first place, and we build new things on top of what we have already learned. This makes us more powerful learners.

It brings me to one of your thoughts about self-improvement, and correct me if my quote is not precise: that self-improvement isn’t really possible on the level of a single individual – which is why we don’t really have to worry about runaway doomsday scenarios in AI. Rather, it’s a feature that is powerfully demonstrated in collectives, on the level of societal cultures. Can we talk a little about the mechanisms behind this powerful cumulative cultural learning that happens on the level of societies and civilizations, which allows us as humanity to self-improve? What can we learn from that as AI developers?

Ted Chiang:  

I should say upfront that the whole topic of collective learning is not something that I am super well-versed in and is something that I’m hoping to learn more about by attending this workshop. 

It seems to me that the big advantage that humans have in collective learning is that we have language and that we are able to communicate to others what we have learned. Language isn’t an absolute requirement because individuals can teach each other through demonstration, and we see this in social animals to some extent, but we haven’t seen societies of animals ratchet upward over time in their capabilities. We’ve seen a few skills spread through a population, but the process doesn’t continue indefinitely, and animals’ lack of language may be a gating factor, because language is closely tied to the capacity for abstract thought. 

To what extent could we envision AI agents engaging in a similar form of collective learning? If you could design AI agents that were fully capable of communicating in language, I think you would definitely see collective learning. But short of that, if you had AI agents that were able to demonstrate things to each other in a way that other agents could pick up very rapidly – much more readily than they could by just messing around by themselves – I think that would be a setting suitable for collective learning. But it seems to me that those are very difficult tasks.

This ties back into what I was talking about earlier about the unpredictability of tests. One of the things that characterizes human language is that you can express anything in it. Something similar is true of physical demonstration: you can’t enumerate all the things that one person can demonstrate to another, any more than you can enumerate all the things that one person can express to another using language. I feel like this endless capacity for expression is a big part of what enables collective learning and the ever-growing sophistication of culture among humans. The open-endedness of these communication methods is crucial.

If you were trying to design AI agents that convey information to one another through some mechanism, it seems likely that they would have a much more limited communication medium, because that’s the only thing we know how to implement. We don’t know how to implement AI agents that generate novelty in their utterances. But I could imagine that, with a simplified communication mechanism, you could create a sort of society of AI agents that did exhibit collective learning in the way we see it in troops of monkeys, where one agent discovers the equivalent of, for example, washing potatoes in water to get the sand off and then teaches everyone else. That would be impressive. That would, I think, be a really interesting and momentous development in AI.

Olga Afanasjeva:

Yes. Especially if we can figure out how to make sure that this kind of cultural effect is cumulative. Another interesting aspect of collectives is emergence. Even when we were talking earlier about goals: in a collective, for example, there isn’t one single entity that sets a goal for everyone to optimize. It depends on your beliefs, of course, but say we look at a society as an organism that produces different types of goals – sometimes they conflict, they influence one another – there’s no need for anyone to actually impose a goal, and yet meaningful goals can emerge. Silly goals can emerge as well, obviously, but somehow we end up making progress along the way. So this is interesting: the emergent property of collectives, and what’s at its core.

Ted Chiang:  

Yes, yes, and again, it’s always the element of surprise. The behavior that no one expected or predicted, or was trying to achieve – that is a really powerful indicator of the kind of intelligence I find really interesting. It is not simply that the program performs better than you expected on a certain test. It is that it’s doing things which never occurred to you that it might do, things to which your prior experience – any test that you might have thought to devise – would not be applicable.

We could see novelty like that in, say, a system of AI agents. These agents would not necessarily be useful or commercially viable or practical in any sense, not for a very long time. But a breakthrough like that would be really, really exciting. I think it would be much more interesting than the high-performing but extremely predictable neural-net achievements that we see talked about a lot these days.

Olga Afanasjeva:

I agree, Ted. Thank you so much. This is a beautiful note on which we can conclude. Thank you.

Ted Chiang:  

Thanks for having me.


Workshop: 

Cells to Societies: Collective Learning Across Scales

Date: 

April 29, 2022

Website:

https://sites.google.com/view/collective-learning/home

References:

https://en.wikipedia.org/wiki/Ted_Chiang

https://subterraneanpress.com/the-lifecycle-of-software-objects

