
A Conversation with Michael Levin

March 04, 2022

Michael Levin is a Distinguished Professor in the Department of Biology at Tufts University and serves as director of the Tufts Center for Regenerative and Developmental Biology and the Allen Discovery Center at Tufts. He speaks with GoodAI Senior Research Scientist Jan Feyereisl about the philosophical foundations of his work, which include theories of cognition asserting that the goal-seeking behavior associated with the “self” is apparent at all scales of life.

In a conceptual shift away from the view of the brain as the cognitive engine of the body, Levin proposes that every level, from molecular networks to cells, tissues, organs, organisms, and swarms, possesses its own goals and agendas. The flexibility, plasticity, and robustness of biological systems, he argues, arise from this multi-scale competency architecture.

Levin’s research focuses on the molecular mechanisms that cells use to communicate with one another in developing embryos, and aims to harness these bioelectric dynamics for the rational control of growth and form; the language medium is bioelectricity. Xenobots – synthetic life forms created in his lab from the skin and heart-muscle cells of frogs – offer one proof of concept for this view of the world: they demonstrate the ability of organisms to grow and regenerate through the careful direction of goal-seeking behavior.

His upcoming talk at the ICLR workshop on collective learning, “The Wisdom of the Body: Evolutionary Origins and Implications of Multi-Scale Intelligence,” offers inspiration for new machine learning and robotics approaches.

This interview has been edited for clarity.


Transcript:

Jan Feyereisl:

Welcome, everyone. Welcome, Mike. Thank you very much for joining us. Just to introduce myself, my name is Jan Feyereisl. I’m a research scientist at GoodAI, a research lab based in Prague in the Czech Republic, focused on building artificial general intelligence systems. It was founded by Marek Rosa, who also founded a games company that built a game called Space Engineers, whose mantra is “the need to create.” Together with our friends and colleagues from Google Research, we are organizing a workshop at ICLR this year called Cells to Societies: Collective Learning Across Scales.

This discussion is meant to get people interested in the workshop and its speakers, and to let the audience, potential participants, and anyone else interested in these topics learn a little more about them and about the work of the invited speakers. One of those speakers is here with me today: Professor Michael Levin. Welcome.

Michael Levin:

Thank you so much. Happy to be here.

Jan Feyereisl:

Thank you. Let’s start. So in some sense, I view your work – and apologies if I misrepresent it, you can correct me if I’m wrong – as looking at the fundamentals of how elementary biological units communicate with each other in order to give rise to some form of collective or group-wise learning behavior. You’re doing it at a level that is really exciting to look at, because it essentially allows you to communicate with biological systems in a high-level language, which lets you do things that traditionally were not possible in biology. Instead of micromanaging, you can focus on pinpointing high-level structures or subroutines that will do the work for you. And the way that I understand it, this relates to bioelectricity – in other words, the way that cells communicate with each other.

So my first question relates to this bioelectricity. Would you say that there is some kind of fundamental process in this bioelectric communication that is shared across all cells in a living organism? And maybe one from which neural communication – which in machine learning we focus on, perhaps too much – originated?

Michael Levin:

Let’s take one small step back. I just want to point out that you’re absolutely right – in terms of our empirical work, that’s what we do. We study how cells communicate with each other to scale up into larger kinds of systems. But more broadly, what I’m interested in is diverse embodiments of mind and intelligence. I want to understand how different configurations of physical objects can give rise to things that we recognize as intelligence, cognition, memory, learning, preferences, and so on. And I think that evolution discovered, long before we did, that electricity and electrical networks are a really convenient way of doing that.

And I’ll talk about the details of that, but I think it’s important to say that I don’t believe in any kind of privileged substrate for intelligence. I don’t think that bioelectricity is somehow magical in the sense that it’s the only way you can get intelligence. I don’t think that neurons and brain synapses are in some way uniquely required for intelligence. I think that intelligence and cognition can be made with many different substrates.

Now here on Earth, we have a few good examples. Most of them do, in fact, involve bioelectricity. But I don’t think that’s because it has to be this way. I think there are some very wide spaces of possibilities for how to embody intelligence. And I think you’re absolutely right about what matters here: the way that cellular collectives get together to have goals, preferences, all of these things that we recognize as cognition above the level of single cells. There’s nothing really specifically neural about it. So most of the paradigms of, let’s say, connectionist types of ideas in machine learning and so on – they’re not really about neurons. Every cell does the kinds of things that the network models are doing. So there’s nothing very neuron-specific about it. And that’s good, because it’s not even easy to say what a neuron is. Neurons evolved from much more primitive cell types. Cells have been using electrical communication to form networks since the time of bacteria and bacterial biofilms. It’s that old. Every cell does many of the things – in fact, most of the things – that neurons do.

Jan Feyereisl:

Thank you. So, if you take it a little further, further away from biology: do you believe there is any indication that there might be some universal rule or mechanism that could describe cognition and intelligence across many more scales than just the biological ones, going from physics all the way to societies and the universe as a whole?

Michael Levin:

Well, I can say this. We’ve been thinking pretty hard about a way to come up with some invariants, something that’s going to be the same in all cognitive, intelligent systems, regardless of their implementation, regardless of what they’re made of, and regardless of their origin story, whether they’re evolved, engineered, or some combination of the two. I think this is really important because in the past – due to the limitations of technology and, frankly, our imagination – it was easy to distinguish between machines and intelligent living organisms.

People to this day are writing papers on how living things are not machines and so on. And the thing is that this was because in decades past, you could look at something and you could sort of knock on it and if you hear a metallic sound, you could conclude that, okay, this is a machine. It came out of a factory. It’s going to be pretty boring. It’s not going to have any of the features we associate with living things. Whereas if you touch it, and it’s soft, and squishy, you say, okay, this is probably evolved, we owe it some ethical consideration, and it will do interesting things and so on. 

That distinction is completely artificial. It’s not going to survive the coming decades, because of evolutionary techniques used in engineering and because of chimeric technologies in bioengineering. There is absolutely no firm line to be drawn between these things. So that means that going forward into the future, we really have to establish categories that are deep, not ones that merely reflect the limitations of what happened to come out of evolution, or what we could engineer at the time.

So to me the most profound invariant for all cognitive systems is the ability to pursue goals. And of course, I’m not the first person to say this – Rosenblueth and Wiener* had a nice paper in the 1940s talking about the spectrum of goal-directed activity as a marker for cognition. So this is an old idea. But I’ve formalized it more biologically in recent papers, talking about this kind of goal space for any system, whether it be evolved, designed, natural, artificial, alien, whatever it’s going to be.

You can imagine the spatio-temporal scale of the largest goal a system can possibly pursue. This demarcates a kind of cognitive horizon beyond which the system cannot think. So if you’re a bacterium, the only goals you can really pursue are very small in space and time – they’re local. Let’s say local sugar concentration. Maybe you have a few minutes of memory going back, maybe a little predictive power going forward. But that cone, that light cone, is really quite small.

Whereas if you’re, let’s say, a dog, then you might have some pretty good memory extending backwards and pretty good capacity going forwards. But your cognitive system is simply not going to be able to represent and care about states that are going to happen three months from now, 20 miles over; it’s just not going to happen. And if you’re a human, you have perhaps enormous-scale goals, maybe even longer than the human lifespan, which leads to interesting psychological issues, right?

We’re the first creature that can actually conceive of goals that are guaranteed to be unachievable – things that take longer than the human lifespan. So I think that strategy of trying to map out the shape of the biggest goal a system is able to pursue – and is able to be stressed by if that goal isn’t met – gives you shapes in this virtual goal space that allow us to compare across really diverse systems. It doesn’t matter what they’re made of, right?

So at that point, you are now free to consider all kinds of unusual embodiments for agents. You can think of AIs. You can think of very small things like molecular networks. You can think of societies. You can think of gravitational synapses. You can come up with all sorts of interesting things. And all of them can be placed somewhere on this kind of diagram next to each other no matter what they’re made of.
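One way to make that comparison diagram concrete, purely as an illustrative sketch (the class, the numbers, and the goal_volume measure below are all invented for illustration; none of it comes from Levin’s papers), is to characterize each agent by the spatial extent and temporal horizon of the largest goal it can pursue:

```python
from dataclasses import dataclass

@dataclass
class CognitiveLightCone:
    """Spatio-temporal scale of the largest goal an agent can pursue."""
    name: str
    spatial_extent_m: float   # how far in space its goals can reach (metres)
    past_horizon_s: float     # how far back its memory usefully reaches (seconds)
    future_horizon_s: float   # how far ahead its goals can extend (seconds)

    def goal_volume(self) -> float:
        # A crude scalar for comparing very different agents:
        # spatial reach times the temporal window the goal spans.
        return self.spatial_extent_m * (self.past_horizon_s + self.future_horizon_s)

# Illustrative placements only, not measurements.
agents = [
    CognitiveLightCone("bacterium", 1e-5, 60.0, 120.0),   # local sugar gradients
    CognitiveLightCone("dog", 1e3, 3e7, 9e4),             # territory, about a day ahead
    CognitiveLightCone("human", 1e7, 2e9, 1e10),          # goals beyond a lifespan
]
for agent in sorted(agents, key=CognitiveLightCone.goal_volume):
    print(f"{agent.name:10s} goal volume ~ {agent.goal_volume():.3g}")
```

The point is not the particular numbers but that wildly different agents become comparable once they are projected into the same goal space.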

Jan Feyereisl:

That’s very interesting. So you’re saying that goals – the representation of goals, and the space of all possible goals available to a particular cognitive system, a particular intelligence – are what we should focus on in order to describe intelligent systems across scales?

Michael Levin:

That’s correct. 

Jan Feyereisl:

So maybe the next question is related to where goals come from. Just as in machine learning or AI: if you want to build agents that are, in some sense, open-ended, able to develop themselves, or evolve for a very long time ahead – how do we actually create systems that are able to invent their own goals and continue to do something useful? Any suggestions on where to look for the origins of goals?

Michael Levin:

Yeah, this is a very profound question. And I think we have to be humble about the fact that we can only begin to sketch the answer. So I’m certainly not going to claim I have all the answers, but I can say a few things about how we’ve thought about it.

In biology, the common story is that goals, like everything else, are – if they exist at all – shaped by evolution, by selection, according to the standard paradigm. So there were some particular environments that shaped your ancestors’ goals, and thus they shape your goals. And some of those goals are emergent, in the sense that we have to understand that genetics doesn’t specify the organism, and it doesn’t specify the behavior of the organism. It specifies the micro-level hardware that is available.

And then the behavior of that hardware, in terms of physiology and behavior and learning and so on, is what gets selected upon. So that’s the standard story – that goals come from selection. But I think in many ways this story is highly incomplete. And lots of people have said this before me. I’ll just give you a recent example from our group. We have been working on this thing which we call xenobots*.

You take an early frog embryo. And without adding anything to it – no new genes, no nanomaterials, nothing like that – you simply take off some of the skin cells and put them in a separate environment. What you’ve done is take away the constraints that normally tell those skin cells to have this very boring two-dimensional life on the outside of the embryo, keeping out the bacteria, and you allow them to sort of reboot their multicellularity. And you say: okay, without the constraint of all these other cells, what do you want to be? What do you actually want to be?

And it turns out that there are many things these cells could have done. They could have separated and crawled away. They could have made a flat monolayer, like a tissue coat, like a cell culture. They could have died – many things they could have done. Instead, what they do is get together and make this little creature that is motile.

So it swims along. It’s completely self-powered, self-motivated. It swims along and has all kinds of behaviors. It can regenerate. And one of the things it can also do, if it’s provided with loose cells in the medium, is what we call kinematic self-replication. That’s the kind of von Neumann-style replication where they go around, collect the cells into little piles, and those piles become the next generation of xenobots. And then those go and do the same thing.

Now, what’s important about that is: where do the goals of the frog embryo come from? Well, for millions of years, they were selected by the need to be a good frog in a froggy environment and have high fitness and all of that. But there has never been any selection to be a good xenobot; there have never been any xenobots.

And so what this means is that you take these cells and within 48 hours, they solve this problem of being a creature with a new set of components, right? They don’t have the typical things they would normally have. In particular, they don’t have any way of reproducing the way that frogs normally reproduce; we’ve made that impossible. In 48 hours, they figured out a completely new way to get the job done that, as far as I know, no other animal on Earth uses.

What did evolution actually learn when making the frog genome? It wasn’t just to make a good frog, that’s clear. We don’t know exactly what it learned. But what I think is clear is that evolution doesn’t produce specific solutions to specific environmental problems. It produces problem-solving machines – and I have some thoughts about what powers that ability – machines that can solve problems in other spaces. Which leads to the very profound question that you asked me: where do the goals of a xenobot come from? I don’t know. I think it’s clear that selection is not the entire story, maybe not even the main story. And I can speculate on some things. But the most important thing to say is that it’s a very profound question that is not explained by advances in genomics and things like that.

Jan Feyereisl:

Okay, it’s good to hear that we have a lot of potential exploration and interesting research to be done in this area. It interests me personally and relates to the work we do at GoodAI, as well as to that of many other researchers. In relation to machine learning and artificial intelligence, I think the really interesting question concerns the ability of biological systems to have somehow solved the problems of generalization and extrapolation. We are actually trying to build collective systems in our simulations, in our agents, and exactly as you said: rather than evolution – or in our case, our learning algorithm – trying to find particular solutions to specific problems or tasks, we focus on searching for the problem-solving machines, the learning algorithms, themselves. But many times those systems turned out to be too specific; they always, in some sense, ended up converging, getting stuck, or being too biased towards what they were trained on. So do you have any indication or understanding of how evolution was able to discover problem-solving machines that allow you to take something like skin cells, create a collective out of them, and have it work in completely different configurations, and in a probably rather different environment than the one they evolved for?

Michael Levin:

Yeah, so I guess I can say two things. And of course this is very much an open question. But the first thing is that it’s important to realize that the only key deliverable of evolution is making sure that its products are observable by some observer, like us. In other words, evolution doesn’t necessarily make more intelligent things, or more complex things. All we can assume will result from the process of biological evolution is biomass.

It’s going to produce something that sticks around long enough for us to see it, whether that be through survival, or through a long period of time, or reproduction. Or whatever it is, that’s really all. And so evolution is interesting because if that’s your only constraint – to be propagated long enough for somebody to observe you – it means that you’re not tied to solving any one particular problem. You can pick what problem you’re going to solve. 

So if you’re a bacterium – this is a point that Chris Fields made a while back – if you’re a bacterium in a concentration of sugar and you’d like to get more sugar, you have a couple of different options. You can learn to swim and solve this two-dimensional or three-dimensional movement problem, or you can change your metabolism and start to metabolize a completely different sugar. It doesn’t matter, right?

And so whereas we look at this and ask how you evolve to solve a motion problem or whatever, life can simply switch to a different problem space; there’s no requirement. So that’s one thing to keep in mind: by specifying individual problem spaces, we constrain our systems in a way that evolution is not constrained. Now specifically, what I think allows this is something that I call a multi-scale competency architecture.

It’s the fact that when we engineer new agents – whether through software or hardware – we typically have one, or at best two, levels of agency, because we’re working with dumb parts. You’re working with passive parts that you hope will come together to form something intelligent, and that requires us, as the engineers, to build everything, because we have to know how to assemble the parts in a particular way to get the outcome that we want.

This is extremely constraining. Biology, by contrast, is always working with active, agential material. At every level – the molecular networks, the cells, the tissues, the organs, the organisms, the swarms – every level has its own goals, has its own agendas, and they’re all solving problems in various spaces. And the final outcome that you see is the result of coordination and competition within levels and between levels.

Okay, so each level has its own goals. And I think that the flexibility, the plasticity, the robustness that we see come from exactly that fact. I’ll give you a simple example from evolution of how that works.

You can produce a tadpole – which we can do – where instead of the primary eyes in the head, there’s an eye on the tail. And if you make a tadpole like that, they can see perfectly well out of those eyes. Because even though the primordial eye cells are sitting in a weird environment, next to muscle instead of next to the brain, they’ll form a perfectly good eye. They make the optic nerve. The optic nerve might connect to the spinal cord, the whole thing.

The brain for millions of years expected visual data from a particular point in the head. But now there’s information coming into the spinal cord from some weird patch of tissue on the tail. No problem – it can learn to use that. And so that means the whole thing is incredibly plastic. If you can count on your modules, on your subroutines, to get their jobs done even when you make changes, that’s very powerful.

And why do the modules work? Because they can rely on their parts to get their job done when things have changed for them. Every piece of this, every module, is a goal-seeking module. And what that means is it tries to get to a certain state despite perturbations, and this is true at every level. So evolution always has to work with this kind of agential material.

When evolution makes a change, it’s not usually micromanaging what happens. It’s usually shaping what the underlying agents are going to do – the individual cells are going to do things regardless. So if you’re an embryo, it’s not just telling cells to make skin. It’s actually suppressing them from doing other things they would otherwise want to do on their own. It’s very much instructive: you are working in a reward space, not in a micromanagement space. Evolution has to work this way too, because all the parts always want to do things.

If you don’t coordinate them, they’ll go off and do other stuff. It’s much like with the xenobots: when we make these xenobots, the cells do all the heavy lifting. They make the bots; we don’t make the bots. All we’ve done is put them in a new environment. And what’s amazing is that when they reproduce, the cells themselves are doing exactly the same thing. All they’re doing is collecting the other cells into a pile, not micromanaging what happens next.

They’re taking advantage of the fact that these cells are also agents and that they will do their part and do what they need to do. So working with agential materials – the fact that every level of this thing has its own agendas – is what provides the robustness and flexibility, but also the open-endedness. Because when your parts have their own goals, in many ways it becomes easier, but in other ways it becomes harder, to control what’s going to happen next.

Jan Feyereisl:

Yeah, that makes perfect sense to me, because one of the things we have been investigating for some time is, compared to the way current machine learning models are built, focusing on modular and collective systems rather than on some kind of end-to-end, large, monolithic system. And if I understood correctly, one thing is the modularity and the collective nature, and another is the fact that each module in the system has some agency, some goal of its own, so it will be doing something no matter what, and it will be able to reach particular states or goals despite small perturbations. So there’s some level of robustness, and when you put those things together, you’re able to make the system much more interesting, much more robust, and much more open and generative, in some sense, compared to a single unit with a single goal.

Michael Levin:

Exactly, yeah. The higher levels distort the option space for the lower levels, and the lower levels are good at navigating those spaces – if anything, they avoid local minima or local maxima and things like that. But the idea is that all the higher levels can do – they never micromanage – is motivate: reward or influence the lower levels towards specific goals, but that’s all.

And then the lower levels get their own thing done, or not. Sometimes you get success and sometimes not. But the whole point of all this is to be able to scale goals, and I think also to scale stresses. So individual cells have very local, cell-scale goals: metabolic needs, pH, things like that. Very, very local things. But once you have this modular TOTE loop*, where you test, operate, test, and exit, you have a homeostatic loop. If it’s modular, you can plug in different things: what do you measure? What do you compare against? And what do you do?

Think of a simple homeostatic loop – thermostats are simple. If things are modular, you can plug all kinds of things into that loop. And cells can merge. When two cells are merged, when they’ve connected electrically, one of the things that means is that when they take a measurement of what’s going on, they take a bigger measurement. Instead of a single-cell scale, now there are two cells.

So if you have 100 cells, they’re taking an even bigger measurement. What that means is that, along with the IQ rise of having multiple cells in the network, you can now pursue much larger goals. Whereas individual cells pursue very humble, local goals, once you have a large network, that whole loop, the goals scale. And so now you can pursue things like: let’s make a finger, let’s make a hand.

No individual cell knows what a finger is, but the network can know. And the same thing goes for stress: individual cells are motivated in their activity by the stress that results from not being in their correct homeostatic state. So if you’re hungry, you get some stress – your metabolics are going down, you’ve got to eat – and this is a single-cell stress.

But once you’re in a network, and you can export that stress to other cells, you can share that information, then the other cells are motivated to act to reduce your stress, because you’re sharing it with them. You’re stressing them out: even though they don’t have a local problem, they have a global problem, because their neighbor is upset and is stressing them out. So stress is part of the glue that binds these collective agents together. Because by scaling the stress, you enlarge the cognitive space of the things that you are trying to implement. So take any system, tell me what it’s stressed by, and I can tell you what its cognitive level is.

If you’re stressed about local glucose concentrations, and that’s it, well, you’re probably a bacterium. If you’re stressed about the global financial markets and what’s going to happen 100 years from now to humanity, you’re probably a human. And if you’re stressed by how many other creatures have entered a space of some 100 meters, then you’re some kind of mammal that’s territorial. And if you’re stressed by the fact that your eye is in the wrong location, you’re probably an embryo trying to put its body together. So the scale of the things that you could possibly be stressed by is a great indicator of your overall intelligence. And that scales. These electrical networks help you scale your stresses, and thus they scale your goals.
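The TOTE loop Levin refers to (test, operate, test, exit) is easy to sketch in code. The fragment below is a minimal, hypothetical illustration – the names and constants are invented, and “merging” is modeled simply as pooling measurements – of how the same loop scales its goal with the size of the network:

```python
def tote_loop(measure, operate, setpoint, tolerance=0.05, max_steps=1000):
    """Test-Operate-Test-Exit: act until the measured state matches the goal."""
    for _ in range(max_steps):
        error = setpoint - measure()        # Test
        if abs(error) <= tolerance:
            return True                     # Exit: goal state reached
        operate(error)                      # Operate to reduce the error
    return False

class Cell:
    """A toy cell holding one scalar state, e.g. a voltage or nutrient level."""
    def __init__(self, state):
        self.state = state

cells = [Cell(0.2), Cell(0.9), Cell(0.5)]

# A merged network takes a bigger measurement: the mean over all members...
def network_measure():
    return sum(c.state for c in cells) / len(cells)

# ...and operates collectively, nudging every member toward the shared goal.
def network_operate(error):
    for c in cells:
        c.state += 0.1 * error

tote_loop(network_measure, network_operate, setpoint=0.7)
print([round(c.state, 2) for c in cells])   # all nudged until the mean is ~0.7
```

Because the loop only sees measure and operate, swapping in a network-wide measurement instead of a single cell’s is exactly the “bigger measurement” Levin describes.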

Jan Feyereisl:

That’s a really interesting viewpoint. Imagine you wanted to give some hints to engineers or machine learning researchers, people who want to engineer artificial life or intelligent agents. The first question is: what would you suggest they focus on? And second, related to this notion of stress: how would you suggest stress be encoded in such artificial systems? Would it be through minimization, or through some other metric or theme that you would suggest people focus on?

Michael Levin:

Yeah, I’ll give some thoughts. Obviously, I don’t have a final answer; this is something that we’re working on in my group, too. I think the most important piece of all of this is the multi-scale competency idea. The fact that every piece – I mean, you have to bottom out somewhere – but basically, every scale, every level in your system, has to be a goal-directed agent. And it has to have this kind of homeostatic loop, a sense of what it is trying to minimize and maximize.

And then the problem boils down to: how do we couple the goal states, the stresses, and the measurements that each individual unit takes with those of the others at its lateral level, right? You have to have this all the way down. And at the bottom level – if we take a cue from biology; I don’t know if this is essential or just how biology happens to do it – the lowest-level units have to compete for metabolic survival.

So in your system, the lower-level subunits have to be able to fail. If they’re not doing the right things, they’re not going to be rewarded by the next higher level. They’re going to literally die; they’re going to disappear. So they have skin in the game, and they have their own local goals. But because their space is being bent by the system above, they have to be able to cooperate towards fulfilling some of the goals of the higher level. And of course, the higher level has the same relationship with its own higher level, and so on. This is how I envision this multi-scale architecture working, and I think that’s the first step. Whether something else is going to be essential, I’m not sure, but I think this will be extremely powerful.
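As a toy rendering of that architecture – entirely hypothetical; the classes, update rules, and constants below are invented for illustration and are not taken from Levin’s or GoodAI’s simulations – a two-level system might look like this: low-level units pursue purely local goals, the higher level rewards the ones that happen to serve its larger goal, and units that go unrewarded run out of “metabolic” budget and disappear:

```python
import random

class Unit:
    """A low-level agent with its own local goal and a metabolic budget."""
    def __init__(self):
        self.state = random.random()
        self.energy = 1.0

    def act(self):
        # Pursue a purely local goal: drift toward this unit's preferred state.
        self.state += 0.1 * (0.5 - self.state) + random.gauss(0.0, 0.05)

class HigherLevel:
    """Rewards units whose behavior serves the collective goal; never micromanages."""
    def __init__(self, units, goal=0.8):
        self.units, self.goal = units, goal

    def step(self):
        for unit in self.units:
            unit.act()
            # Reward units that happen to end up near the collective goal state.
            unit.energy += 0.2 if abs(unit.state - self.goal) < 0.2 else -0.1
        # Unrewarded units run out of energy and disappear: skin in the game.
        self.units = [u for u in self.units if u.energy > 0]

level = HigherLevel([Unit() for _ in range(50)])
for _ in range(20):
    level.step()
print(f"{len(level.units)} of 50 units survived the higher level's selection")
```

Note that the higher level never sets any unit’s state directly; it only bends the reward landscape, which is the “reward space, not micromanagement space” idea in miniature.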

Jan Feyereisl:

So, in some sense, not focusing on one particular metric, but looking at the multiple scales jointly, at the relationships between them, with all of the details that you just talked about.

Michael Levin:

Yeah, that’s what we’re doing at this point. We’re making some of these multi-scale kinds of simulations, looking at different ways that each level can reward, punish, incentivize, and manipulate the levels below. In many ways, I love the field of machine learning because, even though I think it’s at a very nascent stage, it has a lot of concepts that are very useful for biology. So this idea of credit assignment is really key, right?

Because biology is amazing at credit assignment at every level. It seems to pick out exactly what I was doing when the good things happen, right? It seems to know. And so there’s this idea of connecting subunits in ways where the lower levels are rewarded for doing the things that the higher level wants – not micromanaged into doing them, rewarded for them. So that’s where all of the interesting search takes place: what are the policies for sharing that credit, for scaling up the stresses, for connecting the subunits? That’s where all the exciting progress is going to be, I think.
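One standard, hedged sketch of such a credit-sharing policy – this is an ordinary eligibility-trace scheme from reinforcement learning, used here purely as an illustration, not as Levin’s mechanism – lets each subunit keep a decaying trace of its recent activity and shares any later collective reward in proportion to those traces:

```python
DECAY = 0.9   # older activity counts for less

traces = {"unit_a": 0.0, "unit_b": 0.0, "unit_c": 0.0}   # eligibility traces
credit = {name: 0.0 for name in traces}

def record_activity(name, amount=1.0):
    traces[name] += amount                 # this unit was just active

def tick():
    for name in traces:
        traces[name] *= DECAY              # all traces fade over time

def global_reward(r):
    total = sum(traces.values()) or 1.0
    for name in traces:
        credit[name] += r * traces[name] / total   # share reward by recency

record_activity("unit_a"); tick()
record_activity("unit_b"); tick()
global_reward(1.0)        # the collective succeeds some time later
print(credit)             # unit_b, active more recently, earns more credit
```

Units active closer to the moment of success receive more of the credit, without the higher level ever inspecting what they actually did.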

Jan Feyereisl:

This is again something that’s very close to our heart, because currently we’re struggling with exactly this credit assignment problem in our collective system. We’re able to get it to do something interesting. But connecting it to some external world and having it interact with that world in a way that makes sense, where the right parts of the collective actually get sufficient feedback, is really tricky. So any kind of inspiration or ideas from biology, especially where it does this well, is always useful and helpful.

And then another interesting question or topic relates to whether we’re focusing on the right substrate for building our intelligent agents – whether we should focus a little more, not only on the ideas and algorithms that biology employs to do the amazing things it does, but also, as you have repeatedly mentioned, on the wonderful machinery and systems already available to us. If we look at them and communicate with them in the right way, they can do a lot of work for us. Should we focus a little more on actually interacting with, or interfacing with, biological systems and using their machinery and their ability to build things in order to create our artificial agents? What do you think about that?

Michael Levin:

I think there’s certainly a lot of opportunity for really interesting engineering by instrumentalizing biology. So all of the technologies that are coming online – hybridization technologies, brain-computer interfaces (which don’t have to involve brains), hybrids, cyborg types of biological robotics – all of these things that really tightly integrate designed components, software components, and living things at different scales, whether molecular computing or cellular computing. Yeah, I think lots and lots of great engineering is going to come from that, and I think it’ll be very interesting. We’re certainly involved in some of those things.

I think long term, the important thing to keep our eye on – and I think engineers do fine with this; biologists tend to go astray with it a little more – is that when you do this, it isn’t because the biology brings you some magic that is unattainable to engineers. It’s just a temporary expediency that we use. I think that’s important: there are a lot of biologists who have written about how living things are fundamentally different from machines, and so they build up these binary categories, which I think is completely unsupportable given chimerization.

We can make any combination of living things and, quote unquote, machines. And it’s impossible to put these things into any kind of binary category. So I think we have to realize there is nothing magical about living things. We are ultimately going to be able to reproduce in our engineered constructs whatever it is that we see living things doing, because they are the result of natural processes, not magic.

But that’s going to take a long time, and in the meantime, I think there’s a lot to be learned by including existing biological components in our engineering. And actually, one interesting question is why that is even possible, right? Why is it even possible to take living cells, make them live on some sort of microelectrode array, and make the whole thing play Pong or fly a flight simulator? Why is that even possible?

Biology is incredibly interoperable. These cells have to survive in many different environments. We can make chimeras where human cells live next to Drosophila cells and they do fine, and we can instrumentalize them, make them live next to nanomaterials and electrodes and in virtual worlds. Biology solves this kind of problem all the time.

Living in some sort of weird bioreactor is nothing new for these cells compared to living inside an organism, where they solve exactly the same problems: Who are my neighbors? How do I make them do good things for me? What are they making me do? Do I want to do these things? Or am I better off becoming a cancerous cell and trying to go off on my own?

These are trade-offs that cells make all the time. And they’re not at all surprised when we confront them with weird materials and whatever. So I think from that perspective, there’s a ton of good biology and engineering to be learned by making these kinds of hybrid constructs. But we shouldn’t pretend that there’s some sort of magic that we’re going to be forever barred from if we don’t use biology.

Jan Feyereisl:

Makes sense. Makes sense. Okay. One more question, maybe a little more distant from your work, but I was wondering about your thoughts on this as well. If we talk about the different scales of cognition and intelligence, one of the things we also find fascinating is cultural evolution and its cumulative nature.

And in one of your works, you mention the importance of gradualism, of the way systems gradually build on each other. Do you have any views on whether those things are connected – whether, again, it’s fundamentally due to some underlying mechanism, whether this multi-scale competency idea translates even to the level of culture and the way we teach our children and shape the environment around us, and so on? Any thoughts?

Michael Levin:

I think the multi-scale competency thing is really fundamental here, in that the first thing we need to do is realize that we are very bad at detecting agency. To be clear, we are great at detecting one very specific type of agency. All of our senses – and this is true of most living things – point outwards, and they measure things in the three-dimensional world. So from the time we’re very small babies, we see objects moving around, we learn a theory of mind, and we learn to assign some agency.

Okay, this is a bowling ball, and all it’s going to do is roll down the hill subject to the laws of physics. This thing is a mouse, and it’s going to do some very different things that I can’t predict in the same way; I’m better off using rewards and motivations and other things if I want to manipulate what the animal does. All of that makes us good at detecting agency in the three-dimensional world.

But imagine, for example, that we had senses that looked inwards – let’s say a biofeedback sense that told you what your pancreas was doing at any point in the day. If you had direct perception of all the things happening to your inner organs, and what they did as a consequence, we would have no problem recognizing intelligence navigating physiological space.

You would say: wow, I see this thing learning. For the last two weeks, every day at lunch, I ate this particular thing, and now it’s anticipating; it’s able to crank up certain enzymes because it knows that the same thing is coming. Here’s how we dealt with a novel poison that was introduced into my system. So if we had these other training sets, we would be better at recognizing intelligence in weird spaces – physiological space, anatomical space, all these other kinds of spaces.

So what I think we need to realize is that, because of this, we are only good at recognizing goal-directed systems in a very narrow range: roughly medium-sized like us, on roughly the same timescale as us. There we are good. Outside those ranges we are very bad; we don’t have any practice thinking about goal-directed systems at the scale of the whole evolutionary process.

For example, a whole lineage. Do lineages have goals – not in a magical, religious sense, but in the sense of attractors in a space, where even though it’s noisy, there’s a certain region of the possible space that it’s going to try to get into, right?

Or, if we could zoom into individual cell activity during embryogenesis and see all the noise, all the cells running around, we would never in a million years be able to predict that, oh yeah, this is always going to make a fish embryo, every single time. If you didn’t already know what development was, you could never tell from looking at individual cell behaviors.

So that means that both above us – meaning social-level structures and the like – and below us – meaning your body’s organs and cells and other things – there are tons of systems with diverse levels of agency, different goal-directed activities, different levels of IQ. We’re blind to most of that. And so the first lesson I take away from this is that you cannot tell what the agency or intelligence level of a particular system is by philosophy.

You can’t sit there in your armchair and say: that can’t be, it doesn’t have a brain, thermostats don’t have preferences. You can make these philosophical pronouncements, but they mean nothing. What’s important is experiment, and asking what kind of model along that spectrum is the most effective at predicting and controlling what’s going to happen.

So the structures of which we are part – let’s say social structures, the Internet of Things, who knows what else – may well have certain degrees of goal-directedness that we don’t appreciate, any more than our cells appreciate the goals that you and I have as emergent humans that come out of those cells.

These larger structures may be very primitive, and their goals may be almost non-existent or very minor, or they may be quite significant. We can’t assume anything. We have to be open to experiment, and I’m sure there’s some sort of Gödelian limitation on what we can tell: just as cells can’t really conceive of our goals, I’m sure there are larger systems that we could be part of whose goals we can’t even begin to conceive.

But we have to be open to the fact that they may be there. And there may be mathematical tools to be developed to ask: what type of system am I part of, and can we say anything statistical about what kinds of goals it might have? And then maybe we will have some degree of agency to say, I want to be part of that, or actually, I defect; I don’t want to be part of this. And of course, the higher system will see that the same way that we see cancer, right?

Yeah, that’s great for you, but I don’t want you to do that; I’d much rather you sit there as a nice piece of skin, and when it’s your time to fall off and die back, whatever – I’m the higher level, I don’t care. So there will be this tension between the desires and needs of us at one level and the levels above us. But we have to start developing tools to at least be able to imagine what these structures might be.

Jan Feyereisl:

Interesting. So, related to this, and maybe a little more specific – I hope we still have a bit more time – one thing that is also interesting to us relates to bioelectricity, but at a high level. Can you tell us your viewpoint on how groups are formed in a collective in such a way that they’re beneficial to the collective as a whole? And somewhat related: in your work there is this bioelectrical layer, which I view, in some sense, as software, some actual program code that runs on the hardware of the body or the biological system and that, for example, encodes a particular morphology. Where does that particular code come from?

Michael Levin:

Yeah. Let’s talk about the code for a minute. I agree with you, and I speak of it this way all the time: the bioelectric dynamics are a kind of software. And I think they share some important properties with what we think of as software. Biologists often get very upset about that kind of terminology, because they say: look, there’s no step-by-step linear algorithm; nobody sat down and wrote an algorithm; the cells aren’t taking formal instructions off of a stack somewhere to execute. People say it’s a terrible analogy.

But I think it’s important, first of all, to generalize the idea beyond the computers that we’re familiar with – of course, living things aren’t the kind of linear computers that we’re used to – because there are deeper aspects of reprogrammability and so on that are important. And the other thing about following an algorithm, or being a code, is that I think it’s entirely in the eye of the beholder.

In other words, I’m not sure there’s any objective fact of the matter about whether something is a code, whether something is following an algorithm, versus just being a dynamical system, some sort of analog computer. All of that is in the eye of the beholder. We get fooled because we have examples that we made ourselves.

And so somebody says: hey, I wrote the algorithm for this thing, and therefore this is a real digital computer following an algorithm; and this thing over here just looks like physics to me, I don’t see any algorithm – it’s evolved or whatever. I think the problem there is that we are fooled into thinking that there’s an objective answer to this, because we’re thinking of the very small class of things we made ourselves, where we think we know it’s an algorithm.

Imagine that we were given some sort of alien artifact, right? And let’s say it’s kind of squishy and sort of biological, but it’s putting out some signals. One person says: okay, all I see is physics and chemistry. It looks like a living thing. I don’t think there’s any algorithm here at all. And somebody else says: no, you don’t understand, this plays alien chess; it’s absolutely following an algorithm, and if I interpret the outputs correctly, we can have a lovely chess game. And the question is: who’s right?

Because you don’t have access to whoever made it; you don’t know where it comes from. Whether or not something is a kind of software is something we paint onto it as a metaphor that helps us understand it. I don’t think there’s any answer to whether living things really are codes, or machines, or anything else. As an observer in regenerative medicine, as an observer trying to understand evolution, I think that some of these computational metaphors are extremely useful in pushing things forward.

I think that’s the best we’re ever going to be able to say in science – that these metaphors are helpful. And if you have a better metaphor, by all means bring it out, and then we can abandon the older one; that’s fine. But for now, these are great metaphors. So I think there is a code. Where does it come from? In part, it comes from the fact that in every relationship like this, of scientist to object, there are two players involved, right?

There’s the system itself, but then there’s us. So part of this code comes from the observer; it comes from us, with a particular understanding of what a mapping is, what a code is – we bring that ourselves. Part of it comes from evolution and the fact that evolution figured out, around the time of bacterial biofilms, that voltage-gated current conductances – meaning ion channels that are voltage-gated – are transistors, and once you have that, you can make anything, right? Bacteria already have that. And so you can get these amazing brain-like electrical dynamics in bacterial biofilms that help them coordinate.

So part of it comes from that. And part of it comes from the laws of physics and computation – from the fact that when you make a machine with a particular set of properties, in a weird, Platonic, Pythagorean kind of view, you sort of manifest certain laws. I don’t know where these things hang; they live wherever mathematical truths live, and I don’t know where that is. But you get to make logic gates and things that function like truth tables, and so on.

If you can make a particular kind of machine – this is a basic functionalist idea: it doesn’t matter what the machine is made of; it doesn’t have to be biological, but evolution certainly discovered that type of dynamic – then you get to use these things. So the code has things like: well, if I make a particular electrical circuit, then I get to have memory, meaning that once the voltage is changed temporarily, I’m just going to keep that new voltage. You can make a flip-flop out of that very easily.

Or maybe the other way around: no, I’m extremely stable; you try to change my voltage, and I’m going to snap right back as soon as you’re done. Or something else that amplifies small differences, or something that solves problems like: how many cells are we? It’s a big problem in cellular automata to figure out how to make a rule that counts cells. Cells solve this all the time.

They have electrical networks that can count and can say: okay, we are the right size; now stop whatever you’re doing. So where do those laws come from? I don’t know – the same place that mathematics comes from, I guess. But it’s very clear that evolution exploits all of this by making machines that take advantage of it.
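Both behaviors Levin mentions – the circuit that keeps a new voltage and the one that snaps back – fall out of a single bistable dynamical system. The toy model below is a standard double-well gradient flow, a generic textbook construction rather than a model from Levin’s work:

```python
# Toy bistable 'voltage': dV/dt = V - V**3 is a classic double-well flow
# with stable states at V = -1 and V = +1 and a barrier at V = 0.
def settle(v, steps=500, dt=0.05):
    for _ in range(steps):
        v += dt * (v - v**3)
    return v

resting = -1.0                          # the circuit 'remembers' the low state
print(round(settle(resting + 0.5), 2))  # small nudge: snaps back to -1.0
print(round(settle(resting + 1.8), 2))  # big nudge crosses the barrier:
                                        # settles at +1.0 and stays (memory)
```

A perturbation too small to cross the barrier between the wells is forgotten; one large enough flips the stored state, which is exactly the flip-flop-like memory described above.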

Jan Feyereisl:

I find this really fascinating, because exactly the way that I view it – or the way that I understand how you describe it in your research – is that, yes, evolution gives you those parts. The genes are the hardware, and on that hardware there is some software, some code, that exists or emerges during the lifetime of the biological system. It has some memory; you can augment it and change it. And hardware that, for most of its evolutionary history, seemed to be used for one particular piece of software – you’re able to actually change that software and, within bounds, do a lot of different things.

So that’s super interesting to us. And yeah, I was wondering how that is possible and where the actual original code comes from. And I think you talked a lot about why it is robust, in terms of the multi-scale competency and many of the other things, so it’s super interesting. Okay, thank you very much.

I have a few lightning questions, if you don’t mind. These are aimed at people who are a little less versed in these topics, to interest them in coming over and joining the workshop. So the first question: can you give an example of something that truly surprised you in your research?

Michael Levin:

Boy, it’s hard to pick one. We’ve had many, and that’s why I love this field; I see surprising things all day long. But the most recent one is the xenobot replication: the fact that these skin cells, liberated from the rest of the animal, within 48 hours get together to make a motile creature that figures out how to use these agential materials, these other cells, as a way to reproduce themselves – the same way that we made the original xenobots. Just seeing that, knowing that to our knowledge it has never happened in the history of life on Earth, that no other lineage does that, and then seeing it appear within 48 hours in front of our eyes, was absolutely stunning to me.

Jan Feyereisl:

Thank you. And then the next question: what do you think researchers in machine learning and AI should really focus on when building intelligent systems, or ultimately when trying to get to something like artificial general intelligence? You mentioned the multi-scale competency idea, and so on. If you had to pick one thing and suggest where to start, where to investigate, what would you say?

Michael Levin:

Yeah, I think a really rich source of inspiration is life before brains. So look at all the fields of basal cognition; spend some time looking at protozoa. There are some great channels on YouTube, and various live streams that are just a microscope set up over a Petri dish of pond water. When you see those individual cells doing all the things that they do – there’s no brain, there’s no nervous system – you realize how competent each one of them is. And ask yourself: what would it take to get a few of them to cooperate on a much larger goal, like building a body? Right? We’re all bags of coupled amoebas, basically. And that, to me, is the key to the whole thing.

Jan Feyereisl:

Thanks. And then the last question: how important and relevant is machine learning to the outcomes of your work? If you would like to attract people to come and talk to you, to collaborate with you, what would you say about how the field of machine learning and AI relates to your work and your interests?

Michael Levin:

Yeah, it’s hugely important. We have a few machine learning experts who work in my group; we need a lot more, both to use machine learning as a tool to analyze the data that we have, and as an inspiration. I mean, we are all machines that learn, right? So we are examples of machine learning. What other kinds of machines can learn, and what are they learning? What concepts do we even need to begin talking about these things beyond the material that they’re made of? So yeah, absolutely: experts in machine learning are extremely important to us, and we’re open to conversations, for sure.

Jan Feyereisl:

Okay, thank you so much, Mike. It was really, really interesting. I learned a lot of interesting things and concepts, despite having read a lot about your work; there were many, many points that I learned from, so I’m really grateful and happy for that.

Michael Levin:

Thank you so much.

Jan Feyereisl:

Thank you so much, too. I guess we’ll see each other at the workshop. I’m really looking forward to it, and I hope a lot of interesting people come and join, and that many more interesting outcomes come out of it.


Original interview: 

https://www.youtube.com/watch?v=87vRvmkJ9o8

Workshop: 

Cells to Societies: Collective Learning Across Scales

Date: 

April 29, 2022

Website:

https://sites.google.com/view/collective-learning/home

*Studies and references mentioned in the interview:

    1. https://www.jstor.org/stable/184878 
    2. https://www.pnas.org/doi/full/10.1073/pnas.1910837117
    3. https://www.pnas.org/doi/10.1073/pnas.2112672118
    4. https://www.oxfordreference.com/view/10.1093/oi/authority.20110803105039783

 

