Some of the most profound questions in science are also the least tangible. What does it mean to be sentient? What is the self? When issues become imponderable, many researchers demur, but neuroscientist Gerald Edelman dives right in.
A physician and cell biologist who won a 1972 Nobel Prize for his work describing the structure of antibodies, Edelman is now obsessed with the enigma of human consciousness—except that he does not see it as an enigma. In Edelman’s grand theory of the mind, consciousness is a biological phenomenon and the brain develops through a process similar to natural selection. Neurons proliferate and form connections in infancy; then experience weeds out the useless from the useful, molding the adult brain in sync with its environment. Edelman first put this model on paper in the Zurich airport in 1977 as he was killing time waiting for a flight. Since then he has written eight books on the subject, the most recent being Second Nature: Brain Science and Human Knowledge. He is chairman of neurobiology at the Scripps Research Institute in San Diego and the founder and director of the Neurosciences Institute, a research center in La Jolla, California, dedicated to unconventional “high risk, high payoff” science.
In his conversation with DISCOVER contributing editor Susan Kruglinski, Edelman delves deep into this untamed territory, exploring the evolution of consciousness, the narrative power of memory, and his goal of building a humanlike artificial mind.
This year marks the 150th anniversary of The Origin of Species, and many people are talking about modern interpretations of Charles Darwin’s ideas. You have one of your own, which you call Neural Darwinism. What is it?
Many cognitive psychologists see the brain as a computer. But every single brain is absolutely individual, both in its development and in the way it encounters the world. Your brain develops depending on your individual history. What has gone on in your own brain and its consciousness over your lifetime is not repeatable, ever—not with identical twins, not even with conjoined twins. Each brain is exposed to different circumstances. It’s very likely that your brain is unique in the history of the universe.
Neural Darwinism looks at this enormous variation in the brain at every level, from biochemistry to anatomy to behavior.
How does this connect to Darwin’s idea of natural selection?
If you have a vast population of animals and each one differs, then under competition certain variants will be fitter than others. Those variants will be selected, and their genes will go into the population at a higher rate. An analogous process happens in the brain. As the brain forms, starting in the early embryo, neurons that fire together wire together. So for any individual, the microconnections from neuron to neuron within the brain depend on the environmental cues that provoke the firing. We have the extraordinary variance of the brain reacting to the extraordinary variance of the environment; all of it contributes to making that baby’s brain change. And when you figure the numbers—at least 30 billion neurons in the cortex alone, a million billion connections—you have to use a selective system to maintain the connections that are needed most. The strength of the connections or the synapses can vary depending on experience. Instead of variant animals, you have variant microcircuits in the brain.
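The selection Edelman describes can be illustrated with a toy simulation: a densely wired network in which synapses strengthen when their neurons fire together and weaken or are pruned when they do not. The sketch below is purely illustrative, assuming a simple rate-based Hebbian rule and an invented two-cue "environment"; it is not Edelman's model.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 40
weights = rng.uniform(0.05, 0.10, size=(n, n))   # dense initial wiring in "infancy"
np.fill_diagonal(weights, 0.0)

lr, decay, prune = 0.02, 0.005, 0.01

# Invented "environment": neurons 0-19 tend to be driven together by cue A,
# neurons 20-39 by cue B. Cross-group firing is uncorrelated.
for step in range(2000):
    cue = rng.integers(2)
    activity = np.zeros(n)
    group = slice(0, 20) if cue == 0 else slice(20, 40)
    activity[group] = (rng.random(20) < 0.5).astype(float)

    weights += lr * np.outer(activity, activity)      # fire together, wire together
    weights *= (1.0 - decay)                          # unused synapses weaken...
    weights[weights < prune] = 0.0                    # ...and are weeded out
    np.fill_diagonal(weights, 0.0)

within = np.count_nonzero(weights[:20, :20]) + np.count_nonzero(weights[20:, 20:])
across = np.count_nonzero(weights[:20, 20:]) + np.count_nonzero(weights[20:, :20])
print(f"within-group connections kept: {within}, across-group kept: {across}")
```

Run under these assumptions, connections within each co-activated group survive while cross-group connections decay away, which is the selective principle the answer describes.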
Before talking about how this relates to consciousness, I’d like to know how you define consciousness. It’s hard to get scientists even to agree on what it is.
William James, the great psychologist and philosopher, said consciousness has the following properties: It is a process, and it involves awareness. It’s what you lose when you fall into a deep, dreamless slumber and what you regain when you wake up. It is continuous and changing. Finally, consciousness is modulated or modified by attention, so it’s not exhaustive. Some people argue about qualia, which is a term referring to the qualitative feel of consciousness. What is it like to be a bat? Or what is it like to be you or me? That’s the problem that people have argued about endlessly, because they say, “How can it be that you can get that process—the feeling of being yourself experiencing the world—from a set of squishy neurons?”
What is the evolutionary advantage of consciousness?
The evolutionary advantage is quite clear. Consciousness allows you the capacity to plan. Let’s take a lioness ready to attack an antelope. She crouches down. She sees the prey. She’s forming an image of the size of the prey and its speed, and of course she’s planning a jump. Now suppose I have two animals: One, like our lioness, has that thing we call consciousness; the other only gets the signals. It’s just about dusk, and all of a sudden the wind shifts and there’s a whooshing sound of the sort a tiger might make when moving through the grass, and the conscious animal runs like hell but the other one doesn’t. Well, guess why? Because the animal that’s conscious has integrated the image of a tiger. The ability to consider alternative images in an explicit way is definitely evolutionarily advantageous.
I’m always surprised when neuroscientists question whether an animal like a lion or a dog is conscious.
There is every indirect indication that a dog is conscious—its anatomy and its nervous system organization are very similar to ours. It sleeps and its eyelids flutter during REM sleep. It acts as if it’s conscious, right? But there are two states of consciousness, and the one I call primary consciousness is what animals have. It’s the experience of a unitary scene in a period of seconds, at most, which I call the remembered present. If you have primary consciousness right now, your butt is feeling the seat, you’re hearing my voice, you’re smelling the air. Yet there’s no consciousness of consciousness, nor any narrative history of the past or projected future plans.
How does this primary consciousness contrast with the self-consciousness that seems to define people?
Humans are conscious of being conscious, and our memories, strung together into past and future narratives, use semantics and syntax, a true language. We are the only species with true language, and we have this higher-order consciousness in its greatest form. If you kick a dog, the next time he sees you he may bite you or run away, but he doesn’t sit around in the interim plotting to remove your appendage, does he? He can have long-term memory, and he can remember you and run away, but in the interim he’s not figuring out, “How do I get Kruglinski?” because he does not have the tokens of language that would allow him narrative possibility. He does not have consciousness of consciousness like you.
How did these various levels of consciousness evolve?
About 250 million years ago, when therapsid reptiles gave rise to birds and mammals, a neuronal structure probably evolved in some animals that allowed for interaction between those parts of the nervous system involved in carrying out perceptual categorization and those carrying out memory. At that point an animal could construct a set of discriminations: qualia. It could create a scene in its own mind and make connections with past scenes. At that point primary consciousness sets in. But that animal has no ability to narrate. It cannot construct a tale using long-term memory, even though long-term memory affects its behavior. Then, much later in hominid evolution, another event occurred: Other neural circuits connected conceptual systems, resulting in true language and higher-order consciousness. We were freed from the remembered present of primary consciousness and could invent all kinds of images, fantasies, and narrative streams.
So if you take away parts of perception, that doesn’t necessarily take away the conceptual aspects of consciousness.
I’ll tell you exactly—primitively, but exactly. If I remove parts of your cortex, like the visual cortex, you are blind, but you’re still conscious. If I take out parts of the auditory cortex, you’re deaf but still conscious.
But consciousness still resides in the brain. Isn’t there a limit to how much we can lose and still lay claim to qualia—to consciousness—in the human sense?
The cortex is responsible for a good degree of the contents of consciousness, and if I take out an awful lot of cortex, there gets to be a point where it’s debatable as to whether you’re conscious or not. For example, there are some people who claim that babies born without much cortex—a condition called hydranencephaly—are still conscious because they have their midbrain. It doesn’t seem very likely. There’s a special interaction between the cortex and the thalamus, this walnut-size relay system that maps all senses except smell into the cortex. If certain parts of the thalamocortical system are destroyed, you are in a chronic vegetative state; you don’t have consciousness. That does not mean consciousness is in the thalamus, though.
If you touch a hot stove, you pull your finger away, and then you become conscious of pain, right? So the problem is this: No one is saying that consciousness is what causes you to instantly pull your finger away. That’s a set of reflexes. But consciousness sure gives you a lesson, doesn’t it? You’re not going to go near a stove again. As William James pointed out, consciousness is a process, not a thing.
Can consciousness be artificially created?
Someday scientists will make a conscious artifact. There are certain requirements. For example, it might have to report back through some kind of language, allowing scientists to test it in various ways. They would not tell it what they are testing, and they would continually change the test. If the artifact corresponds to every changed test, then scientists could be pretty secure in the notion that it is conscious.
At what level would such an artifact be conscious? Do you think we could make something that has consciousness equivalent to that of a mouse, for example?
I would not try to emulate a living species because—here’s the paradoxical part—the thing will actually be nonliving.
Yes, but what does it mean to be alive?
Living is—how shall I say?—the process of copying DNA, self-replication under natural selection. If we ever create a conscious artifact, it won’t be living. That might horrify some people. How can you have consciousness in something that isn’t alive? There are people who are dualists, who think that to be conscious is to have some kind of special immaterial agency that is outside of science. The soul, floating free—all of that.
There might be people who say, “If you make it conscious, you just increase the amount of suffering in this world.” They think that consciousness is what differentiates you or allows you to have a specific set of beliefs and values. You have to remind yourself that the body and brain of this artifact will not be a human being. It will have a unique body and brain, and it will be quite different from us.
If you could combine a conscious artifact with a synthetic biological system, could you then create an artificial consciousness that is also alive?
Who knows? It seems reasonably feasible. In the future, once neuroscientists learn much more about consciousness and its mechanism, why not imitate it? It would be a transition in the intellectual history of the human race.
Do you believe a conscious artifact would have the value of a living thing?
Well, I would hope it would be treated that way. Even if it isn’t a living thing, it’s conscious. If I actually had a conscious artifact, even though it was not living, I’d feel bad about unplugging it. But that’s a personal response.
By proposing the possibility of artificial consciousness, are you comparing the human brain to a computer?
No. The world is unpredictable, and thus it is not an unambiguous algorithm on which computing is based. Your brain has to be creative about how it integrates the signals coming into it. And computers don’t do that. The human brain is capable of symbolic reference, not just syntax. Not just the ordering of things as you have in a computer, but also the meaning of things, if you will.
There’s a neurologist at the University of Milan in Italy named Edoardo Bisiach who’s an expert on a neuropsychological disorder known as anosognosia. A patient with anosognosia often has had a stroke in the right side, in the parietal cortex. That patient will have what we call hemineglect. He or she cannot pay attention to the left side of the world and is unaware of that fact. Shaves on one side. Draws half a house, not the whole house, et cetera. Bisiach had one patient who had this. The patient was intelligent. He was verbal. And Bisiach said to him, “Here are two cubes. I’ll put one in your left hand and one in my left hand. You do what I do.” And he went through a motion.
And the patient said, “OK, doc. I did it.”
Bisiach said, “No, you didn’t.”
He said, “Sure I did.”
So Bisiach brought the patient’s left hand into his right visual field and said, “Whose hand is this?”
And the patient said, “Yours.”
Bisiach said, “I can’t have three hands.”
And the patient very calmly said, “Doc, it stands to reason, if you’ve got three arms, you have to have three hands.” That case is evidence that the brain is not a machine for logic but in fact a construction that does pattern recognition. And it does it by filling in, in ambiguous situations.
How are you pursuing the creation of conscious artifacts in your work at the Neurosciences Institute?
We construct what we call brain-based devices, or BBDs, which will be increasingly useful in understanding how the brain works and modeling the brain. They may also be the beginning of the design of truly intelligent machines.
What exactly is a brain-based device?
It looks like maybe a robot, R2-D2 almost. But it isn’t a robot, because it’s not run by an artificial intelligence [AI] program of logic. It’s run by an artificial brain modeled on the vertebrate or mammalian brain. Where it differs from a real brain, aside from being simulated in a computer, is in the number of neurons. Compared with, let’s say, 30 billion neurons and a million billion connections in the human cortex alone, the most complex brain-based devices presently have less than a million neurons and maybe up to 10 million or so synapses, the space across which nerve impulses pass from one neuron to another.
What is interesting about BBDs is that they are embedded in and sample the real world. They have something that is equivalent to an eye: a camera. We give them microphones for the equivalent of ears. We have something that measures conductance for taste. These devices send inputs into the brain as if they were your tongue, your eyes, your ears. Our BBD called Darwin 7 can actually undergo conditioning. It can learn to pick up and “taste” blocks, which have patterns that can be identified as good-tasting or bad-tasting. It will stay away from the bad-tasting blocks, which have images of blobs instead of stripes on them, rather than pick them up and taste them. It learns to do that all on its own.
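The conditioning described for Darwin 7 can be sketched as value-dependent learning: a visual pattern comes to predict a taste value, and the device approaches only blocks whose predicted value is not bad. The pattern names, taste values, and update rule below are illustrative assumptions, not the actual Darwin 7 architecture.

```python
import random

random.seed(1)

# Hypothetical stand-ins for the block patterns and their innate taste values.
TASTE = {"stripes": +1.0, "blobs": -1.0}         # good-tasting vs bad-tasting

value_estimate = {"stripes": 0.0, "blobs": 0.0}  # what the device has learned so far
alpha = 0.2                                      # learning rate

def approach(pattern):
    """Approach (and taste) a block only if its predicted value is not negative."""
    return value_estimate[pattern] >= 0.0

for trial in range(50):
    pattern = random.choice(list(TASTE))
    if approach(pattern):
        taste = TASTE[pattern]                   # tasting delivers the value signal
        value_estimate[pattern] += alpha * (taste - value_estimate[pattern])

print(value_estimate)
# After a few bad tastes the device stops approaching "blobs" blocks on its own,
# while it keeps picking up and tasting "stripes" blocks.
```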
Why is this kind of machine better than a robot controlled by traditional artificial intelligence software?
An artificial intelligence program is algorithmic: You write a series of instructions that are based on conditionals, and you anticipate what the problems might be. AI robot soccer players make mistakes because you can’t possibly anticipate every possible scenario on a field. Instead of writing algorithms, we have our BBDs play sample games and learn, just the way you train your dog to do tricks.
At the invitation of the Defense Advanced Research Projects Agency, we incorporated a brain of the kind that we were just talking about into a Segway transporter. And we played a match of soccer against Carnegie Mellon University, which worked with an AI-based Segway. We won five games out of five. That’s because our device learned to pick up a ball and kick it back to a human colleague. It learned the colors of its teammates. It did not just execute algorithms.
It’s hard to comprehend what you are doing. What is the equivalent of a neuron in your brain-based device?
A biological neuron has a complex shape with a set of diverging branches, called dendrites, coming from one part of the center of the cell, and a very long single process called an axon. When you stimulate a neuron, ions like sodium and potassium and chloride flow back and forth, causing what’s called an action potential to travel down the neuron, through the axon, to a synapse. At the synapse, the neuron releases neurotransmitters that flow into another, postsynaptic neuron, which then fires too. In a BBD, we use a computer to simulate these properties, emulating everything that a real neuron does in a series of descriptions from a computer. We have a set of simple equations that describe neuron firing so well that even an expert can’t tell the difference between our simulation spikes and the real thing.
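The interview does not say which equations the Institute uses, but one widely cited set of "simple equations" that reproduces realistic spike trains is Izhikevich's two-variable model. The sketch below simulates a single regular-spiking neuron under that assumption; treat it as an illustration of the general approach rather than the BBDs' actual code.

```python
# Izhikevich's simple spiking model (2003): two coupled equations plus a reset rule.
# Assumed here as a representative example; the interview does not name the model.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking cortical parameters
dt = 0.5                             # integration step in ms

v, u = -65.0, b * -65.0              # membrane potential and recovery variable
spike_times = []

for step in range(2000):             # one second of simulated time
    t = step * dt
    I = 10.0 if t > 100.0 else 0.0   # injected current switched on after 100 ms

    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)

    if v >= 30.0:                    # threshold reached: emit an action potential
        spike_times.append(t)
        v, u = c, u + d              # reset after the spike

print(f"{len(spike_times)} spikes, first at {spike_times[0]:.1f} ms")
```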
All these simulations and equations sound a lot like the artificial intelligence ideas that haven’t been very successful so far. How does your concept for a conscious artifact differ?
The brain can be simulated on a computer, but when you interface a BBD with the real world, it has the same old problem: The input is ambiguous and complex. What is the best way for the BBD to respond? Neural Darwinism explains how to solve the problem. On our computers we can trace all of the simulated neuronal connections during anything the BBD does. Every 200 milliseconds after the behavior, we ask: What was firing? What was connected? Using mathematical techniques we can actually see the whole thing converge to an output. Of course we are not working with a real brain, but it’s a hint as to what we might need to do to understand real brains.
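A minimal sketch of the bookkeeping described: for each 200-millisecond window of a behavior, record which simulated neurons fired and which synapses carried spikes, so the path to an output can be traced afterward. The recorder class and its interface are hypothetical, not the Institute's actual software.

```python
from collections import defaultdict

WINDOW_MS = 200  # the 200-millisecond analysis window mentioned in the interview

class TraceRecorder:
    def __init__(self):
        self.windows = defaultdict(lambda: {"fired": set(), "synapses": set()})

    def record_spike(self, t_ms, neuron_id, downstream_ids):
        # Assign the spike to its window; note the neuron and the synapses it drove.
        w = int(t_ms // WINDOW_MS)
        self.windows[w]["fired"].add(neuron_id)
        self.windows[w]["synapses"].update((neuron_id, post) for post in downstream_ids)

    def report(self):
        # Answer "what was firing, what was connected?" window by window.
        for w in sorted(self.windows):
            data = self.windows[w]
            print(f"window {w * WINDOW_MS}-{(w + 1) * WINDOW_MS} ms: "
                  f"{len(data['fired'])} neurons fired, "
                  f"{len(data['synapses'])} synapses active")

# Toy usage: a few spikes spread over two windows.
rec = TraceRecorder()
rec.record_spike(50, neuron_id=1, downstream_ids=[2, 3])
rec.record_spike(120, neuron_id=2, downstream_ids=[3])
rec.record_spike(260, neuron_id=3, downstream_ids=[4])
rec.report()
```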
When are we going to see the first conscious artifact emerge from your laboratory?
Eugene Izhikevitch [a mathematician at the Neurosciences Institute] and I have made a model with a million simulated neurons and almost half a billion synapses, all connected through neuronal anatomy equivalent to that of a cat brain. What we find, to our delight, is that it has intrinsic activity. Up until now our BBDs had activity only when they confronted the world, when they saw input signals. In between signals, they went dark. But this damn thing now fires on its own continually. The second thing is, it has beta waves and gamma waves just like the regular cortex—what you would see if you did an electroencephalogram. Third of all, it has a rest state. That is, when you don’t stimulate it, the whole population of neurons strays back and forth, as has been described by scientists in human beings who aren’t thinking of anything.
In other words, our device has some lovely properties that are necessary to the idea of a conscious artifact. It has that property of indwelling activity. So the brain is already speaking to itself. That’s a very important concept for consciousness.
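One generic way to check a simulated population for the beta- and gamma-band activity Edelman mentions is to treat its summed activity as an EEG-like signal and inspect its power spectrum. The sketch below uses a synthetic signal as a stand-in for the model's population activity; it is an analysis illustration, not the Institute's code.

```python
import numpy as np

fs = 1000                                     # samples per second (1 ms resolution)
t = np.arange(0, 10, 1 / fs)                  # 10 seconds of simulated activity

# Synthetic stand-in: a beta-band and a gamma-band oscillation buried in noise.
signal = (np.sin(2 * np.pi * 20 * t)
          + 0.5 * np.sin(2 * np.pi * 40 * t)
          + np.random.default_rng(0).normal(0, 1, t.size))

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

beta = spectrum[(freqs >= 13) & (freqs <= 30)].sum()    # beta band: 13-30 Hz
gamma = spectrum[(freqs > 30) & (freqs <= 80)].sum()    # gamma band: 30-80 Hz
print(f"beta-band power: {beta:.1f}, gamma-band power: {gamma:.1f}")
```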