February 27, 2009

The Edwin Smith Papyrus

One of the earliest medical documents describing the effects of brain damage on function is the Edwin Smith Surgical Papyrus, dating from around the 17th century BC and named after the Egyptologist who purchased it in 1862. It is thought to be a copy of a much earlier composite manuscript dating from around 3000-2500 BC.

(An extract from the papyrus, original on left and 'clean' version on right)

Written in hieratic script, the manuscript describes 48 separate observations (case studies) of brain and spinal injury, along with the treatment used in each. It is altogether an extraordinary document: probably the first to contain descriptions of various brain structures, including the cranial sutures, the meninges, the external surface (neocortex) and the cerebrospinal fluid, and even the first scientific document to use the word 'brain'.

The manuscript also contains the first reported cases of disorders such as quadriplegia, urinary incontinence and priapism, as well as seminal emission following vertebral dislocation. Many of the cases are presented in a formal format: title, examination, diagnosis and treatment.

The following is Case Two of the Edwin Smith papyrus and describes a wound to the head:

Title: Instructions concerning a [gaping] wound [in his head], penetrating to the bone.
Examination: If thou examinest a man having a [gaping] wound [in] his [head], penetrating to the bone, thou shouldst lay thy hand upon it (and) [thou shouldst] pal[pate hi]s wound. If thou findest his skull [uninjured, not hav]ing a perforation in it...
Diagnosis: Thou shouldst say regarding [him]: 'One hav[ing a gaping wou]nd in his head. An ailment which I will treat'.
Treatment: [Thou] shouldst bind [fresh meat upon it the first day; thou shouldst apply for him two strips of linen, and treat afterward with grease, honey (and) lin]t every day until he recovers.
Gloss: As for 'two strips of linen,' [it means] two bands [of linen which one applies upon the two lips of the gaping wound in order to cause that one join] to the other.
(Materials from G. Neil Martin's Human Neuropsychology, 2nd ed.)

February 26, 2009

A Beautiful 'Brainbow'

(Inspired by Encephalon #64)

Neurons are clever little cells, the very material that processes what we think, see, hear, feel, understand, and so much more. But has anyone considered whether they look as artistic as they are artful? In 2007 a team of Harvard neuroscientists found a way to express multiple fluorescent proteins in individual neurons, allowing cells to be 'tagged' in any of over 90 distinct colours. As in a television, a whole palette of colours and hues can be generated from three primaries such as red, green and blue. The result is an explosion of colour, referred to as a 'brainbow', and the technique is not only an impressive light show but also gives researchers an insight into the mechanics by which neurons receive and transmit information.
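
How do just a few fluorescent proteins yield so many distinguishable colours? In the real technique the colour of each neuron is set by random genetic recombination, so that every cell ends up expressing the fluorophores in a different ratio. The toy sketch below (my own illustration, not the Harvard group's method) only shows the counting argument: a handful of expression levels of three fluorophores multiply into a large palette of hues.

    # Toy sketch of combinatorial colour mixing, loosely in the spirit of the
    # 'brainbow' idea. Illustration only: the copy-number range and the hue
    # calculation are assumptions, not the actual labelling chemistry.
    import random
    from itertools import product

    COPIES = range(4)  # assume 0-3 expressed copies of each fluorophore

    def hue(r, g, b):
        """Normalise a copy-number combination to an RGB colour (0-255 per channel)."""
        total = max(r + g + b, 1)
        return tuple(round(255 * c / total) for c in (r, g, b))

    # Every distinct combination, excluding the all-zero (unlabelled) case
    combinations = [c for c in product(COPIES, repeat=3) if any(c)]
    distinct_hues = {hue(*c) for c in combinations}
    print(f"{len(combinations)} combinations -> {len(distinct_hues)} distinct hues")

    # Ten randomly 'labelled' neurons
    for neuron_id in range(10):
        r, g, b = random.choice(combinations)
        print(f"neuron {neuron_id}: copies (R={r}, G={g}, B={b}) -> colour {hue(r, g, b)}")

Even this crude counting shows how quickly a few expression levels multiply into dozens of separable hues. Below are my favourite images: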


Auditory portion of a mouse brainstem. A special gene (extracted from coral and jellyfish) was inserted into the mouse in order to map its intricate connections. As the mouse thinks, fluorescent proteins spread out along neural pathways. Mammals in general have very thick axons in this region, which enables sound to be processed very quickly.


A single neuron (red) in the brainstem. The helter-skelter of lines that criss-cross through the image are representative of signal traffic from other neurons. In this image, one brainstem neuron is surrounded by the remnants of signals from other neurons (mainly blue and yellow-coloured). When viewed with a special microscope, cyan, red and yellow lasers can cause each neuron to shine a specific colour, enabling researchers to track the activity of individual neurons.


This view of the hippocampus shows the smaller glial cells (small ovals) in the proximity of neurons (larger with more filaments). The hippocampus is an important brain structure that plays a major role in memory formation, and is also an essential component of the limbic system which is responsible for a variety of functions including emotion.

See all of the images at Wired.

Man, Machine and In-Between

An excellent article on brain-machine interfaces in today's Nature:
-----------------------------------------------------------

Brain-implantable devices have a promising future. Key safety issues must be resolved, but the ethics of this new technology present few totally new challenges, says Jens Clausen.

We are so surrounded by gadgetry that it is sometimes hard to tell where devices end and people begin. From computers and scanners to multifarious mobile devices, an increasing number of humans spend much of their conscious lives interacting with the world through electronics, the only barrier between brain and machine being the senses — sight, sound and touch — through which humans and devices interface. But remove those senses from the equation, and electronic devices can become our eyes and ears and even our arms and legs, taking in the world around us and interacting with it through man-made software and hardware.

This is no future prediction; it is already happening. Brain–machine interfaces are clinically well established in restoring hearing perception through cochlear implants, for example. And patients with end-stage Parkinson's disease can be treated with deep brain stimulation (DBS) (see 'Human brain–machine applications'). Worldwide, more than 30,000 implants have reportedly been made to control the severe motor symptoms of this disease. Current experiments on neural prosthetics point to the enormous future potential of such devices, whether as retinal or brainstem implants for the blind or as brain-recording devices for controlling prostheses (Velliste et al. 2008).

Non-invasive brain–machine interfaces based on electroencephalogram recordings have restored communication skills of patients 'locked in' by paralysis (Birbaumer et al. 1999). Animal research and some human studies (Hochberg et al. 2006) suggest that full control of artificial limbs in real time could further offer the paralysed an opportunity to grasp or even to stand and walk on brain-controlled, artificial legs, albeit likely through invasive means, with electrodes implanted directly in the brain.

Future advances in neurosciences together with miniaturization of microelectronic devices will make possible more widespread application of brain–machine interfaces. Melding brain and machine makes the latter an integral part of the individual. This could be seen to challenge our notions of personhood and moral agency. And the question will certainly loom that if functions can be restored for those in need, is it right to use these technologies to enhance the abilities of healthy individuals? It is essential that devices are safe to use and pose few risks to the individual. But the ethical problems that these technologies pose are not vastly different from those presented by existing therapies such as antidepressants. Although the technologies and situations that brain–machine interfacing devices present might seem new and unfamiliar, most of the ethical questions raised pose few new challenges.

Welcome to the machine

In brain-controlled prosthetic devices, signals from the brain are decoded by a computer that sits in the device. These signals are then used to predict what a user intends to do. Invariably, predictions will sometimes fail and this could lead to dangerous, or at the very least embarrassing, situations. Who is responsible for involuntary acts? Is it the fault of the computer or the user? Will a user need some kind of driver's licence and obligatory insurance to operate a prosthesis?
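
[A rough illustration of what 'decoding' means here, not the method of any study cited in the article: many motor-prosthesis experiments calibrate something like a linear map from recorded firing rates to intended movement. The sketch below uses simulated data and made-up numbers throughout.]

    # Toy sketch of brain-signal decoding: fit a linear map from simulated neural
    # firing rates to intended cursor velocity. Everything here is synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons, n_samples = 50, 2000

    # Hypothetical 'tuning' of each neuron to x/y movement, used only to fake data
    true_weights = rng.normal(size=(n_neurons, 2))
    rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
    velocity = rates @ true_weights + rng.normal(scale=2.0, size=(n_samples, 2))

    # Calibration: learn the decoder from a training block by least squares
    W, *_ = np.linalg.lstsq(rates[:1500], velocity[:1500], rcond=None)

    # 'Online' use: predict intended velocity from new neural activity
    predicted = rates[1500:] @ W
    error = np.mean(np.linalg.norm(predicted - velocity[1500:], axis=1))
    print(f"mean decoding error on held-out samples: {error:.2f}")

When the prediction is wrong, the prosthesis moves in a way the user did not intend, which is exactly the liability question the article turns to next.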

Fortunately, there are precedents for dealing with liability when biology and technology fail to work. Increasing knowledge of human genetics, for example, led to attempts to reject criminal responsibility that were based on the inappropriate belief that genes predetermine actions. These attempts failed, and neuroscientific pursuits seem similarly unlikely to overturn views on human free will and responsibility (Greely, 2006). Moreover, humans are often in control of dangerous and unpredictable tools such as cars and guns. Brain–machine interfaces represent a highly sophisticated case of tool use, but they are still just that. In the eyes of the law, responsibility should not be much harder to disentangle.

But what if machines change the brain? Evidence from early brain-stimulation experiments done half a century ago suggests that sending a current into the brain may cause shifts in personality and alterations in behaviour. Many patients with Parkinson's disease who have motor complications that are no longer manageable through medication report significant benefits from DBS. Nevertheless, compared with the best drug therapy, DBS for Parkinson's disease has shown a greater incidence of serious adverse effects such as nervous system and psychiatric disorders (Weaver et al. 2009) and a higher suicide rate (Appleby et al. 2007). Case studies revealed hypomania and personality changes of which the patients were unaware, and which disrupted family relationships before the stimulation parameters were readjusted (Mandat, Hurwitz & Honey, 2006).

Such examples illustrate the possible dramatic side effects of DBS, but subtler effects are also possible. Even without stimulation, mere recording devices such as brain-controlled motor prostheses may alter the patient's personality. Patients will need to be trained in generating the appropriate neural signals to direct the prosthetic limb. Doing so might have slight effects on mood or memory function or impair speech control.

Nevertheless, this does not illustrate a new ethical problem. Side effects are common in most medical interventions, including treatment with psychoactive drugs. In 2004, for example, the US Food and Drug Administration told drug manufacturers to print warnings on certain antidepressants about the short-term increased risk of suicide in adolescents using them, and required increased monitoring of young people as they started medication. In the case of neuroprostheses, such potential safety issues should be identified and dealt with as soon as possible. The classic approach of biomedical ethics is to weigh the benefits for the patient against the risk of the intervention and to respect the patient's autonomous decisions (Beauchamp & Childress, 2009). This should also hold for the proposed expansion of DBS to treat patients with psychiatric disorders (Synofzik & Schlaepfer, 2008).

Bench, bedside and brain

The availability of such technologies has already begun to cause friction. For example, many in the deaf community have rejected cochlear implants. Such individuals do not regard deafness as a disability that needs to be corrected, instead holding that it is a part of their life and their cultural identity. To them, cochlear implants are regarded as an enhancement beyond normal functioning.

What is enhancement and what is treatment depends on defining normality and disease, and this is notoriously difficult. For example, Christopher Boorse, a philosopher at the University of Delaware in Newark, defines disease as a statistical deviation from "species-typical functioning" (Boorse, 1977). As deafness is measurably different from the norm, it is considered a disease. The definition is influential and has been used as a criterion for allocation of medical resources (Daniels, 1985). From this perspective, the intended medical application of cochlear implants seems ethically unproblematic. Nevertheless, Anita Silvers, a philosopher at San Francisco State University in California and a disability scholar and activist, has described such treatments as a "tyranny of the normal" (Silvers, 1998), designed to adjust people who are deaf to a world designed by the hearing, ultimately implying the inferiority of deafness.

Although many have expressed excitement at the expanded development and testing of brain–machine interface devices to enhance otherwise deficient abilities, Silvers suspects that prostheses could be used for a "policy of normalizing". We should take these concerns seriously, but they should not prevent further research on brain–machine interfaces. Brain technologies should be presented as one option, but not the only solution, for paralysis or deafness. Still, whether brain-technological applications are a proper option remains dependent on technological developments and on addressing important safety issues.

One issue that is perhaps more pressing is how to ensure that risks are minimized during research. Animal experimentation will probably not address the full extent of psychological and neurological effects that implantable brain–machine interfaces could have. Research on human subjects will be needed, but testing neuronal motor prostheses in healthy people is ethically unjustifiable because of the risk of bleeding, swelling, inflammation and other, unknown, long-term effects.

People with paralysis, who might benefit most from this research, are also not the most appropriate research subjects. Because of the very limited medical possibilities and often severe disabilities, such individuals may be vulnerable to taking on undue risk. Most suitable for research into brain–machine interface devices are patients who already have an electrode implanted for other reasons, as is sometimes the case in presurgical diagnosis for epilepsy. Because they face the lowest additional risk of the research setting and will not rest their decision on false hopes, such patients should be the first to be considered for research (Clausen, 2008).

Brain–machine interfaces promise therapeutic benefit and should be pursued. Yes, the technologies pose ethical challenges, but these are conceptually similar to those that bioethicists have addressed for other realms of therapy. Ethics is well prepared to deal with the questions in parallel to and in cooperation with the neuroscientific research.

References:

Appleby, B. S., Duggan, P. S., Regenberg, A. & Rabins, P. V. Mov. Disord. 22, 1722–1728 (2007).

Beauchamp, T. L. & Childress, J. F. Principles of Biomedical Ethics (Oxford Univ. Press, 2009).

Birbaumer, N. et al. Nature 398, 297–298 (1999).

Boorse, C. Phil. Sci. 44, 542–573 (1977).

Clausen, J. Biotechnol. J. 3, 1493–1501 (2008).

Daniels, N. Just Health Care (Cambridge Univ. Press, 1985).

Greely, H. T. Minn. J. Law, Sci. Technol. 7, 599–637 (2006).

Hochberg, L. R. et al. Nature 442, 164–171 (2006).

Mandat, T. S., Hurwitz, T. & Honey, C. R. Acta Neurochir. (Wien) 148, 895–897 (2006).

Silvers, A. in Enhancing Human Traits: Ethical and Social Implications (ed. Parens, E.) 95–123 (Georgetown Univ. Press, 1998).

Synofzik, M. & Schlaepfer, T. E. Biotechnol. J. 3, 1511–1520 (2008).

Velliste, M., Perel, S., Spalding, M. C., Whitford, A. S. & Schwartz, A. B. Nature 453, 1098–1101 (2008).

Weaver, F. M. et al. J. Am. Med. Assoc. 301, 63–73 (2009).

February 19, 2009

Encephalon #64

The 64th edition of Encephalon - the neuroscience and psychology blog carnival - is now online and hosted by The Neurocritic. Head on over there to check out the contributions!

February 15, 2009

What Makes You Uniquely "You"?

Some of the most profound questions in science are also the least tangible. What does it mean to be sentient? What is the self? When issues become imponderable, many researchers demur, but neuroscientist Gerald Edelman dives right in.

A physician and cell biologist who won a 1972 Nobel Prize for his work describing the structure of antibodies, Edelman is now obsessed with the enigma of human consciousness—except that he does not see it as an enigma. In Edelman’s grand theory of the mind, consciousness is a biological phenomenon and the brain develops through a process similar to natural selection. Neurons proliferate and form connections in infancy; then experience weeds out the useless from the useful, molding the adult brain in sync with its environment. Edelman first put this model on paper in the Zurich airport in 1977 as he was killing time waiting for a flight. Since then he has written eight books on the subject, the most recent being Second Nature: Brain Science and Human Knowledge. He is chairman of neurobiology at the Scripps Research Institute in San Diego and the founder and director of the Neurosciences Institute, a research center in La Jolla, California, dedicated to unconventional “high risk, high payoff” science.

In his conversation with DISCOVER contributing editor Susan Kruglinski, Edelman delves deep into this untamed territory, exploring the evolution of consciousness, the narrative power of memory, and his goal of building a humanlike artificial mind.

This year marks the 150th anniversary of The Origin of Species, and many people are talking about modern interpretations of Charles Darwin’s ideas. You have one of your own, which you call Neural Darwinism. What is it?

Many cognitive psychologists see the brain as a computer. But every single brain is absolutely individual, both in its development and in the way it encounters the world. Your brain develops depending on your individual history. What has gone on in your own brain and its consciousness over your lifetime is not repeatable, ever—not with identical twins, not even with conjoined twins. Each brain is exposed to different circumstances. It’s very likely that your brain is unique in the history of the universe. Neural Darwinism looks at this enormous variation in the brain at every level, from biochemistry to anatomy to behavior.

How does this connect to Darwin’s idea of natural selection?

If you have a vast population of animals and each one differs, then under competition certain variants will be fitter than others. Those variants will be selected, and their genes will go into the population at a higher rate. An analogous process happens in the brain. As the brain forms, starting in the early embryo, neurons that fire together wire together. So for any individual, the microconnections from neuron to neuron within the brain depend on the environmental cues that provoke the firing. We have the extraordinary variance of the brain reacting to the extraordinary variance of the environment; all of it contributes to making that baby’s brain change. And when you figure the numbers—at least 30 billion neurons in the cortex alone, a million billion connections—you have to use a selective system to maintain the connections that are needed most. The strength of the connections or the synapses can vary depending on experience. Instead of variant animals, you have variant microcircuits in the brain.
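
[Aside: a toy sketch of the selectionist picture Edelman describes, written by me as an illustration rather than taken from his models: recurring experience strengthens connections between co-active neurons, while unused connections decay and are pruned.]

    # Toy 'neural Darwinism' sketch: Hebbian strengthening of co-active pairs,
    # decay of unused connections, then pruning. All numbers are arbitrary.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 40                                              # a tiny population of units
    weights = rng.uniform(0.0, 0.1, size=(n, n))        # variable initial wiring
    np.fill_diagonal(weights, 0.0)

    learning_rate, decay = 0.05, 0.001
    patterns = [rng.random(n) < 0.2 for _ in range(5)]  # recurring environmental cues

    for step in range(500):
        active = patterns[rng.integers(len(patterns))]  # experience drives co-firing
        coactivity = np.outer(active, active).astype(float)
        np.fill_diagonal(coactivity, 0.0)
        weights += learning_rate * coactivity           # 'fire together, wire together'
        weights = np.clip(weights - decay, 0.0, 1.0)    # unused connections fade

    surviving = int((weights > 0.5).sum())              # crude 'selection' threshold
    print(f"{surviving} of {n * (n - 1)} possible connections survive selection")

The point of the toy is only the shape of the process: variation in the initial wiring, amplification of whatever experience makes use of, and pruning of the rest.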

Before talking about how this relates to consciousness, I’d like to know how you define consciousness. It’s hard to get scientists even to agree on what it is.

William James, the great psychologist and philosopher, said consciousness has the following properties: It is a process, and it involves awareness. It’s what you lose when you fall into a deep, dreamless slumber and what you regain when you wake up. It is continuous and changing. Finally, consciousness is modulated or modified by attention, so it’s not exhaustive. Some people argue about qualia, which is a term referring to the qualitative feel of consciousness. What is it like to be a bat? Or what is it like to be you or me? That’s the problem that people have argued about endlessly, because they say, “How can it be that you can get that process—the feeling of being yourself experiencing the world—from a set of squishy neurons?”

What is the evolutionary advantage of consciousness?

The evolutionary advantage is quite clear. Consciousness allows you the capacity to plan. Let’s take a lioness ready to attack an antelope. She crouches down. She sees the prey. She’s forming an image of the size of the prey and its speed, and of course she’s planning a jump. Now suppose I have two animals: One, like our lioness, has that thing we call consciousness; the other only gets the signals. It’s just about dusk, and all of a sudden the wind shifts and there’s a whooshing sound of the sort a tiger might make when moving through the grass, and the conscious animal runs like hell but the other one doesn’t. Well, guess why? Because the animal that’s conscious has integrated the image of a tiger. The ability to consider alternative images in an explicit way is definitely evolutionarily advantageous.

I’m always surprised when neuroscientists question whether an animal like a lion or a dog is conscious.

There is every indirect indication that a dog is conscious—its anatomy and its nervous system organization are very similar to ours. It sleeps and its eyelids flutter during REM sleep. It acts as if it’s conscious, right? But there are two states of consciousness, and the one I call primary consciousness is what animals have. It’s the experience of a unitary scene in a period of seconds, at most, which I call the remembered present. If you have primary consciousness right now, your butt is feeling the seat, you’re hearing my voice, you’re smelling the air. Yet there’s no consciousness of consciousness, nor any narrative history of the past or projected future plans.

How does this primary consciousness contrast with the self-consciousness that seems to define people?

Humans are conscious of being conscious, and our memories, strung together into past and future narratives, use semantics and syntax, a true language. We are the only species with true language, and we have this higher-order consciousness in its greatest form. If you kick a dog, the next time he sees you he may bite you or run away, but he doesn’t sit around in the interim plotting to remove your appendage, does he? He can have long-term memory, and he can remember you and run away, but in the interim he’s not figuring out, “How do I get Kruglinski?” because he does not have the tokens of language that would allow him narrative possibility. He does not have consciousness of consciousness like you.

How did these various levels of consciousness evolve?

About 250 million years ago, when therapsid reptiles gave rise to birds and mammals, a neuronal structure probably evolved in some animals that allowed for interaction between those parts of the nervous system involved in carrying out perceptual categorization and those carrying out memory. At that point an animal could construct a set of discriminations: qualia. It could create a scene in its own mind and make connections with past scenes. At that point primary consciousness sets in. But that animal has no ability to narrate. It cannot construct a tale using long-term memory, even though long-term memory affects its behavior. Then, much later in hominid evolution, another event occurred: Other neural circuits connected conceptual systems, resulting in true language and higher-order consciousness. We were freed from the remembered present of primary consciousness and could invent all kinds of images, fantasies, and narrative streams.

So if you take away parts of perception, that doesn’t necessarily take away the conceptual aspects of consciousness.

I’ll tell you exactly—primitively, but exactly. If I remove parts of your cortex, like the visual cortex, you are blind, but you’re still conscious. If I take out parts of the auditory cortex, you’re deaf but still conscious.

But consciousness still resides in the brain. Isn’t there a limit to how much we can lose and still lay claim to qualia—to consciousness—in the human sense?

The cortex is responsible for a good degree of the contents of consciousness, and if I take out an awful lot of cortex, there gets to be a point where it’s debatable as to whether you’re conscious or not. For example, there are some people who claim that babies born without much cortex—a condition called hydranencephaly—are still conscious because they have their midbrain. It doesn’t seem very likely. There’s a special interaction between the cortex and the thalamus, this walnut-size relay system that maps all senses except smell into the cortex. If certain parts of the thalamocortical system are destroyed, you are in a chronic vegetative state; you don’t have consciousness. That does not mean consciousness is in the thalamus, though.

If you touch a hot stove, you pull your finger away, and then you become conscious of pain, right? So the problem is this: No one is saying that consciousness is what causes you to instantly pull your finger away. That’s a set of reflexes. But consciousness sure gives you a lesson, doesn’t it? You’re not going to go near a stove again. As William James pointed out, consciousness is a process, not a thing.

Can consciousness be artificially created?

Someday scientists will make a conscious artifact. There are certain requirements. For example, it might have to report back through some kind of language, allowing scientists to test it in various ways. They would not tell it what they are testing, and they would continually change the test. If the artifact responds appropriately to every changed test, then scientists could be pretty secure in the notion that it is conscious.

At what level would such an artifact be conscious? Do you think we could make something that has consciousness equivalent to that of a mouse, for example?

I would not try to emulate a living species because—here’s the paradoxical part—the thing will actually be nonliving.

Yes, but what does it mean to be alive?

Living is—how shall I say?—the process of copying DNA, self-replication under natural selection. If we ever create a conscious artifact, it won’t be living. That might horrify some people. How can you have consciousness in something that isn’t alive? There are people who are dualists, who think that to be conscious is to have some kind of special immaterial agency that is outside of science. The soul, floating free—all of that.

There might be people who say, “If you make it conscious, you just increase the amount of suffering in this world.” They think that consciousness is what differentiates you or allows you to have a specific set of beliefs and values. You have to remind yourself that the body and brain of this artifact will not be a human being. It will have a unique body and brain, and it will be quite different from us.

If you could combine a conscious artifact with a synthetic biological system, could you then create an artificial consciousness that is also alive?

Who knows? It seems reasonably feasible. In the future, once neuroscientists learn much more about consciousness and its mechanism, why not imitate it? It would be a transition in the intellectual history of the human race.

Do you believe a conscious artifact would have the value of a living thing?

Well, I would hope it would be treated that way. Even if it isn’t a living thing, it’s conscious. If I actually had a conscious artifact, even though it was not living, I’d feel bad about unplugging it. But that’s a personal response.

By proposing the possibility of artificial consciousness, are you comparing the human brain to a computer?

No. The world is unpredictable, and thus it cannot be reduced to the unambiguous algorithms on which computing is based. Your brain has to be creative about how it integrates the signals coming into it. And computers don’t do that. The human brain is capable of symbolic reference, not just syntax. Not just the ordering of things as you have in a computer, but also the meaning of things, if you will.

There’s a neurologist at the University of Milan in Italy named Edoardo Bisiach who’s an expert on a neuropsychological disorder known as anosognosia. A patient with anosognosia often has had a stroke in the right side, in the parietal cortex. That patient will have what we call hemineglect. He or she cannot pay attention to the left side of the world and is unaware of that fact. Shaves on one side. Draws half a house, not the whole house, et cetera. Bisiach had one patient who had this. The patient was intelligent. He was verbal. And Bisiach said to him, “Here are two cubes. I’ll put one in your left hand and one in my left hand. You do what I do.” And he went through a motion.

And the patient said, “OK, doc. I did it.”

Bisiach said, “No, you didn’t.”

He said, “Sure I did.”

So Bisiach brought the patient’s left hand into his right visual field and said, “Whose hand is this?”

And the patient said, “Yours.”

Bisiach said, “I can’t have three hands.”

And the patient very calmly said, “Doc, it stands to reason, if you’ve got three arms, you have to have three hands.” That case is evidence that the brain is not a machine for logic but in fact a construction that does pattern recognition. And it does it by filling in, in ambiguous situations.

How are you pursuing the creation of conscious artifacts in your work at the Neurosciences Institute?

We construct what we call brain-based devices, or BBDs, which will be increasingly useful in understanding how the brain works and modeling the brain. They may also be the beginning of the design of truly intelligent machines.

What exactly is a brain-based device?

It looks like maybe a robot, R2-D2 almost. But it isn’t a robot, because it’s not run by an artificial intelligence [AI] program of logic. It’s run by an artificial brain modeled on the vertebrate or mammalian brain. Where it differs from a real brain, aside from being simulated in a computer, is in the number of neurons. Compared with, let’s say, 30 billion neurons and a million billion connections in the human cortex alone, the most complex brain-based devices presently have less than a million neurons and maybe up to 10 million or so synapses, the space across which nerve impulses pass from one neuron to another.

What is interesting about BBDs is that they are embedded in and sample the real world. They have something that is equivalent to an eye: a camera. We give them microphones for the equivalent of ears. We have something that measures conductance for taste. These devices send inputs into the brain as if they were your tongue, your eyes, your ears. Our BBD called Darwin 7 can actually undergo conditioning. It can learn to pick up and “taste” blocks, which have patterns that can be identified as good-tasting or bad-tasting. It will stay away from the bad-tasting blocks (which have images of blobs instead of stripes on them) rather than pick them up and taste them. It learns to do that all on its own.
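
[Aside: the interview doesn't describe Darwin 7's actual learning rules. As a toy illustration of this kind of taste conditioning, here is a minimal value-learning sketch of my own; the 'striped = good, blobbed = bad' mapping and the simple delta rule are assumptions for illustration, not the BBD's real mechanism.]

    # Toy conditioning sketch: an agent learns the 'taste value' of block patterns
    # and stops picking up the ones it expects to taste bad.
    import random

    random.seed(2)
    TASTE = {"striped": 1.0, "blobbed": -1.0}   # assumed reward for each pattern
    value = {"striped": 0.0, "blobbed": 0.0}    # the agent's learned expectations
    learning_rate = 0.3

    for trial in range(30):
        block = random.choice(list(TASTE))
        if value[block] >= 0.0:                 # only pick up blocks not yet expected to be bad
            reward = TASTE[block]
            value[block] += learning_rate * (reward - value[block])  # delta-rule update

    print("learned values:", {k: round(v, 2) for k, v in value.items()})
    # After one bad taste the agent leaves blobbed blocks alone, much as described above.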

Why is this kind of machine better than a robot controlled by traditional artificial intelligence software?

An artificial intelligence program is algorithmic: You write a series of instructions that are based on conditionals, and you anticipate what the problems might be. AI robot soccer players make mistakes because you can’t possibly anticipate every possible scenario on a field. Instead of writing algorithms, we have our BBDs play sample games and learn, just the way you train your dog to do tricks.

At the invitation of the Defense Advanced Research Projects Agency, we incorporated a brain of the kind that we were just talking about into a Segway transporter. And we played a match of soccer against Carnegie Mellon University, which worked with an AI-based Segway. We won five games out of five. That’s because our device learned to pick up a ball and kick it back to a human colleague. It learned the colors of its teammates. It did not just execute algorithms.

It’s hard to comprehend what you are doing. What is the equivalent of a neuron in your brain-based device?

A biological neuron has a complex shape with a set of diverging branches, called dendrites, coming from one part of the center of the cell, and a very long single process called an axon. When you stimulate a neuron, ions like sodium and potassium and chloride flow back and forth, causing what’s called an action potential to travel down the neuron, through the axon, to a synapse. At the synapse, the neuron releases neurotransmitters that flow into another, postsynaptic neuron, which then fires too. In a BBD, we use a computer to simulate these properties, emulating everything that a real neuron does in a series of descriptions from a computer. We have a set of simple equations that describe neuron firing so well that even an expert can’t tell the difference between our simulation spikes and the real thing.
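
[Aside: the interview doesn't say which equations are meant. One widely cited example of 'simple equations that describe neuron firing' is the two-variable spiking model published by Izhikevich himself in 2003; the sketch below uses it purely as an illustration, not as a statement of what the institute's devices actually run.]

    # Minimal Izhikevich spiking-neuron model (regular-spiking parameters),
    # integrated with a simple Euler step. Offered as an example only.
    a, b, c, d = 0.02, 0.2, -65.0, 8.0      # recovery rate, sensitivity, reset values
    v, u = -65.0, b * -65.0                 # membrane potential (mV) and recovery variable
    dt, I = 0.5, 10.0                       # time step (ms) and constant input current

    spike_times = []
    for step in range(2000):                # simulate one second of activity
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                       # spike: record the time and reset
            spike_times.append(step * dt)
            v, u = c, u + d

    print(f"{len(spike_times)} spikes in 1 s; first few at (ms): {spike_times[:5]}")

With these parameters the model fires tonically; other published parameter sets reproduce bursting, chattering and the other firing patterns seen in real cortex.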

All these simulations and equations sound a lot like the artificial intelligence ideas that haven’t been very successful so far. How does your concept for a conscious artifact differ?

The brain can be simulated on a computer, but when you interface a BBD with the real world, it has the same old problem: The input is ambiguous and complex. What is the best way for the BBD to respond? Neural Darwinism explains how to solve the problem. On our computers we can trace all of the simulated neuronal connections during anything the BBD does. Every 200 milliseconds after the behavior, we ask: What was firing? What was connected? Using mathematical techniques we can actually see the whole thing converge to an output. Of course we are not working with a real brain, but it’s a hint as to what we might need to do to understand real brains.

When are we going to see the first conscious artifact emerge from your laboratory?

Eugene Izhikevich [a mathematician at the Neurosciences Institute] and I have made a model with a million simulated neurons and almost half a billion synapses, all connected through neuronal anatomy equivalent to that of a cat brain. What we find, to our delight, is that it has intrinsic activity. Up until now our BBDs had activity only when they confronted the world, when they saw input signals. In between signals, they went dark. But this damn thing now fires on its own continually. The second thing is, it has beta waves and gamma waves just like the regular cortex—what you would see if you did an electroencephalogram. Third of all, it has a rest state. That is, when you don’t stimulate it, the whole population of neurons strays back and forth, as has been described by scientists in human beings who aren’t thinking of anything.

In other words, our device has some lovely properties that are necessary to the idea of a conscious artifact. It has that property of indwelling activity. So the brain is already speaking to itself. That’s a very important concept for consciousness.

February 13, 2009

Bacterial 'Evolution'

Well, I can't really let Darwin Day pass without saying something, and with the entire blogosphere going wild about it I thought I may as well jump on the bandwagon, so here's a quick review of an interesting article I read by Carl Zimmer:

"Zimmer absorbingly describes the work of Richard Lenski and his experiments with E-coli bacteria. One ancient (and also current) criticism of evolution/Darwinism is that as it supposedly takes place over millions of years, how can anyone say for sure if the principle is operating if no one can see it happen? Lenski's experiments have negated this by his bacterial experiments.

"E-coli is a common microbe in the human gut that survives by the consumption of sugar (glucose). Lenski wanted to observe what happened to the bacteria as it underwent continual feast and famine cycles. He kept records by periodically collecting samples of the mixture and freezing them. After a while one flask developed a change; E-coli needs trace amounts of iron to survive but cannot consume free iron atoms, and the mixture contains citrate (a compound that can bind iron atoms) which the E-coli can absorb but which the citrate cannot actually enter the microbe. In normal circumstances E-coli microbes also cannot consume citrate in the presence of oxygen. In this particular flask it was found that the E-coli was consuming the citrate. In other words, the E-coli bacteria had evolved into a type that could feast on the citrate and thus didn't have to starve when the glucose supply ran out!

"This was proof that the manipulation had generated a set of circumstances by which the original microbe mutated in order to adapt to it's new circumstance. By consulting his records of samples, he could determine that the mutation occurred after 31,000 generations but before 31,500 generations. The microbes continue to evolve."

More details in a special digimag of BBC Focus magazine to commemorate Darwin's bicentennial. It definitely represents a middle-finger in the Creationist direction. Other interesting articles discuss whether evolution is 'dead' - Steve Jones says 'yea' while PZ Myers says 'nay'.

February 5, 2009

How Your Brain Creates God

Great article in the latest New Scientist (04 February 2009):
----------------------------------------------------------
Born believers: How your brain creates God

No wonder religion is so prevalent in human society – our brains are primed for it, says Michael Brooks

While many institutions collapsed during the Great Depression that began in 1929, one kind did rather well. During this leanest of times, the strictest, most authoritarian churches saw a surge in attendance.

This anomaly was documented in the early 1970s, but only now is science beginning to tell us why. It turns out that human beings have a natural inclination for religious belief, especially during hard times. Our brains effortlessly conjure up an imaginary world of spirits, gods and monsters, and the more insecure we feel, the harder it is to resist the pull of this supernatural world. It seems that our minds are finely tuned to believe in gods.

Religious ideas are common to all cultures: like language and music, they seem to be part of what it is to be human. Until recently, science has largely shied away from asking why. "It's not that religion is not important," says Paul Bloom, a psychologist at Yale University, "it's that the taboo nature of the topic has meant there has been little progress."

The origin of religious belief is something of a mystery, but in recent years scientists have started to make suggestions. One leading idea is that religion is an evolutionary adaptation that makes people more likely to survive and pass their genes on to the next generation. In this view, shared religious belief helped our ancestors form tightly knit groups that cooperated in hunting, foraging and childcare, enabling these groups to outcompete others. In this way, the theory goes, religion was selected for by evolution, and eventually permeated every human society (New Scientist, 28 January 2006, p 30).

The religion-as-an-adaptation theory doesn't wash with everybody, however. As anthropologist Scott Atran of the University of Michigan in Ann Arbor points out, the benefits of holding such unfounded beliefs are questionable, in terms of evolutionary fitness. "I don't think the idea makes much sense, given the kinds of things you find in religion," he says. A belief in life after death, for example, is hardly compatible with surviving in the here-and-now and propagating your genes. Moreover, if there are adaptive advantages of religion, they do not explain its origin, but simply how it spread.

An alternative being put forward by Atran and others is that religion emerges as a natural by-product of the way the human mind works. That's not to say that the human brain has a "god module" in the same way that it has a language module that evolved specifically for acquiring language. Rather, some of the unique cognitive capacities that have made us so successful as a species also work together to create a tendency for supernatural thinking. "There's now a lot of evidence that some of the foundations for our religious beliefs are hard-wired," says Bloom.

Much of that evidence comes from experiments carried out on children, who are seen as revealing a "default state" of the mind that persists, albeit in modified form, into adulthood. "Children the world over have a strong natural receptivity to believing in gods because of the way their minds work, and this early developing receptivity continues to anchor our intuitive thinking throughout life," says anthropologist Justin Barrett of the University of Oxford.

So how does the brain conjure up gods? One of the key factors, says Bloom, is the fact that our brains have separate cognitive systems for dealing with living things - things with minds, or at least volition - and inanimate objects. This separation happens very early in life. Bloom and colleagues have shown that babies as young as five months make a distinction between inanimate objects and people. Shown a box moving in a stop-start way, babies show surprise. But a person moving in the same way elicits no surprise. To babies, objects ought to obey the laws of physics and move in a predictable way. People, on the other hand, have their own intentions and goals, and move however they choose.

Mind and Matter

Bloom says the two systems are autonomous, leaving us with two viewpoints on the world: one that deals with minds, and one that handles physical aspects of the world. He calls this innate assumption that mind and matter are distinct "common-sense dualism". The body is for physical processes, like eating and moving, while the mind carries our consciousness in a separate - and separable - package. "We very naturally accept you can leave your body in a dream, or in astral projection or some sort of magic," Bloom says. "These are universal views."

There is plenty of evidence that thinking about disembodied minds comes naturally. People readily form relationships with non-existent others: roughly half of all 4-year-olds have had an imaginary friend, and adults often form and maintain relationships with dead relatives, fictional characters and fantasy partners. As Barrett points out, this is an evolutionarily useful skill. Without it we would be unable to maintain large social hierarchies and alliances or anticipate what an unseen enemy might be planning. "Requiring a body around to think about its mind would be a great liability," he says.

Useful as it is, common-sense dualism also appears to prime the brain for supernatural concepts such as life after death. In 2004, Jesse Bering of Queen's University Belfast, UK, put on a puppet show for a group of pre-school children. During the show, an alligator ate a mouse. The researchers then asked the children questions about the physical existence of the mouse, such as: "Can the mouse still be sick? Does it need to eat or drink?" The children said no. But when asked more "spiritual" questions, such as "does the mouse think and know things?", the children answered yes.

Default to God

Based on these and other experiments, Bering considers a belief in some form of life apart from that experienced in the body to be the default setting of the human brain. Education and experience teach us to override it, but it never truly leaves us, he says. From there it is only a short step to conceptualising spirits, dead ancestors and, of course, gods, says Pascal Boyer, a psychologist at Washington University in St Louis, Missouri. Boyer points out that people expect their gods' minds to work very much like human minds, suggesting they spring from the same brain system that enables us to think about absent or non-existent people. The ability to conceive of gods, however, is not sufficient to give rise to religion. The mind has another essential attribute: an overdeveloped sense of cause and effect which primes us to see purpose and design everywhere, even where there is none. "You see bushes rustle, you assume there's somebody or something there," Bloom says.

This over-attribution of cause and effect probably evolved for survival. If there are predators around, it is no good spotting them 9 times out of 10. Running away when you don't have to is a small price to pay for avoiding danger when the threat is real. Again, experiments on young children reveal this default state of the mind. Children as young as three readily attribute design and purpose to inanimate objects. When Deborah Kelemen of the University of Arizona in Tucson asked 7 and 8-year-old children questions about inanimate objects and animals, she found that most believed they were created for a specific purpose. Pointy rocks are there for animals to scratch themselves on. Birds exist "to make nice music", while rivers exist so boats have something to float on. "It was extraordinary to hear children saying that things like mountains and clouds were 'for' a purpose and appearing highly resistant to any counter-suggestion," says Kelemen.

In similar experiments, Olivera Petrovich of the University of Oxford asked pre-school children about the origins of natural things such as plants and animals. She found they were seven times as likely to answer that they were made by god than made by people. These cognitive biases are so strong, says Petrovich, that children tend to spontaneously invent the concept of god without adult intervention: "They rely on their everyday experience of the physical world and construct the concept of god on the basis of this experience." Because of this, when children hear the claims of religion they seem to make perfect sense.

Our predisposition to believe in a supernatural world stays with us as we get older. Kelemen has found that adults are just as inclined to see design and intention where there is none. Put under pressure to explain natural phenomena, adults often fall back on teleological arguments, such as "trees produce oxygen so that animals can breathe" or "the sun is hot because warmth nurtures life". Though she doesn't yet have evidence that this tendency is linked to belief in god, Kelemen does have results showing that most adults tacitly believe they have souls. Boyer is keen to point out that religious adults are not childish or weak-minded. Studies reveal that religious adults have very different mindsets from children, concentrating more on the moral dimensions of their faith and less on its supernatural attributes.

Even so, religion is an inescapable artefact of the wiring in our brain, says Bloom. "All humans possess the brain circuitry and that never goes away." Petrovich adds that even adults who describe themselves as atheists and agnostics are prone to supernatural thinking. Bering has seen this too. When one of his students carried out interviews with atheists, it became clear that they often tacitly attribute purpose to significant or traumatic moments in their lives, as if some agency were intervening to make it happen. "They don't completely exorcise the ghost of god - they just muzzle it," Bering says. The fact that trauma is so often responsible for these slips gives a clue as to why adults find it so difficult to jettison their innate belief in gods, Atran says. The problem is something he calls "the tragedy of cognition". Humans can anticipate future events, remember the past and conceive of how things could go wrong - including their own death, which is hard to deal with. "You've got to figure out a solution, otherwise you're overwhelmed," Atran says. When natural brain processes give us a get-out-of-jail card, we take it.

That view is backed up by an experiment published late last year (Science, vol 322, p 115). Jennifer Whitson of the University of Texas in Austin and Adam Galinsky of Northwestern University in Evanston, Illinois, asked people what patterns they could see in arrangements of dots or stock market information. Before asking, Whitson and Galinsky made half their participants feel a lack of control, either by giving them feedback unrelated to their performance or by having them recall experiences where they had lost control of a situation. The results were striking. The subjects who sensed a loss of control were much more likely to see patterns where there were none. "We were surprised that the phenomenon is as widespread as it is," Whitson says. What's going on, she suggests, is that when we feel a lack of control we fall back on superstitious ways of thinking. That would explain why religions enjoy a revival during hard times.

So if religion is a natural consequence of how our brains work, where does that leave god? All the researchers involved stress that none of this says anything about the existence or otherwise of gods: as Barrett points out, whether or not a belief is true is independent of why people believe it. It does, however, suggest that god isn't going away, and that atheism will always be a hard sell. Religious belief is the "path of least resistance", says Boyer, while disbelief requires effort.

These findings also challenge the idea that religion is an adaptation. "Yes, religion helps create large societies - and once you have large societies you can outcompete groups that don't," Atran says. "But it arises as an artefact of the ability to build fictive worlds. I don't think there's an adaptation for religion any more than there's an adaptation to make airplanes." Supporters of the adaptation hypothesis, however, say that the two ideas are not mutually exclusive. As David Sloan Wilson of Binghamton University in New York state points out, elements of religious belief could have arisen as a by-product of brain evolution, but religion per se was selected for because it promotes group survival. "Most adaptations are built from previous structures," he says. "Boyer's basic thesis and my basic thesis could both be correct."

Robin Dunbar of the University of Oxford - the researcher most strongly identified with the religion-as-adaptation argument - also has no problem with the idea that religion co-opts brain circuits that evolved for something else. Richard Dawkins, too, sees the two camps as compatible. "Why shouldn't both be correct?" he says. "I actually think they are." Ultimately, discovering the true origins of something as complex as religion will be difficult. There is one experiment, however, that could go a long way to proving whether Boyer, Bloom and the rest are onto something profound. Ethical issues mean it won't be done any time soon, but that hasn't stopped people speculating about the outcome.

It goes something like this. Left to their own devices, children create their own "creole" languages using hard-wired linguistic brain circuits. A similar experiment would provide our best test of the innate religious inclinations of humans. Would a group of children raised in isolation spontaneously create their own religious beliefs? "I think the answer is yes," says Bloom.

-------------------------------------
God of the gullible

In The God Delusion, Richard Dawkins argues that religion is propagated through indoctrination, especially of children. Evolution predisposes children to swallow whatever their parents and tribal elders tell them, he argues, as trusting obedience is valuable for survival. This also leads to what Dawkins calls "slavish gullibility" in the face of religious claims.

If children have an innate belief in god, however, where does that leave the indoctrination hypothesis? "I am thoroughly happy with believing that children are predisposed to believe in invisible gods - I always was," says Dawkins. "But I also find the indoctrination hypothesis plausible. The two influences could, and I suspect do, reinforce one another." He suggests that evolved gullibility converts a child's general predisposition to believe in god into a specific belief in the god (or gods) their parents worship.

February 4, 2009

Upcoming 'New Scientist' Feature

The next issue of New Scientist magazine will run a feature on 'God And The Mind: What Happens When Neuroscience Meets Religion?'

This looks right up my street! Though I sigh at the thought of the vested interests taking part in the same old 'science vs. religion' debates that this will generate.

Anyhow, stay tuned. :-)

February 3, 2009

Why Turning Out Brilliant Scientists Isn't Enough

A brilliant article by Prof. Robert Winston from New Scientist magazine (31 January 2009):

----------------------------------------------------

THIS year sees the 50th anniversary of C. P. Snow's influential Rede lecture on the "two cultures", in which he argued that the breakdown of communication between the sciences and the humanities was a major hindrance to solving the world's problems. One of his premises - that those problems would be solved by better science - now seems a little naive. However, his point that the sciences and humanities need to learn to communicate better, and people to understand each other better across the divide, is as pertinent as ever.

In the UK, the issue of how scientists engage with - and, crucially, listen to - the public has become increasingly prominent since the House of Lords Select Committee on Science and Technology held an inquiry into Science and Society in 1999. Before this, many believed that for people to trust more in the value of science, it would be enough for scientists simply to educate the public. These days it is widely understood that fostering public engagement - rather than mere public understanding - is of key importance.

This makes sense. Most scientific research in the UK is paid for by the taxpayer, and when technologies have a negative impact the consequences can be profound for everyone. The scientific knowledge we pursue is public property. We scientists have a duty not merely to tell people what we are doing (a skill not taught as well as it should be in most universities), but also to listen to people's fears and hopes and respond to them, even when we feel their antagonism to be ill-founded. Being open in this way has been shown to have real advantages. A good example is the success of the ScienceWise project set up by Kathy Sykes at the University of Bristol, UK, which uses public dialogue to help policy-makers reach better decisions about science and technology issues.

A two-way dialogue - communication in the fullest sense - seems more likely than a one-way lecture to lead to a maturing of views and resolution of conflict. It can help scientists to accept that some public concerns may be justified, and that recognising them can improve their science; and it makes the public aware of the good intentions of scientists. If we show that we care about the ethical implications of our work, people are likely to be more sympathetic. Dialogue has been shown to be a much more constructive and valuable process than the web-based consultations and opinion polls that policy-makers previously relied on, and has been very successful in the public discussion about embryology and nanotechnology.

Science organisations have started to recognise that people need to think about these issues early in their careers. Many of the programmes run by the British Association for the Advancement of Science, which this month relaunched as the British Science Association (BSA), increasingly encourage improved scientific literacy among school students.

Indeed the science community as a whole is starting to acknowledge that it must interact with the public more fully. When I started making science television programmes, I was frequently accused of dumbing down. After the BBC transmitted The Human Body series 10 years ago, I was painfully ostracised at scientific meetings and at the Royal Society, even though the series was viewed by around 19 million people in its first weeks and widely used as teaching material in schools. Now it is a delight that TV science programmes by colleagues such as Jim Al-Khalili of the University of Surrey, Marcus du Sautoy of the University of Oxford and Kathy Sykes are seen by many scientists as valuable contributions to public engagement.

We need to do much more. We have a duty to conduct research to ensure that the ways we attempt to engage really do have an impact, yet there is still no consensus on the best way to conduct such studies. In the UK we must make certain that the increasing sums of money that bodies such as the research councils and the Wellcome Trust are prepared to spend on public engagement are not wasted.

University science education also needs to improve. We turn out excellent chemists, physicists and biologists, but their education is not always well-rounded. Too few science undergraduates explore the ethical issues of their subject, and young scientists often seem to think they deal in certainty and "the truth". The nature of science is much more complex. In this respect, the Beacons for Public Engagement initiative run by the UK Higher Education Funding Council and the research councils should be valuable, encouraging university students to be more involved with societal issues and researchers more open about their science and its implications.

C. P. Snow may have been right in arguing for better connection between science and the arts, but not necessarily about identifying two distinct cultures. The remarkable creativity of science is an integral part of human culture and it needs to be thought of in this way. We scientists can help bring this about by engaging with the wider world about what we do and its implications for society. We need to show that we too have human values. Snow would surely have approved.

Robert Winston is Professor of Science and Society and Emeritus Professor of Fertility Studies at Imperial College, London.