
March 19, 2010

Foot Discovered In Baby Brain

Ok guys, for the first time I have something that's

NSFW!

Pity this isn't my own work; I'm feeling lazy right now and nicked it via Joanne Manaster and KevinMD.com. Still, you gotta admit this story treads the fine line between Whoa and Pass-Me-That-Buckettttt. The Denver Channel is running a story about a medical first - a foot found growing inside a newborn's brain. I have nothing to add, so I'm just gonna post the entire article here. Graphic image ahead, you have been warned:

COLORADO SPRINGS, Colo. -- A Colorado Springs family is part of one of the strangest cases in medical history.

Dr. Paul Grabb, a pediatric brain surgeon, said he was surprised when he discovered a small foot growing inside the brain of 3-day-old Sam Esquibel. "The foot literally popped out of the brain," Grabb told TheDenverChannel Wednesday.

The appendage threatened the newborn's life.

When Grabb performed the life-saving surgery at Memorial Hospital for Children in Colorado Springs, he was in for another surprise: he also found what appeared to be parts of an intestine in the folds of the infant's tiny brain, in addition to another developing foot, hand and thigh. "I've never seen anything like it before," Grabb told the Colorado Springs Gazette. "It looked like the breech delivery of a baby coming out of the brain."


Sam was delivered on Oct. 1, within hours of an ultrasound that showed what appeared to be a tumor developing in the brain of the fetus. Three days later, Grabb performed the surgery to remove it. The reason for the strange growth was not clear at first. It was thought to be a teratoma -- a congenital brain tumor composed of foreign tissue such as muscle, hair or teeth -- or a fetus in fetu, which is a developmental abnormality in which a fetal twin begins to form within the other.

Grabb, the only pediatric brain surgeon in southern Colorado, said that the formal pathology report identified the mass as a teratoma because of how perfectly formed the structure was but there is a fine line between that and the fetus in fetu. "So it's unclear if a fetal twin began to form within another," Grabb said.

Grabb said he sees a teratoma once every few years but it doesn't compare to Sam's. Teratoma tumors do not usually grow as complex as a foot. "You show those pictures to the most experienced pediatric neurosurgeons in the world, and they've never seen anything like it," Grabb told the Gazette. "This is completely abnormal."

Grabb said that, neurologically, Sam is expected to do well. The brain tumor could come back, so Grabb will be monitoring Sam in the months and years to come.

Mom Says Baby A Miracle

Sam's mom, Tiffnie, told TheDenverChannel on Wednesday that her son is doing well but that she didn't want to appear on camera because she doesn't want to exploit her child and make him appear like a freak in the eyes of the world.

"This is our baby," Tiffnie explained, in tears.

She said when she first talked to the Gazette, she thought the story would only appear in the small-town paper. But it has circled the globe and she is getting calls from national and international media outlets. "I am so overwhelmed right now ... We've been bombarded with calls," Tiffnie said.

DenverChannel.com Reporter Jane Slater held the infant with the baby blue eyes and round face and said "he is the cutest baby I've ever seen." Sam's at a healthy weight -- as evidenced by his pudgy arms and legs -- which explains why family members call him the "Michelin Man," Slater said.

He was alert and happy, with a barely visible inch-long scar which stretched from his hairline to the top of his cheek. Sam is still recovering from the surgery and shows weakness on one side of his body and some trouble with higher-level eye functions. He is already undergoing rehabilitation.

Tiffnie had said that her pregnancy was easy and there were no signs of complications until the ultrasound on Oct. 1. She and her husband had given up on the idea of having any children after years of trying and then Sam was conceived. Tiffnie said she doesn't mind driving to the hospital every week or month for Sam's MRI and blood checks, considering that he is healthy and happy.

"It's a miracle," she said.

In the meantime, Grabb wonders about the possibilities for medical science. "How does the body form complete extremities? Who is to say we can't grow a heart, leg or foot?" Grabb asked the Denver Post earlier. "This could show a window of what's possible."

"It's always impressive to see these sorts of things but it's not as unsual as you would think," said Dr. Rich Gustafson, with Cherry Creek Pediatrics. "Teratomas can be found in abdomens or other parts of the body ... what made this case so unusual is how perfectly formed the foot was and being in the skull as well. Usually, it's a totally safe and benign tumor. Often, it gets picked up in adulthood but now with ultrasound, you're actually picking more up as they are getting fetal ultrasounds."

July 23, 2009

If You Had Half a Brain..

A great story made its way onto the interwebz lately. The Daily Mail reports:
"A 10-year-old girl born with half a brain has both fields of vision in one eye, scientists said today. The youngster, from Germany, has the power of both a right and left eye in the single organ in the only known case of its kind in the world.

"University of Glasgow researchers used Functional Magnetic Resonance Imaging (fMRI) to reveal how the girl’s brain had rewired itself in order to process information from the right and left visual fields in spite of her not having a whole brain."
BBC News goes further with:
"In the case of the German girl, her left and right field vision is almost perfect in one eye. Scans on the girl showed that the retinal nerve fibres carrying visual information from the back of the eye which should have gone to the right hemisphere of the brain diverted to the left ... 'Despite lacking one hemisphere, the girl has normal psychological function and is perfectly capable of living a normal and fulfilling life. She is witty, charming and intelligent.'"
Get that? It's the only known case in the world where brain plasticity (the ability of the brain to reorganise itself after injury) is displayed so clearly for all to see. Plasticity doesn't always work this way; in many cases it falls short of restoring all or even most of the impaired brain function. Epilepsy patients who undergo a hemispherectomy (removal of half of the brain) to prevent the onset of severe seizures, for example, tend among other things to lose an entire field of vision in both eyes; they only see people and objects in one half of their visual field, as in the illustration below:



Nor was this a case of brain injury; the anonymous girl (known only as 'AH') failed to adequately develop her cerebral right hemisphere in the womb. As a result, she is without a right-brain and also without the use of her right eye. She also has a slight left hemiparesis (weakness affecting the left side of the body) but close to normal vision in both hemifields of her normal left eye.

In a study published in the Proceedings of the National Academy of Sciences (PNAS), a team led by Lars Muckli of the University of Glasgow used fMRI to investigate how the visual cortex had remapped itself. In a healthy individual, the cerebral cortex contains "maps" for vision, sound, motion and touch, which develop and are modified over time depending on several factors, including genetic cues and neural activity. In the mammalian brain (the human brain included), the visual cortex is made up of distinct areas, the main one being an area known simply as 'V1', the primary visual cortex. 'V2' deals with quarterfield representations, effectively handling the 'up' and 'down' quarters of both the right and left hemifields of vision, while 'V3' is a structure in front of V2 that, among other things, performs a supporting role for V2. There are also retinotopic maps, direct mappings of the spatial arrangement of the retina, located in visual structures including the cortex and thalamus.

As per materials provided by the University of Glasgow, "visual information is gathered by the retina at the back of the eye and images are inverted when they pass through the lens of the pupil so that images in your left field of vision are received on the right side of the retina, and images from the right are received on the left." The part of the retina close to the nose is known as the nasal retina whereas the other part is referred to as the temporal retina, being in proximity to the temples. Both halves transmit received information through separate nerve fibres. In a normal situation, the nerve fibres of the nasal retina cross over in the optic chiasm, a brain structure located at the bottom of the brain near the hypothalamus, and are processed by the hemisphere on the opposite side. The nerve fibres of the temporal retina remain in the same hemisphere (ipsilateral), meaning that the left and right visual fields described earlier are processed by opposite sides of the brain.



[DIGRESSION]Vision is not the only modality to be processed in this crossed-over way. It reflects the larger organisation of the intact brain, which tends to process other modalities on opposite sides as well. Touch and hearing, for example, that "enter" on the right side of the body (right body, right ear) are processed by the left-brain, while touch and hearing entering on the left side are processed by the right-brain. This is generally referred to as contralateral processing: input handled by the 'opposite' half of the brain. Input processed by the 'same' side of the brain is known as ipsilateral processing. For more information, please read about Basic Visual Pathways.[/DIGRESSION]

The MRI scan displays the complete lack of a right hemisphere: the optic chiasm is shown here (top, l-r) in the transverse and enlarged transverse planes, and (bottom, l-r) in the coronal and sagittal planes. A rudimentary optic nerve is pointed out in the enlargement by the green arrow, but with no discernible optic tract, and it can also be seen how the left hemisphere is spilling over into the right side. The vacant right hemisphere is filled with cerebrospinal fluid (CSF).

In AH's fascinating case, it was found that the nasal retinal nerve had connected to her left-brain. A possible interpretation for AH's condition is suggested by the authors: The lack of a right-brain prevented an opposite connection from being made, which led the optic nerve fibers to "connect" with ipsilateral structures instead.

Remembering that normal wiring requires a crossing in the optic chiasm, and that AH's connections were essentially ipsilateral, how exactly does AH see both visual fields with only one eye? After all, with the entire right hemisphere missing and normal wiring, AH should see only the right hemifield. The answer lies with the Lateral Geniculate Nucleus (LGN), a structure embedded deep in the thalamus that processes visual information from the retina. In AH, both the nasal and temporal retina would need to be mapped onto the LGN to allow for the processing of both hemifields. Again, ipsilateral projections were suggested as the solution, instead of the usual contralateral connections, with a mirror-symmetric representation of the hemifields being received and processed by the thalamus. Similar cases have been seen in achiasmatic dogs, where optic nerve fibres terminated in the ipsilateral LGN.
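To make the routing concrete, here's a toy Python sketch of which hemisphere ends up processing each hemifield seen by the left eye, under normal wiring and under the all-ipsilateral rewiring described for AH. It's purely illustrative; the function names and the 'wiring' switch are my own shorthand, not anything from the paper.

```python
# Toy model of retina-to-hemisphere routing. Illustrative only; the anatomy is
# simplified and the "AH" wiring simply encodes the rewiring described above.

def retina_half(eye, hemifield):
    """Light from a hemifield lands on the opposite half of that eye's retina."""
    if eye == "left":
        return "nasal" if hemifield == "left" else "temporal"
    return "temporal" if hemifield == "left" else "nasal"

def target_hemisphere(eye, hemifield, wiring="normal"):
    half = retina_half(eye, hemifield)
    if wiring == "normal" and half == "nasal":
        # Nasal fibres cross over at the optic chiasm to the opposite hemisphere.
        return "right" if eye == "left" else "left"
    # Temporal fibres always stay ipsilateral; in AH's case the nasal fibres do too.
    return eye

for wiring in ("normal", "AH"):
    for hemifield in ("left", "right"):
        hemi = target_hemisphere("left", hemifield, wiring)
        note = "  <- lost if the right hemisphere is missing" if hemi == "right" else ""
        print(f"{wiring:6} wiring, left eye, {hemifield:5} hemifield -> {hemi} hemisphere{note}")
```

With normal wiring and a missing right hemisphere, the left hemifield simply has nowhere to go; AH's all-ipsilateral wiring delivers both hemifields to the intact left hemisphere, consistent with the mirror-symmetric LGN mapping described above.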

'Islands' were also found to have formed in the left-hemisphere to deal especially with processing of the left hemifield, to compensate for the missing right-brain activity.

The loss of AH's right-hemisphere was discovered at age 3 when she was treated for brief seizures and twitching taking place on her left side. It is speculated that the right-brain failed to develop between Day 28 and Day 49 of embryonic development. Despite the situation, she is able to engage quite capably in activities that require a fair amount of balance, such as riding a bicycle or roller-skating. Truly an extraordinary case in more ways than one.

For a professional view, please see Dr. Steven Novella's entry on this case.
---------------------------------------------------------------------
Muckli, L., Naumer, M., & Singer, W. (2009). Bilateral visual field maps in a patient with only one hemisphere. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.0809688106

July 8, 2009

Michael Jackson: Buried Without His Brain

After witnessing the media blaze surrounding Michael Jackson's recent death, I started hearing a number of reports that Jackson would be buried without his brain. Rather an odd thought, as I watched the live memorial along with millions of other people around the globe, that the body in the gold-plated coffin wasn't a complete specimen. According to one such report, Jackson's brain is being held for further testing to determine the extent to which it was damaged by years of painkillers and other medications.

After scouting through the interwebz for a more scientific explanation, I discovered that Vaughan Bell had written up a good explanation on his excellent Mind Hacks blog. I hope he won't mind me nicking it, but it's good enough that it deserves repeating:

According to press reports Michael Jackson will be buried without his brain because it is still 'hardening'. Although this may seem unusual, the 'hardening' process is actually a standard part of any post-mortem examination where the brain is thought to be important in the cause of death, such as in suspected overdose.

It involves removing the brain from the skull and leaving it to soak in a diluted mixture of formaldehyde and water called formalin. This soaking process usually takes four weeks and the brain genuinely does harden.

A 'fresh' brain is a pinkish colour and has the consistency of jelly, Jell-O or soft tofu, meaning it is difficult to examine and the various internal structures are often hard to make out.

After soaking the brain, it has the consistency and colour of canned mushrooms making it easier to slice, examine and photograph. However, because the brain is so soft to start with, it can't just be dropped in a tank of fixing solution, because it will deform under its own weight.

To solve the problem it is usually suspended upside down in a large bucket of formalin by a piece of string which is tied to the basilar artery.

After it has 'hardened' or 'fixed' it is sliced to look for clear damage to either the tissue or the arteries. Small sections can also be kept to examine under the microscope.

Because this part of the post-mortem takes several weeks' preparation, it is usually only carried out with the family's permission, as the body may need to be buried without the brain, or the burial delayed until the procedure is finished.

This also means that this form of post-mortem brain examination is usually only carried out where there is a feeling that examining the brain can help clarify the cause of death - which is what pathologists are often most concerned with.

In cases such as Michael Jackson's, where the effects of drugs are suspected to play a part, pathologists will be looking for evidence of both sudden-onset and long-term brain damage. If they find it, they'll be trying to work out how much it could have been caused by drug use and how much it contributed to the death.


So now you know.

March 10, 2009

Oh-Oh-Obama!

Looks like the entire blogosphere is abuzz with the news of President Obama's reversal of the foolhardy Bush policy of restricting stem cell research. There isn't much to say that everyone else hasn't already said, so I'll just join everyone in the celebrations and jubilations.

With the signing of this Executive Order, the federal funding ban is now lifted and researchers will have government support and tax dollars to carry out lines of research that could bring advances as amazing as the growing of new organs for transplantation. As for the neuroscience field, stem cells (from monkey teeth!) could achieve fabulous things such as stimulating the growth and regeneration of brain cells. In one example, Huang et al. (2008) at Emory University implanted dental pulp stem cells from the teeth of rhesus macaque monkeys into the murine hippocampus. Cells born 7 days after the implantation went on to form neurons and neural progenitor cells (NPCs), and by 30 days indications of astrogliosis were observed. Astrogliosis refers to an increase in the number of astrocytes, a type of glial cell that performs many supportive functions in the brain, including tissue regeneration following injury and maintenance of the blood-brain barrier. In short, monkey stem cells promoted growth, cell recruitment and maturation of repair responses in mouse brains. How great is that?!

There is already talk of stem cells being used in connection with Alzheimer's, Huntington's and Parkinson's disease. Who knows what the future will bring? Scientists have been seething that such positive and encouraging research was stifled, if not blocked altogether, by the Bush administration's myopic and misguided stance, one more concerned with, surprise surprise, political ideology and the religious right. And now, thanks to President Obama's move, research can go on and the (US) National Institutes of Health have four months to set guidelines. Not bad at all.

And what's more, Obama has issued a presidential memorandum that protects scientific research from political influence. So hopefully no one will think of messing around with it in the future. Below is the official text of the memorandum:

Science and the scientific process must inform and guide decisions of my Administration on a wide range of issues, including improvement of public health, protection of the environment, increased efficiency in the use of energy and other resources, mitigation of the threat of climate change, and protection of national security.

The public must be able to trust the science and scientific process informing public policy decisions. Political officials should not suppress or alter scientific or technological findings and conclusions. If scientific and technological information is developed and used by the Federal Government, it should ordinarily be made available to the public. To the extent permitted by law, there should be transparency in the preparation, identification, and use of scientific and technological information in policymaking. The selection of scientists and technology professionals for positions in the executive branch should be based on their scientific and technological knowledge, credentials, experience, and integrity.

By this memorandum, I assign to the Director of the Office of Science and Technology Policy (Director) the responsibility for ensuring the highest level of integrity in all aspects of the executive branch's involvement with scientific and technological processes. The Director shall confer, as appropriate, with the heads of executive departments and agencies, including the Office of Management and Budget and offices and agencies within the Executive Office of the President (collectively, the "agencies"), and recommend a plan to achieve that goal throughout the executive branch.

Specifically, I direct the following:

1. Within 120 days from the date of this memorandum, the Director shall develop recommendations for Presidential action designed to guarantee scientific integrity throughout the executive branch, based on the following principles:
(a) The selection and retention of candidates for science and technology positions in the executive branch should be based on the candidate's knowledge, credentials, experience, and integrity;
(b) Each agency should have appropriate rules and procedures to ensure the integrity of the scientific process within the agency;
(c) When scientific or technological information is considered in policy decisions, the information should be subject to well-established scientific processes, including peer review where appropriate, and each agency should appropriately and accurately reflect that information in complying with and applying relevant statutory standards;
(d) Except for information that is properly restricted from disclosure under procedures established in accordance with statute, regulation, Executive Order, or Presidential Memorandum, each agency should make available to the public the scientific or technological findings or conclusions considered or relied on in policy decisions;
(e) Each agency should have in place procedures to identify and address instances in which the scientific process or the integrity of scientific and technological information may be compromised; and
(f) Each agency should adopt such additional procedures, including any appropriate whistleblower protections, as are necessary to ensure the integrity of scientific and technological information and processes on which the agency relies in its decisionmaking or otherwise uses or prepares.

2. Each agency shall make available any and all information deemed by the Director to be necessary to inform the Director in making recommendations to the President as requested by this memorandum. Each agency shall coordinate with the Director in the development of any interim procedures deemed necessary to ensure the integrity of scientific decisionmaking pending the Director's recommendations called for by this memorandum.

3. (a) Executive departments and agencies shall carry out the provisions of this memorandum to the extent permitted by law and consistent with their statutory and regulatory authorities and their enforcement mechanisms.
(b) Nothing in this memorandum shall be construed to impair or otherwise affect:
(i) authority granted by law to an executive department, agency, or the head thereof; or
(ii) functions of the Director of the Office of Management and Budget relating to budgetary, administrative, or legislative proposals.
(c) This memorandum is not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity, by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person.

4. The Director is hereby authorized and directed to publish this memorandum in the Federal Register.

BARACK OBAMA


Sounds fairly reasonable.

February 26, 2009

A Beautiful 'Brainbow'

(Inspired by Encephalon #64)

Neurons are clever little cells, the very material that processes what we think, see, hear, feel, understand, and so much more. Has anyone considered whether they look as artistic as they are artful? In 2007 a team of Harvard neuroscientists found a way to activate multiple fluorescent proteins in neurons, which allows over 90 distinct colours to be 'tagged'. Similar to television, a palette of colours and hues can be generated from three primary colours such as red, green and blue. The random mix of fluorescent proteins expressed in each neuron produces an explosion of colours, referred to as a 'brainbow', and not only does this technique present an impressive light show, it also allows researchers to gain insight into the mechanics by which neurons receive and transmit information. Below are my favourite images (and, at the end of this post, a toy sketch of the combinatorial colour arithmetic):


Auditory portion of a mouse brainstem. A special gene (extracted from coral and jellyfish) was inserted into the mouse in order to map intricate connections. As the mouse thinks, fluorescent proteins spread out along neural pathways. Mammals in general have very thick axons in this region, which enables sound to be processed very quickly.


A single neuron (red) in the brainstem. The helter-skelter of lines criss-crossing the image represents signal traffic from other neurons. In this image, one brainstem neuron is surrounded by the remnants of signals from other neurons (mainly blue and yellow). When viewed with a special microscope, cyan, red and yellow lasers cause each neuron to shine a specific colour, enabling researchers to track the activity of individual neurons.


This view of the hippocampus shows the smaller glial cells (small ovals) in the proximity of neurons (larger with more filaments). The hippocampus is an important brain structure that plays a major role in memory formation, and is also an essential component of the limbic system which is responsible for a variety of functions including emotion.

See all of the images at Wired.
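As promised above, here is a rough Python sketch of the combinatorial colour idea: each neuron ends up with a random mix of a few fluorescent proteins, and the mix blends into a single hue. The five expression levels per protein are an assumption for illustration, not a figure from the Harvard work, and the real labelling is done genetically (Cre/lox recombination) rather than anything like this.

```python
# Toy illustration of combinatorial colour labelling: each "neuron" gets a
# random expression level of three fluorescent proteins, and the mix blends
# into one RGB hue. Not the actual Brainbow genetics, just the arithmetic.
import random

LEVELS = 5  # assumed number of distinguishable expression levels per protein

def label_neuron():
    """Pick a random expression level for each of three fluorophores."""
    return tuple(random.randint(0, LEVELS - 1) for _ in range(3))

def to_rgb(mix):
    """Blend the three expression levels into an 8-bit RGB colour."""
    return tuple(int(255 * level / (LEVELS - 1)) for level in mix)

neurons = [label_neuron() for _ in range(1000)]
distinct = {to_rgb(mix) for mix in neurons}
print(f"{len(distinct)} distinct hues seen, out of {LEVELS ** 3} possible mixes")
```

Even with just five levels of three proteins you get 125 possible mixes, which is in the same ballpark as the 90-odd colours reported for the technique.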

February 15, 2009

What Makes You Uniquely "You"?

Some of the most profound questions in science are also the least tangible. What does it mean to be sentient? What is the self? When issues become imponderable, many researchers demur, but neuroscientist Gerald Edelman dives right in.

A physician and cell biologist who won a 1972 Nobel Prize for his work describing the structure of antibodies, Edelman is now obsessed with the enigma of human consciousness—except that he does not see it as an enigma. In Edelman’s grand theory of the mind, consciousness is a biological phenomenon and the brain develops through a process similar to natural selection. Neurons proliferate and form connections in infancy; then experience weeds out the useless from the useful, molding the adult brain in sync with its environment. Edelman first put this model on paper in the Zurich airport in 1977 as he was killing time waiting for a flight. Since then he has written eight books on the subject, the most recent being Second Nature: Brain Science and Human Knowledge. He is chairman of neurobiology at the Scripps Research Institute in San Diego and the founder and director of the Neurosciences Institute, a research center in La Jolla, California, dedicated to unconventional “high risk, high payoff” science.

In his conversation with DISCOVER contributing editor Susan Kruglinski, Edelman delves deep into this untamed territory, exploring the evolution of consciousness, the narrative power of memory, and his goal of building a humanlike artificial mind.

This year marks the 150th anniversary of The Origin of Species, and many people are talking about modern interpretations of Charles Darwin’s ideas. You have one of your own, which you call Neural Darwinism. What is it?

Many cognitive psychologists see the brain as a computer. But every single brain is absolutely individual, both in its development and in the way it encounters the world. Your brain develops depending on your individual history. What has gone on in your own brain and its consciousness over your lifetime is not repeatable, ever—not with identical twins, not even with conjoined twins. Each brain is exposed to different circumstances. It’s very likely that your brain is unique in the history of the universe. Neural Darwinism looks at this enormous variation in the brain at every level, from biochemistry to anatomy to behavior.

How does this connect to Darwin’s idea of natural selection?

If you have a vast population of animals and each one differs, then under competition certain variants will be fitter than others. Those variants will be selected, and their genes will go into the population at a higher rate. An analogous process happens in the brain. As the brain forms, starting in the early embryo, neurons that fire together wire together. So for any individual, the microconnections from neuron to neuron within the brain depend on the environmental cues that provoke the firing. We have the extraordinary variance of the brain reacting to the extraordinary variance of the environment; all of it contributes to making that baby’s brain change. And when you figure the numbers—at least 30 billion neurons in the cortex alone, a million billion connections—you have to use a selective system to maintain the connections that are needed most. The strength of the connections or the synapses can vary depending on experience. Instead of variant animals, you have variant microcircuits in the brain.
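[DIGRESSION]The "fire together, wire together" plus selection idea can be sketched in a few lines of Python. This is just plain Hebbian strengthening with pruning, under made-up rates, group sizes and thresholds; it is not Edelman's actual formalism, only a toy showing how correlated activity selects some microcircuits and eliminates others.

```python
# Toy "fire together, wire together" with selection: synapses between
# repeatedly co-active neurons strengthen, the rest decay and are pruned.
# All numbers are invented for illustration.
import random

n = 8                        # neurons 0-3 form one assembly, 4-7 another
weights = {(i, j): 0.5 for i in range(n) for j in range(n) if i != j}
rate, decay, prune_below = 0.05, 0.03, 0.05

def sample_activity():
    """Correlated 'environmental' input: each assembly tends to fire as a group."""
    group_a = random.random() < 0.5
    group_b = random.random() < 0.5
    return [group_a] * 4 + [group_b] * 4

for _ in range(300):
    active = sample_activity()
    for (i, j) in list(weights):
        if active[i] and active[j]:
            weights[(i, j)] = min(1.0, weights[(i, j)] + rate)   # co-activation strengthens
        else:
            weights[(i, j)] = max(0.0, weights[(i, j)] - decay)  # otherwise decay
        if weights[(i, j)] < prune_below:
            del weights[(i, j)]                                  # selection: prune weak synapses

within = sum((i < 4) == (j < 4) for (i, j) in weights)
print(f"{len(weights)} of {n * (n - 1)} synapses survive; {within} connect neurons within an assembly")
```

Typically the synapses within each co-firing assembly survive while those between assemblies wither away: a crude version of variant microcircuits being selected by experience.[/DIGRESSION]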

Before talking about how this relates to consciousness, I’d like to know how you define consciousness. It’s hard to get scientists even to agree on what it is.

William James, the great psychologist and philosopher, said consciousness has the following properties: It is a process, and it involves awareness. It’s what you lose when you fall into a deep, dreamless slumber and what you regain when you wake up. It is continuous and changing. Finally, consciousness is modulated or modified by attention, so it’s not exhaustive. Some people argue about qualia, which is a term referring to the qualitative feel of consciousness. What is it like to be a bat? Or what is it like to be you or me? That’s the problem that people have argued about endlessly, because they say, “How can it be that you can get that process—the feeling of being yourself experiencing the world—from a set of squishy neurons?”

What is the evolutionary advantage of consciousness?

The evolutionary advantage is quite clear. Consciousness allows you the capacity to plan. Let’s take a lioness ready to attack an antelope. She crouches down. She sees the prey. She’s forming an image of the size of the prey and its speed, and of course she’s planning a jump. Now suppose I have two animals: One, like our lioness, has that thing we call consciousness; the other only gets the signals. It’s just about dusk, and all of a sudden the wind shifts and there’s a whooshing sound of the sort a tiger might make when moving through the grass, and the conscious animal runs like hell but the other one doesn’t. Well, guess why? Because the animal that’s conscious has integrated the image of a tiger. The ability to consider alternative images in an explicit way is definitely evolutionarily advantageous.

I’m always surprised when neuroscientists question whether an animal like a lion or a dog is conscious.

There is every indirect indication that a dog is conscious—its anatomy and its nervous system organization are very similar to ours. It sleeps and its eyelids flutter during REM sleep. It acts as if it’s conscious, right? But there are two states of consciousness, and the one I call primary consciousness is what animals have. It’s the experience of a unitary scene in a period of seconds, at most, which I call the remembered present. If you have primary consciousness right now, your butt is feeling the seat, you’re hearing my voice, you’re smelling the air. Yet there’s no consciousness of consciousness, nor any narrative history of the past or projected future plans.

How does this primary consciousness contrast with the self-consciousness that seems to define people?

Humans are conscious of being conscious, and our memories, strung together into past and future narratives, use semantics and syntax, a true language. We are the only species with true language, and we have this higher-order consciousness in its greatest form. If you kick a dog, the next time he sees you he may bite you or run away, but he doesn’t sit around in the interim plotting to remove your appendage, does he? He can have long-term memory, and he can remember you and run away, but in the interim he’s not figuring out, “How do I get Kruglinski?” because he does not have the tokens of language that would allow him narrative possibility. He does not have consciousness of consciousness like you.

How did these various levels of consciousness evolve?

About 250 million years ago, when therapsid reptiles gave rise to birds and mammals, a neuronal structure probably evolved in some animals that allowed for interaction between those parts of the nervous system involved in carrying out perceptual categorization and those carrying out memory. At that point an animal could construct a set of discriminations: qualia. It could create a scene in its own mind and make connections with past scenes. At that point primary consciousness sets in. But that animal has no ability to narrate. It cannot construct a tale using long-term memory, even though long-term memory affects its behavior. Then, much later in hominid evolution, another event occurred: Other neural circuits connected conceptual systems, resulting in true language and higher-order consciousness. We were freed from the remembered present of primary consciousness and could invent all kinds of images, fantasies, and narrative streams.

So if you take away parts of perception, that doesn’t necessarily take away the conceptual aspects of consciousness.

I’ll tell you exactly—primitively, but exactly. If I remove parts of your cortex, like the visual cortex, you are blind, but you’re still conscious. If I take out parts of the auditory cortex, you’re deaf but still conscious.

But consciousness still resides in the brain. Isn’t there a limit to how much we can lose and still lay claim to qualia—to consciousness—in the human sense?

The cortex is responsible for a good degree of the contents of consciousness, and if I take out an awful lot of cortex, there gets to be a point where it’s debatable as to whether you’re conscious or not. For example, there are some people who claim that babies born without much cortex—a condition called hydranencephaly—are still conscious because they have their midbrain. It doesn’t seem very likely. There’s a special interaction between the cortex and the thalamus, this walnut-size relay system that maps all senses except smell into the cortex. If certain parts of the thalamocortical system are destroyed, you are in a chronic vegetative state; you don’t have consciousness. That does not mean consciousness is in the thalamus, though.

If you touch a hot stove, you pull your finger away, and then you become conscious of pain, right? So the problem is this: No one is saying that consciousness is what causes you to instantly pull your finger away. That’s a set of reflexes. But consciousness sure gives you a lesson, doesn’t it? You’re not going to go near a stove again. As William James pointed out, consciousness is a process, not a thing.

Can consciousness be artificially created?

Someday scientists will make a conscious artifact. There are certain requirements. For example, it might have to report back through some kind of language, allowing scientists to test it in various ways. They would not tell it what they are testing, and they would continually change the test. If the artifact corresponds to every changed test, then scientists could be pretty secure in the notion that it is conscious.

At what level would such an artifact be conscious? Do you think we could make something that has consciousness equivalent to that of a mouse, for example?

I would not try to emulate a living species because—here’s the paradoxical part—the thing will actually be nonliving.

Yes, but what does it mean to be alive?

Living is—how shall I say?—the process of copying DNA, self-replication under natural selection. If we ever create a conscious artifact, it won’t be living. That might horrify some people. How can you have consciousness in something that isn’t alive? There are people who are dualists, who think that to be conscious is to have some kind of special immaterial agency that is outside of science. The soul, floating free—all of that.

There might be people who say, “If you make it conscious, you just increase the amount of suffering in this world.” They think that consciousness is what differentiates you or allows you to have a specific set of beliefs and values. You have to remind yourself that the body and brain of this artifact will not be a human being. It will have a unique body and brain, and it will be quite different from us.

If you could combine a conscious artifact with a synthetic biological system, could you then create an artificial consciousness that is also alive?

Who knows? It seems reasonably feasible. In the future, once neuroscientists learn much more about consciousness and its mechanism, why not imitate it? It would be a transition in the intellectual history of the human race.

Do you believe a conscious artifact would have the value of a living thing?

Well, I would hope it would be treated that way. Even if it isn’t a living thing, it’s conscious. If I actually had a conscious artifact, even though it was not living, I’d feel bad about unplugging it. But that’s a personal response.


By proposing the possibility of artificial consciousness, are you comparing the human brain to a computer?

No. The world is unpredictable, and thus it is not an unambiguous algorithm on which computing is based. Your brain has to be creative about how it integrates the signals coming into it. And computers don’t do that. The human brain is capable of symbolic reference, not just syntax. Not just the ordering of things as you have in a computer, but also the meaning of things, if you will.

There’s a neurologist at the University of Milan in Italy named Edoardo Bisiach who’s an expert on a neuropsychological disorder known as anosognosia. A patient with anosognosia often has had a stroke in the right side, in the parietal cortex. That patient will have what we call hemineglect. He or she cannot pay attention to the left side of the world and is unaware of that fact. Shaves on one side. Draws half a house, not the whole house, et cetera. Bisiach had one patient who had this. The patient was intelligent. He was verbal. And Bisiach said to him, “Here are two cubes. I’ll put one in your left hand and one in my left hand. You do what I do.” And he went through a motion.

And the patient said, “OK, doc. I did it.”

Bisiach said, “No, you didn’t.”

He said, “Sure I did.”

So Bisiach brought the patient’s left hand into his right visual field and said, “Whose hand is this?”

And the patient said, “Yours.”

Bisiach said, “I can’t have three hands.”

And the patient very calmly said, “Doc, it stands to reason, if you’ve got three arms, you have to have three hands.” That case is evidence that the brain is not a machine for logic but in fact a construction that does pattern recognition. And it does it by filling in, in ambiguous situations.

How are you pursuing the creation of conscious artifacts in your work at the Neurosciences Institute?

We construct what we call brain-based devices, or BBDs, which will be increasingly useful in understanding how the brain works and modeling the brain. They may also be the beginning of the design of truly intelligent machines.

What exactly is a brain-based device?

It looks like maybe a robot, R2-D2 almost. But it isn’t a robot, because it’s not run by an artificial intelligence [AI] program of logic. It’s run by an artificial brain modeled on the vertebrate or mammalian brain. Where it differs from a real brain, aside from being simulated in a computer, is in the number of neurons. Compared with, let’s say, 30 billion neurons and a million billion connections in the human cortex alone, the most complex brain-based devices presently have less than a million neurons and maybe up to 10 million or so synapses, the space across which nerve impulses pass from one neuron to another.

What is interesting about BBDs is that they are embedded in and sample the real world. They have something that is equivalent to an eye: a camera. We give them microphones for the equivalent of ears. We have something that matches conductance for taste. These devices send inputs into the brain as if they were your tongue, your eyes, your ears. Our BBD called Darwin 7 can actually undergo conditioning. It can learn to pick up and “taste” blocks, which have patterns that can be identified as good-tasting or bad-tasting. It will stay away from the bad-tasting blocks (which have images of blobs instead of stripes on them) rather than pick them up and taste them. It learns to do that all on its own.

Why is this kind of machine better than a robot controlled by traditional artificial intelligence software?

An artificial intelligence program is algorithmic: You write a series of instructions that are based on conditionals, and you anticipate what the problems might be. AI robot soccer players make mistakes because you can’t possibly anticipate every possible scenario on a field. Instead of writing algorithms, we have our BBDs play sample games and learn, just the way you train your dog to do tricks.

At the invitation of the Defense Advanced Research Projects Agency, we incorporated a brain of the kind that we were just talking about into a Segway transporter. And we played a match of soccer against Carnegie Mellon University, which worked with an AI-based Segway. We won five games out of five. That’s because our device learned to pick up a ball and kick it back to a human colleague. It learned the colors of its teammates. It did not just execute algorithms.

It’s hard to comprehend what you are doing. What is the equivalent of a neuron in your brain-based device?

A biological neuron has a complex shape with a set of diverging branches, called dendrites, coming from one part of the center of the cell, and a very long single process called an axon. When you stimulate a neuron, ions like sodium and potassium and chloride flow back and forth, causing what’s called an action potential to travel down the neuron, through the axon, to a synapse. At the synapse, the neuron releases neurotransmitters that flow into another, postsynaptic neuron, which then fires too. In a BBD, we use a computer to simulate these properties, emulating everything that a real neuron does in a series of descriptions from a computer. We have a set of simple equations that describe neuron firing so well that even an expert can’t tell the difference between our simulation spikes and the real thing.
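[DIGRESSION]One well-known example of such compact spiking equations is the two-variable "simple model" published in 2003 by Izhikevich (the Eugene Izhikevitch mentioned a little further down). Whether the BBDs use exactly this formulation isn't stated in the interview, so take the sketch below as an illustration of the general idea rather than the Institute's actual code; the parameters are the textbook regular-spiking settings.

```python
# Sketch of Izhikevich's two-variable "simple model" of a spiking neuron:
# a fast membrane-potential equation plus a slow recovery variable, with a
# reset rule whenever the neuron spikes. Illustrative only.

def simulate(I=10.0, duration_ms=1000.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    v = -65.0          # membrane potential (mV)
    u = b * v          # recovery variable
    spike_times = []
    for step in range(int(duration_ms / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)   # fast voltage dynamics
        u += dt * a * (b * v - u)                        # slow recovery dynamics
        if v >= 30.0:                                    # spike: record, then reset
            spike_times.append(step * dt)
            v, u = c, u + d
    return spike_times

print(f"{len(simulate())} spikes in 1 s of simulated time at constant input current")
```

Four parameters (a, b, c, d) are enough to reproduce a whole zoo of firing patterns, which is why this family of models is attractive when you need a million simulated neurons rather than a handful.[/DIGRESSION]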

All these simulations and equations sound a lot like the artificial intelligence ideas that haven’t been very successful so far. How does your concept for a conscious artifact differ?

The brain can be simulated on a computer, but when you interface a BBD with the real world, it has the same old problem: The input is ambiguous and complex. What is the best way for the BBD to respond? Neural Darwinism explains how to solve the problem. On our computers we can trace all of the simulated neuronal connections during anything the BBD does. Every 200 milliseconds after the behavior, we ask: What was firing? What was connected? Using mathematical techniques we can actually see the whole thing converge to an output. Of course we are not working with a real brain, but it’s a hint as to what we might need to do to understand real brains.

When are we going to see the first conscious artifact emerge from your laboratory?

Eugene Izhikevitch [a mathematician at the Neurosciences Institute] and I have made a model with a million simulated neurons and almost half a billion synapses, all connected through neuronal anatomy equivalent to that of a cat brain. What we find, to our delight, is that it has intrinsic activity. Up until now our BBDs had activity only when they confronted the world, when they saw input signals. In between signals, they went dark. But this damn thing now fires on its own continually. The second thing is, it has beta waves and gamma waves just like the regular cortex—what you would see if you did an electroencephalogram. Third of all, it has a rest state. That is, when you don’t stimulate it, the whole population of neurons stray back and forth, as has been described by scientists in human beings who aren’t thinking of anything.

In other words, our device has some lovely properties that are necessary to the idea of a conscious artifact. It has that property of indwelling activity. So the brain is already speaking to itself. That’s a very important concept for consciousness.

October 5, 2008

Brain Basics!

An excellent article by Michael Craig Miller, M.D., which I found in Newsweek and which serves as a basic introduction to the false mind-brain dichotomy and other issues:


The brain is the mind is the brain. One hundred billion nerve cells, give or take, none of which individually has the capacity to feel or to reason, yet together generating consciousness. For about 400 years, following the ideas of French philosopher René Descartes, those who thought about its nature considered the mind related to the body, but separate from it. In this model—often called “dualism” or the mind-body problem—the mind was “immaterial,” not anchored in anything physical. Today neuroscientists are finding abundant evidence of an idea that even Freud played with more than 100 years ago, that separating mind from brain makes no sense. Nobel Prize-winning psychiatrist-neuroscientist Eric Kandel stated it directly in a watershed paper published in 1998: “All mental processes, even the most complex psychological processes, derive from operations of the brain.”

Neuroscientists consider it settled that the mind arises from the cooperation of billions of interconnected cells that, individually, are no smarter than amoebae. But it’s a shocking idea to some that the human mind could arise out of such an array of mindlessness. Many express amazement that emotions, pain, sexual feelings or religious belief could be a product of brain function. They are put off by the notion that such rich experiences could be reduced to mechanical or chemical bits. Or they worry that scientific explanations may seduce people into a kind of moral laziness that provides a ready excuse for any human failing: “My brain made me do it.” Our brains indeed do make us do it, but that is nonetheless consistent with meaningful lives and moral choices. Writing for the President’s Council on Bioethics earlier this year, philosopher Daniel Dennett made the point that building knowledge about the biology of mental life may improve our decision making, even our moral decision making. And it could enhance our chances of survival as a species, too.

Your heart, lungs, kidneys and digestive tract keep you alive. But your brain is where you live. The brain is responsible for most of what you care about—language, creativity, imagination, empathy and morality. And it is the repository of all that you feel. The endeavor to discover the biological basis for these complex human experiences has given rise to a relatively new discipline: cognitive neuroscience. It has recently exploded as a field, thanks, in part, to decades of advances in neuroimaging technology that enable us to see the brain at work. As Dr. Joel Yager, professor of psychiatry at the University of Colorado, has said, “We can now watch the mind boggle!”

Certainly, you won’t find an entry for “mind-boggling” in the index of a modern neuroscience textbook. You will also have a hard time finding the words “happiness” or “sadness,” “anger” or “love.” Neuroscientists do, however, have a rapidly growing appreciation of the emotional brain and are beginning to look closely at these subjective states, which were formerly the province of philosophers and poets. It is complex science that holds great promise for improving the quality of life. Fortunately, understanding basic principles does not require an advanced degree.

Fear Itself
Fear is a good place to start, because it is one of the emotions that cognitive neuroscientists understand well. It is an unpleasant feeling, but necessary to our survival; humans would not have lasted very long in the wilderness without it. Two deep brain structures called the amygdalae manage the important task of learning and remembering what you should be afraid of.

Each amygdala, a cluster of nerve cells named after its almond shape (from the Greek amugdale), sits under its corresponding temporal lobe on either side of the brain. Like a network hub, it coordinates information from several sources. It collects input from the environment, registers emotional significance and—when necessary—mobilizes a proper response. It gets information about the body’s response to the environment (for example, heart rate and blood pressure) from the hypothalamus. It communicates with the reasoning areas in the front of the brain. And it connects with the hippocampus, an important memory center.

The fear system is extraordinarily efficient. It is so efficient that you don’t need to consciously register what is happening for the brain to kick off a response. If a car swerves into your lane of traffic, you will feel the fear before you understand it. Signals travel between the amygdala and your crisis system before the visual part of your brain has a chance to “see.” Organisms with slower responses probably did not get the opportunity to pass their genetic material along. Fear is contagious because the amygdala helps people not only recognize fear in the faces of others, but also to automatically scan for it. People or animals with damage to the amygdala lose these skills. Not only is the world more dangerous for them, the texture of life is ironed out; the world seems less compelling to them because their “excitement” anatomy is impaired.

Anger Management
Until recently, there was relatively little research showing how the brain processes anger. But that has begun to change. Recent studies indicate that anger may trigger activity in a part of the brain not named as poetically as the amygdala—the dorsal anterior cingulate cortex (abbreviated dACC). Like the amygdala, the dACC’s function makes sense, given its connections to areas of the brain involved in recognizing an offense (he just stole my iPod), registering a feeling (I’m angry) and acting on it (I’m going to …). It also links to the reasoning centers in the front part of the brain, as well as memory centers, which play a role in angry rumination or stewing after the fact.

Researchers, however, have been more focused on one of the consequences of anger—aggression—probably because it can be observed through behavior. It’s known, for example, that men are overtly more aggressive than women because of differences in male and female hormones. But the brains of men and women are also different, and some of those differences may affect aggression. In the front of the brain, the orbitofrontal cortex is recruited to help make decisions and temper emotional responses. It lights up when people are making judgments. Adrian Raine and colleagues at the University of Southern California note that, on average, men have a lower volume of gray matter (the bodies of nerve cells) in the orbitofrontal cortex than women. According to their analysis, this brain difference accounts for a healthy portion of the gender gap seen in the frequency of antisocial behavior.

Even a neuroscientist can see that murder and mayhem are undesirable. But a neuroscientist can also see why that trait might still be in the gene pool. The gene for sickle cell anemia survived because it provided protection against another disease, malaria. Similarly, aggression is often an advantage. Until recently in historical terms, a readiness to fight and the ability to kill was a way to consolidate control over resources for survival. Fortunately, diplomats have also evolved. Some of our ancestors who understood that aggression carried risks as well as advantages used their creative human brains to devise better solutions for resolving conflicts. Our predecessors also originated symbolic diversions for aggression, like sports and chess.

So Happy (and Sad) Together
The common emotions of sadness and happiness are a problem for researchers. Depression and mania are core areas of study for a neuroscientist. But everyday ups and downs are so broadly defined that researchers have a hard time figuring out what exactly to study. They note activity in virtually every part of the brain. Last year Drs. Peter J. Freed and J. John Mann, publishing in The American Journal of Psychiatry, reported on the literature of sadness and the brain. In 22 studies, brain scans were performed on nondepressed but sad volunteers. Sadness was mostly induced (subjects were shown sad pictures or films, asked to remember a sad event), although, in a couple of studies, subjects had recently experienced a loss. In the aggregate, sadness appeared to cause altered activity in more than 70 different brain regions. The amygdala and hippocampus both show up on this list, as do the front part of the brain (prefrontal cortex) and the anterior cingulate cortex. A structure called the insula (which means “island”) also appears here—it is a small region of cortex beneath the temporal lobes that registers body perceptions and taste.

The authors believe this complicated picture makes sense. The brain regions on their list process conflict, pain, social isolation, memory, reward, attention, body sensations, decision making and emotional displays, all of which can contribute to feeling sad. Sadness triggers also vary—for example, the memory of a personal loss; a friend stressing over a work conflict; seeing a desolate film.

In the brain, happiness is as widely distributed as sadness. In his book “This Is Your Brain on Music,” Dr. Daniel Levitin notes that music simultaneously enlists many parts of the brain. We listen and respond to sounds and rhythms (auditory, sensory and motor cortex, cerebellum). We interpret (sensory cortex) and reason (prefrontal cortex). Music pulls on memories for experience and emotion (amygdala and hippocampus). If the music is working for you, it is probably triggering the reward system (nucleus accumbens). And if you’re playing it, as Dr. Levitin does, you also get to throw satisfaction into the mix. This may or may not be a description of happiness, but it certainly coincides with the notion of flow, described by the author Dr. Mihály Csíkszentmihályi: concentrated attention and the absence of self-consciousness. A neuroscientist might say that a life that fully engages your brain in these ways is a life worth living.

Faith, Love and Understanding
The challenge to cognitive neuroscientists becomes greater as we go up the ladder to more-complicated emotional states. And the stakes become higher, too, because research into such highly valued and personal mental processes can be easily misunderstood.

Empathy is more than being nice. It is the ability to feel what another person feels, and in its most refined form it is the capacity to deeply understand another person’s point of view. The brain’s empathic powers actually begin with fear detection. Most of us are extraordinarily skilled face readers. We readily act on the emotions communicated to us through facial expression. And the grammar of facial expression, in some instances, is plain. We are masters at telling when a smile is insincere by the absence of wrinkles (called Duchenne lines) around the smiler’s eyes. In a spontaneous smile, the corners of the mouth curl up and muscles around the eyes contract. Duchenne lines are almost impossible to fake.

In the Marx Brothers movie “Duck Soup,” Groucho encounters his brother Chico in a doorway, dressed like him in a nightshirt and cap and a fake mustache. They perform a famous version of the mirror routine, Chico copying Groucho’s actions. The humor may derive, at least in part, from humans’ highly developed skill as copycats. When you observe someone eating ice cream or stubbing a toe, the brain regions that are activated in the eater and the stubber are also activated in you.

But empathy depends on more than an ability to mirror actions or sensations. It also requires what some cognitive neuroscientists call mentalizing, or a “theory of mind.” Simon Baron-Cohen, a leading researcher in the study of autism, has identified the inability to generate a theory of mind as a central deficit in that illness. He has coined the term “mindblindness” to designate that problem. The corollary, “mindsightedness,” requires healthy function in several areas of the brain. The processing and remembering of subtle language cues take place toward the ends of the temporal lobes. At the junction of the temporal and parietal lobes, the brain handles memory for events, moral judgment and biological motion (what we might call body language). And the prefrontal cortex handles many complex reasoning functions involved in feelings of empathy.

Not surprisingly, love also engages a whole lot of brain. Areas that are deeply involved include the insula, anterior cingulate, hippocampus and nucleus accumbens— in other words, parts of the brain that involve body and emotional perception, memory and reward. There is also an increase in neurotransmitter activity along circuits governing attachment and bonding, as well as reward (there’s that word again). And there’s scientific evidence that love really is blind; romantic love turns down or shuts off activity in the reasoning part of the brain and the amygdala. In the context of passion, the brain’s judgment and fear centers are on leave. Love also shuts down the centers necessary to mentalize or sustain a theory of mind. Lovers stop differentiating you from me.

Faith is also being studied. Earlier this year the Annals of Neurology published an article by Sam Harris and colleagues exploring what happens in the brain when people are in the act of either believing or disbelieving. In an accompanying editorial, Oliver Sacks and Joy Hirsch underscored the significance of what the researchers found. Belief and disbelief activated different regions of the brain. But in the brain, all belief reactions looked the same, whether the stimulus was relatively neutral: an equation like (2+6)+8=16, or emotionally charged: “A Personal God exists, just as the Bible describes.”

By putting a big religious idea next to a small math equation, some readers might think the researchers intend to glibly dismiss it. But a discovery about brain function does not imply a value judgment. And understanding the reality of the natural world—how the brain works—shouldn’t muddle the big questions about human experience. It should help us answer them.

July 5, 2008

Project Tidbits

It ain't really good practice to blog about the research one is doing, lest it get nicked from right under your nose, so I'm not gonna spill any details of the project I'm involved in now. Except to say that it is going to be very exciting to break new ground in a relatively under-researched area of neuropsychology. :-)

Nothing to stop me from dishing out a few tidbits with some details changed though, and here's one:

"Neurophysiologically, a recent study by Eisenberger, Lieberman, and Williams (2003) demonstrated that the social pain of ostracism is similar to physical pain at the neurophysiological level ... The exclusion of participants led to increased activity in their anterior cingulate cortex (ACC), and the right ventral prefrontal cortex (RVPFC). The ACC is the same region active during physical pain, suggesting that ostracism taps primal reactions of hurt. Self-reports of distress were highly correlated with ACC activity. The activity in the RVPFC moderated feelings of distress for the intentional (but not intentional) ostracism, suggesting that this regulates the ACC activity."
Cool, huh? ;-)

To think that intentionally or unintentionally excluding people from activities activates the same neural pain centres as when you're getting punched. Something to think about there....