September 14, 2009
Thugs left man with half a head
By STAFF REPORTER
Published: 11 Sep 2009
VICIOUS thugs who punched this man so hard he was left with HALF A HEAD have got off scot-free.
Horrified Steve Gator had to have the front of his skull removed by stunned surgeons after his head was smashed against a pavement in the sickening attack. And now the 26-year-old has been told that the teen attackers who disfigured him will escape justice after his case was dropped.
Steve, of Romford, Essex, was attacked after confronting one of the yobs who had been taunting him about his cousin. Another of the violent louts hit him so hard that he was sent flying and struck his head on the path. Steve plunged into a coma for two weeks as his shattered mum and distraught family kept a bedside vigil at Queen's Hospital, Romford.
His brain quickly began swelling and surgeons were forced to remove the front half of his skull just hours after he was admitted.
Grief-stricken mum Nina Gator was warned her son had just a terrifying 15 per cent chance of survival. Two days later cops charged a pair of teenage boys with the savage attack which shocked the neighbourhood. Steve, who has had to quit his job, was left seriously brain damaged and now suffers frequent seizures, has difficulty talking, and his memory is seriously impaired. Mrs Gator, who is his main carer, last night blasted the shock move. The 47-year-old said: "I can't believe it. Everyone is entitled to their day in court."
CPS lawyers claim they needed more proof before going ahead with the case. But Mrs Gator stormed: "Our boy is walking around with half a head - what more evidence do they need? "His sparkle is totally gone. He used to be so independent but he can't work any more and he can't drive." She added: "He's got half a head and he's completely lost his confidence. There's absolutely nothing protecting his brain now it's just under his skin."
Just from looking at the picture, it seems obvious that with this traumatic brain injury (TBI) his frontal lobes are practically destroyed, and quite possibly the front parts of his midbrain too. The frontal lobe is an extremely important structure responsible for a variety of functions. It is the 'Command HQ' for emotions, and controls and regulates functions such as memory, language, movement, and problem-solving. It is also responsible for more subtle things like judgment, planning, reasoning, spontaneity or impulse control, and some aspects of social and sexual behaviour. As such, the frontal lobe administers much of our very personality and sense of identity. It is also the largest of the lobes, meaning there is simply more of it exposed to the risk of damage. As the story mentions, Gator's "sparkle is totally gone". It is tempting to draw parallels with the tale of Phineas Gage, another individual dubiously famed for frontal lobe damage.
A friend, The Neurocritic, pointed out that Gator may need several cranioplasties in order to rebuild his skull, and highlighted a recent Neurosurgical Focus literature review discussing the post-operative complications associated with the surgical procedure Gator underwent. Known as a decompressive craniectomy, it consists of removing part of the skull in order to allow the swelling brain to expand without being squeezed. The first complication is contusion blossoming: the surgery leaves massive bruises, which can be observed by comparing pre-op and post-op CT scans.
Lesions - a mass lesion may develop on the side of the brain opposite the injury, or elsewhere in the brain. As Gator's frontal lobes were destroyed, it is possible that a lesion may develop towards the back and affect the parietal lobes, which deal generally with perception, orientation and recognition.
Herniation - a small protrusion (or more) of neural tissue may remain in the early period after swelling subsides, sometimes through the cranial defect as is observed with 'normal' skin hernias. Gator has no such defect though, as the front of the skull was smashed.
Subdural Effusions - a collection of fluid beneath the outer lining of the brain. This condition usually results from bacterial meningitis, but because craniectomies affect the circulation of cerebrospinal fluid (CSF), fluid may also accumulate after surgery, behaving much like a blood clot. Hygromas may also occur, which are buildups of CSF without blood. To counteract these, a craniectomy should be accompanied by a duraplasty, a reconstructive operation on the dura mater, the outermost, fibrous membrane covering the brain and spinal cord. Duraplasties have been observed to lower the incidence of subdural effusions.
Infection - this may seem an obvious risk of any medical procedure, but craniectomies (bone removal) necessitate later cranioplasties (bone reconstruction). Re-opening old scars and exposing the brain a month or so after the incident runs the risk of infection and delayed healing. The review suggests a minimum wait of 3 months before replacing the bone, and notes that storage of the bone in a freezer can also increase the risk of infection.
Hydrocephalus - "water on the brain", refers to accumulations of CSF in neural cavities. This is unfortunately a common occurrence beyond a month after the injury, and will need specialised procedures (shunt treatment) to deal with it if it occurs.
Syndrome of the Trephined - another common occurrence after decompressive craniectomies, the typical symptoms of which include dizziness, headaches, concentration difficulties, mood disturbances, irritability, and memory problems. Because Gator's particular situation involved the destruction of his frontal lobes, he will unfortunately suffer much worse symptoms than these. When motor functions are affected, the condition is known as motor trephine syndrome.
Bone resorption - after a decompressive craniectomy there are likely to be stray bone fragments left behind, and there is around a 50% chance that bone resorption will occur, in which bone cells known as osteoclasts break down the bone and release minerals such as calcium directly into the blood.
Persistent vegetative state - clearly the saddest outcome of all extreme brain injuries. While decompressive craniectomies are effective at relieving intracranial pressure and reducing the risk of death, they offer no guarantee of restoring brain function after a TBI. The risk of surviving into a vegetative or minimally conscious state after undergoing craniectomy runs upwards of 15-20%.
It may be that Steve Gator's clinicians need to be vigilant and ensure that his treatment is as risk-free as possible. And of course, we wish him all the best for his recovery.
Stiver, S. (2009). Complications of decompressive craniectomy for traumatic brain injury. Neurosurgical Focus, 26(6). DOI: 10.3171/2009.4.FOCUS0965
September 8, 2009
An interesting report in New Scientist magazine suggests that insults are handled better when lying down than when sitting or standing up. According to the article, university students who were insulted while seated exhibited neural activity consonant with "approach motivation", which describes the desire to approach and explore. This activity appeared absent in a control group insulted while lying down. Eddie Harmon-Jones, a cognitive scientist at Texas A&M University, interprets this as suggesting that one might be more inclined to attack while upright, whereas while lying down we may be more inclined to brood.
At first glance this seems a little odd to me. Brooding is quite different to receiving insults and possibly reacting to them. Brooding means a certain amount of thinking and contemplation is occurring. It isn't the done thing to offer or accept anecdotal evidence as important fact, but from personal experience I've sometimes become more enraged over an incident by brooding about it (while lying down) than I have reacted to insults while sitting or standing upright. Would that mean my reactions contradict this research? The real value of psychological research lies in the ability to translate insights and findings into our lives and observe how relevant or useful they are, and I also have to consider these things personally. I downloaded and read the paper for this experiment; technically it is not an actual paper but a 'short report', a brief description of the subject and experimental method followed by conclusions. A mini-paper. Here's an extract:
"Body movements affect emotional processes. For example, adopting the facial expressions of specific emotions (even via unobtrusive manipulations) affects emotional judgments and memories (Laird, 2007). Manipulated body postures can affect behavior: slumped postures lead to more ‘‘helpless behaviors’’ (Riskind & Gotay, 1982). Simple body postures may also affect other emotive responses and the neural activations associated with them."
That's from the very first paragraph, and to me it seems to get more unreal every time I think about it. I don't dispute that body postures can affect neural activation (anything can affect neural activation; reacting and responding to stimuli is what the brain does in the first place), but it seems a bit overstated. The link between body posture and emotional reactivity looks tenuous when compared with something as fundamental as the availability of oxygen and the human requirement to inhale it to live. But let's take a look at the study: 23 females and 23 males (n = 46) were asked to write a polemical essay featuring their views on a hot topic (e.g. smoking in public, abortion, etc.) and were told assessment would be carried out by another participant. After attaching EEG sensors, participants were randomly assigned to an upright or lying position on a reclining chair while hearing themselves being rated on six characteristics including intelligence (1 = unintelligent, 9 = intelligent). Needless to say, participants heard negative reviews of themselves and fumed.
To be more specific, all 'reclined' participants heard negative reviews of themselves, while only half of the 'upright' participants did; the other half heard slightly positive reviews. It's good to add a little variety to these things to account for different causes and effects, but I think the total sample size here was too small. Gender effects were accounted for too: males and females were randomly assigned to the two conditions, and male participants heard male-voiced feedback while females heard female-voiced feedback. For future research, switching the gender of the feedback voice would make an interesting manipulation.
The results showed that in those in the upright position, the left prefrontal cortex (PFC) was substantially more activated than in those who were reclining. Even though both sets of participants expressed similar levels of anger in response to the negative feedback, the left PFC has been linked to anger and approach motivation, which suggests a marked reduction in approach motivation when lying down.
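As an aside, in this literature relative left-frontal activation is commonly inferred from EEG alpha-band power, since alpha is inversely related to cortical activation. Purely as an illustration, here is a minimal sketch of how such a frontal asymmetry index is typically computed; the synthetic signals, channel roles, sampling rate and the 8-13 Hz band are my assumptions for the demo, not details taken from the paper:

```python
import numpy as np
from scipy.signal import welch

def alpha_power(signal, fs=256.0):
    """Mean power spectral density in the 8-13 Hz alpha band."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(fs * 2))
    band = (freqs >= 8) & (freqs <= 13)
    return psd[band].mean()

def frontal_asymmetry(left_ch, right_ch, fs=256.0):
    """ln(right alpha) - ln(left alpha). Because alpha power is inversely
    related to activation, a positive score indicates relatively greater
    LEFT frontal activation."""
    return np.log(alpha_power(right_ch, fs)) - np.log(alpha_power(left_ch, fs))

# Synthetic demo: 10 s of noise for two frontal channels, with a stronger
# 10 Hz alpha rhythm on the right channel (i.e. the left side more "active").
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 256.0)
left = rng.normal(size=t.size) + 0.5 * np.sin(2 * np.pi * 10 * t)
right = rng.normal(size=t.size) + 2.0 * np.sin(2 * np.pi * 10 * t)
print(frontal_asymmetry(left, right))  # positive => greater left activation
```

The upright group in the study would, on this kind of index, show more positive (left-dominant) scores than the reclining group.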
What this means in reality remains under question: Does body posture really affect emotional reactions that much? Similar levels of anger existed between both groups, but those who were lying down appeared less inclined to do something about it? How might those students have reacted with the absence of inhibitory factors? I know that this is preliminary research but these are just some of the questions that need to be researched and accounted for.
Why? Because although some people may consider a study like this to be "fluff psychology" and a little boring, clinicians need to take these types of things a little more seriously when you consider that a large proportion of serious neuroscience is carried out with reclining participants in fMRI-scanners. So I agree with the conclusion of Harmon-Jones' paper; that research is required to help evaluate neuroimaging techniques requiring supine positions. There may not be much to it, but it's worth an exploration.
Harmon-Jones, E., & Peterson, C. (2009). Supine body position reduces neural response to anger evocation. Psychological Science. DOI: 10.1111/j.1467-9280.2009.02416.x
July 23, 2009
"A 10-year-old girl born with half a brain has both fields of vision in one eye, scientists said today. The youngster, from Germany, has the power of both a right and left eye in the single organ in the only known case of its kind in the world." BBC News goes further with:
"University of Glasgow researchers used Functional Magnetic Resonance Imaging (fMRI) to reveal how the girl’s brain had rewired itself in order to process information from the right and left visual fields in spite of her not having a whole brain."
"In the case of the German girl, her left and right field vision is almost perfect in one eye. Scans on the girl showed that the retinal nerve fibres carrying visual information from the back of the eye which should have gone to the right hemisphere of the brain diverted to the left ... 'Despite lacking one hemisphere, the girl has normal psychological function and is perfectly capable of living a normal and fulfilling life. She is witty, charming and intelligent.'" Get that? The only known case in the world where brain plasticity (the ability of the brain to reorganise itself after injury) is displayed for all to see. Plasticity doesn't always work this way; there are many cases where its effects haven't restored all or most of the impaired brain function. Epilepsy patients, for example, who undergo a hemispherectomy (removal of half of the brain) in order to prevent the onset of severe seizures, tend among other things to lose an entire field of vision in both eyes; they only see people and objects in one half of their visual field, as in the illustration below:
Neither was this a case of brain injury; the anonymous girl (known only as 'AH') failed to adequately develop her cerebral right hemisphere in the womb. As a result, she is without a right-brain and also without the use of her right eye. She also has a slight left-hemiparesis (weakness affecting half of the body) but close to normal vision in both hemifields of her normal left eye.
In a study published by the Proceedings of the National Academy of Sciences (PNAS), a team led by Lars Muckli of the University of Glasgow used fMRI to investigate how the visual cortex had remapped itself. In a healthy individual, the cerebral cortex contains "maps" for vision, sound, motion and touch, which develop and modify over time depending on several factors including genetic cues and neural activity. In the mammalian brain (including the human brain) the visual cortex is made up of distinct sections dealing with vision, the main one being an area known simply as 'V1', the primary visual cortex. 'V2' deals with quarterfield representations, effectively handling the 'up' and 'down' quarters of both the right and left hemifields, while 'V3' is a structure in front of V2 that, among other things, performs a supporting role for V2. There are also retinotopic maps, direct mappings of the spatial arrangement of the retina, located in visual structures including the cortex and thalamus.
As per materials provided by the University of Glasgow, "visual information is gathered by the retina at the back of the eye and images are inverted when they pass through the lens of the pupil so that images in your left field of vision are received on the right side of the retina, and images from the right are received on the left." The part of the retina close to the nose is known as the nasal retina whereas the other part is referred to as the temporal retina, being in proximity to the temples. Both halves transmit received information through separate nerve fibres. In a normal situation, the nerve fibres of the nasal retina cross over in the optic chiasm, a brain structure located at the bottom of the brain near the hypothalamus, and are processed by the hemisphere on the opposite side. The nerve fibres of the temporal retina remain in the same hemisphere (ipsilateral), meaning that the left and right visual fields described earlier are processed by opposite sides of the brain.
[DIGRESSION]Vision is not the only modality to be processed in this strange way. It actually reflects the larger processing activity of the intact brain, which tends to process the other modalities on opposite sides as well. Touch and hearing, for example, that are "entered" into the right side of the body (right hand, right ear) are processed by the left-brain, and touch/hearing entered into the left body/ear is processed by the right-brain. This is generally referred to as contralateral processing: input processed by the 'opposite' half of the brain. Input processed by the 'same' side of the brain is known as ipsilateral processing. For more information, please read about Basic Visual Pathways.[/DIGRESSION]
The MRI scan displays the complete lack of a right-hemisphere: The optic chiasm is shown here (top l-r) in the transverse and enlarged transverse planes, and (bottom l-r) in the coronal and sagittal planes. A rudimentary optic nerve is pointed out in the enlargement by the green arrow but with no discernible optic tract, and it can also be seen how the left-hemisphere is spilling over into the right-domain. The vacant right-hemisphere is filled with cerebrospinal fluid (CSF).
In AH's fascinating case, it was found that the nasal retinal nerve had connected to her left-brain. A possible interpretation for AH's condition is suggested by the authors: The lack of a right-brain prevented an opposite connection from being made, which led the optic nerve fibers to "connect" with ipsilateral structures instead.
Remembering that normal cases require a crossing in the optic chiasm, and that AH's connections were essentially ipsilateral, how exactly does AH see both visual fields with only one eye? After all, if the entire right hemisphere is missing, AH should see only the left hemifield. The answer lies with the Lateral Geniculate Nucleus (LGN), a structure embedded deep in the thalamus which processes visual information from the retina. In AH, both the nasal and temporal retina would need to be mapped onto the LGN to allow for the processing of both hemifields. Again, ipsilateral projections were suggested as the solution, instead of the usual contralateral connections, with a mirror-symmetric representation of the hemifields received and processed by the thalamus. Similar cases have been seen in achiasmatic dogs, where optic nerve fibres terminated in the ipsilateral LGN.
'Islands' were also found to have formed in the left-hemisphere to deal especially with processing of the left hemifield, to compensate for the missing right-brain activity.
The loss of AH's right-hemisphere was discovered at age 3 when she was treated for brief seizures and twitching taking place on her left side. It is speculated that the right-brain failed to develop between Day 28 and Day 49 of embryonic development. Despite the situation, she is able to engage quite capably in activities that require a fair amount of balance, such as riding a bicycle or roller-skating. Truly an extraordinary case in more ways than one.
For a professional view, please see Dr. Steven Novella's entry on this case.
Muckli, L., Naumer, M., & Singer, W. (2009). Bilateral visual field maps in a patient with only one hemisphere. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.0809688106
July 8, 2009
After scouting through the interwebz for a more scientific explanation, I discovered that Vaughan Bell had written up a good explanation on his excellent Mind Hacks blog. I hope he won't mind me nicking it, but I think it's that good that it deserves repetition:
According to press reports Michael Jackson will be buried without his brain because it is still 'hardening'. Although this may seem unusual, the 'hardening' process is actually a standard part of any post-mortem examination where the brain is thought to be important in the cause of death, such as in suspected overdose.
It involves removing the brain from the skull and leaving it to soak in a diluted mixture of formaldehyde and water called formalin. This soaking process usually takes four weeks and the brain genuinely does harden.
A 'fresh' brain is a pinkish colour and has the consistency of jelly, jello or soft tofu, meaning it is difficult to examine and the various internal structures are often hard to make out.
After soaking the brain, it has the consistency and colour of canned mushrooms making it easier to slice, examine and photograph. However, because the brain is so soft to start with, it can't just be dropped in a tank of fixing solution, because it will deform under its own weight.
To solve the problem it is usually suspended upside down in a large bucket of formalin by a piece of string which is tied to the basilar artery.
After it has 'hardened' or 'fixed' it is sliced to look for clear damage to either the tissue or the arteries. Small sections can also be kept to examine under the microscope.
Because this part of the post-mortem takes several weeks preparation it is usually only carried out with the family's permission as the body may need to be buried without it, or the burial delayed until the procedure is finished.
This also means that this form of post-mortem brain examination is usually only carried out where there is a feeling that examining the brain can help clarify the cause of death - which is what pathologists are often most concerned with.
In cases such as Michael Jackson's, where the effects of drugs are suspected to play a part, pathologists will be looking for evidence of both sudden-onset and long-term brain damage. If they find it, they'll be trying to work out how much it could have been caused by drug use and how much it contributed to the death.
So now you know.
June 25, 2009
P. Murali Doraiswamy is the head of biological psychiatry at Duke University and is a Senior Fellow at Duke’s Center for the Study of Aging. He’s also the co-author of The Alzheimer’s Action Plan, a guide for patients and family members struggling with the disease. Mind Matters editor Jonah Lehrer chats with Doraiswamy about recent advances in Alzheimer’s research and what people can do to prevent memory loss.
What do you think are the biggest public misconceptions of Alzheimer's disease?
The two biggest misconceptions are “It’s just aging” and “It’s untreatable, so we should just leave the person alone.” Both of these misconceptions are remnants of an outdated view that hinders families from getting the best diagnosis and best care. They were also among the main reasons I wanted to write this book.
Although old age is the single biggest risk for dementia, Alzheimer’s is not a normal part of aging. Just ask any family member who has cared for a loved one with Alzheimer’s and they will tell you how different the disease is from normal aging. Alzheimer’s can strike people as young as their forties; there are some half a million individuals in the United States with early-onset dementia. Recent research has pinpointed disruptions in specific memory networks in Alzheimer’s patients, such as those involving the posteromedial cortex and medial temporal lobe, that appear distinct from normal aging.
The larger point is that while Alzheimer’s is still incurable it’s not untreatable. There are four FDA-approved medications available for treating Alzheimer symptoms and many others in clinical trials. Strategies to enhance general brain and mental wellbeing can also help people with Alzheimer’s. That’s why early detection is so important.
Given the rapid aging of the American population - by 2050, the Alzheimer's Association estimates there will be a million new cases annually - what are some of the steps people can take to prevent or delay the onset of the disease?
Unfortunately, there isn’t yet a magic bullet for prevention. You can pop the most expensive anti-aging pills, drink the best red wine, and play all the brain games that money can buy, and you still might get Alzheimer’s. While higher education is clearly protective, even Nobel Laureates have been diagnosed with the disease, although it’s likely their education helped them stave off the symptoms for a little bit.
My approach is more pragmatic - it’s about recognizing risks and designing your own brain health action plan. The core of our program is to teach people about the growing links between cardiovascular markers (blood pressure, blood sugar, body weight and BMI, blood cholesterol, C-reactive protein) and brain health. A population study from Finland has developed a fascinating scale that can predict 20-year risk for dementia – sort of a brain aging speedometer. Obesity, smoking, lack of physical activity, high blood pressure, and high cholesterol are some of the culprits this study identified. So keeping these under control is crucial.
Depression is another risk factor for memory loss, so managing stress and staying socially connected is also important. B vitamins may prevent dementia in those who are deficient and there are some simple blood tests that can detect this. For the vast majority of people, however, there are no prescription medications that have been proven to prevent dementia. This means that a brain-healthy lifestyle is really our best bet for delaying the onset of memory loss.
In the near future we will likely have prevention plans that are personalized based on genetic, metabolic and neurological information. In familial Alzheimer’s disease, pre-implantation genetic diagnosis has already been used to successfully deliver babies free of a deadly Alzheimer causing mutation—though only time will tell if deleting such dementia risk genes in humans has other consequences.
Your book talks about a new technique that allows doctors to image amyloid plaques in the brain. How will these change the diagnosis of the disease?
Amyloid PET scans are in the late stages of validation testing to see if they can improve the accuracy of clinical diagnosis. The Alzheimer’s brain is defined by beta-amyloid plaques and tangles but, at present, these can only be definitively diagnosed with an autopsy. If an amyloid PET scan is “plaque negative” that will tell a doctor that Alzheimer’s is unlikely to be the diagnosis and help reassure the family. Early findings suggest that people who carry risk genes are more likely to have plaque positive scans even before they develop symptoms - suggesting that the scans could possibly be useful for predicting future risk. If true, this might eventually lead to a change in diagnostic terminology where “preclinical” Alzheimer’s is diagnosed purely based on biomarker and scan findings long before memory symptoms start. Therapies to treat Alzheimer’s by blocking amyloid plaques are already in trials but are currently given blindly to patients without knowing their brain plaque status—raising their risk for side effects and treatment failure. So this scan may also help drug development by helping select the most appropriate subjects for treatment and then monitoring treatment effects. Amyloid accumulation with aging is seen in many animal species and the scan offers us a tool to study what role plaque plays in normal brain aging. So this could do for the brain what colonoscopy did for the gut!
Will science ever find a cure for Alzheimer's?
It’s an incredibly tough puzzle to crack but the pace of research is so great that new drug targets are being reported daily. I think a form of cure is more likely to come from delaying the onset rather than by growing new brain cells to repair lost tissue. Realistically speaking there are several fundamental questions we don’t fully understand and have yet to answer: What causes the disease? Why do plaques and tangles form? Why are the memory centers the first to be destroyed? On the positive side, there are several dozen drugs in clinical trials.
What recent scientific advances in treating or understanding Alzheimer's are you most excited about?
I’m most excited about diagnostic advances. By using a combination of biomarkers, genetic tests and new brain scans, we are inching very close to predicting not only who will develop Alzheimer’s but the exact age when they may start developing symptoms. This offers huge opportunities for conducting prevention trials. Of course, it also brings a whole host of ethical challenges, since our diagnostic and predictive abilities are advancing far faster than our ability to prevent Alzheimer’s.
On the treatment side, there are several developments that I am excited about. The interactions between vascular disease and memory loss suggest that at least some aspects of Alzheimer’s may be modifiable through diet and exercise. Dimebon, a drug that improves mitochondrial function, has yielded promising results and is in final stages of testing. In addition, therapeutic strategies which target the brain’s own ability to repair itself – for example, by delivering nerve growth factor through viral vectors – are in clinical trials. Until we have a cure, however, it’s really important to focus on improving the quality of life of people with Alzheimer’s.
June 11, 2009
Those of us who are familiar with scientific research into paranormal phenomena are keenly aware that such experiments have almost always reported nothing of substance, lending credibility to the idea that, when tested under sufficient scrutiny, these psychic powers tend to fail. This has been a consistent finding when testing various instances of so-called psychic ability, and on that basis it isn't too much to expect this experiment to come up empty as well. However, informal experiment that it is, the application of stringent scientific principles to a wholly randomised and sufficiently chaotic source such as Twitter was an interesting exercise. I don't know if a journal paper will come out of this, but it should make interesting reading.
Wiseman carried out his experiment in the following way. At 15:00 (GMT) each day he travelled to a randomly selected location and sent a 'tweet' (message) on Twitter, asking his participants to tweet back their impressions of his location. Thirty minutes later, he posted another tweet linking to a website containing photographs of five different locations (Wiseman's actual location and four decoys), arbitrarily labelled 'A' to 'E'. Participants were asked to view the photographs, concentrate their abilities, and then vote on the location they believed was correct. They were also asked their gender, to rate their belief in the paranormal, and whether they believed they had psychic ability. Voting remained open for 1 hour. If the majority of people selected the correct location, the trial would count as a success. Before the experiment proper, Wiseman ran a practice trial to check the procedure and familiarise participants with it. After some necessary ironing out of the details, Wiseman proceeded to carry out four experimental trials on four successive days, with three or more successful trials considered as evidence of ESP.
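It's worth asking how strict that 3-of-4 criterion is under pure chance. With five photographs per trial, suppose a blind majority vote lands on the correct one with probability p = 1/5 (an assumption: it treats the crowd's majority choice as effectively random). The chance of meeting the criterion by luck is then a quick binomial sum; the helper function below is mine, not part of Wiseman's design:

```python
from math import comb

def p_at_least(k, n, p):
    """Probability of k or more successes in n independent trials,
    each succeeding with probability p (binomial upper tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Four trials, assumed chance 1/5 of a correct majority vote per trial:
chance = p_at_least(3, 4, 0.2)
print(round(chance, 4))  # → 0.0272
```

So under that assumption, a fluke pass happens less than 3% of the time, which makes the criterion a reasonably demanding bar for an informal study.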
The experiment was carried out as outlined above and the results of the trials were posted at the end of each day at Wiseman's blog (Trial 1, Trial 2, Trial 3, Trial 4). More than a thousand participants were reported to have taken part, with believers in paranormal phenomena claiming a high level of correspondence between their thoughts and the actual locations.
The results of the experiment were also posted on Wiseman's blog, essentially stating no differences in choice between paranormal believer and non-believers. The experiment thus failed to support the existence of remote viewing, and suggested that participants claiming paranormal belief were only proficient at claiming illusory correspondences between their thoughts and actual targets.
Certainly this was not an experiment conducted under orthodox conditions, and a number of variables went uncontrolled. Even so, it seems that an informal study using basic scientific procedures and relying on user input can generate interesting results, even non-significant ones. Wiseman says he hopes to provide further post-hoc analyses, such as differences between paranormal believers and sceptics, or between males and females; one update so far notes that participants who claimed psychic ability, and high confidence in their choice of target location, scored zero out of four. Surprise, surprise.
As I mentioned, it is unknown whether a serious analysis can be made of this strategy or whether a journal paper will be published, but I think that even without the stamp of authority given to 'orthodox' experiments, this study is still consistent with those orthodox studies of paranormal phenomena that reported insubstantial results. Not a good day for psychics.
May 28, 2009
Epidermoids tend to have a smooth grey surface and contain friable, waxy material. An epidermoid differs from a dermoid cyst in that it tends to connect to and envelop adjacent structures, whereas dermoids usually have defined boundaries. From the outside, its presence can be detected as a mobile, rubbery mass that presents as a cosmetic deformity.
Operative removal must be undertaken with care, as spillage of the tumour can occur which may lead to forms of meningitis or ventriculitis.
- Source: NeuroWiki.
May 27, 2009
The composition and interpretation of music through song, dance, and instrumental playing are complex, high-level tasks of the creative brain. Indeed, the 'creative' aspects of personality are thought to constitute a particular division of intelligence in themselves. Although it is possible to gain a certain proficiency in playing the works of Beethoven and Mozart through social and environmental factors (parental support, music school), the phenomenon of the child prodigy does suggest an innate genetic basis for talent. Creativity itself is a complex process that draws largely on areas of the right hemisphere, without much activation of the frontal lobes or cortices. And since we are talking mainly of cognitive processes, we can expect hormones such as arginine vasopressin (AVP), which helps to regulate higher functions such as memory and learning, to take a lead role. Given that this hormone is mediated by the AVP receptor 1A (AVPR1A) gene, which affects many behavioural, social and emotional traits such as male aggression, pair bonding, altruism, parenting, sibling relationships, and love, it stands to reason that this key gene is the one to watch.
A team of researchers at Helsinki University, headed by Liisa Ukkola, carried out a study investigating the neurobiological basis of music in human evolution by analysing the role of the AVPR1A gene and five others, and their effects on general creativity and musical aptitude, in 343 multigenerational participants from 19 Finnish families, professional and amateur musicians alike. Ages ranged from 9 to 93 (mean age 43), and DNA was obtained from 298 (86.9%) of those over age 15. Three measures were administered: an extensive online questionnaire to assess creativity in those who composed, improvised or arranged music; Carl Seashore's pitch and time discrimination subtests (SP and ST respectively); and a Karma Music Test (KMT) designed by one of the research team. High scores on the music tests were associated with high levels of creativity, and music test scores were higher in creative individuals than in non-creative ones. Genetic testing confirmed that creativity was a heritable trait.
Wait a minute - what does all this have to do with the brain?
The study showed that auditory structuring ability (gleaned from the KMT) was associated with the AVPR1A gene, with the strongest effect found in the RS1+RS3 haplotype. The ST and SP tests also suggested this association, which was further confirmed when it was replicated with the combined music test score (COMB). The kicker is that the AVPR1A gene is instrumental in modulating social and cognitive behaviours, and music is certainly a medium that initiates, enhances and accelerates certain behaviours! We all know about the peculiar social customs of singing songs of romantic content to attract the opposite sex, music played to enhance group cohesion and initiate vigorous hip-spinning activity, and mothers singing soothing lullabies to their offspring to induce a state of quietness.
But aside from all of that, the genetic studies provided interesting titbits of information relating to the homologies of the AVPR1A gene, as various alleles were found to associate with composing, arranging, or performing music. Higher spatial scores were found among musicians than non-musicians, a possible explanation being that musicians need to read and memorise notes and/or sheet music. Research into the recently discovered TPH2 gene may uncover the details behind the numerical sense necessary to perceive rhythm. The A1 allele of the dopamine receptor D2 (DRD2) gene is suggested to be linked to courtship.
The press releases related to this story hyped up the evolutionary implications in a big way, but I can find very little basis for that in the paper. As usual, evolutionary extrapolations are mainly speculative, though interesting nevertheless. The text specifically mentions that evolutionary contributions are speculated on the basis of PET imaging showing partial overlap between music- and language-related areas of the brain. As improvising music usually involves collaboration with other musicians, or between a performer and their audience, it makes sense that the role of these brain areas, and the genes associated with musical talent, be highlighted as it has been. As the paper itself says:
"Creativity is a multifactorial genetic trait involving a complex network made up of a number of genes." And it is because of that, and the connections to social/cognitive areas of the brain, that there is justification for the idea that music enables and enhances social communication in a way that increases attachments. This can explain why people automatically feel closer when they find they share the same taste in music.
Ukkola, L., Onkamo, P., Raijas, P., Karma, K., & Järvelä, I. (2009). Musical Aptitude Is Associated with AVPR1A-Haplotypes. PLoS ONE, 4 (5). DOI: 10.1371/journal.pone.0005534
May 15, 2009
March 31, 2009
That's a ventral (from below) view of a fresh brain before processing at the Allen Institute. The folks there are engaged in an impressive project (the "Allen Brain Atlas") to map the entire brain, down to its individual neurons, so as to aid future neurological research. Call it a "Neural Genome" if you like. It's due for completion in 2012, after which it is expected that the construction of our neural networks will be discovered, analysed and explained.
Fresh brains have to be collected soon after the donor's death, before enzymes set to work degrading the RNA and dissolving the cell membranes. Researchers have a limited window in which to cut the brain into slices and photograph each one before quickly packing them away in ice for storage ahead of future RNA analysis.
Read Jonah Lehrer's article at Wired, and view the full gallery of images.
March 27, 2009
It's unbelievable what gets uncovered sometimes. A recent survey of British psychologists and psychiatrists has revealed that a sizeable proportion have attempted to "convert" homosexual patients or clients to a heterosexual orientation!
It is well established that (biological) homosexuality is an orientation that cannot be changed, to say nothing of the scientific consensus on the matter. And what do you think happens when such changes are encouraged anyway? Psychological harm and damage.
After all, what is "normal"? Anyone with even a layman's understanding of psychology and/or neuroscience knows that definitions of normality are as subjective as one's colour preferences. And when you have a discriminating society ever-willing to ostracise on the slightest perception of difference, it isn't hard to imagine how seriously this counts as psychological abuse, especially concerning a topic as fundamental to someone's personhood as sexual identity.
Annie Bartlett and her colleagues sent postal questionnaires to members of the British Psychological Society, the British Association for Counselling and Psychotherapy, the United Kingdom Council for Psychotherapy and the Royal College of Psychiatrists, asking them to give their views on "conversion treatment" and to describe up to six patients they may have treated accordingly. Of the 1,328 analysable anonymous responses received, a flabbergasting 17% reported having assisted clients in reducing, changing or suppressing their gay or lesbian desires. Of these 222 practitioners, 159 (72%) thought that a "service" should be available for homosexuals who wish to change their orientation.
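The reported percentages do check out against the raw counts, incidentally. A quick sanity check, using only the figures quoted in the study:

```python
# Figures as reported in the Bartlett et al. (2009) survey
total_responses = 1328   # analysable anonymous responses
attempted = 222          # practitioners who reported assisting a change attempt
pro_service = 159        # of those, thought a conversion "service" should exist

print(round(100 * attempted / total_responses))  # 17 (%)
print(round(100 * pro_service / attempted))      # 72 (%)
```

Both rounded percentages match the article's "17%" and "72%" claims.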
Am I missing something here? Did I suddenly enter the Twilight Zone and wind up in Iran or something? This is England in 2009! It was back in 1973 that homosexuality was removed as a mental disorder from the Diagnostic and Statistical Manual of Mental Disorders (DSM), so why do these attitudes still prevail in psychotherapeutic practice? Do old habits die hard? Because of the anonymous nature of the survey, no information is provided on the average age of the sample, even though the authors selected a random sample of respondents from each organisation. And while only 4% (55 respondents) of the total sample said they would consider therapy to change a patient's orientation upon request, it is far more worrying that the aforementioned 17% have actually attempted to do so. Considering the absence of compelling evidence that orientation can be changed at all, trying to force or encourage such a change can only heighten and intensify the emotional conflicts that homosexual patients may already face (due to peer pressure, etc.) and cause lasting psychological damage.
This study appears to follow on from the same authors' 2004 research, in which an oral history was taken from homosexual patients. Twenty-nine homosexuals who had received treatment for their "disorder" were interviewed about their experiences, revealing the coercive and peer pressures involved and the lasting emotional distress that resulted.
What can I say? It's sad that these professionals appear to have no real knowledge of social identity issues. And I'd hate to be cynical, but what's the betting that serious conflict of interest issues are responsible for this grave failure of psychotherapeutic services? The type of conflict of interest that arises from personal convictions and beliefs?
Bartlett, A., Smith, G., & King, M. (2009). The response of mental health professionals to clients seeking help to change or redirect same-sex sexual orientation. BMC Psychiatry, 9 (1). DOI: 10.1186/1471-244X-9-11
Smith, G., Bartlett, A., & King, M. (2004). Treatments of homosexuality in Britain since the 1950s -- an oral history: the experience of patients. BMJ, 328 (7437). DOI: 10.1136/bmj.37984.442419.EE
March 23, 2009
Andrea Phelps and colleagues acknowledge that religion and belief figure prominently in the coping strategies employed by patients with advanced cancer, affording them a sense of "meaning, comfort, control, and personal growth while facing life-threatening illness." Understandably, the positive strategies that highlight God's "loving care" are common, while the negative strategies that view the condition as "divine punishment" are said to be rare. Beyond simply coping with disease, faith is said to be a major factor in medical decisions: other research in similar areas found that faith was the second most important factor, after oncologist recommendations, in deciding the course of treatment; that 68% of a sample of a thousand individuals explicitly stated their faith would guide their medical decisions in the event of a critical injury; and that 57% believed a divine cure could be obtained where medicine could not resolve the issue. So, given existing evidence that religion is associated with a preference for receiving intensive treatment, Phelps and her colleagues wanted to find out whether patients who relied heavily on their religious faith were more likely to receive intensive medical care, such as cardiopulmonary resuscitation or mechanical ventilation, before death.
The study was longitudinal, recruiting 345 patients (out of 941 eligible) between 2003 and 2007, who were interviewed at baseline in either English or Spanish by Yale students, with follow-ups until their deaths. Demographic (ethnic) considerations were accounted for given the diversity of religious beliefs, and standard measures were used to code beliefs accordingly. To avoid selection bias, patients were not told that religion/spirituality was the focus of the study. Other measures of coping strategy were employed; most curiously, patients were asked to rate how far their religious beliefs were supported by the medical staff (doctors, nurses, even hospital chaplains!), and those who rated this highly were coded as having support for their spiritual needs. I can understand chaplains, but what are doctors and nurses doing to support patient beliefs? Did this occur in a sympathetic, empathising context, just to keep patients' spirits up? The study doesn't say.
Here come the brief stats: 79% said that religion helped them cope to a moderate extent, 32% endorsed the statement that it was the "most important thing that kept them going", and 56% engaged in daily prayer or meditation. Positive coping strategies correlated highly with being black or Hispanic (p < .001). Patients with higher levels of religious coping were younger, less educated, less likely to be insured, less likely to be married, and more likely to have been recruited from Texas (!!) than those with lower levels. Overall, patients with 'high' religious coping were more likely than those with 'low' religious coping to prefer medical interventions such as ventilation, resuscitation, transfer to the Intensive Care Unit, and 'heroic' life-saving measures by doctors. They also thought little of advance care planning, Do-Not-Resuscitate orders, making a will, or giving anyone power of attorney over their affairs. Even after controlling for other variables, 'high' religious coping remained a significant predictor of a preference for life-prolonging measures.
I'm not trying to be deliberately sarcastic, because I know this is a sensitive issue that is especially painful for those who have experienced cancer, or who have known someone who suffered and died of it, but those were really silly things for Phelps to say. It may be that patients themselves articulated such things in their interviews, but we can never know without looking through the data. Sensibility returns when other research is cited suggesting that patients do not seem to understand what a DNR order is (perhaps due to cultural or language barriers), or thought it morally wrong to institute one (if they believe it is God's decision when their "time to die" comes). It is also noted that believers tend to think of illness as a "trial" from God, and it is possible that they deliberately opt to endure further suffering; this might explain their enlisting of life-saving measures.
However, at the end of the day, the study is clear on one thing: terminally ill patients with high religiosity prefer intensive life-prolonging care above other forms of coping or medical treatment, and the decision to opt for this type of care is influenced and mediated by religiosity. The authors pre-empt the criticism that their findings might be misread as evidence of religiosity reflecting insecurity or a crisis of faith, which could itself lead to opting for aggressive care, by saying that this cannot "completely account" for the observed associations. Why not? By their own admission they controlled for other eventualities, including self-acknowledgement of having a terminal illness, and it made no difference to the overall results. Only further research can look deeper into the reasons why this happens, but it is understandable if people draw the obvious inference.
March 19, 2009
So head on over there and find out how YOU can become a Science Blogger!
March 17, 2009
I am simply aware that there may be many in my audience who are religious, and so I treat the subject neutrally as I see no need to offend. But then again, some might take offence at my describing ID proponents as 'IDiots' as I just did above, and before I know it I'll be wallowing in qualifications and disclaimers and everyone will have forgotten what I came to say. Blah blah, this is my space in the end. In seriousness, however, a neutral attitude is the best one to take. To be properly academically trained in matters scientific means maintaining a neutral, yet sceptical, attitude. Apart from helping you save face when your assertions turn out to be wrong ("Oh well, I was always neutral about it anyway"), it is really the only position you can hold with any comfort given the extremely fast pace at which scientific research is carried out and announced. Scepticism matters too, because it promotes critical thinking, which helps to spot errors in studies (if any) as well as the gaps and drawbacks that further research can address.
That said, the Kapogiannis study is being touted by some as "proof" that religious faith is "deeply embedded" in the brain which is "programmed for religious experiences". You know this is a media article when you hear the word 'proof', for only they can give masterclasses in sensationalist articles and headlines. However in my last post I showed that it isn't quite that simple. I spoke of the earlier "God Spot" research that I encountered in 2004/2005 and how this new study seemed to contradict the idea of a single spot in the brain that mediated almost all religious feeling.
I also mentioned that I had not kept up with the research specifically investigating this "God Spot", and that this represents a gap in my knowledge that I'll have to catch up on. (By the way, if anyone has any good links to sites or papers I can read, it'd be appreciated.) But from what I recall of it, God Spot research mainly focused on the capacity of the brain that enabled sufferers of temporal lobe epilepsy to have regular spiritual experiences in the form of "religious visions", which we know as visual hallucinations. In the wider context of the limbic system, it was thought that various elements of the limbic architecture combined with amygdala and hippocampal functions (and vague links to the autonomic nervous system, ANS) to produce a visual hallucination. To me, it seemed a workable theory that explained several of the physiological and emotional phenomena that sometimes characterise deeply held faith. However, there are obvious gaps in this argument: not all temporal-lobe epileptics are religious, and not all religious people are temporal-lobe epileptics.
This is why I clearly mentioned that this latest Kapogiannis paper simply set out to understand how religious beliefs and feelings are modulated in "normal" brains. Indeed, you do not get many religious people going around claiming to receive divine and prophetic visions, and the Vatican ain't deluged with nominations for sainthood. Most religious people are "normal" in the sense of going to church on Sundays, studying scripture, praying, and holding a general religious worldview that satisfies them. And this is what Kapogiannis and his colleagues wanted to understand: do their brains use specialised "God Spot" circuitry to modulate all these feelings, or do they use normal processes?
That the answer turned out to be the latter option does not necessarily contradict previous God-Spot research, in my opinion. I personally find it interesting that studies take place on different ends of the spectrum; how the 'normal' and 'visionary' brains are functionally activated for religious processing.
I still don't think much of the criticism that only 'thinking' participants were tested, those who agreed or disagreed with the statements read out to them, instead of a 'visionary' brain being analysed. How exactly would that work? Who made such a criticism? Do they even know what fMRI scans involve and how difficult and expensive they are to run? Ask Andrew Newberg (MD), the guy who apparently thinks it's sooo easy to scan a brain in the middle of a religious experience that he hasn't tried to do it himself. Or has he? Looking over his website and the research papers he's come out with, I notice that most of them were written for Zygon. If you've been following my blog for a while, you'll know exactly what I think of Zygon. Now I don't want to seem like I'm unnecessarily attacking some poor guy without provocation, but it comes to something when a titan like PZ Myers doesn't think much of him either. It is incidents like this that make it so hard for genuine science to reach the public and educate their little cotton socks, because Templeton yes-men like Newberg tend to pop up when you're least expecting them and feed something silly into the public imagination which, on investigation, turns out to be an overblown exaggeration.
That is why an attitude of neutrality and scepticism is needed. It is indeed hard to maintain neutrality especially in a world where the Creationist/ID movement have drawn 'first blood' in an unwinnable war, but by being on guard through the critical examination of new research (especially hyped research) it may be possible to score a few points in the service of scientific endeavour and public education.
Kapogiannis, D., Barbey, A., Su, M., Zamboni, G., Krueger, F., & Grafman, J. (2009). Cognitive and neural foundations of religious belief Proceedings of the National Academy of Sciences DOI: 10.1073/pnas.0811717106
March 16, 2009
I do love brains. In fact, I love brains a lot. Heck, I love everyone's brains! And you know, when you were a kid and you wanted to buy a packet of cereal because it let you know there was a cool toy FREE INSIDE?!
It works kinda the same way with me and brains. You have this large lump of greyish-white wrinkled goop that's responsible for all of your thoughts, feelings, desires and aspirations, and none of that would be complete without a look inside. So I do love looking at cool images of neurons, glial cells, and even the occasional fMRI scan. It never ceases to fascinate and amaze me.
(Neurons and glial cells)
March 15, 2009
Now I haven't followed the progress of this line of research in the intervening years, and I'm unsure where the research stands on that particular point, but I understand through recent developments, such as an announcement covered by Scientific American, that a much more lateral theory has been developed which basically says that religious feelings co-opt brain circuits engaged in more mundane pursuits such as politics, music, food, and so on. On the face of it, this theory makes much more sense. Religion, like many things, has many facets, including contemplation, group activities, dietary requirements, social obligations, and many others, so it stands to reason that these activities are moderated by the same neural circuits that moderate them in non-religious contexts! In fact the paper, reported in the Proceedings of the National Academy of Sciences this week, claims to reveal three psychological dimensions of religious belief (God's perceived level of involvement, God's perceived emotion, and doctrinal/experiential religious knowledge) in networks that process Theory of Mind (ToM) regarding intent and emotion, abstract semantics, and imagery. ToM, in short, is the ability of individuals to understand their own and other people's mental states in terms of beliefs, intents, desires, knowledge, and so on. You know your own thoughts, and you know that other people have their thoughts too, because you have a Theory of Mind.
Dimitrios Kapogiannis et al. state the aim of their research: to define the psychological structure of religious belief and to reveal the brain areas activated by the cognitive processes involved. They give a nod to previous "God Spot" research, acknowledging that it largely focused on the neural correlates of rather vivid and unusual experiences: sufferers of temporal-lobe epilepsy (mainly responsible for linking religiosity with limbic structures), executive and prosocial aspects of religion linked to the frontal lobes, and mystical religious experiences linked with decreased parietal lobe activity. They note that these findings rarely corresponded with each other and generally failed to uncover a psychological architecture underlying religious belief. Regarding the dimensions, the authors mention that factor-analytic studies showed the perception of God's involvement and anger to be key components of belief, and this formed their first hypothesis: that these concepts would be related to the prefrontal and posterior regions of the ToM network that deal with intent and emotion. Remember, ToM is the ability to understand one's own and others' mental states, so it stands to reason that understanding God's involvement in world affairs, or his being angry for some reason or other, fits nicely under ToM conceptions of intent and emotion. The second hypothesis proposed that doctrinal knowledge is mediated by neural circuits processing abstract semantics, while experiential knowledge engages circuits that process memory recall and imagery. The third hypothesis proposed that the adoption of religious belief uses networks involved in cognitive-emotional processing.
Just so we're clear, this study isn't aspiring to make any kind of statement on religious belief one way or the other. All it's trying to do is figure out whether religious belief uses "normal" neural circuits that are used for a variety of everyday things or thoughts, or whether "specialised" circuits are being used solely to process religious thoughts and ideas.
Multidimensional scaling (MDS, similar in concept to factor analysis) was applied to ratings of conceptual dissimilarity so that they correlated appropriately within the structures of the three aforementioned dimensions. The authors don't appear to mention any standardised scale of religious belief, so I can only assume they created their own list of statements (viewable in the supplementary data). Twenty-six participants with varying levels of self-reported religiosity performed the ratings. Interestingly, Dimension 1 (D1) correlated negatively with God's perceived level of involvement (-0.994), D2 correlated negatively with God's perceived anger (-0.953) but positively with God's perceived love (0.953), and D3 correlated positively with doctrinal (0.993) and negatively with experiential (-0.993) religious content. The researchers then fMRI-scanned 40 new participants, measuring their brain activity while the statements were read out to them.
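For readers unfamiliar with MDS: the idea is to take a matrix of pairwise dissimilarity ratings and place the rated items as points in a low-dimensional space so that distances between points mirror the ratings. A minimal NumPy sketch of classical (Torgerson) MDS, a close relative of what the authors used, with an invented toy dissimilarity matrix (not data from the paper):

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed n items in k dimensions
    from a symmetric n x n dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)        # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]      # keep the top-k eigenvalues
    scale = np.sqrt(np.clip(vals[idx], 0, None))
    return vecs[:, idx] * scale           # n x k coordinates

# Toy dissimilarities between four hypothetical "concepts" (invented numbers)
D = np.array([[0., 1., 2., 3.],
              [1., 0., 1., 2.],
              [2., 1., 0., 1.],
              [3., 2., 1., 0.]])
X = classical_mds(D, k=1)
# The embedded points' pairwise distances recover D (they lie on a line)
recovered = np.abs(X[:, 0][:, None] - X[:, 0][None, :])
print(np.allclose(recovered, D))  # True
```

In the study, each MDS dimension is then interpreted by correlating it against external variables (perceived involvement, anger/love, doctrinal/experiential content), which is where the correlations quoted above come from.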
Excuse me for getting bogged down in the details, but at the end of the day this is what was discovered: "the neural correlates of these psychological dimensions were revealed to be well-known brain networks, mediating evolutionary adaptive cognitive functions." In other words, religious beliefs and feelings use the same brain networks as beliefs and feelings about politics, food, martial arts, music and whatever else form your hobbies and interests.
D1 indeed hit ToM circuits in order to "understand God’s intent and resolve the negative emotional significance of his lack of involvement." D2 indeed hit the more emotional ToM circuits when, considering God's emotions of love or anger, activated the same areas that respond to fear and happiness respectively. And finally, D3 engaged areas dealing with the decoding of metaphorical meaning and abstractness (doctrinal), and areas that generate memory and language-based projections of oneself (experiential).
Regarding the last hypothesis, dealing with the acceptance or rejection of religious beliefs, some interesting results were reported for the religious group. Disagreement with religious statements among the religious group activated anterior insula areas commonly associated with emotional-cognitive integration, suggesting that rejecting religious beliefs involves a larger role for emotion. The researchers explain this finding by saying that negative emotions (such as aversion, guilt, or fear of loss) in the religious participants may have been triggered by disagreeing with the statements, and that this could be viewed as a normal event when one encounters statements that go against one's belief system. Results were not reported for the non-religious participants, implying nothing worthy of reporting, so this activation among the religious must have been pretty high to merit a mention. The lady doth protest too much, methinks! ;-)
The researchers conclude their paper by relating their findings to their hypotheses and basically patting themselves on the back for a job well done. An important disclaimer is that the religiosity measured was that of a sample from modern Western society, and the findings may differ in other cultures. Quite an obvious experimental limitation, but it remains to be seen whether the results stay consistent if and when replicated.
But what does all of this mean for research on the neuropsychological effects of religion? For a start, it shows rather well that religious feeling cannot strictly be said to be located in a single area (as per "God Spot"), but employs general neural circuitry. One may dream of a blissful holiday in quite the same way as one dreams of a blissful afterlife, one may respond well to smiles or bullying in quite the same way as one may respond to "God's love" or "God's anger", and one may interpret theological metaphors and relate them to one's life in quite the same way as one may enjoy reading and analysing a good piece of literature, and relating that to one's life!
Another interesting consideration relates to the evolutionary development of the brain. The authors suggest that their findings provide a "psychological and neuroanatomical framework for the processing of religious belief" that may be a specialised human function. Kalanit Grill-Spector, an assistant professor of neuroscience at Stanford University, notes that other primates share the same brain structures, although it is debatable whether they use them in the same capacity. Other critics note that this study analyses only a "thinking" brain, as opposed to a brain actively undergoing a religious experience. This is such a lazy methodological criticism that I wonder how it can be taken seriously. It is facetious to expect that a "visionary" or prayerful brain can be appropriately analysed, difficult as the process already is, let alone to ask others to do so. This sort of armchair logic is unsuitable and unhelpful in furthering our understanding of an already very complex topic.
The final sentence is worth a quote: "Regardless of whether God exists or not, religious beliefs do exist and can be experimentally studied, as shown in this study."
This article carries further thoughts.
March 14, 2009
Neuroscience and the Soul
Science and religion have had a long relationship, by turns collegial and adversarial. In the 17th century Galileo ran afoul of the Church's geocentrism, and in the 19th century Darwin challenged the biblical account of creation. The breaches that open at such times often close again, as religions determine that the doctrine in question is not an essential part of faith. This is precisely what happened with geocentrism and, outside of certain American fundamentalist Christian sects, evolution. A new challenge to the science-religion relationship is currently at hand. We hope that, with careful consideration by scientists and theologians, it will not become the latest front in what some have called the "culture war" between science and religion. The challenge comes from neuroscience and concerns our understanding of human nature.
Most religions endorse the idea of a soul (or spirit) that is distinct from the physical body. Yet as neuroscience advances, it increasingly seems that all aspects of a person can be explained by the functioning of a material system. This first became clear in the realms of motor control and perception (1, 2). Yet, models of perceptual and motor capacities such as color vision and gait do not directly threaten the idea of the soul. You can still believe in what Gilbert Ryle called "the ghost in the machine" (3) and simply conclude that color vision and gait are features of the machine rather than the ghost.
However, as neuroscience begins to reveal the mechanisms underlying personality, love, morality, and spirituality, the idea of a ghost in the machine becomes strained. Brain imaging indicates that all of these traits have physical correlates in brain function. Furthermore, pharmacologic influences on these traits, as well as the effects of localized stimulation or damage, demonstrate that the brain processes in question are not mere correlates but are the physical bases of these central aspects of our personhood. If these aspects of the person are all features of the machine, why have a ghost at all?
By raising questions like this, it seems likely that neuroscience will pose a far more fundamental challenge than evolutionary biology to many religions. Predictably, then, some theologians and even neuroscientists are resisting the implications of modern cognitive and affective neuroscience. "Nonmaterialist neuroscience" has joined "intelligent design" as an alternative interpretation of scientific data (4). This work is counterproductive, however, in that it ignores what most scholars of the Hebrew and Christian scriptures now understand about biblical views of human nature. These views were physicalist, and body-soul dualism entered Christian thought around a century after Jesus' day (5, 6).
To be sure, dualism is intuitively compelling. Yet science often requires us to reject otherwise plausible beliefs in the face of evidence to the contrary. A full understanding of why Earth orbits the Sun (as a consequence of the way the solar system was formed) took another century after Galileo's time to develop. It may take even longer to understand why certain material systems give rise to consciousness. In the meantime, just as Galileo's view of Earth in the heavens did not render our world any less precious or beautiful, neither does the physicalism of neuroscience detract from the value or meaning of human life.
Martha J. Farah*
Center for Cognitive Neuroscience, Department of Psychology
University of Pennsylvania
Philadelphia, PA 19104, USA
*To whom correspondence should be addressed. E-mail: email@example.com
Nancey Murphy
School of Theology
Fuller Theological Seminary
1. M. Jeannerod, The Cognitive Neuroscience of Action (Wiley-Blackwell, Hoboken, NJ, 1997).
2. M. J. Farah, The Cognitive Neuroscience of Vision (Wiley-Blackwell, Hoboken, NJ, 2000).
3. G. Ryle, The Concept of Mind (Univ. of Chicago Press, Chicago, 1949).
4. M. Beauregard, D. O'Leary, The Spiritual Brain: A Neuroscientist's Case for the Existence of the Soul (HarperCollins, New York, 2007).
5. N. Murphy, Bodies and Souls, or Spirited Bodies? (Cambridge Univ. Press, Cambridge, 2006).
6. J. B. Green, Body, Soul, and Human Life (Baker, Grand Rapids, MI, 2008).