Friday, 31 January 2014

Research Briefing: Does TMS-induced ‘blindsight’ rely on ancient reptilian pathways?

Source Article: Allen C.P.G., Sumner P., & Chambers C.D. (2014). Timing and neuroanatomy of conscious vision as revealed by TMS-induced blindsight. Journal of Cognitive Neuroscience, in press.  [pdf] [study data]  


One of the things I find most fascinating about cognitive neuroscience is the way it is shaping our understanding of unconscious sensory processing: brain activity and behaviour caused by imperceptible stimuli. Lurking below the surface of awareness is an army of highly organised activity that influences our thoughts and actions.

Unconscious systems are, by definition, invisible to our own introspection, but that doesn’t make them invisible to science. One simple way to unmask them is to gradually weaken an image on a computer screen until a person reports seeing nothing. Then, once the stimulus is imperceptible, you ask the person to guess what type of stimulus it is – for instance, whether it is “<” or “>”. What you find is that people are remarkably good at telling the difference. They’ll insist they see nothing, yet they discriminate the invisible stimuli at rates well above chance – often at 70-80% correct. It’s really quite head-scratching.
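For the statistically inclined, it’s easy to see just how far from chance this is. Here’s a quick back-of-the-envelope sketch (a hypothetical observer, not data from any particular study) using an exact binomial test:

```python
from math import comb

def binom_p_above_chance(correct, trials, p_chance=0.5):
    """Exact one-sided binomial probability of scoring at least
    `correct` out of `trials` by pure guessing."""
    return sum(comb(trials, k) * p_chance ** k * (1 - p_chance) ** (trials - k)
               for k in range(correct, trials + 1))

# A hypothetical observer who reports seeing nothing, yet guesses
# "<" vs ">" correctly on 75 of 100 trials:
p = binom_p_above_chance(75, 100)
print(f"p = {p:.1e}")  # far beyond any conventional significance threshold
```

A guesser scoring 75 of 100 would produce a result this extreme less than one time in a million – so “above chance” here is no statistical fluke.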

Back in the 1970s, a psychologist named Larry Weiskrantz found that this contrast between conscious and unconscious processing was thrown into sharp relief following damage to a part of the brain called the primary visual cortex (V1). Weiskrantz (and later others) found that patients with damage to V1 would report being blind to one part of their visual field, yet, when push came to shove, they could discriminate stimuli above chance or even navigate successfully around invisible objects in a room. He coined this intriguing phenomenon “blindsight”.

Since then, blindsight has drawn the attention of psychologists, neurologists and philosophers. One of the major debates in the literature has centred on the neurophysiology of the phenomenon: how, exactly, is this unconscious vision achieved? Blindsight proved that information was somehow influencing behaviour without being processed by V1.

Two schools of thought took shape. One argued that, during blindsight, unconscious information reached higher brain systems by activating spared islands of cortex near the damaged V1. An opposing school argued that the information was taking a different road altogether: an ancient reptilian route known as the retinotectal pathway, which runs from the retina to the superior colliculus in the midbrain, bypassing visual cortex on its way to frontal and parietal regions.

In our latest study, published in the Journal of Cognitive Neuroscience, we sought to pit these accounts against each other by generating blindsight in healthy people with transcranial magnetic stimulation (TMS). The study was originally conceived by Chris Allen, then a PhD student in my lab and now a post-doctoral researcher. We hadn’t used TMS like this before but we knew from the work of Tony Ro’s lab that it could be done with a particularly powerful type of TMS coil.

Knocking out conscious awareness with TMS was one thing – and apparently doable – but how could we tell which brain pathways were responsible for whatever visual ability was left over? Fortunately I’d recently moved to Cardiff University where Petroc Sumner is based. Some years earlier, Petroc had developed a clever technique to isolate the role of different visual pathways by manipulating colour. When presented under specific conditions, these coloured stimuli activated a type of cell on the retina that has no colour-opponent projections to the superior colliculus. These stimuli, known as “s-cone stimuli”, were invisible to the retinotectal pathway (1). We teamed up with Petroc, and Chris set about learning how to generate these stimuli.

Now that we had a technique for dissociating conscious and unconscious vision (TMS), and a type of stimulus that bypassed the retinotectal pathway, we could bring them together to contrast the competing theories of blindsight. Our logic was this: if the retinotectal pathway is a source of unconscious vision then blindsight should not be possible for s-cone stimuli because, for these stimuli, the retinotectal pathway isn’t available. On the other hand, if blindsight arises via cortical routes at (or near) V1 then blocking the retinotectal route should be inconsequential: we should find the same level of blindsight for s-cone stimuli as for normal stimuli (2).

There were other aspects to the study too (including an examination of the timecourse of TMS interference), but our main result is summarised in the figure below. When we delivered TMS to visual cortex about a tenth of a second after the onset of a normal stimulus, we found textbook blindsight: TMS reduced awareness of the stimuli while leaving unaffected the ability to discriminate them on ‘unaware’ trials. 

Crucially, we found the same thing for s-cone stimuli: blindsight occurred even for these specially coloured stimuli that bypass the retinotectal route. Since blindsight occurred for stimuli that weren’t processed by the retinotectal pathway, our results allow us to reject the retinotectal hypothesis in favour of the cortical hypothesis. This suggests that blindsight in our study arose from unperturbed cortical systems rather than the reptilian route.

Our key results. The upper plot shows conscious detection performance when TMS was applied to visual cortex at 90-130 milliseconds after a stimulus appeared. Compared to "sham" (the control TMS condition), active TMS reduced conscious detection for both the normal stimuli and the s-cone stimuli that bypass the retinotectal pathway. The lower plot shows the corresponding results for discrimination of unaware stimuli; that is, how accurately people could distinguish "<" from ">" when also reporting that they didn't see anything. For both normal and s-cone stimuli, this unconscious ability was unaffected by TMS. And because this TMS-induced blindsight was found for stimuli that bypass the retinotectal route, we can conclude that the retinotectal pathway isn't crucial for the blindsight found here.

While the results are quite clear, there are nevertheless several caveats to this work. There is evidence from other sources that the retinotectal pathway can be important, and our results don’t explain all of the discrepancies in the literature. What we do show is that blindsight can arise in the absence of afferent retinotectal processing, which disconfirms a strong version of the retinotectal hypothesis.

Also, we don’t know whether the results will translate to blindsight in patients following permanent injury. TMS is a far cry from a brain lesion – unlike brain damage, it is transient, safe and reversible, which of course makes it highly attractive for this kind of research but also distances it from work in clinical patients. Furthermore, even though we can rule out a role of the retinotectal pathway in producing blindsight as shown here, we don’t know which cortical pathways did produce the effect. 

Finally, our paper reports a single experiment that has yet to be replicated – so appropriate caution is warranted as always.

Still, I’m rather proud of this study. I take little of the intellectual credit, which belongs chiefly to Chris Allen. Chris brought together the ideas and tackled the technical challenges with a degree of thoroughness and dedication that he’s become well known for in Cardiff. This paper – his first as primary author – is a nice way to kick off a career in cognitive neuroscience.

1. By “afferent” I mean the initial “feedforward” flow of information from the retina. It’s entirely possible (and likely) that s-cone stimuli activate retinotectal structures such as the superior colliculus after being processed by the visual cortex and then feeding down into the midbrain. What’s important here is that s-cone stimuli are invisible to the retinotectal pathway in that initial forward sweep. 

2. Stats nerds will note that we are attempting to prove a version of the null hypothesis. To enable us to show strong evidence for the null hypothesis, we used Bayesian statistical techniques developed by Zoltan Dienes that assess the relative likelihood of H0 and H1.
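For readers who want the gist of how such a Bayes factor works, here’s a toy numerical sketch in the spirit of Dienes’s approach – the numbers, the prior scale, and the function names are entirely made up for illustration and are not the analysis reported in the paper:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sd):
    return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2 * pi))

def dienes_bf(obs_mean, obs_se, h1_sd, n_steps=2000):
    """Bayes factor B comparing H1 against H0.
    H0: the true effect is exactly zero.
    H1: effects follow a half-normal prior with scale h1_sd
        (roughly the effect size the theory anticipates).
    Conventionally, B < 1/3 is substantial evidence for the null
    and B > 3 is substantial evidence for an effect."""
    like_h0 = normal_pdf(obs_mean, 0.0, obs_se)
    # average the likelihood over the half-normal prior (numerical sum)
    upper = 5 * h1_sd
    step = upper / n_steps
    like_h1 = sum(
        normal_pdf(obs_mean, d, obs_se) * 2 * normal_pdf(d, 0.0, h1_sd) * step
        for d in (i * step for i in range(n_steps))
    )
    return like_h1 / like_h0

# Hypothetical numbers: an observed difference of 0.2% (SE 1%) when the
# theory predicts effects of around 10%:
print(dienes_bf(obs_mean=0.002, obs_se=0.01, h1_sd=0.10))  # well below 1/3
```

With a near-zero effect measured precisely, B falls well below 1/3 – evidence favouring the null – whereas a large observed effect sends B far above 3.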

Thursday, 16 January 2014

Tough love for fMRI: questions and possible solutions

Let me get this out of the way at the beginning so I don’t come across as a total curmudgeon. I think fMRI is great. My lab uses it. We have grants that include it. We publish papers about it. We combine it with TMS, and we’ve worked on methods to make that combination better. It’s the most spatially precise technique for localizing neural function in healthy humans. The physics (and sheer ingenuity) that makes fMRI possible is astonishing.

But fMRI is a troubled child. On Tuesday I sent out a tweet: “fMRI = v expensive method + chronically under-powered designs + intense publication pressure + lack of data sharing = huge fraud incentive.” This was in response to the news that a post-doc in the lab of Hans Op de Beeck has admitted fraudulent behaviour associated with some recently retracted fMRI work. This is a great shame for Op de Beeck, who, it must be stressed, is entirely innocent in the matter. Fraud can strike at the heart of any lab, seemingly at random. The thought of unknowingly inviting fraud into your home is the stuff of nightmares for PIs. It scares the shit out of me.

I got some interesting responses to my tweet, but the one I want to deal with here is from Nature editor Noah Gray, who wrote: “I'd add ‘too easily over-interpreted.’ So what to do with this mess? Especially when funding for more subjects is crap?”

There is a lot we can do. We got ourselves into this mess. Only we can get ourselves out. But it will require concerted effort and determination from researchers and the positioning of key incentives by journals and funders.

The tl;dr version of my proposed solutions: work in larger research teams to tackle bigger questions, raise the profile of a priori statistical power, pre-register study protocols and offer journal-based pre-registration formats, stop judging the merit of science by the journal brand, and mandate sharing of data and materials.

Problem 1: Expense. The technique is expensive compared to other methods. In the UK it costs about £500 per hour of scanner time, sometimes even more.

Solution in brief: Work in larger research teams to divide the cost.

Solution in detail: It’s hard to make the technique cheaper. The real solution is to think big. What do other sciences do when working with expensive techniques? They group together and tackle big questions. Cognitive neuroscience is littered with petty fiefdoms doing one small study after another – making small, noisy advances. The IMAGEN fMRI consortium is a beautiful example of how things could be if we worked together.

Problem 2: Lack of power. Evidence from structural brain imaging implies that most fMRI studies have insufficient sample sizes to detect meaningful effects. This means not only that they have little chance of detecting true positives, but also that there is a high probability that any statistically significant differences are false. It comes as no surprise that the reliability of fMRI is poor.

Solution in brief: Again, work in larger teams, combining data across centres to furnish large sample sizes. We need to get serious about statistical power, taking some of the energy that goes into methods development and channeling it into developing a priori power analysis techniques.

Solution in detail: Anyone who uses null hypothesis significance testing (NHST) needs to care about statistical power. Yet if we take psychology and cognitive neuroscience as a whole, how many studies motivate their sample size according to a priori power analysis? Very few, and you could count the number of basic fMRI studies that do this on the head of a pin. There seem to be two reasons why fMRI researchers don’t care about power. The first is cultural: to get published, the most important thing is for authors to push a corrected p value below .05. With enough data mining, statistical significance is guaranteed (regardless of truth) so why would a career-minded scientist bother about power? The second is technical: there are so many moving parts to an fMRI experiment, and so many little differences in the way different scanners operate, that power analysis itself is very challenging. But think about it this way: if these problems make power analysis difficult then they necessarily make the interpretation of p values just as difficult. Yet the fMRI community happily embraces this double standard because it is p<.05, not power, that gets you published.
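To show there’s no mystery in the basic arithmetic, here’s a minimal a priori calculation for a simple two-group comparison, using the standard normal approximation (a deliberately simplified sketch – real fMRI power analysis is much harder than this, which is rather the point):

```python
from math import ceil, erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def z_of(p):
    """Inverse of phi by bisection (stdlib only)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Sample size per group for a two-sample comparison:
    n = 2 * ((z_{alpha/2} + z_{power}) / d) ** 2 (normal approximation)."""
    z_a = z_of(1 - alpha / 2)
    z_b = z_of(power)
    return ceil(2 * ((z_a + z_b) / effect_size) ** 2)

# Required n per group at 80% power shrinks fast as effects grow:
for d in (0.8, 0.5, 0.3):
    print(f"d = {d}: n = {n_per_group(d)} per group")
```

At a medium effect size (d = 0.5) even this optimistic approximation calls for roughly 63 subjects per group – a long way from the sample sizes typical of fMRI.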

Problem 3: Researcher ‘degrees of freedom’. Even the simplest fMRI experiment will involve dozens of analytic options, each of which could be considered legal and justifiable. These researcher degrees of freedom provide an ambiguous decision space for analysts to try different approaches and see what “works” best in producing results that are attractive, statistically significant, or fit with prior expectations. Typically only the outcome that "worked" is then published. Exploiting these degrees of freedom also enables researchers to present “hypotheses” derived from the data as though they were a priori, a questionable practice known as HARKing. It’s ironic that the fMRI community has put so much effort into developing methods that correct for multiple comparisons while completely ignoring the inflation of Type I error caused by undisclosed analytic flexibility. It’s the same problem in a different form.
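The scale of that inflation is easy to compute. If an analyst tries k analysis options and reports whichever one “works”, then – optimistically treating the options as independent – the chance of at least one spurious p < .05 is 1 − 0.95^k:

```python
# Chance that at least one of k nominally independent analysis options
# crosses p < .05 when there is no real effect: 1 - 0.95**k
for k in (1, 5, 10, 20):
    print(f"{k:2d} options -> {1 - 0.95 ** k:.2f}")
```

With 20 “legal” pipeline choices, a false positive somewhere becomes more likely than not (about 64%). Real analysis options are correlated, so the true inflation is smaller, but the qualitative point stands.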

Solution in brief: Pre-registration of research protocols so that readers can distinguish hypothesis testing from hypothesis generation, and thus confirmation from exploration.

Solution in detail: By pre-specifying our hypotheses and analysis protocol we protect the outcome of experiments from our own bias. It’s a delusion to pretend that we aren’t biased, that each of us is somehow a paragon of objectivity and integrity. That is self-serving nonsense. To incentivize pre-registration, all journals should offer pre-registered article formats, such as Registered Reports at Cortex. This includes prominent journals like Nature and Science, which have a vital role to play in driving better science. At a minimum, fMRI researchers should be encouraged to pre-register their designs on the Open Science Framework. It’s not hard to do. Here’s an fMRI pre-registration from our group.

Arguments for pre-registration should not be seen as arguments against exploration in science – instead they are a call for researchers to care more about the distinction between hypothesis testing (confirmation) and hypothesis generation (exploration). And to those critics who object to pre-registration, please don’t try to tell me that fMRI is necessarily “exploratory” and “observational” and that “science needs to be free, dude” while in the same breath submitting papers that state hypotheses or present p values. You can't have it both ways.

Problem 4: Pressure to publish. In our increasingly chickens-go-in-pies-come-out culture of academia, “productivity” is crucial. What exactly that means or why it should be important in science isn’t clear – far less proven. Peter Higgs made one of the most important discoveries in physics yet would have been marked as unproductive and sacked in the current system. As long as we value the quantity of science that academics produce we will necessarily devalue quality. It’s a see-saw. This problem is compounded in fMRI because of the problems above: it’s expensive, the studies are underpowered, and researchers face enormous pressure to convert experiments into positive, publishable results. This can only encourage questionable practices and fraud.

Solution in brief: Stop judging the quality of science and scientists by the number of publications they spew out, the “rank” of the journal, or the impact factor of the journal. Just stop.

Solution in detail: See Solution in brief.

Problem 5: Lack of data sharing. fMRI research is shrouded in secrecy. Data sharing is unusual, and the rare cases where it does happen are often made useless by researchers carelessly dumping raw data without any guidance notes or consideration of readers. Sharing of data is critical to safeguard research integrity – failure to share makes it easier to get away with fraud.

Solution in brief: Share and we all benefit. Any journal that publishes fMRI should mandate the sharing of raw data, processed data, analysis scripts, and guidance notes. Every grant agency that funds fMRI studies should do likewise.

Solution in detail: Public data sharing has manifold benefits. It discourages and helps unmask fraud, it encourages researchers to take greater care in their analyses and conclusions, and it allows for fine-grained meta-analysis. So why isn’t it already standard practice? One reason is that we’re simply too lazy. We write sloppy analysis scripts that we’d be embarrassed for our friends to see (let alone strangers); we don’t keep good records of the analyses we’ve done (why bother when the goal is p<.05?); we whine about the extra work involved in making our analyses transparent and repeatable by others. Well, diddums, and fuck us – we need to do better.

Another objection is the fear that others will “steal” our data, publishing it without authorization and benefiting from our hard work. This is disingenuous and tinged by dickishness. Is your data really a matter of national security? Oh, sorry, did I forget how important you are? My bad.

It pays to remember that data can be cited in exactly the same way papers can – once in the public domain, others can cite your data and you can cite theirs. Funnily enough, we already have a system in science for using the work of others while still giving them credit. Yet the vigour with which some people object to data sharing for fear of having their soul stolen would have you think that the concept of “citation” is a radical idea.

To help motivate data sharing, journals should mandate sharing of raw data, and crucially, processed data and analysis scripts, together with basic guidance notes on how to repeat analyses. It’s not enough just to share the raw MR images – the Journal of Cognitive Neuroscience tried that some years ago and it fell flat. Giving someone the raw data alone is like handing them a few lumps of marble and expecting them to recreate Michelangelo’s David.


What happens when you add all of these problems together? Bad practice. It begins with questionable research practices such as p-hacking and HARKing. It ends in fraud, not necessarily by moustache-twirling villains, but by desperate young scientists who give up on truth. Journals and funding agencies add to the problem by failing to create the incentives for best practice.

Let me finish by saying that I feel enormously sorry for anyone whose lab has been struck by fraud. It's the ultimate betrayal of trust and loss of purpose. If it ever happens to my lab, I will know that yes the fraudster is of course responsible for their actions and is accountable. But I will also know that the fMRI research environment is a damp unlit bathroom, and fraud is just an aggressive form of mould.

Saturday, 16 November 2013

Bringing study pre-registration home to roost

Earlier this year I committed my research group to pre-registering all studies in our recent BBSRC grant, which includes fMRI, TMS and TMS-fMRI studies of human cognitive control. We will also publicly share our raw data and analysis scripts, consistent with the principles of open science. As part of this commitment I’m glad to report that we have just published our first pre-registered study protocol at the Open Science Framework.

For those unfamiliar with study pre-registration, the rationale is simply this: that to prevent different forms of human bias creeping into hypothesis-testing we need to decide before starting our research what our hypotheses are and how we plan to test them. The best way to achieve this is to publicly state the research questions, hypotheses, outcome measures, and planned analyses in advance, accepting that anything we add or change after inspecting our data is by definition exploratory rather than pre-planned.

To many scientists (and non-scientists) this may seem like the bleeding obvious, but the truth is that the life sciences are suffering a crisis in which research that is purely exploratory and non-hypothesis-driven masquerades as hypothetico-deductive. That’s not to say that confirmatory (hypothesis-driven) research is necessarily worth any more than exploratory (non-hypothesis-driven) research. The point is that we need to be able to distinguish one from the other, otherwise we build a false certainty in the theories we produce. Psychology and cognitive neuroscience are woeful at making this distinction clear, in part because they ascribe such a low priority to purely exploratory research.

Pre-registration helps solve a number of specific problems inherent in our publishing culture, including p-hacking (mining data covertly for statistical significance) and HARKing (reinventing hypotheses to predict unexpected results). These practices are common in psychology because it is difficult to publish anything in ‘top journals’ where the main outcome is p > .05 or isn’t based on a clear hypothesis.

Evidence of such practices can be found in the literature and all around us. Just last week at the Society for Neuroscience conference in San Diego, I had at least three conversations where presenters at posters would say something like: “Look at this cool effect. We tested 8 subjects and it looked interesting so we added another 8 and it became significant”. Violation of stopping rules is just one example of how we think like Bayesians while being tied to frequentist statistical methods that don’t allow us to do so. This bad marriage between thought and action endangers our ability to draw unbiased inferences and, without appropriate Type I correction, elevates the rate of false discoveries.
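How bad is the damage? Here’s a small simulation of exactly that “test 8, and if it’s not significant add 8 more” procedure when the null is true (my own sketch, using the conventional two-tailed 5% t critical values for the two sample sizes):

```python
import random
from statistics import mean, stdev

def t_stat(xs):
    # one-sample t statistic against a true mean of zero
    return mean(xs) / (stdev(xs) / len(xs) ** 0.5)

def peek_and_add(n1=8, n2=8, crit1=2.365, crit2=2.131, sims=20000, seed=1):
    """Test n1 subjects; if not significant, add n2 more and test again.
    crit1 and crit2 are two-tailed 5% t critical values for df = 7 and 15.
    Returns the realised false-positive rate when the null is true."""
    random.seed(seed)
    false_pos = 0
    for _ in range(sims):
        xs = [random.gauss(0, 1) for _ in range(n1)]
        if abs(t_stat(xs)) > crit1:
            false_pos += 1
            continue
        xs += [random.gauss(0, 1) for _ in range(n2)]
        if abs(t_stat(xs)) > crit2:
            false_pos += 1
    return false_pos / sims

print(peek_and_add())  # noticeably above the nominal 0.05
```

With just two peeks at the data, the realised false-positive rate climbs to roughly 8% – well above the nominal 5% – and it keeps climbing with every additional look.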

In May, the journal Cortex launched a new format of article that attempts to solve these problems by incentivising pre-registration. Unlike conventional publishing models, Registered Reports are peer reviewed before authors conduct their experiments and the journal offers provisional acceptance of final papers based solely on the proposed protocol. The model at Cortex not only prevents p-hacking and HARKing – it also solves problems caused by low statistical power, lack of data transparency, and publication bias. Similar initiatives have been launched or approved by several other journals, including Perspectives on Psychological Science, Attention, Perception, & Psychophysics, and Experimental Psychology. I’m glad to say that 10 other journals are currently considering similar formats, and so far no journal to my knowledge has decided against offering pre-registration.

In June, I wrote an open letter to the Guardian with Marcus Munafò and >80 of our colleagues who sit on editorial boards. Together we called for all journals in the life sciences to offer pre-registered article formats. The response to the article was overall neutral or positive, but as expected not everyone agreed. One of the most striking features of the negative responses to pre-registration was how the critics targeted a version of pre-registration we did not propose. For instance, some felt that the Cortex model would prevent publication of serendipitous findings or exploratory analyses (it doesn't), that authors would be “locked” into publishing with Cortex (they aren’t), or that the model we proposed was suggested as mandatory or universal (it is explicitly neither). I would ask those who responded negatively to reconsider the details of the Cortex initiative because we don’t disagree nearly as much as it seems. In regular seminars I give on Registered Reports at Cortex I include a 19-point list of FAQs and responses to these points, which you can read here. I will regularly update this link as new FAQs are added.

I believe we are in the early stages of a revolution in the way we do research – one not driven by pre-registration per se, and certainly not by me, but by the combination of converging future-oriented approaches, including emphasis on replication (and replicability), open science, open access publishing, and pre-registration. The pace of evolution in scientific practices has shifted up a gear. Clause 35 of the revised Declaration of Helsinki now explicitly requires some form of study pre-registration for medical research involving human participants. Although much work in psychology and cognitive neuroscience isn’t classed as ‘medical’, many of the major journals that publish basic research also ask authors to adhere to the Declaration, including the Journal of Neuroscience, Cerebral Cortex, and Psychological Science.

The revised Declaration of Helsinki has caused some concern among psychologists, and I should make it clear that those of us promoting pre-registration as a new option for journals had no role in formulating these revised ethical guidelines. However we shouldn’t necessarily see them as a problem. There are many simple and non-bureaucratic ways to pre-register research (such as the OSF), even if the journal-based route is the only one to reward authors with advance publication.

One valid point that has been made in this debate is that those of us who are promoting pre-registration should practice what we preach, even when there is no journal option currently available (and for me there isn’t another option because Cortex – where I am section editor – is so far the only cognitive neuroscience journal offering pre-registered articles). Some researchers, such as Marcus Munafò, already pre-register on a routine basis and have done for some time. For my group it is a newer venture, and here is our first attempt. Our protocol describes an fMRI experiment of response inhibition and action updating that forms the jumping-off point for several upcoming studies involving TMS and concurrent TMS-fMRI. We are registering this protocol prior to data collection. All comments and criticisms are welcome.

Writing a protocol for an fMRI experiment was challenging because it required us to nail down in advance our decisions and contingencies at all stages of the analysis. The sheer number of seemingly arbitrary decisions also reinforced my belief that many, if not most, fMRI studies are contaminated by bias (whether conscious or unconscious) and undisclosed analytic flexibility. I found pre-registration rewarding because it helped us refine exactly how we would go about answering our research questions. There is much to be said for taking the time to prepare science carefully, and time spent now will be time saved when it comes to the analysis phase.

Most of the work in our first pre-registration was undertaken by two extremely talented young scientists in my team: PhD student Leah Maizey and post-doctoral researcher Chris Allen. Leah and Chris deserve much praise for having the courage and conviction to take on this initiative while many of our senior colleagues 'wait and see'.

Pre-registration is now a normal part of the culture in my lab and I hope you’ll consider making it a part of yours too. Embracing the hypothetico-deductive method helps protect the outcome of hypothesis-driven research from our inherent weaknesses as human practitioners. It also prompts us to consider deeper questions. As a community we need to reflect on what sort of scientific culture we want future generations to inherit. And when we look at the current status quo of questionable research practices, it leads us to ask one simple question: Who are we serving, us or them?

Wednesday, 7 August 2013

A quick wave to all my new twitter followers

Hello! I really hope you’ll enjoy our new blog over at the Guardian. It’s a privilege to be able to write about psychology for such a broad audience, and to do so alongside such talented colleagues as Pete Etchells, Thalia Gjersoe and Molly Crockett. I'll do my best not to disappoint!

NeuroChambers is my personal blog, where I write mostly about science-related things but occasionally post more personal stuff.

First, a bit about me. I’m a researcher at the Cardiff University School of Psychology. I’m originally from Australia, where I did a PhD about 10 years ago in an area called ‘psychoacoustics’ – the psychology of auditory perception. After that I got interested in the relationship between the brain and cognition, so I moved to an area called cognitive neuroscience, which bridges the gap between neurobiology and traditional experimental psychology. I now run a research group in Cardiff, where we use brain imaging and brain stimulation methods to understand human cognitive control and attention. At the moment I’m particularly interested in the psychology and neuroscience of response inhibition, impulse control, and addiction.

I started NeuroChambers in 2012 after taking part in a debate on science journalism at the Royal Institution. Following some energetic arguments in the press about the good, bad, and ugly of science reporting, we came to the conclusion that scientists and journalists need to cooperate far more constructively in the service of public understanding (you can watch the debate here and read more about it here). One area, in particular, that I feel scientists need to work on is the process of communicating science to non-scientists. And a great way to do this, of course, is through blogging.

There are four main types of article I post here on my personal blog: 

1. Research Briefings: these are (hopefully) accessible summaries of our recent research. Whenever we publish an article in a scientific journal that I think might have broader appeal, I write an overview of the work for a general audience. Here are a few I wrote about human vision, impulse control, and human brain stimulation. I’m not the only scientist to do this – Mark Stokes at Oxford University also does it over at Brain Box (and does it well!) 

2. Calls to Arms: I’m a psychologist and I think psychology is an important and fascinating discipline. But I’m actually quite critical about what passes for acceptable research practices these days, and lately I’ve been working on possible solutions. One approach I’ve been advocating is called study pre-registration. In short, what this means is that scientists should specify the predictions and statistical tests in their experiments before they conduct them. Doing so helps us stay true to the scientific method and avoid fooling ourselves into believing that we’ve discovered something real when in fact we're only staring at the reflection of our own bias. For me, study pre-registration is common sense but not everyone agrees. Psychological science is in the midst of a revolution, and revolutions are never easy. We’ll be writing more about this at Head Quarters as we gradually reform the field.

Another area that I’ve been fairly vocal about recently is the importance of evidence-based policy in government. Last year, Mark Henderson, head of Communications at the Wellcome Trust, published a very important book called the Geek Manifesto, which explains why science is so important and yet so undervalued in modern politics. Mark’s book inspired me and many other scientists to do something proactive to address this issue. Together with Tom Crick and several colleagues – as well as 60 generous donors from across the UK – I helped coordinate a campaign to send one copy of the Geek Manifesto to each elected member of the National Assembly for Wales. I’m also following up on this initiative with Natalia Lawrence at the University of Exeter. Natalia and I are aiming to establish a rapid-response ‘evidence information service’ for politicians and civil servants. 

3. Advice columns for students and junior scientists: These posts will have less general appeal as they're usually written for those already pursuing a career in science. Still, my most popular post on this blog has been a (probably overly) blunt list of do’s and don’ts for the aspiring PhD student. 

4. Whinges: I’ve lived in Britain long enough to cherish the art of a good whinge, and part of being a scientist is challenging bullshit. I occasionally write critical pieces questioning (what I see as) flawed or overegged science, or bad practice. You’ll see more of this style of piece over at Head Quarters as well.

Also, a warning. As you’ll have noted above, I’m a bit sweary at times (for which you can blame my Australian upbringing). Apologies in advance if I write or say something that offends! Don't worry, my Guardian posts will be more civilised - usually!

So that’s a quick overview of me and the things I write about at NeuroChambers. Meanwhile stay tuned for more posts at Head Quarters – we’ve got some exciting topics in the pipeline.

Finally, for no reason whatsoever, here’s a picture of our two cats...doing what cats do best. 

Wednesday, 19 June 2013

Research Briefing: Is there a neural link between ‘neglect’ and ‘pseudoneglect’?

Source Article: Varnava, A., Dervinis, M. & Chambers, C.D. (2013). The predictive nature of pseudoneglect for visual neglect: Evidence from parietal theta burst stimulation. PLOS ONE 8(6): e65851. doi:10.1371/journal.pone.0065851. [pdf] [data and analyses] 

I’m excited about this latest research briefing for several reasons.

First, as I’ll explain below, I think the study tells us something new about how the human brain represents space, with potential clinical applications in neuropsychology. Second, the study represents my group’s first excursion into the world of open access publishing and open science (including open data sharing) – something I feel strongly about and have committed to pursuing in our recently awarded BBSRC project. And finally, the manuscript itself has a rocky history that left me disillusioned with the journal Neuropsychologia and, soon after, motivated me to join others in calling for publishing reform. 

The Research 

Let’s start by talking about the science. Our aim in this study was to test for a link between two types of visual spatial bias called ‘neglect’ and ‘pseudoneglect’.

Neglect (also known as ‘unilateral neglect’) is a neurological syndrome that arises after brain injury – most often due to a stroke that permanently damages the right hemisphere. Patients with neglect show a striking lack of attention and awareness for objects on the left side of their midline. Such behaviours may include ignoring food on the left side of a dinner plate or failing to draw the left side of objects. Importantly, the patients aren’t simply blind on their left side. The visual parts of the brain are generally intact while the damage is limited to parietal, temporal, or frontal cortex.

Neglect has been studied for many years and we know a lot about how and why it arises. But one unanswered question is how the spatial bias of neglect relates to other spatial biases that are completely normal. We felt this was an important question because we don’t know enough at a basic level about how the brain represents space, so testing for neurocognitive links between spatial phenomena helps us build better theories. Furthermore, if there happens to be a predictive relationship between neglect and other forms of bias, we may be able to estimate the likely severity of neglect before a person has a stroke. This could have a range of useful applications in clinical therapy and management.

Enter ‘pseudoneglect’. Pseudoneglect is a normal bias in which people ignore a small part of their left or right side of space. One simple way to measure this is to ask someone to mark the centre of a straight horizontal line. Most people will misbisect the line to the left or right of its true centre. This effect is tiny (on the order of millimetres) but reliable.
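For readers who like to see the arithmetic, here is a minimal sketch of how a bisection bias could be quantified. The function name and the data are invented for illustration (the study’s actual data and analyses are linked above); the convention used here is that negative values mean a leftward error, i.e. left pseudoneglect.

```python
# Hypothetical sketch: quantifying pseudoneglect from a line-bisection task.
# Bias = marked position minus the line's true centre, so negative values
# indicate a leftward bisection error (left pseudoneglect).

def bisection_bias_mm(marks_mm, line_length_mm=200.0):
    """Mean signed deviation of bisection marks from the true centre, in mm."""
    centre = line_length_mm / 2.0
    deviations = [m - centre for m in marks_mm]
    return sum(deviations) / len(deviations)

# One participant's marks on a 200 mm line, measured from its left end
marks = [98.5, 99.0, 97.8, 99.2, 98.1]
bias = bisection_bias_mm(marks)
print(f"Bias: {bias:+.2f} mm")  # negative -> leftward error
```

Averaging over many trials is what makes such a tiny effect measurable at all: any single mark is dominated by motor noise, but the signed errors pile up consistently on one side.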

In this study we wanted to know whether patterns of pre-existing bias, as reflected by pseudoneglect, predict the patterns of actual neglect following neurological interference. Of course, we couldn’t give our participants permanent brain injury, so we decided to use transcranial magnetic stimulation (TMS) to simulate some of the effects of a brain lesion. Using a particular kind of repetitive TMS called ‘theta burst stimulation’, we temporarily suppressed activity in parts of the brain while people did tasks that measured their spatial bias. To see if there was a link between systems, we then related these effects of TMS on spatial bias to people’s intrinsic pseudoneglect.

Consistent with previous studies, we found that TMS of the right parietal cortex induced neglect-like behaviour: compared to a sham TMS condition (placebo), people bisected lines more to the right of centre, indicating that TMS caused a subtle neglect of the left side of space. This effect lasted for an hour (upper figure on the left). But what was particularly striking was that the effect only happened in the participants who already showed an intrinsic pattern of left pseudoneglect. In contrast, those with right pseudoneglect at baseline were immune to the effects of TMS (lower figure on the left).
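The logic of that group comparison can be sketched in a few lines. This is not the paper’s actual analysis code (the real data and analyses are on Figshare, linked below); the function name and all numbers are invented, purely to show the shape of the computation: split participants by the direction of their baseline (sham) bias, then average the TMS-induced shift within each group.

```python
# Hypothetical sketch of the baseline-split analysis: relate each
# participant's sham-condition bias to the shift induced by parietal TMS.
# Convention: negative mm = left of true centre; shift = TMS minus sham.

def tms_shift_by_baseline(participants):
    """Average TMS-induced rightward shift (mm) for participants with
    leftward vs rightward bias at baseline (sham condition)."""
    groups = {"left_baseline": [], "right_baseline": []}
    for p in participants:
        shift = p["tms_mm"] - p["sham_mm"]
        key = "left_baseline" if p["sham_mm"] < 0 else "right_baseline"
        groups[key].append(shift)
    return {k: sum(v) / len(v) if v else None for k, v in groups.items()}

# Invented example data for four participants
data = [
    {"sham_mm": -1.2, "tms_mm": 0.6},  # left pseudoneglect at baseline
    {"sham_mm": -0.8, "tms_mm": 0.9},
    {"sham_mm": 0.5,  "tms_mm": 0.4},  # right pseudoneglect at baseline
    {"sham_mm": 0.9,  "tms_mm": 1.0},
]
print(tms_shift_by_baseline(data))
```

In this toy example the left-baseline group shows a large rightward shift under TMS while the right-baseline group barely moves, which mirrors the qualitative pattern reported above.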

There were a number of other aspects to the study too. We compared the effect of TMS using two different methods of estimating bias, and we also asked whether the TMS influenced people’s eye movements (it didn't). I won’t go into these details here but the paper covers them in depth.

What do these results mean? I think they have two implications. First, they provide evidence that neglect and pseudoneglect arise from linked or common brain systems for representing space – and they provide a biological substrate for this association in the right parietal cortex. Second, the results provide a proof of principle that initial spatial biases can predict subsequent effects of neurological interference. In theory, this could one day lead to pre-diagnostic screening to determine whether a person is at risk of more severe neglect symptoms in the event of suffering a stroke.

All that said, we need to be cautious. There is a world of difference between the subtle and reversible effects of TMS and the dramatic effects of brain injury. We simply don't know whether the predictive relationship found here would translate to patients – that remains to be established. Also, our study had a small sample size, has yet to be replicated, and provides no indication of diagnostic or prognostic utility. But I think these preliminary results provide enough evidence that this avenue is worth pursuing. 

Open Access, Open Science, and Publishing Reform 

Apart from the science, our paper represents a milestone for my group in terms of our publishing practices. This is our first article in PLOS ONE and our first publication in an open access journal. Also, it is our first attempt at open science. Interested readers can download our data and analyses from Figshare (linked here and in the article itself). I increasingly feel that scientists like me who conduct research using public funds have an obligation to make our articles and data publicly available.

This paper also represents a turning point for me in terms of my attitude to scientific publishing. We originally submitted this manuscript in 2012 to the journal Neuropsychologia, where it was rejected because some of our results were statistically non-significant. Rejecting papers on the basis of ‘imperfect’ results is harmful to science because it enforces publication bias and pushes authors to engage in a host of questionable research practices to generate outcomes that are neat and eye-catching. With some ‘finessing’ of the analyses, we could probably have published our paper in a more ‘traditional’ outlet. But we decided to play a straight bat, and when we were penalised for doing so I realised on a very personal level that there was something deeply wrong with our publishing culture. As a consequence I severed my relationship with Neuropsychologia.

A short time later, I was contacted by Sergio Della Sala, the Editor-in-Chief of Cortex, who had read my open letter to Neuropsychologia. Sergio very kindly offered me an associate editorship (never let it be said that blogging is a waste of time!) and together we built the Registered Reports initiative. Our hope is that this new option for authors will help reform the incentive structure of academic publishing. Since then we’ve been part of a growing movement for change, alongside Perspectives on Psychological Science and their outstanding Registered Replications project, the Open Science Framework, and a special issue at Frontiers in Cognition which has adopted a variant of the Cortex pre-registration model.

In early June this year, Marcus Munafò and I, together with more than 80 of our colleagues, published an article in the Guardian calling for Registered Reports to be offered by journals across the life sciences. I’m delighted to report that the journal Attention, Perception and Psychophysics and two other academic journals are now on the verge of launching their own Registered Reports projects.

My small part in this reform traces back to having this manuscript rejected by Neuropsychologia editor Jennifer Coull in September 2012. So, in a very real sense, I owe Jennifer a debt of gratitude for giving me the kick in the butt I needed. Sometimes rock bottom can be a great launching pad.