Thursday, 13 April 2017

Seven questions about my new book: the Seven Deadly Sins of Psychology

So I wrote a short book about psychology and the open science movement (HOLY CRAP IS THIS REALLY HAPPENING)


Allow me to compose myself. Yes, yes it is.

The book is called The Seven Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice. You can find it at Amazon or Princeton University Press.

The illustration at the top of this post is a mosaic of the extraordinary artwork in the book, created by the richly talented Anastasiya Tarasenko (you can see more of her work here, and she is on Twitter too). Each woodcut depicts one of the deadly sins, and there is one virtue as well, making up the eight chapters. I was inspired to pursue this general concept by the imagery in a film called The Ninth Gate, in which book dealer Dean Corso goes searching for a book written by the Devil. The final illustrations here are a marriage between that general concept and Anastasiya's creative genius.

My friend Pete Etchells said I was rubbish at promoting my book, so I've decided to post a 7-part Q&A about it. Um yeah, I realise this is just me asking myself questions, but I'm a bit of a bastard, even to myself.


1. Why did you decide to write this book?
Over the last fifteen years I've become increasingly fed up with the "academic game" in psychology, and I strongly believe we need to raise standards to make our research more transparent and reliable. As a psychologist myself, one of the key lessons I've learned is that there is a huge difference between how the public thinks science works and how it actually works. The public have this impression of scientists as objective truth seekers on a selfless mission to understand nature. That's a noble picture but it bears little resemblance to reality. Over time, the mission of psychological science has eroded from something that originally was probably quite close to that vision into a contest for short-term prestige and career status, corrupted by biased research practices, bad incentives and occasionally even fraud.

Many psychologists struggle valiantly against the current system, but they are swimming against the tide. I trained within that system. I understand how it works, how to use it, and how it can distort your thinking. After 10 years of "playing the game" I realised I didn't like the kind of scientist I was turning into, so I decided to try to change the system and my own practices – not only to improve science but to help younger scientists avoid my predicament. At its heart this book lays out my view of how we can reinvigorate psychology by adopting an emerging philosophy called "open science". Some people will agree with this solution. Many will not. But, above all, the debate is important to have.

2. It sounds like you’re quite skeptical about science generally
Even though I’m quite critical about psychology, the book shouldn’t be seen as anti-science – far from it. Science is without doubt the best way to discover the truth about the world and make rational decisions. But that doesn’t mean it can’t or shouldn’t be improved. We need to face the problems in psychology head-on and develop practical solutions. The stakes are high. If we succeed then psychology can lead the way in helping other sciences solve similar problems. If we fail then I believe psychology will fade into obscurity and become obsolete.

3. Would it matter if psychology disappeared? Is it really that important?
Psychology is a huge part of our lives. We need it in every domain where it is important to understand human thought or behaviour, from treating mental illness, to designing traffic signs, to addressing global problems like climate change, to understanding basic (but extraordinarily complex) mental functions such as how we see or hear. Understanding how our minds work is the ultimate journey of self-discovery and one of the fundamental sciences. And it’s precisely because the world needs robust psychological science that researchers have an ethical obligation to meet the high standards expected of us by the public. If, to some of my colleagues, that sounds rather high-handed and moralistic, well, it is. Suck it up guys.

4. Who do you think will find your book most useful?
I have tried to tailor the content for a variety of different audiences, including anyone who is interested in psychology or how science works. Among non-scientists, I think the book may be especially valuable for journalists who report on psychological research, helping them overcome common pitfalls and identify the signs of bad or weak studies. At another level, I’ve written this as a call-to-arms for my fellow psychologists and scientists in closely aligned disciplines, because we need to act collectively in order to fix these problems. And the most important readers of all are the younger researchers and students who are coming up in the current academic system and will one day inherit psychological science. We need to get our house in order to prepare this generation for what lies ahead and help solve the difficulties we inherited.

5. So what exactly are the problems facing psychology research?
I’ve identified seven major ills, which (a little flippantly, I admit) can be cast as seven deadly sins. In order they are Bias, Hidden Flexibility, Unreliability, Data Hoarding, Corruptibility, Internment, and Bean Counting. I won’t ruin the suspense by describing them in detail, but they all stem from the same root cause: we have allowed the incentives that drive individual scientists to fall out of step with what’s best for scientific advancement. When you combine this with the intense competition of academia, it creates a research culture that is biased, closed, fearful and poorly accountable – and just as a damp bathroom encourages mould, a closed research culture becomes the perfect environment for cultivating malpractice and fraud.

6. It all sounds pretty bad. Is psychology doomed?
No. And I say this emphatically: there is still time to turn this around. Beneath all of these problems, psychology has a strong foundation; we’ve just forgotten about it in the rat race of modern academia. There is a growing movement to reform research practices in psychology, particularly among the younger generation. We can solve many problems by adopting open scientific practices – practices such as pre-registering study designs to reduce bias, making data and study materials as publicly available as possible, and changing the way we assess scientists for career advancement. Many of these problems are common to other fields in the life sciences and social sciences, which means that if we solve them in psychology we can solve them in those areas too. In short, it is time for psychology to grow up, step up, and take the lead.

We'll know we've succeeded in this mission when our published results become reliable and repeatable. As things currently stand, there is a high chance that any new result published in a psychology journal is a false discovery. So we’ll know we’ve cracked these problems when we can start to believe the published literature and truly rely on it. When this happens, and open practices become the norm, the closed practices and weak science that define our current culture will seem as primitive as alchemy.

7. What's wrong with your book?
Probably a lot, and that is for the community to judge. As a matter of necessity, any digestible perspective on this issue is going to contain a lot of the author's personal views. Much of what I've written stems from my own experience training and working in science, and reasonable people can disagree about the nature of the problems and the best solutions. The field is also moving very quickly, which makes writing a book particularly challenging. On a more specific note, I have a public errata listing misprints that weren't caught in time for the first print run. If you bought a first edition you may come across some or all of these (a rather optimistic economist friend of mine advises you to hang on to those first edition copies, because you never know what they might be worth some day...yes...well...). And of course, if you find an additional one, let me know and it shall be amended at the next available opportunity.

I hope you enjoy it.

Thursday, 16 March 2017

Should all registered clinical trials be published as Registered Reports?

Last night I asked you brilliant folks to give me your strongest counterarguments to the following proposition: that all registered clinical trials should be published in journals only if submitted as Registered Reports (RRs).

It was a really interesting discussion, and thanks to everyone who engaged. Here are the reasons that were put forward. 

My tl;dr verdict: I’m still waiting for a good reason!

1. RRs require presenting more methodological detail than standard clinical trial registration and so expose authors to a potential competitive disadvantage.
This isn’t really a scientific objection (or at least, it’s a very weak scientific objection) but I understand the strategic argument. My response is that if all registered trials have to be published as RRs then everyone faces the same disadvantage, so there is no relative disadvantage.

2. Clinical trial registration is sufficient for controlling bias.
It’s not. Around 16-66% of clinical trials never report results, ~14% are registered after data collection is complete, and somewhere between 30% and 85% engage in hidden outcome switching. With depressing statistics like these, how can standard trial registration be seen as anything close to sufficient?

3. OK, clinical trial registration used to be insufficient but it’s sufficient now because it requires authors to specify a primary outcome measure.
Still nope. The COMPARE project finds that authors routinely engage in hidden outcome switching even when primary outcome measures are specified. There is no logical reason why requiring something that already fails to prevent hidden outcome switching should prevent hidden outcome switching.

4. Ok fine, but a signed declaration at submission that outcomes haven’t been switched would solve hidden outcome switching.
It would probably have some effect, but it remains easy to specify an outcome vaguely enough that any one of several variables could be cherry picked as the primary outcome measure, allowing researchers to tick this box even when they switched. And even if this measure did reduce outcome switching, it would not reduce publication bias. RRs reduce both hidden outcome switching and publication bias. So why should any kind of declaration be preferable to all registered clinical trials being published as RRs?

5. Small companies often live or die by the results of trials. The RR model presents a risk to their livelihoods if they have to publicly admit that an intervention failed to work.
The model suggested here applies only to registered trials. If companies want to do their own internal unregistered trials and choose what to publish (where they can) based on the results, that’s up to them. The argument here is that the price of attaining credibility within the pages of a reputable peer-reviewed journal should be to register the trial as a RR.

6. RRs involve one paper arising per protocol. But a single protocol may need to produce multiple papers addressing different questions. This is also important to support the careers of early career researchers.
This sounds to me like an argument for salami slicing in the interests of careerism. But I accept that in the reality of academia, careers matter. My initial reaction is that if the research question and method are complementary enough to go in the same protocol, why aren’t the results complementary enough to go in the same paper? The easy solution to this is to separate protocols that address different questions into different RRs. That way there are as many papers to publish as there are separate research questions.

7. The RR model doesn’t force authors to publish their results. Therefore there is no guarantee that it will prevent publication bias.
This is the strongest objection so far, but even so it is virtually guaranteed to be less of a problem for RRs than under the status quo. Authors of RRs are indeed free to withdraw their papers after the protocol is provisionally accepted, but doing so triggers the publication of at least part of the registered protocol plus a reason for the withdrawal. So what is an author going to say, that they withdrew their peer-reviewed RR because the trial outcomes were negative? I suspect the research community (including the reviewers who invested time in assessing the protocol) would take a dim view of such a strategy, and it is probably for this reason that there has yet to be a single case of a withdrawn RR at any journal. In any case, if this were considered to be a serious risk, it would be straightforward to strengthen the RR model for clinical trials so that it requires authors to publish the results. If every journal published registered clinical trials only as RRs, and all RRs bore this mandate, virtually all registered trials would be published.

So that’s seven reasons, none of which I think are particularly strong.

Got anything stronger?