Thursday 27 July 2017

Open-ended, Open Science


In this special guest post, Rob McIntosh, associate editor at Cortex and long-time member of the Registered Reports editorial team, foreshadows a new article type that will celebrate scientific exploration in its native form.

Exploratory Reports will launch next month, and we now need your input to get it right.

Chris has kindly allowed me to crash his blog, to publicise, and to gather ideas and opinions for, a new article type at Cortex. The working name is Exploratory Reports. As far back as 2014, in his witterings and twitterings, Chris trailed a plan for Cortex to develop a format for open-ended science, a kind of louche, relaxed half-cousin to the buttoned-up and locked-down Registered Reports. Easier tweeted than done. We are now preparing to launch this brave new format, but even as we do so, we are still wrestling with some basic questions. Does it have a worthwhile role to play in the publishing landscape? Can it make a meaningful contribution to openness in science? What should its boundaries and criteria be? And is there a better name than Exploratory Reports?

Visitors to this blog will have a more-than-nodding familiarity with misaligned incentives in science, with the ‘deadly sin’ of hidden flexibility, and with the damage done to reliability when research conducted in an open-ended, see-what-we-can-find way is written into the record as a pre-planned test of specific hypotheses. No one doubts that exploratory research has a vital role to play in empirical discovery and hypothesis generation, nor that it can be rigorous and powerful (see recent blog discussions here and here). But severe problems can arise from a failure to distinguish between exploratory and confirmatory modes of enquiry, and most perniciously from the misrepresentation of exploratory research as confirmatory.

A major driver of this misrepresentation is the pervasive idealisation of hypothesis-testing, throughout our scientific training, funding agencies, and journals. Statistical confirmation (or disconfirmation) of prior predictions is inferentially stronger than the ‘mere’ delineation of interesting patterns, and top journals prefer neat packages of strong evidence with firm impactful conclusions, even if our actual science is often more messy and… exploratory. Given a more-or-less-explicit pressure to publish in a confirmatory mode, it is unsurprising that individual scientists more-or-less-wittingly resort to p-hacking, HARKing, and other ‘questionable research practices’.

Regulars of this blog will need no further education on such QRPs, or on the mighty and multi-pronged Open Science movement to reform them. Still less will you need reminding of the key role that study pre-registration can play by keeping researchers honest about what was planned in advance. Pre-registration does not preclude further exploration of the data, but it keeps this clearly distinct from the pre-planned aspects, eliminating p-hacking, HARKing, and several other gremlins, at a stroke. The promise of enhanced truth value earns pre-registered studies an Open Practices badge at a growing number of journals, and it has even been suggested that there should be an automatic bonus star in the UK Government’s Research Excellence Framework (where stars mean money).

This is fine progress, but it does little to combat the perceived pre-eminence of confirmatory research, one of the most distorting forces in our science. Indeed, a privileged status for pre-registered studies could potentially intensify the idealisation of the confirmatory mode, given that pre-registration is practically synonymous with a priori hypothesis testing. A complementary strategy would therefore be for journals to better value and serve more open-ended research, in which data exploration and hypothesis generation can take precedence over hypothesis-testing. A paper that is openly exploratory, which shows its working and shares its data, is arguably as transparent in its own way as a pre-registered confirmatory study. One could even envisage an Open Practices badge for explicitly exploratory studies. 

Some journal editors may believe that it is typically inappropriate to publish exploratory work. But this is not the case at Cortex, where the field of study (brain-and-behaviour) is relatively uncharted, where many research questions are open-ended (e.g. What are the fMRI or EEG correlates of task X? What characterises patient group Y across test battery Z?), and where data collection is often costly because expensive technologies are involved or a rare or fleeting neuropsychological condition is studied. It is hard to estimate how much of the journal’s output is really exploratory because, whilst some authors have the confidence to make exploratory work explicit, others may still dress it in confirmatory clothing. If publication is their aim, then they are wise to do so, because the Action Editor or reviewers could be unsympathetic to an exploratory approach.

Hence, a new article type for exploratory science, where pattern-finding and hypothesis generation are paramount, and where the generative value of a paper can even outweigh its necessary truth value. A dedicated format is a commitment to the centrality of exploratory research in discovery. It also promotes transparency, because the incentives to misrepresentation are reduced, and the claims and conclusions can be appropriate to the methods. Some exploratory work might provide strong enough evidence to boldly assert a new discovery, but most will make provisional cases, seeding testable hypotheses and predictions for further (confirmatory) studies. The main requirements are that the work should be rigorous, novel, and generative.

Or that is the general idea. The devil, as ever, is in the detail. Will scientists (as authors, reviewers and readers) engage with the format? What should exploratory articles look like, and can we define clear guidelines for such an open-ended and potentially diverse format? How do we exercise the quality control to make this a high-status format of value to the field, not a salvage yard for failed experiments, or a soapbox for unfettered speculation? Below, a few of the questions keeping us awake at night are unpacked a little further. Your opinions and suggestions on these questions, and any aspect of this venture, would be most welcome.

1. Scope of the format. At the most restrictive end, the format would be reserved for studies that take an exploratory approach to open-ended questions. Less restrictive definitions might allow for experimental work with no strong a priori predictions, or even for experiments that had prior predictions but in which the most interesting outcomes were unanticipated. At the most inclusive end, any research might be eligible that was willing to waive all claims dependent upon pre-planning. Are there clear boundaries that can be drawn?

2. Exploration and review. A requirement for submission to this format will be that the full data are uploaded at the point of submission, sufficient to reproduce the analyses reported. To what extent should reviewers, with access to the data, be allowed to recommend/insist that further analyses, of their own suggestion, should be included in the final paper? 

3. Statistical standards. Conventional significance testing is arguably meaningless in the exploratory mode, and it has even been suggested that this format should have no p-values at all. There will be a strong emphasis on clear data visualisation, showing (where feasible) complete observations. But some means of quantifying the strength of apparent patterns will still be required, and it may be just too radical to exclude p-values altogether. When conventional significance testing is used, should more stringent criteria for suggestive and significant evidence apply? More generally, what statistical recommendations would you make for this format, and what reporting standards should be required (e.g. confidence intervals, effect sizes, adjusted and unadjusted coefficients)?

4. Evidence vs. theory. Ideally, a good submission presents a solid statistical case from a large dataset, generating novel hypotheses and making testable predictions. The reality is often liable to be more fragmentary (e.g. data from rare neuropsychological patients may be limited, and not easily increased). Can weaker evidence be acceptable in the context of a novel generative theoretical proposal, provided that the claims do not exceed the data?

5. The name game. The working title for this format has been Exploratory Reports. The ambition is to ‘reclaim’ the term ‘exploratory’ from the slightly pejorative sense it has acquired in some circles. Let’s make exploration great again! But does this set up too much of an uphill struggle to make this a high-status format; and is there, in any case, a better, fresher term (Discovery Reports? Open Research?)?

Rob will oversee this new article format when it launches next month. Please share your views in the comments below, or feed back directly to Rob or Chris via email or Twitter.

5 comments:

  1. I applaud the effort! A couple of quick thoughts that I hope will help the intention of this format:
    1) If you do end up permitting p-values in this type of article (and I think there is probably a stronger case for not allowing them at all in this context), then there should probably be some qualifier attached to each reported p-value, perhaps going as far as stating "because this test was not pre-registered, the p-value should be interpreted with extreme caution".
    2) Try to require that constraints on generalizability be included in the discussion and probably even mentioned in each abstract (e.g. https://osf.io/preprints/psyarxiv/w9e3r)
    3) Clear labelling in the title, or at least on the article landing page, that serves an educational purpose for a lay audience (or even an expert audience that is not well versed in the limits of exploratory, hypothesis-generating work): this work is exploratory and requires replication prior to making any inference or drawing any conclusion.
    4) Many articles today have a "next steps" section; this should be mandated and perhaps even templated for each article. If something exciting was found, here is what needs to be included in any pre-registered, direct replication prior to any conclusion being drawn... (ideally the body of the paper would include the appropriate information to do so, but I think it is important enough to emphasize that point with some templated next-steps language).

    Good luck!!

  2. I am very curious how this initiative will develop. I still believe that strictly speaking an Exploratory Report (ER) format isn't really necessary. Exploratory analyses are inherent in Registered Reports (RR) already: anything not pre-registered is by definition exploratory.
    Unless you make it worthwhile to publish an ER, I don't think anyone would bother submitting to it.

    For this reason I also disagree with David Mellor's comments above. If the ER contains a lot of flags and warnings that "this work is exploratory and requires replication" and that it should be taken with a grain of salt, do you think people will choose this format over a traditional high-impact journal, at least while standard publishing options remain available (and I doubt you will get rid of them this way)? Some disclaimer at the top explaining the format would be fine, but anything beyond that is shooting this project in the foot.

    So instead, in my opinion the best approach would be to ensure that ERs are rigorous and thorough, so that the evidence presented in them is strong. In the absence of confirmatory statistics, I suppose an ER should require ample reports of robustness checks. A good exploratory result should show that the findings don't depend strongly on your analysis choices.

    To be fair, the same applies to RRs: just because something is pre-registered doesn't mean a predicted finding is actually valid. With RRs, however, the number of robustness checks can and should be limited to essential questions, whereas with ERs I think they must be an essential requirement for acceptance.

  3. "Exploratory Reports format at Cortex, the opposite of Registered Reports."

    Why "opposite"? RRs are permitted an exploratory section. So I can't imagine what the ER protocol could possibly be like.

    Max Coltheart

    Replies
    1. Thanks for commenting, Max. Yes, RRs do have an exploratory section, but the project must still be fundamentally hypothesis-driven, so exploration is permitted for RRs only within a relatively narrow scientific context. The aim of ERs is to provide a route that does not start from hypothesis testing in the first place. For ERs there wouldn't be peer review of the protocol (as happens with RRs).

  4. Hello,
    Why not use a mixed format, in which purely exploratory research is followed by confirmatory experiments focused on the most interesting and potentially reproducible (in terms of achievable effect size) results? This would require at least three submission stages: the first proposing the exploratory part, the second presenting the exploratory results and proposing the confirmatory experiments, and the final one presenting the results of the latter. In other words, the exploratory research would be used to define/select the preliminary results needed to justify the confirmatory research. This would resolve at least the problem of defining statistical standards. It may turn out that the exploratory findings are not sufficiently important/strong to carry on with the confirmatory part (or the authors could decide not to pursue the research). In that case, the work could be in some way "devalued" compared to work that leads on to confirmatory research. The devaluation could, for example, be achieved by publishing the work in a special section, "Non-confirmed exploratory research", possibly not indexed in PubMed. In this case, acceptance at stage one could be based on non-restrictive criteria (for example, judging only the proposed methods). Personally, I believe that many researchers would accept this arrangement, since even a non-indexed article can constitute an acknowledgment of the scientist's work.
    All the best
    Nicola Kuczewski
