"I had considerable freedom of clinical choice of therapy: My trouble was that I did not know which to use and when. I would gladly have sacrificed my freedom for a little knowledge." (Cochrane, A.L., 1971)
Archie Cochrane said it best. His pursuit of “a little knowledge” to inform clinical practice is the cornerstone of systematic review methodology, now considered the gold standard: “a rigorous and transparent approach of synthesizing scientific evidence that minimizes bias,” in the words of the Agency for Healthcare Research and Quality.
Consensus is a funny thing, though. It’s not always immediate. And I say that as a true believer in systematic reviews, one who did not start out that way when first introduced to the methodology as a member of an Evidence-Based Practice Center (EPC) team more than 10 years ago.
My Journey
As I have progressed in my career and trained junior colleagues new to the systematic review methodology, I find myself laughing out loud today at my reactions during my first review team experience:
STEP ONE: PRE-PLANNING
MY REACTION: This struck me as backwards. Defining the research question a priori? And then registering a protocol that specifies the population, intervention, comparator, and outcomes? I like the transparency of pre-planning, but is this right? What do we do if the data are not what we expect to see? And why is an information specialist needed if I know how to do a search in PubMed?
MY LEARNINGS: Key questions are not research questions. Eligibility criteria, Boolean search terms, the full range of database options: I had no idea how little I knew about executing an appropriate, informed literature search of the kind information specialists are trained to run.
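To make that concrete, here is a minimal sketch of what a Boolean search could look like when sent to PubMed through the public NCBI E-utilities esearch endpoint. The review topic, the concept groupings, and every search term below are hypothetical placeholders; a real information specialist would build far more comprehensive, database-specific strings.

```python
# A minimal sketch of a Boolean PubMed search via the public NCBI
# E-utilities esearch endpoint. The topic (exercise therapy for chronic
# low back pain) and the terms are illustrative placeholders only.
import json
import urllib.parse
import urllib.request

# Each PICO concept is a group of OR'd synonyms/MeSH terms,
# and the concepts are joined with AND.
population = '("low back pain"[MeSH Terms] OR "chronic low back pain"[Title/Abstract])'
intervention = '("exercise therapy"[MeSH Terms] OR "exercise program*"[Title/Abstract])'
outcome = '("pain measurement"[MeSH Terms] OR disability[Title/Abstract])'
query = f"{population} AND {intervention} AND {outcome}"

params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": query,
    "retmode": "json",
    "retmax": 20,
})
url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?{params}"

# The response lists matching PubMed IDs plus a total hit count.
with urllib.request.urlopen(url) as response:
    result = json.load(response)

print("Total hits:", result["esearchresult"]["count"])
print("First PMIDs:", result["esearchresult"]["idlist"])
```

Even this toy example hints at why dedicated searchers matter: every concept, synonym, field tag, and database has to be chosen deliberately if the search is going to be reproducible.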
STEP TWO: TITLE & ABSTRACT, FULL TEXT REVIEW, and DATA EXTRACTION
MY REACTION: Imagine a mountain of data. Now imagine some poor soul underneath it struggling to make sense of it all. That’s how it felt when I imagined getting thousands of results. Crushing results. How in the world will we get this done? And who decided that this data extraction table in Excel was a good way to collect it all? Data extraction … can’t breathe … eyes crossing …
MY LEARNINGS: The prescriptiveness and specificity of the systematic review process are designed to minimize bias. As a scientist, minimizing bias is my bias. It has guided me throughout my career; after being trained in systematic review methodology, however, the minimization of bias is now the foundation of every scientific thought I have and every scientific effort I pursue. Disseminating a methodologically pure, empirically driven perspective is my bedrock.
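For anyone staring down their own extraction table in Excel, here is a rough sketch of what a pre-specified extraction record can look like in code. The field names and the example study are hypothetical; every review team defines its own form in the protocol.

```python
# A hypothetical sketch of a pre-specified data extraction form.
# Field names and the example study below are illustrative only.
import csv

EXTRACTION_FIELDS = [
    "study_id", "author_year", "design", "population",
    "intervention", "comparator", "outcomes", "sample_size", "notes",
]

example_record = {
    "study_id": "S001",
    "author_year": "Smith 2019",  # placeholder citation
    "design": "RCT",
    "population": "Adults with chronic low back pain",
    "intervention": "Supervised exercise therapy",
    "comparator": "Usual care",
    "outcomes": "Pain intensity at 12 weeks",
    "sample_size": 120,
    "notes": "Outcomes reported as mean change from baseline",
}

# A fixed field list keeps every reviewer extracting the same items in
# the same order, which is the whole point of a pre-specified form.
with open("extraction_table.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=EXTRACTION_FIELDS)
    writer.writeheader()
    writer.writerow(example_record)
```

Whether the form lives in Excel or in dedicated review software matters far less than the fact that it is fixed before extraction begins.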
STEP THREE: ASSESSMENT OF RISK OF BIAS AND STRENGTH OF EVIDENCE
MY REACTION: Who, me? Biased? I wish the data could simply speak for itself. And it does speak, eventually. But assessing the quality of individual studies and of the overall evidence base can be a long, tedious process before it turns a corner.
MY LEARNINGS: Though it is only one of several stages of the review process, assessing risk of bias and strength of evidence strengthens whatever conclusions emerge by advancing an unbiased synthesis of the evidence base. Every component of this methodological process, as prescribed, must be executed with a commitment to rigor if the intended empirical synthesis is to be trusted.
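As a loose illustration of what this stage produces, here is a simplified sketch that rolls per-domain risk-of-bias judgments up into study-level calls. The domain names, the three-level judgment scale, and the “worst domain wins” rule are a common shorthand rather than the official Cochrane or GRADE procedures, and the study IDs are hypothetical.

```python
# A simplified, illustrative tally of risk-of-bias judgments. Domains,
# ratings, and study IDs are hypothetical; real tools (e.g., RoB 2,
# GRADE) use more detailed signalling questions and criteria.
from collections import Counter

risk_of_bias = {
    "S001": {"randomization": "low", "blinding": "some_concerns", "missing_data": "low"},
    "S002": {"randomization": "high", "blinding": "high", "missing_data": "some_concerns"},
    "S003": {"randomization": "low", "blinding": "low", "missing_data": "low"},
}

def overall_judgment(domains: dict) -> str:
    """The worst domain drives the study-level call (a common heuristic)."""
    ratings = domains.values()
    if "high" in ratings:
        return "high"
    if "some_concerns" in ratings:
        return "some_concerns"
    return "low"

for study, domains in risk_of_bias.items():
    print(study, "->", overall_judgment(domains))

summary = Counter(overall_judgment(d) for d in risk_of_bias.values())
print("Across studies:", dict(summary))
```

The point is not the code; it is that every judgment is explicit, recorded, and auditable, which is what lets the final synthesis claim to be unbiased.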
My New Best Friend: Tech
There is no ceiling to transparency in research. The same is true for replicability and the minimization of bias. The gamechanger here is software that preserves the transparency and replicability that define the systematic review process while remaining genuinely usable and open to optimization over time.
And as the methodology has penetrated medical practice and research, practitioners have benefited from parallel technological innovations that offer welcome solutions to a historically demanding, sometimes onerous review process.
One gamechanger that comes to mind is Rayyan’s “Open Review” feature, which gives scientists the option to share their systematic reviews publicly at the appropriate time. It is a critical step toward raising the bar for transparency and replicability, and it promotes collaboration and a breadth of dissemination of systematic review data that I have never seen before.
This idea of making a systematic review public was new to me, despite the public registration of protocols and peer-reviewed publications. I like the questions it made me ask myself:
- Why haven’t I considered public access for data details from each level of review in my systematic reviews?
- How can I proactively and more broadly share deeper details on my systematic reviews with the scientific community?
- Finally, am I worried that my intellectual property is threatened, that others might take my ideas?
I challenge others to ask the same questions of themselves.
Though I am in no way a Rayyan super user, I am impressed with this feature because it raises the bar on what is already considered the gold standard.
There’s still lots of work for all of us to do, both on methods and tech. But there are best practices—and now, purpose-driven tools—to help us accomplish it.