At Covidence, we recognize that Artificial Intelligence (AI) offers a massive opportunity to reduce the workload of systematic reviews. However, we also know that the integrity of your review is non-negotiable.
Our approach to building AI features is aligned with the 2025 Cochrane Position Statement and the RAISE (Responsible AI in Evidence Synthesis) guidelines. We prioritize transparency, user control, and methodological rigor over "black box" automation.
Here are the core principles that guide our AI tools:
1. Validated Automation & Human-in-the-Loop
We apply different levels of automation depending on the task and the strength of the underlying evidence base. For example:
Validated Exclusion (RCT Classifier): Because our RCT classifier has been rigorously evaluated and achieves >99% recall, it is accurate enough to automatically filter out non-randomized studies, letting you exclude irrelevant records with confidence.
Decision Support (Extraction & Sorting): For tasks requiring nuanced judgment, such as data extraction or relevance sorting, our AI acts as a suggester. It drafts text or reorders lists to speed up your work, but the final verification remains in your hands.
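The two automation levels above can be sketched in a few lines. This is a hypothetical illustration, not Covidence's implementation: the `Record` class, the `rct_probability` field, and the 0.1 cut-off are all assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    rct_probability: float  # classifier's confidence the study is an RCT

def auto_exclude_non_rcts(records, threshold=0.1):
    """Validated automation: drop records the classifier is confident are
    NOT randomized trials; everything else stays in the review."""
    return [r for r in records if r.rct_probability >= threshold]

def suggest_order(records):
    """Decision support: reorder by likely relevance so promising records
    surface first -- a human still screens every record."""
    return sorted(records, key=lambda r: r.rct_probability, reverse=True)
```

The design difference is the consequence of the output: `auto_exclude_non_rcts` removes records outright (justified only by validated, very high recall), while `suggest_order` merely changes presentation and leaves every decision to the reviewer.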
2. Task-Specific Performance Metrics
We tune our AI models based on the specific job they are performing and the potential impact of an error. For example:
Prioritizing Recall (Screening): For tasks like identifying RCTs, missing a relevant study is a critical failure. Therefore, our classifiers are calibrated to be highly sensitive (aiming for >99% recall) to ensure you don't miss relevant evidence, even if that means showing you a few irrelevant studies.
Prioritizing Precision (Extraction): For tasks like suggesting study characteristics, we prioritize precision. The goal is to provide accurate, high-quality suggestions that save you typing time, rather than overwhelming you with low-confidence guesses.
3. Data Privacy and Security
We treat your review data with the highest level of security. For example:
Private Data: Your unpublished data, extracted results, and proprietary comments are not used to train public-facing AI models without your explicit consent.
Security Standards: Our AI features operate under the same strict enterprise-grade security and compliance standards as the rest of the Covidence platform.
4. Transparency and Reporting
You cannot trust a tool if you don't know how it works. We are committed to "Glass Box" AI. For example:
Validation Metrics: We publish the performance metrics (sensitivity/specificity) of our models so you can decide if they are "fit for purpose" for your specific review.
Reporting Support: We provide templates to help you cite the use of AI in your PRISMA flow diagram and in the methods sections of your protocol and manuscript, ensuring you meet peer-review standards.
Next Steps
Now that you understand our safety and responsibility framework, you can explore how to apply these tools to your review:
Planning and reporting the use of automation (AI): a guide to deciding when AI is appropriate for your methodology, plus reporting templates for your protocol and manuscript.
📝 How do you feel about Covidence’s approach to automation? Share your perspective in this 2-minute survey (anonymous)