
Planning and reporting the use of automation (AI)

Last updated on 23 Dec, 2025

Using automation in a systematic review is a methodological decision that should be planned and documented early. This guide helps you assess whether Covidence's AI features are a good fit for your specific topic, and provides standard text for reporting their use in compliance with Cochrane and RAISE guidelines.

Part 1: Is AI a good fit for your review?

Before writing AI into your protocol, we recommend assessing your project against three factors: domain fit, review type and validation.

1. Assess your Topic (Domain Fit)

Most Covidence AI features operate by analysing the text within the records and PDFs in your review and are therefore applicable across a wide range of research disciplines.

However, some screening features are powered by specialised classifiers trained on external datasets. In particular, features that identify or remove non-randomized studies are powered by an RCT classifier that has been evaluated primarily on English-language biomedical literature. Performance may vary for reviews in non-biomedical or highly specialised domains.

2. Assess your Review Type

Assessing your review type involves defining the acceptable balance between efficiency and potential error rates (e.g. prioritizing sensitivity for systematic reviews vs. speed for rapid reviews). Covidence's AI features are developed with the specific requirements of systematic reviews in mind, prioritizing high recall and human verification. Consequently, these features are suitable for systematic reviews, as well as rapid or scoping reviews where researchers wish to maintain similar methodological standards while accelerating the process.

3. The "Mini-Pilot" (Validation)

If you are unsure whether a specific AI feature is performing well for your topic, the Cochrane Position Statement requires authors to "pilot or calibrate" the tool to ensure it does not compromise the integrity of the review. To satisfy this requirement in practice, we recommend running a Verification Pilot:

Action: Enable the feature (e.g. a screening feature that removes non-RCT references), then manually verify a random sample of records affected by the automation.

Sample Size: The size of this sample depends on your total volume and risk tolerance. For large reviews, checking a random selection (e.g. 50–100 records or 20% of the excluded set) is often sufficient to identify systematic errors. For smaller reviews, we recommend verifying a larger proportion.

Decision: If you agree with the AI on your sample, you can proceed with confidence and cite this validation step in your methods to demonstrate methodological rigour.
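The sampling step above can be sketched in a few lines of Python. This is a hypothetical illustration, not a Covidence feature: the record IDs, the 100-record floor and the 20% fraction are assumptions drawn from the guidance above, and you should set a fixed random seed so the sample you verified can be reported and reproduced.

```python
import random

def draw_verification_sample(excluded_records, floor=100, fraction=0.20, seed=42):
    """Draw a random sample of AI-excluded records for manual verification.

    Takes the larger of `floor` records or `fraction` of the excluded set,
    capped at the total number of excluded records. The fixed seed makes
    the sample reproducible, so it can be cited in your methods section.
    """
    n = min(len(excluded_records), max(floor, round(fraction * len(excluded_records))))
    rng = random.Random(seed)
    return rng.sample(excluded_records, n)

def agreement_rate(sample, human_decisions):
    """Fraction of sampled records where the reviewer agreed with the AI exclusion."""
    agreed = sum(1 for record in sample if human_decisions[record] == "exclude")
    return agreed / len(sample)

# Example: 1,000 AI-excluded records; 20% (200 records) exceeds the 100-record floor.
excluded = [f"rec-{i}" for i in range(1000)]
sample = draw_verification_sample(excluded)
print(len(sample))  # 200
```

If the agreement rate on the sample is high, you can proceed and report both the sample size and the rate; if you find disagreements, review them for a systematic pattern before relying on the feature.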


Part 2: Reporting Templates

Transparency is the cornerstone of responsible AI. According to PRISMA 2020 (Item 8) and the RAISE guidelines, you must declare which tools were used and how they influenced the review. The Cochrane Position Statement (2025) provides a specific template for reporting the use of AI to ensure you address methodological rigour, integrity and human oversight.

For your Protocol

Use this when registering your review (e.g. PROSPERO) to declare your intent.

"We will use the [insert feature, e.g. Tagging references reporting on RCTs] integrated into Covidence to assist in the [screening / data extraction] process.

The AI tool will be used to [describe intent, e.g. prioritize records / filter non-RCTs]. Outputs from the AI tool are justified for use in our synthesis because we will validate its performance within our specific context by [describe validation plan, e.g. manually checking a random sample of excluded records / verifying all suggested text against the source PDF] to ensure it does not undermine the trustworthiness of the synthesis."

For your Manuscript (Methods Section)

Use this in your final paper to describe what actually happened, based on the Cochrane Reporting Template.

General Declaration (Adapt based on features used):

"We used the [insert feature, e.g. Tagging references reporting on RCTs] developed by [Covidence] for [specific purpose, e.g. identifying randomized trials] in [the title and abstract screening process].

The tool was [state how it was used, e.g. used to filter out records with a low probability of being an RCT].

Outputs from the tool are justified for use in our synthesis because [describe validation, e.g. the model has been independently evaluated to achieve >99% recall and was validated in this specific review by manually checking a random sample of [insert number] excluded records].

Limitations of the tool include [describe limitations, e.g. potential lower precision in non-biomedical topics]. A detailed description of the methodology, including validation procedures, is available in [supplementary materials]."

Specific Feature Declarations:
For detailed reporting templates specific to the tools you used (including validation metrics and citations), please refer to the individual feature guides:


A Note on Authorship

Please remember: AI is a tool, not an author.
International publication ethics guidelines (COPE) state that AI tools cannot be listed as authors. You remain ultimately responsible for the accuracy of the data in your review. Always verify AI suggestions against the source text.
