Doctor Evidence Is Training AI-Based Technology to Distinguish Real-World Evidence in Literature Search and Monitoring
Doctor Evidence (DRE), a global health technology and healthcare research consulting company, continues to break through artificial intelligence (AI) hurdles, most recently with the ability to auto-identify research studies based on real-world evidence (RWE). Yesterday, at the AcademyHealth Annual Research Meeting in Washington, DC, DRE presented a poster entitled “Developing a Training Set to Teach AI-Based Technology to Distinguish Real-World Evidence in Literature Search and Monitoring.”
There continues to be a dramatic increase in both the volume and value of RWE across all disciplines of medicine. Precision medicine depends on integrating RWE with data from clinical trials to improve predictions of treatment benefits and harms. Yet, until now, it has been very difficult to search for RWE-based studies, because bibliographic databases do not have RWE subject headings.
DOC Search, a cloud-based platform developed by DRE, employs AI, natural language processing, comprehensive ontologies, and machine learning to search medical concepts across PubMed, ClinicalTrials.gov, WHO-ICTRP, and over 200 official and health-related RSS feeds. Using machine learning methods and an artificial neural network, the technology has been trained to identify different evidence types (e.g., randomized controlled trials, clinical studies) and to categorize terms that refer to patient or population characteristics, interventions, outcomes, and study designs. Now this technology is being trained to identify what is, and what is not, RWE.
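To illustrate the underlying idea of supervised text classification, telling one evidence type apart from another based on labeled examples, the sketch below trains a tiny perceptron on invented one-line "abstracts." This is a generic toy in plain Python, not DRE's pipeline; every example string and label is made up for illustration.

```python
from collections import Counter

# Toy labeled "abstracts": 1 = real-world evidence, 0 = not RWE (all invented)
TRAIN = [
    ("retrospective cohort study using electronic health records", 1),
    ("claims database analysis of treatment patterns in routine care", 1),
    ("patient registry follow-up of outcomes in clinical practice", 1),
    ("double blind randomized controlled trial of drug versus placebo", 0),
    ("phase iii randomized trial with strict inclusion criteria", 0),
    ("systematic review and meta analysis of randomized trials", 0),
]

def featurize(text):
    """Bag-of-words feature counts."""
    return Counter(text.split())

def train_perceptron(data, epochs=20):
    """Simple perceptron over sparse word counts."""
    w, bias = Counter(), 0.0
    for _ in range(epochs):
        for text, label in data:
            feats = featurize(text)
            score = bias + sum(w[t] * c for t, c in feats.items())
            pred = 1 if score > 0 else 0
            if pred != label:
                delta = 1 if label == 1 else -1
                for t, c in feats.items():
                    w[t] += delta * c       # push weights toward the true label
                bias += delta
    return w, bias

def predict(w, bias, text):
    feats = featurize(text)
    return 1 if bias + sum(w[t] * c for t, c in feats.items()) > 0 else 0

w, b = train_perceptron(TRAIN)
print(predict(w, b, "cohort study of electronic health records in routine care"))  # → 1 (RWE)
```

A production system like the one described above would of course use far richer features, ontologies, and a neural network rather than a perceptron, but the training loop, predict-compare-update over labeled examples, is the same in spirit.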
A consensus-based definition of real-world data (RWD), drawn from published descriptions, was used to create a screening protocol, which was then pilot tested by two senior methodologists and several untrained analysts to identify RWE. Ten teams of analysts used the protocol to identify candidate articles, with the goal of producing a final screened and verified set of 5,000 RWE articles and 5,000 non-RWE articles. All reviews, meta-analyses, and study protocols were excluded. This training set was then used to begin training the AI algorithm, and an additional 1,000 articles were used to evaluate its success rates. The initial model demonstrated good precision and recall (both 84%), for an F1 score of 0.84. Further testing of the prediction model against the 31 million records in the DRE database continues, to examine how well the approach generalizes.
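For reference, the reported F1 score follows directly from the precision and recall figures: F1 is the harmonic mean of the two, so when both are 84%, F1 is exactly 0.84. A quick check in Python (a generic calculation, not DRE's evaluation code):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.84, 0.84), 2))  # → 0.84
```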
“Having the capability to rapidly identify and filter studies containing RWE further enhances the usefulness of this technology for researchers, healthcare providers, and policy-makers,” stated Robert Battista, MBA, FRSPH, FRCP Edin, the CEO of Doctor Evidence. “The quantity of research published daily makes this a critically important innovation in clinical and health services research.”