
Why Human Controls Are Critical for Ethical AI in Life Sciences

Embarrassing stumbles with AI in one industry can be life-threatening hazards in the biosciences. Bioinformaticians and AI experts were quick to trace failures such as inappropriate conversational AI and skewed hiring practices to biases in the data sets on which the algorithms were trained. These very public missteps contributed to regulatory concerns and slower AI adoption by the life sciences industry.

Life sciences has long been a data-driven industry, and AI is a data-driven technology.

With each clinical trial generating an estimated 18 TB of data and more than two decades of real-world data (RWD) gathered from patients’ actual use of therapies, the industry’s appetite for using AI to find discoveries in this information trove is growing. Regulators also have shown more openness toward allowing AI experimentation, such as in creating personalized and targeted therapies.


In addition, technical solutions exist for some of the bias issues that must be addressed in most AI applications. Following FAIR data principles helps ensure quality data sets and stronger AI governance. Despite this growing technological maturity, AI’s ethical dimensions remain a work in progress. The important tools life sciences practitioners must bring to this work are skepticism and humility, two very human qualities. Here are three ways to apply a human touch to help address ethical AI in life sciences.

  • Recognize that AI tools can lack transparency, and rigorously test their conclusions. The reasoning behind algorithmic results is not always intelligible to humans – the “black box” issue. To take a hypothetical example: an algorithm designed to find a cancer biomarker in a potential clinical trial control group returns the finding that many of the participants are at risk for a secondary disease. The researcher can’t see how the algorithm found that pattern. It might have found a gene fragment associated with the disease, but the algorithm can’t explain a mechanism of action (that is, what would trigger the gene?). Publicizing such a finding would be ethically questionable, causing unwarranted alarm, and it would hurt the credibility of the researchers.

Results must be rigorously tested. In this case, one test could look for correlated biomarkers and check for the same results throughout the data set. Other tests could be run on independent data sets to see whether the results are repeatable. Another tactic is to examine longitudinal data from actual patients to see whether the outcomes the algorithm predicts have occurred in the real world. It could be that one mutation negates the other, or the data could show that people with a specific background are in fact at risk for certain types of cancer.
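As a concrete illustration of such a repeatability check, here is a minimal sketch in Python. It is not from the article: the cohorts are synthetic, the column names (biomarker, secondary_disease) are hypothetical, and Fisher’s exact test stands in for whatever association measure a real study would use.

```python
# Hypothetical repeatability check: does a biomarker/disease association
# found in a discovery cohort hold up in independent cohorts?
# All data here is synthetic; column names are illustrative.
import numpy as np
import pandas as pd
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)

def make_cohort(n: int, effect: float) -> pd.DataFrame:
    """Synthetic cohort: disease risk rises with the biomarker by `effect`."""
    biomarker = rng.integers(0, 2, size=n)
    disease = rng.random(n) < (0.10 + effect * biomarker)
    return pd.DataFrame({"biomarker": biomarker,
                         "secondary_disease": disease.astype(int)})

def association_pvalue(df: pd.DataFrame) -> float:
    """Fisher's exact test on the 2x2 biomarker-vs-disease contingency table."""
    table = pd.crosstab(df["biomarker"], df["secondary_disease"])
    _, p = fisher_exact(table)
    return p

cohorts = {
    "discovery":     make_cohort(400, effect=0.15),
    "replication_a": make_cohort(400, effect=0.15),
    "replication_b": make_cohort(400, effect=0.0),  # no real effect here
}
for name, df in cohorts.items():
    p = association_pvalue(df)
    print(f"{name}: p = {p:.4f} ({'consistent' if p < 0.05 else 'not replicated'})")
```

A finding that reaches significance in the discovery cohort but not in the replication cohorts is exactly the kind of result that should stay in-house for further testing rather than be publicized.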

It’s also critical for biostatisticians to share findings with colleagues who view data from different perspectives. Epidemiologists can weigh in on whether a pattern reflects real-world cross-sectional behavior they’ve seen in different diseases. Molecular biologists can say whether the pattern ties to a known molecular pathway. Data scientists can correlate findings with real-world data and offer perspective on how human behavior might influence the pattern. This cross-disciplinary approach is essential when testing algorithmic outcomes and theories.

  • Question the techniques used to arrive at AI-based decisions, because embedded or inserted bias can make them prone to inaccurate or skewed outcomes. Algorithms assign weights to different variables; if the weights are inappropriate, the results may be distorted. For example, in a range of adult human heights, it would probably be safe to design an algorithmic equation that gives the greatest weight to data points clustered between 5 feet and 6 feet versus those outside that range. Real-world observation of human heights indicates that would be a valid approach.

But in biology, ignoring the edges can be a problem. The biochemical effect an algorithm is searching for is not tied to everyday observation and could appear anywhere in the data set. That’s why it’s critical to know the data-weighting technique used and its limits. Sometimes simply changing the order in which an algorithm searches the data can change the outcome.
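To make the weighting pitfall concrete, here is a small synthetic sketch (invented data, not from the article) in which the signal of interest lives at the edges of the range; down-weighting those edges makes a fitted trend all but disappear.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: flat response in the middle of the range,
# but a strong effect at the edges (where the signal lives).
x = np.linspace(0.0, 10.0, 200)
y = np.where((x < 2) | (x > 8), 5.0 + 2.0 * (x - 5.0), 5.0)
y = y + rng.normal(0.0, 0.3, x.size)

# Scheme A: weight the central cluster heavily, down-weight the edges.
w_center = np.where((x >= 2) & (x <= 8), 10.0, 0.1)
# Scheme B: uniform weights.
w_uniform = np.ones_like(x)

slope_center = np.polyfit(x, y, 1, w=w_center)[0]
slope_uniform = np.polyfit(x, y, 1, w=w_uniform)[0]

print(f"slope, center-heavy weights: {slope_center:.2f}")   # near 0: effect missed
print(f"slope, uniform weights:      {slope_uniform:.2f}")  # clearly positive
```

Same data, two weighting schemes, two different conclusions: the researcher who never asks how the points were weighted never sees the effect at all.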


Human skepticism is a sharp tool for rooting out bias. Question the technique used. Have non-scientists review the results. Watch for confirmation bias: did the researchers see what they hoped to see and ignore outliers? The results should be repeatable on different data sets. Apply real-world data to see whether AI findings remain valid in the face of actual results.
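One simple guard against confirmation bias is a permutation test: shuffle the outcome labels and ask how often chance alone yields an effect as large as the one reported. Below is a minimal sketch with synthetic stand-in data; the effect measure and sample size are illustrative assumptions, not a prescribed method.

```python
import numpy as np

rng = np.random.default_rng(42)

def effect_size(marker: np.ndarray, outcome: np.ndarray) -> float:
    """Difference in outcome rate between marker-positive and marker-negative groups."""
    return outcome[marker == 1].mean() - outcome[marker == 0].mean()

# Synthetic stand-ins for a biomarker flag and a disease outcome.
marker = rng.integers(0, 2, size=500)
outcome = rng.integers(0, 2, size=500)

observed = effect_size(marker, outcome)

# Null distribution: shuffle outcomes so any marker/outcome link is broken.
null = np.array([effect_size(marker, rng.permutation(outcome))
                 for _ in range(10_000)])
p_value = np.mean(np.abs(null) >= abs(observed))

print(f"observed effect = {observed:.3f}, permutation p = {p_value:.4f}")
```

If the shuffled data produces the “discovery” nearly as often as the real data does, the researchers were likely seeing what they hoped to see.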

  • Deploy AI where it is fit for purpose. Most life sciences companies today work with AI in areas that require process and effort optimization. AI is at the proof-of-concept stage in clinical data review and clinical data management. The technology is more mature where structured or uniform data standards make machine learning possible, such as natural language generation for clinical study reports, patient narratives for submission data, and medical coding.
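The narrative-generation use case is tractable precisely because the inputs are structured. Here is a deliberately simplified sketch of template-driven patient-narrative generation; the record fields and wording are hypothetical and stand in for the structured, standards-based data a real system would consume.

```python
# Much-simplified sketch of template-based narrative generation from
# structured clinical data; all field names are hypothetical.
from dataclasses import dataclass, asdict

@dataclass
class AdverseEvent:
    subject_id: str
    age: int
    sex: str
    event_term: str   # e.g., a coded preferred term
    severity: str
    outcome: str

TEMPLATE = (
    "Subject {subject_id}, a {age}-year-old {sex}, experienced "
    "{severity} {event_term}. The event {outcome}."
)

def narrative(ev: AdverseEvent) -> str:
    """Render one structured record as prose; no free-text inference needed."""
    return TEMPLATE.format(**asdict(ev))

ev = AdverseEvent("001-002", 54, "female", "headache", "mild",
                  "resolved without intervention")
print(narrative(ev))
```

Because every field is constrained by the data standard, the output is predictable and auditable; this is why maturity came to these use cases first.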

AI is still not ready to be the final arbiter of decisions with a direct impact on an individual’s care. Take multiple sclerosis (MS). Older therapies include beta-interferon and glatiramer acetate. Studies show these therapies reduce exacerbations in patients with the relapsing-remitting form of MS, yet the mechanisms by which the therapies achieve their results are still poorly understood.

Conversely, the mechanisms of newer monoclonal antibody therapies for MS are clear. An algorithm might treat evidence of a reduction in the cells that attack the myelin nerve sheaths as an indication that the newer therapies are therefore better. But individual MS patients have a range of symptoms and experiences. An older therapy might be the right answer for one individual, the antibodies best for another. An AI algorithm cannot account for these human factors, and its weighting of one class of therapies over the other could cause more harm than good. Again, skepticism and humility must come into play to question results and bring the human touch to sophisticated mathematical calculations.
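One practical way to encode “not the final arbiter” is a human-in-the-loop gate: the algorithm may rank or flag therapies, but nothing takes effect without a clinician’s sign-off. A minimal sketch follows; the types, scores, and workflow are hypothetical, not a description of any real system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TherapyRecommendation:
    therapy: str
    model_score: float                         # algorithm's confidence, 0..1
    clinician_approved: Optional[bool] = None  # None until a human reviews it

def final_decision(rec: TherapyRecommendation) -> str:
    """The model output is advisory; only a clinician's review finalizes it."""
    if rec.clinician_approved is None:
        return (f"PENDING REVIEW: model suggests {rec.therapy} "
                f"(score {rec.model_score:.2f})")
    if rec.clinician_approved:
        return f"APPROVED: {rec.therapy}"
    return (f"OVERRIDDEN by clinician: {rec.therapy} rejected "
            f"despite score {rec.model_score:.2f}")

rec = TherapyRecommendation("monoclonal antibody therapy", 0.91)
print(final_decision(rec))      # pending until a human weighs in
rec.clinician_approved = False  # clinician judges an older therapy fits this patient
print(final_decision(rec))
```

The design choice is the point: the clinician’s judgment is a required step in the data model itself, so a high model score can never bypass the human factors the algorithm cannot see.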

With good governance and controls, AI will help the life sciences industry find insights for new therapies in its vast quantities of scientific research and clinical studies. Yet even, and maybe especially, as algorithms become more sophisticated and widely used, human skepticism and humility will be the guardians of ethical AI in life sciences.

