
WHO Outlines Considerations for Regulation of Artificial Intelligence for Health

The World Health Organization (WHO) has released a new publication listing key regulatory considerations on artificial intelligence (AI) for health. The publication emphasizes the importance of establishing AI systems’ safety and effectiveness, rapidly making appropriate systems available to those who need them, and fostering dialogue among stakeholders, including developers, regulators, manufacturers, health workers, and patients.

With the increasing availability of health care data and the rapid progress in analytic techniques – whether machine learning, logic-based or statistical – AI tools could transform the health sector. WHO recognizes the potential of AI in enhancing health outcomes by strengthening clinical trials; improving medical diagnosis, treatment, self-care and person-centred care; and supplementing health care professionals’ knowledge, skills and competencies. For example, AI could be beneficial in settings that lack medical specialists, such as in interpreting retinal scans and radiology images.

However, AI technologies – including large language models – are being rapidly deployed, sometimes without a full understanding of how they may perform, which could either benefit or harm end-users, including health-care professionals and patients. When using health data, AI systems could have access to sensitive personal information, necessitating robust legal and regulatory frameworks for safeguarding privacy, security, and integrity, which this publication aims to help set up and maintain.

“Artificial intelligence holds great promise for health, but also comes with serious challenges, including unethical data collection, cybersecurity threats and amplifying biases or misinformation,” said Dr Tedros Adhanom Ghebreyesus, WHO Director-General. “This new guidance will support countries to regulate AI effectively, to harness its potential, whether in treating cancer or detecting tuberculosis, while minimising the risks.”

In response to countries’ growing need to responsibly manage the rapid rise of AI health technologies, the publication outlines the following six areas for regulation of AI for health.

  • To foster trust, the publication stresses the importance of transparency and documentation, such as through documenting the entire product lifecycle and tracking development processes (see the sketch after this list).
  • For risk management, issues like ‘intended use’, ‘continuous learning’, human interventions, training models and cybersecurity threats must all be comprehensively addressed, with models made as simple as possible.
  • Externally validating data and being clear about the intended use of AI helps assure safety and facilitate regulation.
  • A commitment to data quality, such as through rigorously evaluating systems pre-release, is vital to ensuring systems do not amplify biases and errors.
  • The challenges posed by important, complex regulations – such as the General Data Protection Regulation (GDPR) in Europe and the Health Insurance Portability and Accountability Act (HIPAA) in the United States of America – are addressed with an emphasis on understanding the scope of jurisdiction and consent requirements, in service of privacy and data protection.
  • Fostering collaboration between regulatory bodies, patients, healthcare professionals, industry representatives, and government partners can help ensure products and services stay compliant with regulation throughout their lifecycles.
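
As an illustration of the transparency and documentation point above, the sketch below shows one way a developer might record lifecycle metadata for an AI health tool, loosely in the spirit of a “model card”. It is a minimal, hypothetical example: the field names and values are assumptions chosen for illustration and do not come from the WHO publication.

```python
# A minimal sketch of lifecycle documentation for an AI health tool.
# All field names and values here are illustrative assumptions,
# not requirements taken from the WHO publication.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelLifecycleRecord:
    """Metadata a developer might track across the product lifecycle."""
    model_name: str
    version: str
    intended_use: str                 # mirrors the 'intended use' point above
    training_data_sources: list[str]
    evaluation_datasets: list[str]
    known_limitations: list[str]
    last_validation_date: str
    change_log: list[str] = field(default_factory=list)


# Hypothetical record for a single release of a screening model.
record = ModelLifecycleRecord(
    model_name="retinal-screening-model",
    version="1.2.0",
    intended_use="Triage of diabetic retinopathy in screening programmes",
    training_data_sources=["site_A_retinal_scans", "site_B_retinal_scans"],
    evaluation_datasets=["held_out_site_C"],
    known_limitations=["Not validated for paediatric patients"],
    last_validation_date="2023-09-01",
)

# Persist the record so each release can be audited and compared over time.
print(json.dumps(asdict(record), indent=2))
```

Keeping such a record per release is one simple way to make the development process traceable for regulators and for post-market review.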

AI systems are complex and depend not only on the code they are built with but also on the data they are trained on, which may come, for example, from clinical settings and user interactions. Better regulation can help manage the risks of AI amplifying biases in training data.

For example, it can be difficult for AI models to accurately represent the diversity of populations, leading to biases, inaccuracies or even failure. To help mitigate these risks, regulations can be used to ensure that the attributes – such as gender, race and ethnicity – of the people featured in the training data are reported and datasets are intentionally made representative.
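
To make that concrete, the sketch below shows one simple way such reporting could look in practice: comparing the distribution of demographic attributes in a training dataset against reference population shares. It is a minimal sketch; the column names, reference figures and the 5% flagging threshold are illustrative assumptions, not figures from the WHO publication.

```python
# A minimal sketch of reporting demographic representativeness of training data.
# Column names, reference shares and the 5% threshold are illustrative assumptions.
import pandas as pd

# Hypothetical training data with the attributes the text mentions.
train = pd.DataFrame({
    "gender":    ["female", "male", "female", "male", "female", "male"],
    "ethnicity": ["A", "A", "B", "A", "A", "B"],
    "label":     [1, 0, 1, 0, 1, 1],
})

# Hypothetical reference shares for the population the tool is meant to serve.
reference = {
    "gender":    {"female": 0.51, "male": 0.49},
    "ethnicity": {"A": 0.60, "B": 0.40},
}

for column, expected in reference.items():
    observed = train[column].value_counts(normalize=True)
    print(f"\nAttribute: {column}")
    for group, expected_share in expected.items():
        observed_share = observed.get(group, 0.0)
        gap = observed_share - expected_share
        flag = "UNDER-REPRESENTED" if gap < -0.05 else "ok"
        print(f"  {group}: dataset {observed_share:.2f} "
              f"vs population {expected_share:.2f} ({flag})")
```

Publishing this kind of attribute-level summary alongside a model is one way developers could demonstrate that a dataset’s composition has been examined before release.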

