How to Leverage Artificial Intelligence to Combat Inequities, According to McKinsey
Artificial intelligence has the potential both to advance health equity and to cause harm, according to McKinsey. Two of the firm's partners spoke at a Northwell Health event on AI in healthcare this week. In their panel, they offered numerous examples of how to harness AI to solve inequity challenges while also cautioning against tech-driven bias.
Lucy Pérez, a senior partner, and Melvin Mezue, an associate partner and former academic physician, see barriers to health equity through four lenses: accessibility, affordability, quality of care and supportive social context. AI could be leveraged in all four areas, in ways ranging from pulling data from a medical record, to identifying a patient's language preference, to using natural language processing to assess nursing notes and determine discharge readiness. But before designing any solution, organizations should engage with the affected communities and understand what is most relevant to them, the presenters said.
McKinsey has estimated that reducing health disparities could not only improve outcomes but also drive $3 trillion in incremental annual GDP, save upward of $100 billion in healthcare costs and yield a 4:1 return on every dollar invested to reduce disease burden. The panel offered many ideas for how AI could drive value, including precision medicine, supply usage forecasting, AI-enabled prior authorization, digital disease management, readmission prediction and talent management automation.
"As with lots of opportunity, there is potential for risk," Pérez said. One guiding principle organizations should adhere to is to do no harm, working to mitigate the risks of AI-driven bias.
Many medical devices, like tech at large, can discriminate against people of color because of how they are designed and tested. For instance, researchers discovered that pulse oximeters are nearly three times more likely to overestimate oxygen levels in critically ill Black patients with COVID-19. This can cause ripple effects, such as affordability issues: Medicaid reimburses patients with significant hypoxemia only for readings at or below 88%.
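The mechanism behind that ripple effect comes down to simple threshold arithmetic, sketched below. Only the 88% cutoff comes from the reporting here; the patient's true oxygen saturation and the size of the device's overestimation are invented for illustration:

```python
# Hypothetical illustration (not McKinsey's analysis): a device that
# overestimates SpO2 can push a truly hypoxemic patient above the
# reimbursement cutoff, making the reading appear not to qualify.

REIMBURSEMENT_CUTOFF = 88  # readings at or below this qualify (per the article)

def qualifies_for_reimbursement(spo2_reading: int) -> bool:
    """A reading at or below the cutoff indicates significant hypoxemia."""
    return spo2_reading <= REIMBURSEMENT_CUTOFF

true_spo2 = 87      # hypothetical patient with significant hypoxemia
device_bias = 2     # hypothetical overestimation by a biased device
measured_spo2 = true_spo2 + device_bias

print(qualifies_for_reimbursement(true_spo2))      # true reading would qualify
print(qualifies_for_reimbursement(measured_spo2))  # biased reading does not
```

The point of the sketch is that even a small, systematic measurement bias matters disproportionately near a hard eligibility threshold, which is why device bias can translate directly into affordability gaps.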
To think meaningfully about this sort of risk, organizations should stress-test their AI capabilities by asking pointed questions: How might the AI be biased against certain populations? Does it allow for patient consent and ownership of data? Pérez and Mezue also recommended that organizations align their use of AI with their mission, establish boundaries on its use, stand up an oversight committee and put risk controls in place.
The first step for providers is to create an organizational infrastructure to think about equity, Mezue told Fierce Healthcare after the panel. That usually looks like hiring for a role focused solely on the issue. Then, leadership needs to prioritize the work and make equity efforts a core part of operations. Finally, an organization needs to make sense of its population data, develop governance frameworks and figure out how to fix disparities.
“That’s where things start to get a little bit more difficult,” he said. Ingesting and understanding data is not always straightforward and can get expensive. But “there are things you can do without a lot of money,” Mezue stressed, like asking the right questions and designing the right solutions.
“Part of what adds to the complexity is the fragmentation of the healthcare system,” Pérez told Fierce Healthcare about the challenge of bringing many stakeholders together. “What’s easy is, in a way, what’s within your control.”
Even organizations focused on piloting, not scaling, a solution must collaborate with many different parties. “There’s so many different people whose perspectives would add value if you want to have something which is really meaningful,” Mezue said.