FinRegLab Finds Machine Learning Tools Have Potential to Usher in Fairer Credit Decisions
FinRegLab issued two papers that examine lenders’ ability to build, understand, and manage machine learning (ML) models so that they can be trusted to underwrite credit applications from millions of consumers and small businesses.
“Machine Learning Explainability & Fairness: Insights from Consumer Lending” updates and expands upon empirical research that FinRegLab released in April 2022 with Professors Laura Blattner and Jann Spiess of the Stanford Graduate School of Business, while “Explainability & Fairness in Machine Learning for Credit Underwriting: Policy & Empirical Findings Overview” summarizes the project’s key findings and major implications for regulation and public policy.
The research addresses fundamental questions that are shaping the adoption of machine learning in credit markets. ML models have the potential to increase accuracy and to expand credit access, particularly when combined with new data sources, but some versions can be so complex that they are described as “black box” models. While data science techniques are still evolving, the research finds that some explainability tools can provide important information about how ML models operate and that automated debiasing techniques may offer significant improvements in fairness over traditional compliance approaches.
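As an illustration of what such explainability tools do, the sketch below applies a post-hoc attribution library (SHAP) to a toy underwriting model to rank the inputs driving an individual decision. The model, features, and data here are hypothetical and do not reflect the study's actual design or code:

```python
# Illustrative sketch only: a post-hoc explainability workflow of the kind the
# research evaluates. The features and data are synthetic and hypothetical.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical underwriting features; real lenders use far richer data.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "credit_utilization": rng.uniform(0, 1, 1000),
    "months_since_delinquency": rng.integers(0, 120, 1000),
    "monthly_income": rng.normal(4500, 1200, 1000),
    "loan_amount": rng.normal(12000, 4000, 1000),
})
# Synthetic default label loosely tied to utilization and recent delinquency.
y = ((X["credit_utilization"] > 0.7) & (X["months_since_delinquency"] < 24)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# TreeExplainer attributes each prediction (in log-odds) to the input
# features, one way lenders can surface the drivers of a credit decision.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Rank the features driving one applicant's score by attribution magnitude.
applicant = 0
ranked = sorted(zip(X.columns, shap_values[applicant]),
                key=lambda kv: abs(kv[1]), reverse=True)
for feature, contribution in ranked:
    print(f"{feature}: {contribution:+.3f}")
```

Attributions of this kind are one candidate basis for the adverse action notices that U.S. lenders are required to give declined applicants, which is a central reason explainability matters for ML underwriting.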
“Our research serves as a critical step in understanding when and how machine learning models can be used responsibly for credit underwriting,” states Melissa Koide, CEO of FinRegLab. “Particularly when combined with new data sources, machine learning models may be able to increase access to millions of underserved consumers and small businesses. But achieving that potential depends on appropriate human oversight and effective data science tools for understanding and managing these models.”
The overview paper further stresses that rigorous research, thoughtful deployment, and proactive regulatory engagement are critical to ensuring that any new technology ultimately benefits borrowers and financial service providers alike. While many questions remain about the trustworthy and responsible use of machine learning models, the paper argues that it is critically important to begin updating existing regulatory frameworks to account for the growing use of both ML models and explainability and fairness techniques. In the near term, it suggests that articulating which qualities matter in explainability techniques, and setting regulators’ expectations for how and when lenders should search for fairer alternative models, could help guide this early stage of evolution across diverse stakeholders, markets, circumstances, and technologies.
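To make the "search for fairer alternative models" concrete, the sketch below refits a simple model on feature subsets and prefers the candidate with the most balanced approval rates among those that remain sufficiently accurate. All data, features, and thresholds are hypothetical, and commercial debiasing tools automate this kind of search far more thoroughly:

```python
# Illustrative sketch only: a naive "less discriminatory alternative" search,
# the practice the paper suggests regulators clarify expectations around.
from itertools import combinations
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 2000
X = pd.DataFrame({
    "utilization": rng.uniform(0, 1, n),
    "income": rng.normal(4500, 1200, n),
    "inquiries": rng.integers(0, 10, n),
})
group = rng.choice(["A", "B"], size=n)  # hypothetical protected-class proxy
y = (X["utilization"] + rng.normal(0, 0.2, n) > 0.6).astype(int)

def approval_ratio(approved, group):
    """Lower group approval rate divided by the higher one (1.0 = parity)."""
    rates = pd.Series(approved).groupby(pd.Series(group)).mean()
    return rates.min() / rates.max()

# Refit on every feature subset; keep models that stay accurate enough,
# then prefer the one with the most balanced approval rates across groups.
candidates = []
for k in range(1, len(X.columns) + 1):
    for subset in combinations(X.columns, k):
        cols = list(subset)
        model = LogisticRegression(max_iter=1000).fit(X[cols], y)
        proba = model.predict_proba(X[cols])[:, 1]
        if roc_auc_score(y, proba) >= 0.70:  # hypothetical accuracy floor
            approved = proba < 0.5           # low predicted default -> approve
            candidates.append((cols, approval_ratio(approved, group)))

best = max(candidates, key=lambda c: c[1])
print("fairest adequate model uses:", best[0], "ratio:", round(best[1], 3))
```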
The empirical paper evaluates model diagnostic tools that help lenders address transparency challenges and manage machine learning models as required by law. Seven technology providers participated in the research project: Arthur, H2O.ai, Fiddler, RelationalAI, SolasAI, Stratyfy, and Zest AI. The companies’ model diagnostic tools, along with several open-source tools, were applied to a spectrum of underwriting models custom-built for the study.
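For a sense of what one basic fairness diagnostic in this space computes, the sketch below calculates an adverse impact ratio on synthetic approve/deny decisions. The data and group labels are hypothetical, and this does not reproduce any participating provider's proprietary method:

```python
# Illustrative sketch only: a simple group-fairness diagnostic of the kind
# applied to underwriting models. All inputs here are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
scores = pd.DataFrame({
    "approved": rng.random(1000) > 0.4,          # model's approve/deny decision
    "group": rng.choice(["A", "B"], size=1000),  # hypothetical protected-class proxy
})

# Adverse impact ratio: each group's approval rate relative to the
# highest-approved group; values below ~0.8 are a common red flag.
approval_rates = scores.groupby("group")["approved"].mean()
air = approval_rates / approval_rates.max()
print(air.round(3))
```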
These publications are part of a broader research project on explainability and fairness in machine learning for credit underwriting that FinRegLab has undertaken with support from JPMorgan Chase and the Mastercard Center for Inclusive Growth. Findings from this project and FinRegLab’s other work on the implications of artificial intelligence for financial inclusion are available on the organization’s website.