InRule Technology Enhances Machine Learning Platform With Bias Detection To Reduce Risk And Deliver Fairness Through Awareness

InRule Technology, provider of AI-enabled, end-to-end automation for the enterprise, introduced bias detection features within xAI Workbench, the company’s suite of machine learning modeling engines.

The bias detection features within xAI Workbench support organizations whose machine learning models produce predictions that could be biased against a protected class (gender, race, age, etc.) or affect an individual’s well-being, in applications such as clinical trials, population health management, incarceration recidivism, loan origination and insurance policy rating.

Bias detection in xAI Workbench delivers “fairness through awareness” and minimizes risk for organizations that leverage machine learning predictions at scale within business operations. Augmenting xAI Workbench with bias detection allows enterprises to quantify and mitigate potential hazards when complying with federal, state and local regulations or corporate policies.
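The announcement does not specify which fairness metrics xAI Workbench computes. As a rough illustration of what quantifying bias can look like in practice, the sketch below computes a disparate impact ratio for a binary classifier, a common screening metric under the regulatory "four-fifths rule"; the data and function name are hypothetical and this is not InRule's implementation.

```python
# Illustrative only: a generic disparate-impact check, not the xAI Workbench
# API. Assumes binary predictions and a single binary protected attribute.
import numpy as np

def disparate_impact_ratio(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of positive-prediction rates between the unprivileged (group==0)
    and privileged (group==1) populations. Values below ~0.8 are commonly
    flagged under the 'four-fifths rule'."""
    rate_unprivileged = y_pred[group == 0].mean()
    rate_privileged = y_pred[group == 1].mean()
    return rate_unprivileged / rate_privileged

# Hypothetical loan-approval predictions for two groups.
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(f"Disparate impact ratio: {disparate_impact_ratio(y_pred, group):.2f}")
# Here the ratio is 0.50, well below 0.8, so this model would be flagged.
```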

“Fairness through awareness” means an organization is empowered to evaluate its machine learning models for bias using all elements and data relevant to a prediction, even if those characteristics are not used to train the model itself. Conversely, “fairness through blindness” refers to selectively excluding elements and data from the modeling process. The risk of this blind approach is that it does not account for potential correlations between the remaining data and the excluded variable, and it provides no transparent comparison to determine whether the remaining proxy characteristics led to harmful bias even with the obvious element removed.
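The proxy problem is easy to demonstrate. In the sketch below, a protected attribute is excluded from training but a remaining feature correlates strongly with it, so the model can still "see" the protected class. Column names and data are hypothetical; this is a generic check, not InRule's method.

```python
# Why "fairness through blindness" can fail: a feature left in the training
# data may act as a proxy for the excluded protected attribute.
import pandas as pd

df = pd.DataFrame({
    "zip_code_income": [30, 32, 31, 80, 82, 85],   # hypothetical proxy
    "credit_score":    [610, 640, 620, 720, 750, 740],
    "protected_attr":  [0, 0, 0, 1, 1, 1],          # excluded from training
})

# Correlate each remaining training feature with the excluded attribute.
features = ["zip_code_income", "credit_score"]
proxy_strength = df[features].corrwith(df["protected_attr"])
print(proxy_strength.sort_values(ascending=False))
# Strong correlations mean blindness alone cannot guarantee fairness; the
# "awareness" approach keeps the attribute available for evaluation instead.
```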

Unlike platforms that exclusively measure whether the distribution of data has changed over time, xAI Workbench bias detection evaluates the fairness of the model itself, ensuring that people who are similar (on the basis of the factors most relevant to the modeled decision) receive equal treatment. Additionally, bias detection in xAI Workbench drills down to the deepest subsets of the model, exploring millions of data paths to ensure that the model operates with equal fairness within groups and between groups. In contrast, most machine learning platforms that offer bias detection pursue bias only at the model’s highest, most overarching, global level.
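As a rough illustration of subgroup-level auditing versus a global-only check (not the xAI Workbench implementation), the sketch below compares positive-prediction rates within each intersectional subgroup against the global rate; the data and threshold are hypothetical.

```python
# Generic subgroup fairness audit: a single global rate can hide disparities
# that only appear inside intersectional subgroups.
import pandas as pd

df = pd.DataFrame({
    "pred":   [1, 1, 0, 1, 0, 0, 1, 1],  # model's binary predictions
    "gender": ["F", "F", "F", "F", "M", "M", "M", "M"],
    "age":    ["<40", ">=40", "<40", ">=40", "<40", ">=40", "<40", ">=40"],
})

# Global positive-prediction rate (what a top-level check sees).
global_rate = df["pred"].mean()
print("global positive rate:", global_rate)

# Rate within each intersectional (gender, age) subgroup.
subgroup_rates = df.groupby(["gender", "age"])["pred"].mean()
print(subgroup_rates)

# Flag subgroups whose rate deviates from the global rate beyond a threshold.
flagged = subgroup_rates[(subgroup_rates - global_rate).abs() > 0.2]
print("flagged subgroups:")
print(flagged)
```

Here the global rate looks unremarkable, but one subgroup deviates sharply from it and gets flagged, which a global-only audit would miss.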

“Organizations leveraging machine learning need to be aware of the harmful ways that bias can creep into models, leaving them vulnerable to significant legal risk and reputational harm,” said David Jakopac, Ph.D., vice president, Engineering and Data Science, InRule Technology. “Our explainability and clustering engines provide unprecedented visibility that enables organizations to quantify and mitigate harmful bias through action in order to advance the ethical use of AI.”

In addition to reducing risk and preventing harmful algorithmic bias, the automated bias detection in xAI Workbench helps minimize bottlenecks in the model ops lifecycle by giving data science teams a set of automated tools to accelerate their development process, leading to faster model deployments with greater confidence.
