
New Study Finds AI Benefits Hinge on Explainability and Automation with a Human Touch

InRule Technology, an intelligence automation company providing integrated decisioning, explainable AI and digital process automation software to the enterprise, published “The End of AI Ambiguity,” a newly commissioned research study conducted by Forrester Consulting on behalf of InRule. The study found that ethical worries around artificial intelligence (AI) and machine learning (ML) stymie the implementation of AI/ML decisioning: 66 percent of AI/ML decision-makers stated that current AI/ML offerings are unable to meet their organization’s ethical business goals.

Respondents worry that harmful bias can lead to inaccurate (58 percent) or inconsistent (46 percent) decisions, decreased operational efficiency (39 percent), and loss of business (32 percent). The study concludes that addressing these ethical risks by leveraging human accountability within AI-powered process automation is central to enabling decision makers to better predict customer needs and personalize solutions.


The research found that nearly 70 percent of decision-makers agree that involving humans in AI/ML decisioning reduces the risks associated with these technologies, but keeping humans in the loop requires AI systems with native explainability functionality. Automating human governance and engaging a wider group of stakeholders improves both decisions and model transparency.
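
The study does not describe how InRule implements this; purely as an illustration, the sketch below shows one way human-in-the-loop decisioning with built-in explanations might be wired up. Every name, weight, and threshold here (decide, WEIGHTS, REVIEW_BAND) is a hypothetical placeholder, not InRule’s product or API.

```python
# Hypothetical sketch of human-in-the-loop decisioning with built-in
# explanations. All names and numbers are illustrative placeholders,
# not InRule's actual software.

from dataclasses import dataclass

# Illustrative linear model: each feature contributes weight * value.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}
APPROVE_THRESHOLD = 0.5
REVIEW_BAND = 0.15  # scores this close to the threshold go to a human

@dataclass
class Decision:
    outcome: str                            # "approve", "deny", or "human_review"
    score: float
    explanation: list[tuple[str, float]]    # features ranked by contribution

def decide(features: dict[str, float]) -> Decision:
    contributions = {f: WEIGHTS[f] * v for f, v in features.items()}
    score = sum(contributions.values())
    # Rank features by absolute contribution so a non-data-scientist
    # can see *why* the score came out the way it did.
    explanation = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    if abs(score - APPROVE_THRESHOLD) < REVIEW_BAND:
        outcome = "human_review"  # keep a human in the loop for borderline cases
    else:
        outcome = "approve" if score >= APPROVE_THRESHOLD else "deny"
    return Decision(outcome, round(score, 3), explanation)

if __name__ == "__main__":
    print(decide({"income": 1.2, "debt_ratio": 0.5, "years_employed": 0.8}))
```

In this sketch, borderline scores are routed to a human reviewer rather than decided automatically, and every decision carries a ranked list of feature contributions so a reviewer can see which inputs drove the outcome.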


Linking explainability with a human touch also unlocks other benefits: assurance to stakeholders that AI/ML can be used safely (59 percent), reduced regulatory risk (51 percent), and fairer models (48 percent). “Right to explainability” legislation is also spreading: the Algorithmic Accountability Act of 2022 has been proposed in the U.S. Congress, and the European Union is pushing for stricter AI regulations. Businesses must take steps today to ensure they can prove the accuracy and fairness of their algorithms.
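
To make “prove the fairness of their algorithms” concrete, here is a minimal, hypothetical sketch of one common fairness check, demographic parity across groups. The sample data, the choice of metric, and the tolerance (MAX_GAP) are assumptions for illustration only; real audits combine several metrics and much larger datasets.

```python
# Minimal, hypothetical sketch of a demographic-parity check: compare
# approval rates across groups. Data and tolerance are illustrative.

from collections import defaultdict

MAX_GAP = 0.10  # illustrative tolerance for the approval-rate gap

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group, approved) pairs; returns approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions: list[tuple[str, bool]]) -> float:
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    gap = parity_gap(sample)
    print(f"approval-rate gap: {gap:.2f}",
          "(within tolerance)" if gap <= MAX_GAP else "(needs review)")
```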

According to Rik Chomko, CEO and co-founder of InRule, “AI is consistently ranked by C-suite executives as critically important to the future of their business, yet two-thirds of those surveyed by Forrester Consulting have difficulty explaining the decisions their AI systems make. Built-in, native explainability empowers non-data scientists and C-suite executives to quickly understand why a decision was made and take confidence in the outcomes of intelligence automation.”


