
How Data Integrity is Vital for AI

AI has stormed its way into the public consciousness over the past year thanks to the meteoric rise of generative AI and foundation models. This popularity has many business leaders rushing to infuse AI capabilities into their workflows, and for good reason: they see opportunities to increase productivity and shorten time to value by applying pre-trained AI models to tasks like analyzing text, extracting insights from documents, and generating code.

However, the importance of doing it right cannot be overstated. We’ve seen the pitfalls of poorly trained AI play out in public examples, like algorithms that unfairly denied mortgages to minority applicants and HR recruitment tools that exhibited bias.

Digging Into the Data

The common thread in these examples is data. Data is the fuel that drives AI’s engine, and quality control is paramount. To run at peak performance, AI needs data that’s easily accessible, cleaned of imperfections, and trustworthy.
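
To make "cleaned of imperfections" concrete, here is a minimal sketch of the kind of basic quality checks that idea implies, written in Python against a hypothetical customer table. The column names are assumptions for illustration, not taken from any particular system.

```python
# A minimal sketch of basic data-quality checks on a hypothetical customer
# table; the column names ("customer_id", "email", "annual_revenue") are
# assumptions, not taken from any particular system.
import pandas as pd

def quality_report(df: pd.DataFrame) -> dict:
    """Report simple completeness, uniqueness, and range checks."""
    return {
        "rows": len(df),
        "missing_emails": int(df["email"].isna().sum()),
        "duplicate_customer_ids": int(df["customer_id"].duplicated().sum()),
        "negative_revenue_rows": int((df["annual_revenue"] < 0).sum()),
    }

df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "email": ["a@example.com", None, "c@example.com", "d@example.com"],
    "annual_revenue": [120_000, 95_000, -10, 300_000],
})
print(quality_report(df))  # flags 1 missing email, 1 duplicate id, 1 bad value
```

Checks like these are typically run automatically in data pipelines, so bad records are caught before they ever reach a model.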

A recent study found that 73% of business executives are unhappy with their data quality, even though data engineers can spend up to 40% of their time cleaning, preparing, and integrating it.

The increasing volume of data spread across numerous repositories creates issues such as data sprawl, where an organization’s data is scattered and often siloed by business function: accounting data lives in one system, customer service data in another.


By bringing this data together under one common architecture, such as a data fabric, organizations can make it more readily available for AI to work its magic. Data fabric and data mesh technologies provide streamlined access across repositories, along with tools that help clean and qualify the data, taking human error out of the equation. They also allow for better governance, ensuring that only the people who need access to the data can get it.
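
The toy sketch below illustrates the underlying idea only: two hypothetical silos (accounting and customer service) are joined into a single view behind a simple role check. Real data fabric and data mesh platforms do far more; the table names, roles, and policy here are assumptions for illustration.

```python
# Toy illustration of unified access plus a governance check. This is not a
# data fabric implementation; all table names, roles, and the policy are
# hypothetical.
import pandas as pd

ALLOWED_ROLES = {"finance_analyst", "data_steward"}  # assumed access policy

def load_unified_view(role: str) -> pd.DataFrame:
    if role not in ALLOWED_ROLES:
        raise PermissionError(f"role '{role}' is not authorized for this view")
    # In practice these would come from separate repositories.
    accounting = pd.DataFrame({"customer_id": [1, 2], "balance": [250.0, 80.0]})
    service = pd.DataFrame({"customer_id": [1, 2], "open_tickets": [0, 3]})
    # Join the silos into one consistent view for downstream AI workloads.
    return accounting.merge(service, on="customer_id", how="outer")

print(load_unified_view("finance_analyst"))
```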

Care to Explain

For AI to be trustworthy, it’s vital to have insight into whether the data that powers it is free not just of errors but also of biases, whether unintentional or the product of bad actors. As in a zero-trust security architecture, data should be safeguarded with governance structures that assume any opportunity for manipulation will be exploited.
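
As a rough illustration of that zero-trust mindset applied to data, the sketch below refuses to load a training file unless its checksum matches a value recorded out-of-band and its schema matches expectations. The file path, expected hash, and column names are placeholders.

```python
# A minimal sketch of a "trust nothing by default" ingestion gate: the file
# must match a checksum recorded at the source and the expected schema before
# it is accepted for training. Path, hash, and columns are placeholders.
import hashlib
import pandas as pd

EXPECTED_SHA256 = "replace-with-hash-recorded-at-source"
EXPECTED_COLUMNS = {"applicant_id", "income", "decision"}

def verify_and_load(path: str) -> pd.DataFrame:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != EXPECTED_SHA256:
        raise ValueError("checksum mismatch: file may have been altered")
    df = pd.read_csv(path)
    if set(df.columns) != EXPECTED_COLUMNS:
        raise ValueError(f"unexpected schema: {sorted(df.columns)}")
    return df
```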

Once your data is architected for centralized access and vetted for ethical risks, you can begin applying AI to unlock business value. But even then, it’s not as simple as throwing a switch and sitting back. AI requires ongoing adjustments and lifecycle management to ensure it doesn’t drift from its original intended use and parameters. That means your data should remain under close scrutiny for bias as well.
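
One common way to watch for drift is to compare the distribution of an input feature at training time against recent production data. The sketch below uses a two-sample Kolmogorov-Smirnov test on synthetic numbers; the feature, the data, and the 0.05 threshold are illustrative assumptions.

```python
# A rough sketch of one drift check: compare a feature's training-time
# distribution against recent production data with a two-sample KS test.
# The data is synthetic and the 0.05 threshold is an assumption.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_income = rng.normal(60_000, 15_000, size=5_000)
recent_income = rng.normal(68_000, 15_000, size=5_000)  # shifted population

res = ks_2samp(training_income, recent_income)
if res.pvalue < 0.05:
    print(f"possible drift (KS statistic={res.statistic:.3f}, p={res.pvalue:.4f})")
```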

To maintain trust, AI can’t be a black box; organizations need to be able to open the hood on their applications and demonstrate how they operate. In fact, the ability to explain AI is increasingly being mandated by government regulation.
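
There are many ways to open that hood; one simple, model-agnostic technique is permutation importance, which measures how much a model’s accuracy drops when each input feature is shuffled. The sketch below runs it on synthetic data with scikit-learn; the feature names are made up.

```python
# A minimal sketch of one explainability technique, permutation importance,
# on synthetic data. Feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # third feature is irrelevant

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "tenure", "zip_noise"], result.importances_mean):
    print(f"{name}: {score:.3f}")  # the irrelevant feature scores near zero
```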


For example, New York City began enforcing a law this summer that prohibits employers from using AI in recruiting, hiring, or promotion decisions without a detailed bias audit.
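
Audits of this kind typically report selection rates by demographic group and each group’s impact ratio relative to the most-selected group. The sketch below shows that arithmetic on made-up numbers; the 0.8 "four-fifths" threshold in the comment is a widely cited rule of thumb from US employment guidelines, not necessarily the standard any particular law applies.

```python
# Simplified sketch of the arithmetic behind a selection-rate bias audit.
# Groups and counts are made up.
selected = {"group_a": 48, "group_b": 22}
applicants = {"group_a": 100, "group_b": 80}

rates = {g: selected[g] / applicants[g] for g in applicants}
best_rate = max(rates.values())
impact_ratios = {g: rate / best_rate for g, rate in rates.items()}

print(rates)          # {'group_a': 0.48, 'group_b': 0.275}
print(impact_ratios)  # group_b is about 0.57, below the 0.8 rule of thumb
```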

Measure Twice

There’s a lot of pressure on businesses today to find an operating edge amid economic uncertainty, and AI presents a compelling solution.

But before venturing into these waters, it’s a good idea to form an ethics board to establish guiding principles for your use of AI. Such a body can support ongoing review of your AI applications, validating that they are transparent, explainable, and drawing on unbiased data, and ensuring your use of the technology isn’t at odds with your company’s values.

To implement an AI strategy the right way, it’s critical to take the time to embed ethics, vet your data, and build a solid architecture on which AI can operate reliably.

