Data Lineage Sheds Light on Generative AI’s Hallucination Problem

Artificial intelligence (AI) has taken over headlines, with every industry looking for ways to use the technology to speed up processes, increase efficiency, and work with leaner teams. But the discourse around generative AI spans the good and the bad — from its myriad uses to its various dangers. Yes, it can save organizations time and money, but unfettered AI use comes with risks.

When an AI chatbot “hallucinates” 

We’ve already seen high-profile AI slip-ups. A judge recently fined two lawyers and their law firm after they used ChatGPT to draft trial documents that included fake case law. Not only that — the judge said the lawyers and firm “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.”

The dangers of AI “hallucinations” — confident responses from an AI-powered chatbot that aren’t supported by its training data — extend into nearly every industry. When you consider the nature of the technology, it’s not surprising that these hallucinations occur. Part of the problem stems from our own inconsistent expectations. When we use generative AI to draft a story or generate a picture for our children, for instance, we crave unexpected, frivolous — even illogical — results to surprise and delight. But when we ask the same technology to perform a task that demands rigorous results, we’re bound to be disappointed.

Using this technology for exacting work doesn’t just lead to dashed expectations. Serious consequences can result. A wellness chatbot meant to replace a human-staffed helpline for people with eating disorders was suspended and is under investigation after it offered harmful advice to users.

Shift the conversation from consumer-facing models to internal business uses of generative AI, and additional concerns arise. Expecting generative AI technologies to produce accurate answers to business-critical questions is a misuse of the technology that can have devastating consequences. The models are, quite simply, meant to make up answers. Thus, the incorrect information they produce — from risk modeling analytics to fraud analytics and predictive services — can mislead executives and business leaders about their operations. These false insights hamper asset management, damaging a business’s reputation and bottom line and jeopardizing jobs and operations in the process.

How hallucinations happen

Generative AI depends on the data it’s been trained on to generate its output. It needs high-quality data to produce high-quality results. But the algorithm is inherently built to produce output based on probabilities and statistics, not a true understanding of the content. That’s why we see hallucinations. The fact is this: Generative AI models are always hallucinating. The threshold for concern isn’t based on when hallucinations start but rather on when users become aware of them.
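As a rough illustration — a minimal sketch, not any vendor’s actual model — the snippet below shows how a language model picks its next token: it samples from a probability distribution over plausible continuations, and nothing in the loop checks whether the result is true. The prompt, token names, and probabilities here are all hypothetical.

```python
import random

# Hypothetical next-token probabilities after a prompt such as
# "The court case Smith v. ..." -- all names and numbers are illustrative.
next_token_probs = {
    "Jones": 0.42,     # plausible, real-sounding continuation
    "Johnson": 0.31,   # plausible, real-sounding continuation
    "Fabrico": 0.27,   # equally fluent, entirely fabricated continuation
}

def sample_next_token(probs):
    """Draw one token in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Every draw is "made up" in the same statistical sense; the model only
# guarantees that the continuation is likely, never that it is true.
print(sample_next_token(next_token_probs))
```

In this framing, a fabricated citation and a correct one come out of exactly the same mechanism, which is why the models can be said to be always hallucinating.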

And the threshold we’re discussing is a moving target, dependent on our expectations for the AI’s output. As mentioned earlier, it’s one thing to ask for and receive a fanciful bedtime story for your children — but a very different matter to expect accurate trial documents and get hallucinated results.

Even Microsoft, which offers the AI-powered chatbot Bing, alludes to this in an internal document, saying the systems “are built to be persuasive, not truthful.” A large language model’s outputs can look very realistic while including statements that aren’t true, which is what makes them so dangerous. We trust the output because of the quality and familiarity of the language, not its actual content. Extend the conversation to include AI-generated images, and our propensity to believe what we see (at least until we cast a more critical eye) poses even greater risks.

While some experts in the field say they don’t fully understand how AI hallucinations emerge — or how to stop them — I believe we do understand their origins. But as I’ve said, we accept certain hallucinations that work in our favor, given the training sets and our criteria for the model’s output. The issue at hand is how to prevent less favorable outputs when users apply LLMs and other generative AI models to elicit more rigorous — and consequential — responses.

How data lineage can help

Generative AI, extending beyond LLMs like ChatGPT, depends on intricate data pipelines and algorithms that run into data obstacles like any other software. This complexity, combined with AI’s increasing adoption across fields, makes the need for auditable systems urgent.

Generative AI’s success hinges on high-quality data. Data lineage enables organizations to root out harmful data and ensure accuracy and quality. Developers can use automated lineage tools to trace the origins of training data. They can identify instances where a model’s inputs shift toward inaccurate information, whether that “bad” data flows from the wrong source or has been filtered in a way that introduces bias.
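To make that concrete, here is a minimal sketch of the idea — not any particular lineage product’s API: each dataset records where it came from and what transformation produced it, so the raw origins of a training set can be walked back when a model starts misbehaving. All dataset, source, and transform names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class LineageNode:
    name: str                       # e.g. "fraud_training_set"
    source: str                     # upstream system, or "derived"
    transform: str = "none"         # filter/join applied on the way in
    parents: list = field(default_factory=list)

    def trace(self):
        """Walk back to the raw origins of this dataset."""
        if not self.parents:
            return [f"{self.name} <- {self.source}"]
        lines = []
        for parent in self.parents:
            lines.extend(parent.trace())
        inputs = ", ".join(p.name for p in self.parents)
        lines.append(f"{self.name} <- {self.transform}({inputs})")
        return lines

# Hypothetical pipeline feeding a fraud-analytics model
raw = LineageNode("raw_transactions", source="payments_db")
filtered = LineageNode("filtered_transactions", source="derived",
                       transform="drop_small_amounts", parents=[raw])
training = LineageNode("fraud_training_set", source="derived",
                       transform="join_with_labels", parents=[filtered])

# A filtering step like "drop_small_amounts" shows up in the trace,
# which is exactly where a bias-introducing step would be spotted.
print("\n".join(training.trace()))
```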

Changes in a model’s data source (which can involve hundreds of steps) will yield inaccurate results if the model is not retrained to accommodate them. Lineage sheds light on the impact of such changes and uncovers inaccurate data that degrades an AI model’s performance. Eventually, we will need to address the greatest challenges of AI, including highlighting the most significant inputs and, ideally, pinpointing which inputs lead to specific parts of the generated outputs.
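A lineage graph also supports simple impact analysis. The sketch below uses a toy graph and illustrative names, not a real tool’s interface: when an upstream source’s contents change, every downstream dataset and model is flagged for review or retraining.

```python
import hashlib

# Toy lineage edges: each source/dataset maps to the things that consume it
downstream = {
    "payments_db": ["filtered_transactions"],
    "filtered_transactions": ["fraud_training_set"],
    "fraud_training_set": ["fraud_model_v3"],
}

def fingerprint(snapshot: bytes) -> str:
    """Cheap change detection: hash the current contents of a source."""
    return hashlib.sha256(snapshot).hexdigest()

def impacted(node):
    """Everything downstream of a changed node, found by graph traversal."""
    hit, stack = set(), [node]
    while stack:
        for child in downstream.get(stack.pop(), []):
            if child not in hit:
                hit.add(child)
                stack.append(child)
    return hit

# If the extract feeding the pipeline has changed, every dependent dataset
# and model becomes a candidate for review or retraining.
if fingerprint(b"last week's extract") != fingerprint(b"this week's extract"):
    print("payments_db changed; review or retrain:", sorted(impacted("payments_db")))
```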

This transparency is of utmost importance as generative AI faces stricter regulatory oversight and end users demand greater trustworthiness from AI tools. Not only that — the onus is on all of us to avoid flooding the internet with hallucinated and false AI content, or allowing the proliferation of purposely generated fake news. Transparency and auditability of information are key to helping us manage whatever the AI future brings.

From patient outcomes and legal matters to business insights, false information emanating from generative AI chatbots can persuade users, leading to decisions that don’t reflect reality — and have significant consequences. While technologists work to understand the inner workings of these hallucinations, automated lineage tools provide a bird’s-eye view of data’s journey, offering visibility into, and a clearer understanding of, complex data environments.

