
How to Avoid AI Hallucinations From Becoming Costly Liabilities

Minimizing AI hallucinations is possible, but it will require a lot of human involvement.

Do you remember the Monopoly man having a monocle? Did you grow up reading the Berenstein Bears? Have you ever enjoyed Walkers Salt & Vinegar chips in the blue bag?

If any of these experiences sound familiar, get ready to question your reality: The Monopoly man has no monocle, “Berenstain” is the bear family name, and Walkers Salt & Vinegar chips come in a green bag. These gaps between reality and our shared memories have grown common enough to have a name: the Mandela Effect. It describes a phenomenon in which a human recalls something that never existed or existed differently than memory serves.

Questioning your own recall can be an unsettling experience. But, for anyone whose livelihood has been impacted by the rise of AI, the Mandela Effect is becoming harder to ignore, as a similar phenomenon is showing up in generative AI outputs.

What’s Causing AI to Have “Hallucinations” Akin to Human Misremembering?

Not unlike humans, AI recognizes patterns and uses repetition to reinforce what’s already known. Also, like humans, if an AI system is only relying on pattern repetition to learn, the technology sometimes arrives at an erroneous conclusion or belief — and an AI “hallucination” is born.

With generative AI applications beginning to do the work of humans in a variety of roles, AI hallucinations can become extremely problematic. These errors can make us look foolish in front of clients, lead to bad business decisions, and reinforce harmful biases.

Safeguarding AI from making these mistakes is possible, but we must first understand what causes them and then apply several key principles when building AI applications. 

Understanding the Causes of AI Hallucinations

AI functions as a consensus engine. AI-powered technologies take in massive amounts of information for training and extract dependencies to answer questions or formulate text. But, like our own brains, AI technologies aren’t perfect. For example, Google’s Bard chatbot famously hallucinated an answer during a product demo, an error that reportedly wiped roughly $100 billion off Alphabet’s market value.

To make matters worse, AI anomalies are not always easy to spot. AI is designed to provide convincing and confident answers, so if you aren’t up to speed on a particular topic or lack time to fact-check, you could unwittingly rely on bad information. 

There are several approaches to fixing this problem — and none of them require us to abandon the further development of this amazingly useful technology. Instead, we must take more care in how we use AI technologies and accept the fact that AI can occasionally invent things out of thin air.

Why Designing Unbiased AI Is So Hard

Constructing any intelligent system poses a significant challenge because its decision-making prowess relies on the quality of the data sets employed during development, as well as the techniques used to train its AI model over time. However, an entirely flawless, impartial, and accurate data set is a fantasy — it doesn’t exist. This presents a formidable obstacle in crafting AI models that are immune to potential inaccuracies and biases.

Facebook’s parent company, Meta, is a perfect case study for understanding how data impacts AI training. The company initially made its new large language model (LLM) available to researchers studying natural language processing (NLP) applications that power virtual assistants, smart speakers, and similar tools. After some exposure to the model, researchers determined that the new system “has a high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt, and adversarial prompts are trivial to find.” To say this is not ideal is an understatement.

Meta hypothesized that the AI model, which was trained on data that included unfiltered text taken from social media conversations, was incapable of pinpointing when it “decides” to use that data to generate hate speech or racist language. This example is further proof that AI systems cannot reflect on the content they are creating and should not operate independently of human decision-making and intervention.

Various Approaches to Solving the AI Hallucination Problem

So, if we can’t completely trust AI, how do we nurture its development while reducing its risks?

By embracing one (or more) of several pragmatic ways to address the issue:

Institute Domain-Specific Filtering.

One helpful approach to navigating AI hallucinations is to apply domain-specific data filters, which can prevent irrelevant and incorrect data from reaching the AI model while it’s being trained.

For example, imagine an automaker that wants to incorporate an AI that detects soft failures of sensors and actuators in an engine for a small, four-cylinder vehicle. The company likely has a comprehensive data set covering all of its models, from compact cars to large trucks and SUVs. But the automaker should filter out irrelevant data — say, data specific to an eight-cylinder truck — to avoid misleading the four-cylinder car’s AI model.
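To make this concrete, here is a minimal sketch of that kind of pre-training filter. The engine-telemetry table, column names, and values below are invented for illustration, not drawn from any real automotive dataset; the point is simply that rows from unrelated platforms never reach the training step.

```python
import pandas as pd

# Hypothetical engine-telemetry dataset; columns and values are illustrative only.
telemetry = pd.DataFrame({
    "vehicle_model": ["compact_a", "truck_b", "compact_a", "suv_c"],
    "cylinder_count": [4, 8, 4, 6],
    "sensor_reading": [0.92, 1.45, 0.88, 1.10],
    "actuator_state": ["nominal", "degraded", "nominal", "nominal"],
})

def filter_training_data(df: pd.DataFrame, cylinders: int = 4) -> pd.DataFrame:
    """Keep only rows relevant to the target engine family so data from
    unrelated platforms (e.g., eight-cylinder trucks) never reaches training."""
    return df[df["cylinder_count"] == cylinders].reset_index(drop=True)

# Only the four-cylinder rows survive and get passed to model training.
training_set = filter_training_data(telemetry, cylinders=4)
print(training_set)
```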

OpenAI’s recently announced customizations are another variation of this approach: feed custom data sources to the AI as helpful context so that the technology leans primarily on this provided “knowledge base” when generating answers for users. Despite good intentions, this effort is, ironically enough, heavily “biasing” the AI system with the custom information provided. Hopefully, the material provided is free of unwanted biases, but it’s also a good reminder that a single mitigation approach alone is unlikely to be enough as we work to reduce the risks surrounding AI tools.
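As a rough sketch of the “knowledge base” pattern, the snippet below prepends curated, domain-reviewed notes to the prompt. It does not use OpenAI’s actual customization features; call_model is a hypothetical stand-in for whichever LLM endpoint you use, and the reference notes are invented placeholders.

```python
# Generic sketch of knowledge-base grounding: curated documents are prepended
# to the prompt so the model is steered toward vetted material.

CURATED_KNOWLEDGE = [
    "Four-cylinder engine family X uses sensor suite Y; nominal range is 0.8-1.1.",
    "Soft actuator failures typically appear as gradual drift, not hard faults.",
]

def call_model(prompt: str) -> str:
    # Placeholder for a real LLM call (e.g., a hosted chat-completion endpoint).
    return f"[model response grounded in {len(CURATED_KNOWLEDGE)} reference notes]"

def answer_with_knowledge_base(question: str) -> str:
    context = "\n".join(f"- {doc}" for doc in CURATED_KNOWLEDGE)
    prompt = (
        "Answer using ONLY the reference notes below. "
        "If the notes do not cover the question, say so.\n"
        f"Reference notes:\n{context}\n\nQuestion: {question}"
    )
    return call_model(prompt)

print(answer_with_knowledge_base("What does a soft actuator failure look like?"))
```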

Keep Humans in the Loop.

We can also establish filters that protect the world from bad AI decisions by checking each decision before it is acted on and, when a good outcome cannot be confirmed, making sure a human can prevent the technology from taking action.

To achieve this, we must implement domain-specific monitoring triggers that give us confidence in the AI’s ability to make specific decisions and take action within predefined parameters. Decisions that fall outside those parameters should be escalated for human intervention and approval.
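A minimal sketch of such a trigger might look like the following. The confidence floor, severity labels, and action names are illustrative assumptions, not values from any particular product.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # e.g., "flag_sensor_for_service" (hypothetical action name)
    confidence: float  # model-reported confidence, 0.0-1.0
    severity: str      # domain-specific impact label: "low", "medium", "high"

# Illustrative, domain-specific parameters: act autonomously only on
# high-confidence, low-severity decisions; everything else waits for a human.
CONFIDENCE_FLOOR = 0.90
AUTO_APPROVED_SEVERITIES = {"low"}

def route_decision(decision: Decision) -> str:
    within_parameters = (
        decision.confidence >= CONFIDENCE_FLOOR
        and decision.severity in AUTO_APPROVED_SEVERITIES
    )
    if within_parameters:
        return "execute"          # AI may act on its own
    return "hold_for_human"       # escalate for review and approval before acting

print(route_decision(Decision("flag_sensor_for_service", 0.97, "low")))  # execute
print(route_decision(Decision("shut_down_engine", 0.97, "high")))        # hold_for_human
```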

Run Parallel Systems.

A third guardrail against AI biases is to use proven systems to check newer models. In this instance, developers can run more trusted systems in parallel with newer ones to spot discrepancies and mistakes.

Much like the other methods of preventing bias and avoiding hallucinations, this approach requires humans to make judgment calls about outputs and potential adjustments. This technique is similar to the way we guide a child to learn a new skill, such as riding a bike.

An adult serves as a guardrail by running alongside to provide balance and guidance so the child stays on course — and avoids making rash decisions or dangerous turns, learning along the way. 
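In software terms, the “adult running alongside” is the trusted system. The sketch below shows one way the comparison could work; both classifiers are hypothetical rule-based stand-ins for a proven system and a newer model, and any disagreement is routed to a person rather than acted on automatically.

```python
# Minimal sketch of running a trusted baseline alongside a newer model and
# routing any disagreement to a human reviewer.

def trusted_system(sensor_reading: float) -> str:
    # Stand-in for the proven, well-understood system.
    return "fault" if sensor_reading > 1.2 else "nominal"

def new_model(sensor_reading: float) -> str:
    # Stand-in for the newer model under evaluation.
    return "fault" if sensor_reading > 1.35 else "nominal"

def check_in_parallel(sensor_reading: float) -> str:
    baseline = trusted_system(sensor_reading)
    candidate = new_model(sensor_reading)
    if baseline == candidate:
        return candidate  # systems agree; accept the output
    # Discrepancy: surface both answers for human judgment instead of guessing.
    return f"needs_review (trusted={baseline}, new={candidate})"

for reading in (0.9, 1.3, 1.5):
    print(reading, "->", check_in_parallel(reading))
```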

AI’s Future Depends on Human Care

Minimizing AI hallucinations is possible, but it will require a lot of human involvement. Misapplying AI technologies will only compound the hallucinations, inaccuracies, and biases in these systems, so everyone must be vigilant when choosing to use AI tools. But with a thorough understanding of the stakes, we can help AI systems avoid some very human mistakes.
