
Control of Generative AI is the Only Way to Unlock Its True Benefits 

Is there a way to impose control systems on generative AI? The ability of generative AI tools like ChatGPT to answer seemingly any question posed to them – and to generate everything from lines of computer code to legal agreements with just a few prompts – is indeed astonishing. But there’s a catch, and it’s a big one: generative AI often makes things up.

In AI circles, this problem is referred to as a “hallucination.”

Hallucinations occur when AI – in its eagerness to come up with a pleasing and convincing-sounding answer – simply invents facts and examples. Since these AIs are trained to predict the most likely text to follow what the user has typed, they will not say that they don’t know the answer – so the end user is left to guess whether they’ve just received a brilliant answer or a load of bunk.

That might be a slight annoyance or even something chuckle-worthy in a consumer setting, but in the professional realm, that blurring of truth and fiction is a nonstarter – not to mention a huge potential liability for the organization. 


Imagine a law firm where a lawyer asks a ChatGPT-type tool for guidance on how to structure an upcoming merger to avoid running afoul of any EU regulations, and the AI simply makes up an answer. Likewise, picture the financial services firm that asks generative AI whether a certain company is a good investment or not, and it provides revenue figures that have no basis in reality. Things can quickly get ugly with this kind of unreliable intelligence. 

A Lesson from the 20th Century 

Does this mean generative AI is much ado about nothing – a seemingly promising technology that is soon destined for the scrap heap of history because of its inability to be effectively controlled and put to practical use?   

In a word: no.  

It’s helpful here to think back to another world-changing technology that emerged in the 20th century: the internet. In the early days, it had no relevance to the business world, being primarily the domain of hobbyists, university researchers, and government agencies. Due to its decentralized nature and anonymity, it was difficult to fully control, which provided space for cyber criminals, spammers, and misinformation to take root. Despite these challenges, society found a way to make good use of the internet to transform our world and the way that business is done.

Similarly, we need to find ways to make positive use of generative AI without being limited by negative aspects like hallucinations. The question is: how? What are the best practices and safeguards that make it safe for organizations to dip their toes into the generative AI waters? 


The steering doesn’t work… 

Understanding how generative AI works is crucial to understanding how its hallucination problem can be brought to heel.


Natural language generative AI models – such as ChatGPT and Bard – are based on so-called “transformer” technology. During the training phase, they are fed large amounts of text with the simple mission of predicting words that have been blanked out. To help “fill in the blanks,” these models construct a “worldview” from the text they have seen.
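To make that idea concrete, here is a minimal sketch of the “fill in the blanks” objective. The open-source Hugging Face transformers library and the small masked-language model are illustrative assumptions – the article does not name any particular tools. The point is simply that the model always produces a ranked best guess; it has no built-in notion of “I don’t know.”

```python
# A minimal sketch of the "fill in the blanks" training objective described
# above, using the Hugging Face transformers library and a small
# masked-language model (illustrative choices, not named in the article).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The model ranks candidate words for the blank based only on patterns it
# learned during training -- its "worldview" -- not on verified facts.
for prediction in fill_mask("The capital of Australia is [MASK]."):
    print(f"{prediction['token_str']:>12}  (score: {prediction['score']:.3f})")
```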

Such a worldview is not necessarily “correct” or aligned with our way of thinking. And if that worldview is incorrect, the generative AI is likely to answer questions incorrectly. 

 

To make matters even more challenging, tweaking a large language model isn’t as straightforward as tweaking a more classical machine learning algorithm – the kind used for document classification, for example.

Whereas guiding the “thought process” of a classical machine learning model is fairly uncomplicated – you adjust the data you feed it – the inner workings of a large language model and the way it makes decisions are rather complex and not very well understood. This makes it very difficult to steer generative AI in a particular direction the way you can with other machine learning models.  
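For contrast, here is a minimal sketch of what “adjusting the data you feed it” looks like for a classical document classifier. The library (scikit-learn) and the toy examples are assumptions for illustration only; the article does not prescribe any specific tools.

```python
# A minimal sketch of steering a classical document classifier by adjusting
# its training data (library and labels are illustrative assumptions).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# The labelled examples are the steering wheel: change them and retrain,
# and the model's decisions shift in a predictable, inspectable way.
train_texts = [
    "merger agreement between the parties",
    "share purchase and acquisition terms",
    "quarterly revenue and earnings report",
    "annual financial statements and cash flow",
]
train_labels = ["legal", "legal", "financial", "financial"]

classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(train_texts, train_labels)

print(classifier.predict(["draft acquisition agreement"]))  # expected: ['legal']
```

There is no comparably simple lever for a large language model, whose “worldview” is baked into billions of parameters rather than a small, inspectable training set.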

…but grounding does 

Since there’s no effective way to tweak generative AI’s worldview at the moment, the best way to bring it under control is to make sure that any answers it provides are anchored in reliable, high-quality information resources – a process called “grounding.”

 

Generative AI can draw its answers from different sources: internal ones, like knowledge assets within an organization’s document management system, or trusted external ones (for financial services firms, that might be a site like EDGAR; for legal professionals, it might be LexisNexis).


This “grounding” approach turns generative AI into an assistant that can quickly and efficiently perform research tasks and gather information while remaining grounded in fact rather than fiction.  

For example, a lawyer could use a ChatGPT-type interface that connects to trusted internal and external resources to research a question about real estate law in Singapore. The grounded generative AI would, in turn, provide not just the answers, but references to the material that was used to generate that response.  

 

In other words, instead of asking the generative AI to come up with the answer on its own, grounding breaks the work into two steps: first, have the generative AI find the supporting documents needed to answer the question; then, have it formulate an answer based on what it found in those documents.
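A minimal sketch of that two-step flow might look like the following. The toy document store, the keyword-based retriever, and the prompt construction are all illustrative assumptions; in a real deployment, retrieval would run against trusted internal or external sources and the final prompt would be sent to a large language model.

```python
# A minimal sketch of the two-step grounding flow described above:
# (1) retrieve supporting documents, (2) answer only from what was retrieved.
# The document store, retriever, and prompt are illustrative assumptions.

# Step 0: a toy stand-in for an organization's trusted knowledge base.
documents = {
    "sg-property-act.txt": "Foreign persons need approval to acquire "
                           "restricted residential property in Singapore.",
    "eu-merger-reg.txt":   "Mergers above certain turnover thresholds must "
                           "be notified to the European Commission.",
}

def retrieve(question: str, top_k: int = 1) -> list[tuple[str, str]]:
    """Step 1: naive keyword-overlap retrieval over the trusted sources."""
    q_words = set(question.lower().split())
    scored = sorted(documents.items(),
                    key=lambda kv: len(q_words & set(kv[1].lower().split())),
                    reverse=True)
    return scored[:top_k]

def grounded_prompt(question: str) -> str:
    """Step 2: ask the model to answer *only* from the retrieved passages
    and to cite them, rather than relying on its internal worldview."""
    sources = retrieve(question)
    context = "\n".join(f"[{name}] {text}" for name, text in sources)
    # In practice this prompt would be sent to a large language model API;
    # here it is simply returned so the sketch stays self-contained.
    return (f"Answer using only the sources below, and cite them.\n"
            f"Sources:\n{context}\nQuestion: {question}")

print(grounded_prompt("Can a foreign buyer acquire residential property in Singapore?"))
```

Because the answer is assembled from named sources, the references the lawyer sees in the example above come directly out of the same prompt.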

 

Taming generative AI in this manner is the only way to truly harness its power and is a crucial step for forward-looking organizations that want to unlock its benefits. 
