
Fake News Detection and AI’s Limitations in the Enterprise

Artificial intelligence (AI) is a powerful tool in the fight against misinformation, particularly when it comes to detecting false or misleading information at scale. With the increasing volume of information on the internet, detecting and addressing fake news is a significant challenge for news companies and consumers. AI can help by identifying patterns in large amounts of data, flagging content that is likely to be false or misleading, and even predicting potential sources of misinformation before it spreads.
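
As a rough illustration of how AI can flag content that is likely to be false at scale, here is a minimal sketch of a text classifier, assuming a toy labeled dataset and scikit-learn; the example headlines, labels and the 0.7 review threshold are illustrative assumptions, not a production misinformation detector.

```python
# A minimal sketch of pattern-based flagging: a bag-of-words classifier
# trained on labeled headlines. The tiny dataset and 0.7 threshold are
# illustrative assumptions, not a production misinformation model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Scientists publish peer-reviewed climate study",     # reliable
    "Miracle cure doctors don't want you to know about",  # misleading
    "Central bank announces interest rate decision",      # reliable
    "Shocking secret proves the moon landing was staged", # misleading
]
labels = [0, 1, 0, 1]  # 1 = likely misleading

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

def flag_for_review(text, threshold=0.7):
    """Flag text for human review if the model is fairly confident it is misleading."""
    prob = model.predict_proba([text])[0][1]
    return prob >= threshold, prob

flagged, score = flag_for_review("Secret miracle cure they don't want you to see")
print(f"flagged={flagged}, score={score:.2f}")
```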


Despite the impressive progress in AI-powered analysis, the technology in its current state still has limitations. AI algorithms often lack context when processing information: they can detect patterns and anomalies, but they may not always understand the nuances of the content they are analyzing. This is an obvious drawback in news media and misinformation detection, and the same shortcomings carry over to the enterprise setting.

As retailers, CPG brands, and pharmaceutical companies begin to experiment with generative AI tools like ChatGPT, DALL-E and others in their marketing efforts, they should draw on the tips and best practices that have emerged from AI-backed news media efforts.

AI’s Applications in News Media

It’s fair to say AI has revolutionized news media, so it’s important to understand the role it plays in the current media landscape. While it may not be obvious to the average consumer, AI algorithms determine, on a daily basis, which news gets presented and how it’s delivered.

App-based news aggregators, for example, deploy algorithms that examine reader profiles, including user interests, browsing signals, readership patterns, trending content and other factors, to serve content that matches each reader’s habits and preferences. Organizations can also direct these algorithms to gather feedback from app stores, social media and chat sites to help inform a tailored feed. Some social media platforms that also share news, however, don’t treat news content any differently than a post from a friend. These models optimize for engagement, to the point where everything feels hyper-personalized and users impulsively scroll through negative, junk and repetitive content for hours at a time, a behavior known as doom-scrolling.
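
To make that personalization loop concrete, here is a minimal sketch of interest-based feed ranking; the profile fields, weights and sample articles are hypothetical, chosen only to illustrate how habits and trends steer a feed, not any aggregator’s actual algorithm.

```python
# A hypothetical sketch of interest-based feed ranking. The profile
# structure, weights, and trending boost are illustrative assumptions,
# not any specific aggregator's algorithm.
def score_article(article, profile):
    # Overlap between article topics and the user's stated interests.
    interest_match = len(set(article["topics"]) & set(profile["interests"]))
    # Reward topics the user has actually been reading lately.
    habit_match = sum(profile["recent_reads"].get(t, 0) for t in article["topics"])
    # Trending content gets a small global boost regardless of the user.
    return 2.0 * interest_match + 1.0 * habit_match + 0.5 * article["trending_score"]

profile = {
    "interests": ["ai", "finance"],
    "recent_reads": {"ai": 5, "sports": 1},  # topic -> recent read count
}
articles = [
    {"title": "New LLM benchmark released", "topics": ["ai"], "trending_score": 3},
    {"title": "Cup final recap", "topics": ["sports"], "trending_score": 8},
]

feed = sorted(articles, key=lambda a: score_article(a, profile), reverse=True)
for a in feed:
    print(a["title"], round(score_article(a, profile), 1))
```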


Fortunately, some news media companies are using AI algorithms to promote content discovery and expose readers to new perspectives, creating guardrails and training AI to deliver accurate, trustworthy and enjoyable feeds. Some news applications use AI models to flag suspicious coverage based on signals like a publisher score (which helps verify quality based on voice, authenticity, posting date, reviews, and more), and have implemented rules and procedures to identify and remove misleading and fake news. While this is an effective use of AI’s analytical capabilities and a step in the right direction, there are limitations to keep in mind that only a human can address, including context and empathy.
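
A publisher-score check of the kind described might look something like the sketch below; the component signals, weights and cutoff are assumptions for illustration, not any news application’s real formula.

```python
# A hypothetical publisher-score check of the kind described above.
# Component weights and the 0.5 cutoff are illustrative assumptions.
def publisher_score(publisher):
    """Combine quality signals into a 0-to-1 trust score."""
    signals = {
        "authenticity_verified": 0.35,  # identity/ownership checks passed
        "consistent_voice": 0.20,       # editorial voice matches history
        "accurate_datelines": 0.15,     # posting dates are not manipulated
        "positive_reviews": 0.30,       # reader/fact-checker review signal
    }
    return sum(weight for signal, weight in signals.items() if publisher.get(signal))

def flag_suspicious(article, publisher, cutoff=0.5):
    """Route low-trust coverage to the misleading-content review queue."""
    if publisher_score(publisher) < cutoff:
        return {"article": article, "action": "hold_for_review"}
    return {"article": article, "action": "publish"}

unknown_outlet = {"authenticity_verified": False, "positive_reviews": True}
print(flag_suspicious("Breaking: unverified claim", unknown_outlet))
# -> held for review (score 0.30 < 0.5)
```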


Human Role in Fake News Detection

It’s crucial to understand that, at this stage in AI’s technological advancement, the technology doesn’t always comprehend the full context surrounding a piece of news or the impact it might have on a reader. As a result, AI tools can get it wrong, flagging legitimate content or missing harmful content. Because of this, human oversight, intervention and continual refinement of guidelines remain necessary to ensure accuracy and appropriate handling of content.

New users of ChatGPT and other commercial generative AI tools already have a sense of this process, as they intuitively question or accept the text output they receive. When companies begin to deploy generative AI in the enterprise setting, they must consider policies requiring humans to be kept in the loop at all times. The combination of AI and human expertise is a powerful and effective approach to combating misinformation and solving work-related issues.
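
Such a human-in-the-loop policy can be enforced mechanically. The sketch below is one hypothetical way to do it: the generate() stub stands in for a model call, and the 0.8 confidence threshold is an assumed policy value, not a real API.

```python
# A minimal sketch of a human-in-the-loop gate for generative AI output.
# The generate() stub and the 0.8 confidence threshold are assumptions
# used only to illustrate the policy, not a real model API.
review_queue = []

def generate(prompt):
    """Stand-in for a generative model call returning text and a confidence."""
    return {"text": f"Draft response to: {prompt}", "confidence": 0.62}

def publish_with_oversight(prompt, threshold=0.8):
    result = generate(prompt)
    if result["confidence"] < threshold:
        # Low confidence: a human must approve before anything is released.
        review_queue.append(result)
        return None
    # Even high-confidence output should be logged so humans can audit it later.
    return result["text"]

output = publish_with_oversight("Summarize today's product recall notice")
print(output)             # None -> held for human review
print(len(review_queue))  # 1
```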

Using AI Ethically

Whether or not there is human oversight layered into an AI application, there are key elements to consider regarding the ethics of AI. First, AI must be leveraged with the right intent. Technology itself is never good or bad; it’s the intent of the user that determines this. It’s up to industry sectors, organizations and consumers to deploy AI responsibly and ensure they have the right guardrails and policies in place to manage and mitigate risk. 

It’s also essential to be transparent about using AI. There’s nothing wrong with content generated by AI, but content owners must be transparent about it and take responsibility for the final product. A news outlet might use AI to generate an article, for example, but informing readers of that fact will build trust in the long run. The disclosure could appear in the dateline or as part of an annotation. Each news organization may choose its own path, but there needs to be an indication that AI was used; otherwise it can come across as disingenuous.
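
Mechanically, such a disclosure can travel with the article itself. The sketch below shows one hypothetical way a newsroom CMS might record AI involvement and render a reader-facing note; the field names are illustrative, not a standard schema.

```python
# A hypothetical content-metadata record that makes AI involvement explicit.
# Field names are illustrative assumptions, not a standard schema.
article = {
    "headline": "Quarterly earnings roundup",
    "body": "...",
    "ai_generated": True,
    "ai_tool": "a generative language model",
    "human_editor": "J. Smith",  # the person taking responsibility
}

def dateline_disclosure(article):
    """Render a reader-facing note when AI contributed to the piece."""
    if article["ai_generated"]:
        return (f"This article was drafted with {article['ai_tool']} "
                f"and reviewed by {article['human_editor']}.")
    return ""

print(dateline_disclosure(article))
```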


The first foray into enterprise AI usually involves out-of-the-box solutions. The best ones have built-in protocols to recognize when bias is clouding the model’s judgment. Even with these safeguards, it is critical for humans to exercise reasonable discretion when using the tool. If companies have the opportunity to build or license their own AI models, they must ensure the algorithms are designed responsibly and prudently.

We’re at the beginning of something special, like the Internet in the ’90s, and while we’re just now beginning to understand the power of generative AI, we can only partially comprehend the implications it will have on society as a whole. Since AI can’t think for itself yet, we need more smart people who understand its design faults and can build guardrails to prevent abuse. I’m excited about the future of AI, its potential and the amazing community that will help shape it.

