Artificial Intelligence | News | Insights | AiThority

Artificial Intelligence and the Trust Deficit: A Call for Greater Transparency

The field of generative AI is flush with cash at the moment. Investment in the space grew by nearly $8 billion in the last year alone. But executives in corporate America still harbor a lot of mistrust when it comes to business applications of AI. As many people know, AI can hallucinate, conjuring up “facts” and confidently presenting them as truth. AI can also perpetuate outdated or even dangerously biased data, and it can absorb confidential information fed to it as input. As a result, company-wide bans on unauthorized use of AI in workflows have proliferated, and a flurry of AI detection tools have cropped up on the market.

I genuinely think this proscriptive approach is a mistake.

Of course, we leverage the power of generative AI every day.


Our very founding, in fact, was predicated on the transformative power of AI, so some people would say I’m biased. That’s probably true, but I think my logic still stands. Employers and publishers who use AI detection tools to root out and reject AI-generated content wholesale are throwing out the proverbial baby with the bathwater.

AI is far from perfect, but it’s getting better all the time – that’s the entire point: you retrain it based on the problems of previous iterations. Organizations that enact an outright ban on AI are missing out on game-changing efficiencies. Ultimately, these businesses will be left behind as their competitors forge ahead wielding the power of AI.

At the same time, it’s important to recognize that not all suspicion around AI is unfounded. There’s real value in human-generated content. Readers recognize that value, they connect with it, and they don’t appreciate a bait and switch. This is why organizations utilizing AI-generated content need to make transparency a top priority.

Transparency as a competitive edge

For the most part, distrust emanating from the C-suite isn’t standing in the way of progress. More than 70 percent of companies are already experimenting with generative AI. It’s not a question of whether all businesses will begin using generative AI; it’s a matter of when.

Let’s explore how to do it right.

The content generation needs of any business are vast. Even the generative AI skeptics will admit this much. Website copy, email copy, blog posts, guest articles, press releases, product labeling, internal memos – it’s a never-ending and ever-growing list.

But not all content is created equal. Whereas it might make sense to devote your organization’s best and brightest communicative minds to writing high-visibility thought leadership articles or investor reports, lower-stakes writing like deck content and web copy are perfectly suited for generative AI.

Still, to maintain client and audience trust, it’s critical to clearly disclose where and how your organization is using AI-generated content. I’m not saying you need to place a bright-red banner ad at the top of every company webpage declaring: “WARNING, PARTS OF THIS CONTENT MAY HAVE BEEN GENERATED BY ARTIFICIAL INTELLIGENCE!” It should be more than sufficient to include a legible, prominently placed link to a page of disclosures – and to cover AI use in your online policies and terms of service – outlining where and why you’ve leveraged generative AI.


You can go a step further and provide customers or clients with a way to report concerns about any content generated by AI, or perhaps allow them to interact directly with digital content. This approach will not only earn greater audience trust but also bring fresh sets of eyes to bear, helping you improve content quality over time.

The explainability revolution

When it comes to building audience trust in the age of generative AI, transparency alone is only half the equation. It’s also critical to explain which generative AI models you use and how they work.


Your organization’s Explainable AI (XAI) story can start with data. Organizations using generative AI should ensure the models they rely on are trained on good data – data that is up to date and drawn from reliable sources – to avoid outputs that perpetuate harmful stereotypes. And you should share these vetting processes with readers: explain how you assessed your model of choice and which vetting metrics you used.

You should also explain how the models you’re using work – at least to the extent the modelers themselves have publicly disclosed. For example, Google regularly publishes explainers with each update that the company issues for Bard. OpenAI also maintains an easily navigable list of FAQs about how ChatGPT works and what to expect in its outputs.

But why stop at simple web pages? If you’re using generative AI, capitalize on AI-supported workflows by documenting them in videos and other visualizations, then share these on your organization’s YouTube and social media accounts. Everyone’s fascinated by AI right now. Bringing customers and users in on the secret is a great way not only to take transparency and trust further but also to generate interest in your business operations.

Final thoughts

Like it or not, generative AI isn’t going anywhere.

It’s simply too powerful a tool to count out. Businesses that turn up their noses at it will fall behind competitors who’ve found ways to use generative AI smartly.

And when I say smartly, I don’t mean sneakily. Generative AI use should be strategic and discerning, not deceptive. We shouldn’t be working towards a future that outsmarts AI detection tools altogether. In fact, I think these tools are very much a part of a healthy AI future. Just as we should know what ingredients have gone into our food, and be certain they’ve been thoroughly tested for safe consumption, we should know the quality of our content.



