
Survey: ‘Algospeak’ on the Rise in Attempt to Avoid Automated Content Moderation

Use of emojis and alternative language can provide marginalized groups with an online forum, but can also pose risks to brand reputation if used negatively, according to TELUS International research

Internet users have become increasingly creative in their efforts to circumvent AI-powered content moderation of offensive or banned expressions online, according to the results of a survey conducted by TELUS International, a digital customer experience (CX) innovator. The survey found that more than half of respondents (51%) said they have seen “algospeak” used on social media, moderated forums and brand websites as well as in gaming communities, with 42% indicating this behavior has increased since they first spotted it themselves.


A combination of ‘algorithm’ and ‘speak,’ algospeak is the collection of codewords, slang, deliberate typos, emojis and substitute words that sound like, or carry a meaning similar to, the intended term. For example, “unalive” is a commonly used algospeak term for “dead,” and “The Vid” for COVID-19. While algospeak can help individuals – including those in marginalized communities – discuss topics perceived by some to be controversial without having their content automatically flagged for removal, it can also be used by those wanting to intimidate, harass and cyberbully others.
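For illustration only, the sketch below shows why algospeak defeats a naive keyword filter and how normalizing text against a substitution lexicon can restore a match. The lexicon, banned-term list and function names are hypothetical examples, not TELUS International's actual moderation rules or tooling.

    # Minimal sketch: a hypothetical substitution lexicon used to normalize
    # algospeak before running a simple banned-term check. Terms and mappings
    # are illustrative only.
    ALGOSPEAK_LEXICON = {
        "unalive": "dead",
        "the vid": "covid-19",
    }

    BANNED_TERMS = {"dead"}  # placeholder list for illustration


    def normalize(text: str) -> str:
        """Lower-case the text and replace known algospeak codewords."""
        result = text.lower()
        for codeword, plain in ALGOSPEAK_LEXICON.items():
            result = result.replace(codeword, plain)
        return result


    def is_flagged(text: str) -> bool:
        """Return True if any banned term appears after normalization."""
        normalized = normalize(text)
        return any(term in normalized for term in BANNED_TERMS)


    print(is_flagged("He was found unalive"))   # True: caught after normalization
    print("dead" in "He was found unalive")     # False: a naive keyword filter misses it

In practice such lexicons go stale quickly as new codewords emerge, which is one reason the survey respondents and TELUS International point to human review alongside automation.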

While 30% of Americans overall have used algospeak, the behavior is most common among the digital natives of Gen Z (aged 18-24): nearly three-quarters (72%) say they have recognized and been exposed to this type of behavior, and 41% say they have used it themselves.

“The wide variety of digital platforms where people can express their thoughts and opinions combined with the cultural normalization of society documenting the world around us on a daily or even hourly basis, means that today’s brands are facing an even steeper uphill battle when it comes to quickly reviewing and accurately removing harmful, abusive or inappropriate materials that violate their guidelines,” said Siobhan Hanna, Managing Director, AI Data Solutions, TELUS International. “With the use of algospeak becoming more and more prevalent, brands must establish a robust content moderation practice that leverages both AI and a diverse team of content moderators and data annotators. Depending on a given site’s particular content guidelines, a human-in-the-loop approach can help brands either remove all instances of algospeak or ensure that context is properly considered in these instances to minimize the flagging of content by marginalized groups that does not contradict community guidelines. An established content moderation strategy is no longer a nice-to-have for brands, but a must-have in order to maintain safe online environments.”
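As a rough illustration of the human-in-the-loop approach Hanna describes, the sketch below routes content by an automated classifier's violation score: clear-cut cases are handled automatically, while borderline cases (where algospeak and context matter most) are queued for human moderators. The thresholds, field names and routing logic are assumptions for the example, not a description of TELUS International's platform.

    # Minimal human-in-the-loop routing sketch: thresholds and the classifier
    # score are hypothetical placeholders.
    from dataclasses import dataclass


    @dataclass
    class ModerationDecision:
        action: str      # "remove", "allow", or "human_review"
        score: float     # model-estimated probability the content violates guidelines


    def route(score: float, remove_above: float = 0.95, allow_below: float = 0.10) -> ModerationDecision:
        """Route a piece of content based on the classifier's violation score."""
        if score >= remove_above:
            return ModerationDecision("remove", score)
        if score <= allow_below:
            return ModerationDecision("allow", score)
        # Ambiguous scores go to a person who can weigh context, e.g. algospeak
        # used within a marginalized community's own discussion.
        return ModerationDecision("human_review", score)


    print(route(0.98))  # ModerationDecision(action='remove', score=0.98)
    print(route(0.50))  # ModerationDecision(action='human_review', score=0.5)

The design choice here mirrors the quote: automation handles volume, while humans decide the nuanced cases so that benign uses of algospeak are not removed wholesale.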



Other findings from the survey of 1,000 U.S. consumers included:

  • Words/text are the top method of communicating on Facebook (65%), Instagram (35%), Twitter (40%), moderated forums/online communities (49%) and branded websites (47%) over other forms of communication such as emojis or video.
  • More than one in five (22%) said they immediately see an uptick in the use of algospeak and emojis to circumvent banned terms when a polarizing societal event occurs.

Meeting Customer Expectations for Content Moderation

Consumers entering a brand’s online environment expect it to be free of offensive or hateful content, and for many, that includes instances of algospeak. In fact, 38% of respondents said that brands should be able to identify and remove these nuanced phrases and emojis immediately. Only a third (32%) of survey respondents believed that brands are currently doing a good job moderating this type of language, which puts brands at high risk of losing consumers’ trust and their business.

“These survey findings indicate that today’s discerning consumers have put the onus on brands to ensure their content moderation strategies are keeping pace with evolving digital behaviors,” added Hanna. “Inevitably, the complexity and quantity of content will continue to grow over time, and now more than ever, brands need to consider working with an experienced partner that has the insight to foresee these shifts in trends, the agility to quickly pivot and the global resources to scale if they are going to successfully build and maintain consumer loyalty over the long term.”

