
Two Hat Releases New Artificial Intelligence to Moderate and Triage User-Generated Reports in Real Time

Creator of Content Moderation Solution Community Sift Expands Efforts to Protect Online Communities from Abusive Content

In response to growing concerns about social networks’ responsibility to protect users, leading AI technology company Two Hat announced that it has released Predictive Moderation, an artificial intelligence model that moderates user-generated reports in real time. In conjunction with Two Hat’s content moderation solution Community Sift, Predictive Moderation helps gaming and social platforms automatically sort and triage user-generated reports containing abusive content like harassment and hate speech.

The new feature provides a scalable solution for platforms to ensure that reports containing time-sensitive content are sent to human moderators for priority review.

“In 2018, social networks started to realize that users are no longer willing to accept abusive content on their platforms as the cost of being on the Internet, but it remained a problem without a scalable solution—until now,” said Chris Priebe, Two Hat CEO and founder. “With Predictive Moderation, we can provide the industry with a better way to protect their users from content that damages the community.”



For years, gaming and social platforms have relied on users to report abuse, hate speech, and other NSFW content. Content moderation teams then review each abuse report individually. Many platforms receive thousands of reports daily, most of which are considered “false” and can be closed without taking action.

Meanwhile, reports that contain time-sensitive content — including suicide threats or calls to real-life violence — risk going unseen. With Predictive Moderation, platforms can train a custom AI model on their moderation team’s decisions, automating the most time-consuming part of the moderation process by closing false reports, taking action on the obviously abusive reports, and sending reports that require human eyes for priority review.
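The announcement does not describe how Predictive Moderation is implemented. Purely as an illustration of the triage pattern it describes, here is a minimal sketch assuming a three-way text classifier trained on historical moderator decisions; the data, labels, and `triage` function are hypothetical and do not represent Two Hat's actual model or API.

```python
# Hypothetical sketch: train a classifier on past moderator decisions
# (close / action / escalate) and route new user-generated reports
# accordingly. Illustrative only; not Two Hat's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical reports labeled by the moderation team (toy data).
reports = [
    "he said he was going to hurt himself tonight",
    "this player keeps calling me slurs in chat",
    "reported because they beat me in the match",
    "spamming the lobby with hate speech links",
    "no reason, just annoying",
]
labels = ["escalate", "action", "close", "action", "close"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(reports, labels)

def triage(report_text: str) -> str:
    """Route a report: auto-close, auto-action, or escalate to a human."""
    return model.predict([report_text])[0]

print(triage("they told me to kill myself"))  # ideally "escalate"
```

In a production system the "escalate" class would be tuned for high recall, since the whole point is that time-sensitive reports such as suicide threats must never be auto-closed; the other two classes absorb the bulk of the queue so human moderators see only the reports that need them.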


“We’ve been working on the problem of disruptive content behind the scenes for the last seven years,” said Priebe. “The market radically shifted last year, when social networks began to focus more heavily on the complexities of addressing disruptive content, and the industry is now actively seeking a scalable solution.”

Priebe will host a webinar on Wednesday, February 20th, where he’ll share Two Hat’s latest advancements in artificial intelligence and outline the company’s vision of the future of content moderation and “invisible AI.”

