Updated: Content Moderation Is Hard, but There’s a New Approach… and It’s Fueled by Spectrum Labs
Yes, the internet has become the most transformative invention of the modern age – it has forever changed technology, communication, gaming, marketing, banking, dating and more. But along with that change comes a dark side: The internet has also become a cesspool of toxic human behavior, poisoning the experience both for users and for the content moderators charged with safeguarding online platforms.
But, real talk: Faced with harassment or a disgusting experience online, many of us never report it. Instead, up to 30% of users decide to close their account or stop using certain social networks altogether. They just… leave. All that focus on growth? Wasted.
Which raises a couple of questions: With all the transformation and dizzying innovation brought by technology, why do we still see daily headlines about online harassment, radicalization, human trafficking, child sex abuse and more? And can online platforms manage growth while still keeping their communities safe?
Many companies treat “Trust and Safety” as just a compliance play – a box to check – rather than seeing its connection to their platform’s health and growth.
But Spectrum Labs, a San Francisco-based Contextual AI platform, thinks that’s a mistake. Growth is directly tied to user experience.
Platforms like Facebook have faced backlash for outsourcing their content moderation services – traumatizing lower-paid contractors with images and videos of shootings, violence and hate – and for removing only a fraction of the toxic content on their platforms.
Content moderation tools have improved somewhat over the last decade, but they are still flawed and need to get drastically better. That’s where Spectrum Labs comes in.
Spectrum Labs has developed an astonishingly accurate Contextual AI system that identifies toxic behaviors like hate speech, radicalization, threats, and other ugly behaviors that drive users away from online communities. They’ve also made it dead simple, so that even people who don’t work with code or datasets can see what’s happening on their platforms at any time. Spectrum Labs’ approach is gaining traction with giant names in social networks, dating, marketplaces and gaming communities.
Legacy content moderation technologies typically rely on some form of keyword and single-message recognition (classification), which works best for interactions that occur at a single point in time. But most toxic behavior builds gradually, and Spectrum Labs’ superpower is spotting those larger patterns of toxic behavior – in context. Some customers have already seen violent speech drop by 75% or more, with clear-cut cases caught before they ever reach users and the trickier, ambiguous cases flagged to human moderators on the Trust and Safety team.
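To make that distinction concrete, here is a minimal sketch of the two approaches. This is an illustration only, not Spectrum Labs’ actual models or API; the blocklist, scores, thresholds and action names are placeholder assumptions.

```python
# Illustrative sketch only -- not Spectrum Labs' models or API.
# BLOCKLIST, the scores and the thresholds are placeholder assumptions.

BLOCKLIST = {"threat", "slur"}  # stand-in keywords for a legacy filter


def keyword_flag(message: str) -> bool:
    """Legacy approach: judge a single message in isolation by keyword match."""
    return any(word in BLOCKLIST for word in message.lower().split())


def contextual_decision(recent_toxicity_scores: list[float], threshold: float = 0.7) -> str:
    """Context-aware approach: weigh the pattern across recent messages.

    `recent_toxicity_scores` stands in for per-message scores from a model;
    the decision looks at how a conversation is trending, not one message.
    """
    if not recent_toxicity_scores:
        return "allow"
    window = recent_toxicity_scores[-5:]            # last few messages
    average = sum(window) / len(window)             # overall toxicity of the exchange
    escalating = len(window) >= 2 and window[-1] > window[0]
    if average >= threshold and escalating:
        return "auto_action"                        # clear escalating pattern: act automatically
    if average >= threshold * 0.6:
        return "route_to_moderator"                 # ambiguous: send to a human
    return "allow"


# A single borderline message slips past the keyword filter,
# while an escalating exchange is caught by the contextual check.
print(keyword_flag("you should just disappear"))      # False
print(contextual_decision([0.5, 0.7, 0.9, 0.95]))     # "auto_action"
```

The point of the sketch: a keyword filter judges one message at a time, while a contextual approach weighs how an exchange is trending and routes the ambiguous middle ground to human moderators.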
“Our customers put the safety of their community first — and are seeing better retention rates and satisfaction. Our technology gives them the visibility and power to easily know what’s happening on their platforms, any time, and in real time,” said Spectrum Labs CEO Justin Davis.
“In 16 years of working in tech, this is the first company I’ve been with where we are actually saving and improving lives — users, players, kids, and moderators. We never forget that online experiences can have offline impact, so we’re excited to continue helping companies make the Internet safer and healthier for their users,” Davis added.
Spectrum Labs has built a library of large labeled datasets for over 40 unique models of toxic behavior – such as self-harm, child abuse/sexual grooming, terrorism, human trafficking, cyberbullying, radicalization and more – across multiple languages. Spectrum Labs centralizes its library of models across languages and then democratizes access so that each client can tune the service to its own specific platform and policies. There is no one-size-fits-all model, because a) it doesn’t exist and b) it doesn’t work (see: the daily headlines of one-size-fits-all keyword recognition failing, with disastrous consequences).
This collaborative approach solves the “cold start” problem of launching new models without training data and brings together a fractured, siloed data landscape. It gives online platforms the ability to automate their moderation needs at scale, while leaving human judgment as the final arbiter of what to allow on their platform.
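As a rough illustration of what per-platform tuning over a shared model library could look like – a hypothetical configuration sketch, not Spectrum Labs’ actual product; the model names, thresholds and actions are assumptions – a client might map each shared behavior model to its own policy:

```python
# Hypothetical per-client policy configuration over a shared model library.
# Model names, thresholds and actions are illustrative assumptions only.

CLIENT_POLICY = {
    "hate_speech":     {"threshold": 0.80, "action": "remove"},
    "self_harm":       {"threshold": 0.50, "action": "route_to_crisis_team"},
    "sexual_grooming": {"threshold": 0.30, "action": "escalate"},   # low tolerance
    "radicalization":  {"threshold": 0.70, "action": "send_to_review"},
}


def apply_policy(model_scores: dict[str, float]) -> list[tuple[str, str]]:
    """Return the actions this platform's policy triggers for one piece of content."""
    actions = []
    for model, score in model_scores.items():
        policy = CLIENT_POLICY.get(model)
        if policy and score >= policy["threshold"]:
            actions.append((model, policy["action"]))
    return actions


# Example: the shared models score one message; this client's own thresholds decide the outcome.
print(apply_policy({"hate_speech": 0.85, "sexual_grooming": 0.10}))
# -> [('hate_speech', 'remove')]
```

The same shared models can back very different platforms because each client sets its own thresholds and responses, rather than relying on one universal rule set.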
Additionally, the ethical use of AI, a strong commitment to diversity and inclusion, and transparent datasets are just a few of the critical elements needed to operationalize automated AI systems that can recognize and respond to toxic human behavior and content on social platforms at scale, without causing harm to employees, contractors and users.
Tiffany Xingyu Wang, Chief Strategy Officer of Spectrum Labs, said, “Whether it’s the content children are watching, the dating apps adults are on, or the gaming done by both children and adults, enjoying the experience safely is the priority.” Wang added, “Internet safety is no longer just a nice-to-have. We’re getting closer to a world where investments in trust and safety are differentiators that drive topline revenue.”