Bumble’s New Framework Is Combating Misogyny in AI and Synthetic Media
Let’s address the elephant in the room: artificial intelligence can be sexist and misogynistic. One company focused on combating bias and harassment is Bumble, the online dating app, which is committed to keeping its community safe while dating and does not tolerate misogyny or bad behavior of any kind.
Lately, you may have come across profile pictures that are synthetic images, especially of women, rendered in highly exaggerated, unrealistic proportions. Such images should immediately raise a red flag, and Bumble understands how this technology can be abused.
A New Framework for the Ethical Development and Sharing of Synthetic Media
As the internet continues to evolve, so will synthetic media, which includes artificially generated music, images, and text. As a tech company, Bumble is excited about the scope for innovation here.
The company is also aware that AI is currently being used nonconsensually, particularly in deepfake pornography. Bumble intends to have a say in how this emerging media is created, not merely to discuss or support its evolution.
This is precisely why Bumble has been working behind the scenes with the nonprofit Partnership on AI (PAI), a coalition whose mission is to ensure the responsible use of these technologies.
With this, Bumble has joined industry peers such as the BBC, CBC/Radio Canada, Adobe, D-ID, TikTok, and OpenAI as launch partners in PAI’s Responsible Practices for Synthetic Media: A Framework for Collective Action. Bumble aims to use the framework to guide its ongoing efforts to fight nonconsensual intimate image (NCII) abuse while facilitating a safer and more equitable internet.
How does it work?
The Responsible Practices Framework incorporates input from more than a hundred contributors across over 50 organizations, including Bumble Inc.
- The Framework is a guide for devising policies, practices, and technical interventions that can interrupt harmful uses of generative AI, such as misinformation and manipulation (synthetic images of women, in particular, often have exaggerated, unrealistic proportions).
- It is the first of its kind to not only acknowledge but also address the emerging opportunities related to the creation, use, and distribution of synthetic media.
- Companies and organizations that sign on to the Framework commit to demonstrating how they are using, or can use, the Framework to guide their decision-making. Bumble plans to use it to guide its continued efforts in the nonconsensual intimate image space, as well as its larger goal of contributing to a safer and more equitable internet.
Claire Leibowicz, Head of AI and Media Integrity at the Partnership on AI (PAI), said:
“In the last few months alone we’ve seen AI-generated art, text, and music take the world by storm. As the field of artificially-generated content expands, we believe working towards a shared set of values, tactics, and practices is critically important and will help creators and distributors use this powerful technology responsibly.”
Payton Iheme, Bumble’s VP of Global Public Policy, believes the brand has always advocated for safe online spaces for underrepresented voices. Bumble’s “work with PAI on developing and joining the code of conduct, alongside an amazing group of partners, is an extension of that. We are especially optimistic about how we continue to show up to address the unique AI-enabled harms that affect women and marginalized voices.”
PAI’s Responsible Practices for Synthetic Media: A Framework for Collective Action is a set of guidelines and recommendations for those creating, sharing, and distributing synthetic media, popularly called AI-generated media. Industry experts believe that the constantly evolving landscape of synthetic media has opened a new frontier for creativity and expression, but it also carries a concerning potential for misinformation and manipulation if left unsupervised.
Bumble, which closely monitors the safety of its users and combats misogyny and toxicity online, has rolled out several safety features within the Bumble app itself. Private Detector, an AI tool, guards the community from unwanted lewd images.
Sending unsolicited lewd images is known as cyberflashing. The brand has also teamed up with legislators to successfully back bills in both the U.S. and U.K. that penalize sharing lewd images or videos.
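The article does not detail Private Detector’s internal pipeline, but the general pattern behind such a feature is straightforward: score each incoming image with a trained classifier and blur anything above a confidence threshold before the recipient sees it. The Python sketch below illustrates that flow only; `predict_lewd_probability`, the threshold, and the blur radius are hypothetical placeholders, not Bumble’s actual implementation.

```python
from PIL import Image, ImageFilter

# Hypothetical confidence cutoff; a production system would tune this
# against precision/recall targets on labeled data.
BLUR_THRESHOLD = 0.85


def predict_lewd_probability(image: Image.Image) -> float:
    """Stand-in for a trained image classifier.

    A real system would preprocess the image and run it through a
    model (e.g. a convolutional network), returning the probability
    that it contains lewd content. Returning 0.0 keeps this sketch
    runnable without a model.
    """
    return 0.0


def screen_image(path: str) -> Image.Image:
    """Blur an incoming image if the classifier flags it."""
    image = Image.open(path)
    if predict_lewd_probability(image) >= BLUR_THRESHOLD:
        # A heavy Gaussian blur makes the content unrecognizable
        # while preserving the original image dimensions.
        return image.filter(ImageFilter.GaussianBlur(radius=30))
    return image
```

Blurring rather than blocking is a deliberate design choice in this pattern: the flagged image still arrives, so the recipient stays in control of whether to view it.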
Bumble’s Safety Measures
- Users cannot display guns or other weapons in profile pictures;
- No hate speech or sexual harassment;
- Users can Video Chat and Voice Call within the Bumble app to meet new people without disclosing their phone number or email;
- Users can use the Snooze feature to take a break from dating and focus on their mental health;
- The ‘Private Detector’ feature automatically blurs lewd images;
- An Unmatch feature and a Block & Report system.
Bumble is committed to making AI and synthetic media safer spaces for women and individuals from underrepresented groups.