
How MIT’s PhotoGuard is Guarding Against Image Manipulation

AI can generate content instantly, engage with customers, aid doctors, and effortlessly execute marketing campaigns with just a click. Undoubtedly, AI’s impact has been extraordinary. However, we must not overlook the significance of ethical AI usage, especially regarding AI image and video generation.

In recent times, a plethora of AI-powered image tools has emerged. They range from simple utilities that automatically remove backgrounds, to deepfakes that swap people into videos, to cutting-edge generative models such as DALL-E and Stable Diffusion that can produce entirely new, hyper-realistic scenes from scratch.

The question remains: how can we verify that an image is authentic?

The answer is PhotoGuard.

Invented by MIT CSAIL researchers, this ingenious method ensures the integrity of images in the age of advanced generative models, effectively thwarting unauthorized image manipulation.


How PhotoGuard Works

This method employs perturbations, which are tiny alterations in pixel values that go unnoticed by the human eye but can be detected by computer models. These perturbations effectively interfere with the model’s capability to manipulate the image.
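To make the idea concrete, here is a minimal PyTorch sketch of what such a perturbation looks like numerically. The epsilon budget, tensor shapes, and function name are illustrative assumptions, not PhotoGuard's released code.

```python
# Minimal sketch (illustrative, not PhotoGuard's released code): an "imperceptible"
# perturbation is a small tensor delta whose entries stay within +/- eps in pixel
# space, so the protected image looks unchanged to a human viewer.
import torch

def apply_perturbation(image: torch.Tensor, delta: torch.Tensor, eps: float = 8 / 255) -> torch.Tensor:
    """image: float tensor in [0, 1] with shape (C, H, W); delta: same shape."""
    delta = delta.clamp(-eps, eps)            # keep the change below the visibility threshold
    return (image + delta).clamp(0.0, 1.0)    # result must remain a valid image

# Random noise inside the eps ball is invisible to a person, but a *crafted* delta
# (see the two attacks below) can derail a generative model's editing pipeline.
image = torch.rand(3, 512, 512)
protected = apply_perturbation(image, torch.empty_like(image).uniform_(-8 / 255, 8 / 255))
```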

PhotoGuard’s Dual Attack Methods

PhotoGuard employs two distinct “attack” methods for creating these perturbations. The first, known as the “encoder” attack, targets the AI model’s latent representation of the image, causing the model to interpret it as a random object. The second, the “diffusion” attack, is more complex: it selects a target image and optimizes the perturbations so that the model’s final output closely resembles that target.
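A hedged sketch of how an encoder-style attack can be implemented with projected gradient descent (PGD) is shown below. Here `encoder` and `target_latent` are placeholders (in a latent diffusion model the encoder would be the VAE encoder), and the step size, budget, and iteration count are illustrative rather than values from the paper.

```python
# Simplified encoder attack: find a bounded delta so that the model's latent
# representation of (image + delta) matches an unrelated target latent.
import torch
import torch.nn.functional as F

def encoder_attack(image, encoder, target_latent, eps=8 / 255, step=1 / 255, iters=200):
    """Return an immunized image whose latent the encoder confuses with target_latent."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        loss = F.mse_loss(encoder(image + delta), target_latent)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()            # PGD step toward the target latent
            delta.clamp_(-eps, eps)                      # stay imperceptible
            delta.add_(image).clamp_(0, 1).sub_(image)   # keep image + delta a valid image
        delta.grad.zero_()
    return (image + delta).detach()
```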

Hadi Salman, an MIT graduate student in electrical engineering and computer science (EECS) and lead author of the new PhotoGuard research paper, explains the scenarios in which image manipulation can cause harm.

Imagine the deceitful dissemination of a false catastrophic event, such as an explosion at a prominent landmark. Such manipulation could sway market trends and public perception, but the dangers extend beyond the public domain: personal images can be illicitly altered and exploited for blackmail, with substantial financial consequences when such attacks are carried out at scale.


“In more extreme scenarios, these models could simulate voices and images for staging false crimes, inflicting psychological distress and financial loss. The swift nature of these actions compounds the problem. Even when the deception is eventually uncovered, the damage — whether reputational, emotional, or financial — has often already happened. This is a reality for victims at all levels, from individuals bullied at school to society-wide manipulation.”

PhotoGuard’s Ingenious Defense

PhotoGuard exploits the fact that AI models perceive images differently from humans. To a model, an image is mathematical data: the pixels are encoded into a latent representation that summarizes the image’s content. The “encoder” attack tweaks this representation so the model interprets the image as an arbitrary object, thwarting manipulation attempts, while the changes remain imperceptible and the image’s appearance is preserved. The more complex “diffusion” attack targets the entire diffusion model end to end.


By selecting a target image and optimizing the perturbation end to end, the attack steers the model’s generated output toward that chosen target. In practice, the perturbations are added in the original image’s input (pixel) space and remain in effect at inference time, offering a robust defense against unauthorized image manipulation.
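The sketch below illustrates that idea under simplifying assumptions. `edit_fn` is a hypothetical stand-in for a differentiable text-guided editing pipeline (prompt plus a handful of diffusion steps); having to backpropagate through it is what makes this attack far more expensive than the encoder attack.

```python
# Simplified diffusion attack: optimize a bounded delta so that the *output* of the
# editing pipeline applied to (image + delta) collapses toward a chosen target image.
import torch
import torch.nn.functional as F

def diffusion_attack(image, edit_fn, target_image, eps=8 / 255, step=1 / 255, iters=50):
    """edit_fn is assumed differentiable; in practice only a few diffusion steps are used."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        edited = edit_fn(image + delta)                  # run the full edit end to end
        loss = F.mse_loss(edited, target_image)          # make the edit look like the target
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.add_(image).clamp_(0, 1).sub_(image)
        delta.grad.zero_()
    return (image + delta).detach()
```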

A Real Example

Let’s take an actual example using PhotoGuard. Imagine an image with several faces. You can mask the faces you don’t want to change and then ask the system to show “two men attending a wedding.” The system will adjust the image to create a realistic depiction of two men at a wedding.
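An edit like this can be reproduced with an off-the-shelf inpainting pipeline, for example via Hugging Face’s diffusers library. The model ID, file names, and mask below are assumptions used only to illustrate the workflow the example describes.

```python
# Sketch of the editing step described above, using a public inpainting pipeline.
# File names and the model ID are placeholders for illustration.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

photo = Image.open("group_photo.png").convert("RGB").resize((512, 512))
# Mask convention: white pixels are repainted; black pixels (the faces to keep) are preserved.
mask = Image.open("edit_region_mask.png").convert("RGB").resize((512, 512))

edited = pipe(prompt="two men attending a wedding", image=photo, mask_image=mask).images[0]
edited.save("edited.png")
```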

Now, to protect the image from edits, you can add perturbations before uploading it. This immunizes the image against modifications. However, the final result might not look as realistic as the original, non-immunized image.
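Putting the pieces together, the immunize-before-sharing workflow might look like the sketch below, which reuses the `encoder_attack` sketch from earlier. `sd_vae_encoder`, the gray target, and the file names are placeholders, not an official PhotoGuard API.

```python
# Hypothetical immunization workflow: perturb the image before it is ever shared,
# so later editing attempts on the shared copy produce unrealistic results.
import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor, to_pil_image

original = to_tensor(Image.open("group_photo.png").convert("RGB"))     # (3, H, W) in [0, 1]

# Push the latent toward that of a flat gray image (one possible target choice),
# using the encoder_attack sketch above; sd_vae_encoder is a placeholder encoder.
gray_target = torch.full_like(original, 0.5)
immunized = encoder_attack(original, sd_vae_encoder, sd_vae_encoder(gray_target).detach())

to_pil_image(immunized).save("group_photo_immunized.png")              # share this copy instead
# Editing the immunized copy with the same prompt typically yields a washed-out,
# unrealistic result, while edits on the unprotected original still look convincing.
```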

The Road Ahead

PhotoGuard’s success in combating image manipulation relies on cooperation from various stakeholders, especially creators of image-editing models. Policymakers can mandate regulations to safeguard user data and encourage AI developers to design APIs that add protective perturbations to users’ images.

While PhotoGuard is not foolproof, collaboration among model developers, social media platforms, and policymakers offers a robust defense. However, designing image protections that resist circumvention remains a challenge. The paper highlights the importance of companies investing in robust immunizations against AI threats.

The research emphasizes the need for a collaborative approach to address image manipulation’s pressing issues in this new era of generative models.

