AI Content Watermarks Are Easy To Remove

Researchers at the University of Maryland set out to investigate how reliably watermarks can identify AI-generated images. Surprisingly, both diffusion cleaning and model substitution attacks proved effective against watermarks embedded during image generation.

Diffusion Cleaning

Watermarks applied with low-perturbation techniques such as RivaGAN and WatermarkDM can be removed by diffusion cleaning, which adds noise to an image and then denoises it with a diffusion model. Watermarks from high-perturbation techniques such as StegaStamp resist that approach, but attackers can instead mount a model substitution attack, perturbing the image against a surrogate copy of the watermark detector so that detection systems mistake fake pictures for genuine ones.
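
To make the idea concrete, here is a minimal sketch of diffusion cleaning using the open-source diffusers library: the watermarked image is partially re-noised and then reconstructed by a Stable Diffusion img2img pipeline. The checkpoint name, file names, and strength value are illustrative assumptions, not details from the Maryland study.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load a latent diffusion model; any img2img-capable checkpoint works.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

watermarked = Image.open("watermarked.png").convert("RGB").resize((512, 512))

# `strength` sets how much noise is injected before denoising: enough to
# wash out a low-perturbation watermark, but not the visible content.
purified = pipe(
    prompt="",            # no text guidance; we only want a reconstruction
    image=watermarked,
    strength=0.2,
    guidance_scale=1.0,   # effectively disables classifier-free guidance
).images[0]
purified.save("purified.png")
```

The model substitution side can be sketched just as briefly: the attacker trains a surrogate detector that mimics the target watermark detector, then runs a standard projected-gradient-descent (PGD) attack against it. The function below assumes a PyTorch `detector` that returns a per-image "watermarked" score; it is a generic PGD sketch under that assumption, not code from the paper.

```python
import torch

def pgd_wipe(image, detector, steps=40, eps=8 / 255, alpha=1 / 255):
    """Nudge `image` within an eps-ball so the surrogate detector's
    watermark score drops; the real detector then tends to misfire too."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        score = detector(adv)                # "watermarked" score per image
        score.sum().backward()
        with torch.no_grad():
            adv = adv - alpha * adv.grad.sign()           # descend on score
            adv = image + (adv - image).clamp(-eps, eps)  # stay in eps-ball
            adv = adv.clamp(0.0, 1.0)                     # valid pixel range
        adv = adv.detach()
    return adv
```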

The wide availability of AI image generators has led to their growing use in creating fake "photos" and even videos of events that never happened. Whether these works are meant as jokes or as deliberate misinformation matters less than the fact that they are convincing enough to fool viewers. Authorities have recommended that AI system developers label such content, but doing so has proved difficult in practice.

Internet Giants

In response, internet giants including Alphabet, Amazon, Meta, Microsoft, and OpenAI have pledged to develop AI content labeling solutions to counter misinformation.

However, these tags are invisible, and the average internet user is unlikely to verify the legitimacy of every picture they come across online, so labeling images created by AI generators does not offer complete protection against disinformation. A label only helps when a viewer is already suspicious enough to go looking for it.
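
That verification step is far from automatic. The sketch below uses the open-source invisible-watermark package (the scheme Stable Diffusion ships with) to embed and then decode such a tag; the file names and payload are hypothetical. A casual viewer would never see any of this without deliberately running a decoder.

```python
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

# Embed a 4-byte tag into an image (file names are placeholders).
bgr = cv2.imread("original.png")
encoder = WatermarkEncoder()
encoder.set_watermark("bytes", b"AIgc")
tagged = encoder.encode(bgr, "dwtDct")
cv2.imwrite("tagged.png", tagged)

# Verification requires knowing the scheme and payload length in advance.
decoder = WatermarkDecoder("bytes", 32)   # 32 bits = 4 bytes
payload = decoder.decode(cv2.imread("tagged.png"), "dwtDct")
print(payload)  # b'AIgc' if the tag survived; garbage after a wipe
```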

