
Apple Announces ML-Powered Content Monitoring Features to Protect Children Against Sexually Abusive Material

Today, there are numerous examples of AI applied to content monitoring, mostly to detect fake news, terrorist content, and the like. Apple, however, has found a novel way to prevent the spread of nudity on devices used by kids.

From now on, Apple will leverage machine learning algorithms to warn parents about sensitive content on iPhone, iPad, Mac, Apple Watch, and Apple TV that could potentially harm kids. Apple already offers a range of child-protection features as part of “Families”, including Ask to Buy.

Apple is tightening its content monitoring technology, using on-device machine learning to prevent the spread of adult content. In a recent announcement, the world’s leading smartphone maker said it is expanding its features to protect children against sexually abusive content and to limit the spread of Child Sexual Abuse Material (CSAM).


CSAM detection will be at work across all of Apple’s devices, including in Siri and Search.

During the pandemic, a majority of families switched to mobile platforms to consume information. Children, cut off from traditional modes of education and socializing, spend more time on smartphones and other internet-connected devices. Some platforms have been observed streaming obscene content to kids despite parental controls.

How Did Apple Build Its Content Monitoring Technology for Kids?

Apple has been working with child safety experts to introduce new on-device machine learning features for content monitoring. The technology will enable parents and guardians to play a proactive role in preventing their children from navigating to sites and apps that publish or promote harmful content, such as nudity. Apple’s Messages app has been equipped with this on-device machine learning to detect sexually explicit content, even as Apple confirmed that these communications would remain private and “unreadable by Apple.”
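To make the idea concrete, here is a minimal sketch of what an on-device screening step could look like. Every name in it (SensitivityClassifier, AttachmentScreener, the scoring API) is a hypothetical stand-in for illustration; Apple's actual implementation is a proprietary ML model running inside Messages.

```swift
import Foundation

// Minimal sketch, not Apple's implementation: an on-device check that decides
// whether an incoming or outgoing Messages attachment should be treated as
// sexually explicit. The protocol, threshold, and names are assumptions.
protocol SensitivityClassifier {
    /// Returns a score in [0, 1]; higher means more likely to be explicit.
    func explicitScore(for imageData: Data) -> Double
}

struct AttachmentScreener {
    let classifier: SensitivityClassifier   // stands in for the on-device ML model
    let threshold: Double                    // score above which an image is flagged

    /// Runs entirely on device; nothing is sent to Apple's servers.
    func isSensitive(_ imageData: Data) -> Bool {
        classifier.explicitScore(for: imageData) > threshold
    }
}
```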

A Complete Block or Partial Measures?

As per our information, Apple devices may not fully block CSAM or other adult content. Instead, the device would prompt the user (the child or a parent) when such material is being accessed. Apple has stated that the photos would be “blurred” and the user would be warned, in addition to being presented with helpful resources for safely navigating away from potentially harmful sites and apps.

These filters would work on both sides of the communication: parents would be notified whenever such content is received on, or sent from, their kid’s device.
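As a rough sketch of that flow, assuming a child account with parental notifications enabled (all types and names below are illustrative, not Apple's API):

```swift
import Foundation

// Illustrative sketch of what could happen once an image has been flagged on
// device: blur the photo, warn the child, and notify the parent of a child
// account, whether the image is being received or sent. Names are hypothetical.
enum Direction { case received, sent }

struct ScreeningOutcome {
    let blurPhoto: Bool
    let warnChild: Bool
    let notifyParent: Bool
    let parentMessage: String?
}

func handleFlaggedImage(direction: Direction,
                        parentalNotificationsEnabled: Bool) -> ScreeningOutcome {
    // The same policy applies on both sides of the conversation.
    let note: String
    if direction == .received {
        note = "A flagged image was received on your child's device"
    } else {
        note = "Your child attempted to send a flagged image"
    }
    return ScreeningOutcome(blurPhoto: true,
                            warnChild: true,
                            notifyParent: parentalNotificationsEnabled,
                            parentMessage: parentalNotificationsEnabled ? note : nil)
}
```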


Does it affect your privacy?

What will you choose if you’re a parent: Privacy or CSAM Detection?

The introduction of Apple’s new content monitoring features could draw objections from activists questioning the legitimacy of a decision that touches on users’ privacy. Apple has clarified that the technique does not compromise device privacy; rather, it protects users from surfing potentially harmful content. Apple is relying on cryptographic techniques to limit the spread of CSAM on iOS and iPadOS. It uses CSAM image hashes to perform an on-device image matching process before an image is stored in iCloud Photos; Apple does not perform the kind of conventional image scanning that impacts user privacy. Using these cryptographically protected CSAM image hashes, Apple can stay vigilant against the transfer of CSAM, warn kids and parents, and, if the activity persists, inform NCMEC or law enforcement agencies.
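As a simplified illustration of hash-based matching (not Apple's actual NeuralHash and private set intersection protocol), the device could compare a fingerprint of each photo against a database of known CSAM hashes before upload. In the sketch below, an ordinary SHA-256 digest and a plain set lookup stand in for Apple's perceptual hashing and blinded matching:

```swift
import Foundation
import CryptoKit

// Simplified sketch: check an image against a set of known hashes on device
// before it is uploaded to iCloud Photos. A SHA-256 digest stands in for
// Apple's perceptual NeuralHash, and the plain set lookup stands in for the
// blinded, cryptographic matching Apple describes.
struct OnDeviceHashMatcher {
    let knownHashes: Set<Data>   // hash database assumed to ship with the OS

    /// Returns true if the image's fingerprint matches a known hash.
    func matchesKnownHash(_ imageData: Data) -> Bool {
        let digest = Data(SHA256.hash(data: imageData))
        return knownHashes.contains(digest)
    }
}
```

In Apple's published design, the device itself never learns the result of the match; each photo instead produces an encrypted safety voucher, which only becomes readable on the server once the threshold described below is crossed.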

Nonetheless, experts in security, privacy, and cryptography, along with legal consultants, have raised questions about whether Apple’s move to screen content breaches user privacy, even if it means red-flagging CSAM.

What’s At Risk?

Your privacy is at risk only when your account crosses a threshold of matched CSAM content. Using a threshold secret sharing process, Apple promises users that the contents of the safety vouchers cannot be read by Apple as long as the account stays below that limit. This means matched CSAM images can still be sent or received without being reported, as long as the account stays below the threshold.
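That “threshold” guarantee comes from threshold secret sharing. The toy sketch below uses textbook Shamir secret sharing over a small prime field to show the core idea: each matching image contributes one share, and the secret needed to read the vouchers can only be reconstructed once at least t shares exist. Apple's production scheme is its own construction; the parameters and field size here are illustrative only.

```swift
import Foundation

// Toy (t, n) Shamir secret sharing over a prime field, illustrating the idea
// behind threshold secret sharing: fewer than t shares reveal nothing about
// the secret; any t shares reconstruct it exactly.
let p = 2_147_483_647  // prime modulus (2^31 - 1); toy field size

// Modular exponentiation: base^exp mod m.
func modPow(_ base: Int, _ exp: Int, _ m: Int) -> Int {
    var result = 1, b = base % m, e = exp
    while e > 0 {
        if e & 1 == 1 { result = result * b % m }
        b = b * b % m
        e >>= 1
    }
    return result
}

// Split `secret` into `n` shares so that any `t` of them reconstruct it.
func makeShares(secret: Int, t: Int, n: Int) -> [(x: Int, y: Int)] {
    // Random polynomial of degree t-1 with the secret as constant term.
    let coeffs = [secret % p] + (1..<t).map { _ in Int.random(in: 1..<p) }
    return (1...n).map { x -> (x: Int, y: Int) in
        let y = coeffs.enumerated().reduce(0) { acc, term in
            (acc + term.element * modPow(x, term.offset, p)) % p
        }
        return (x: x, y: y)
    }
}

// Reconstruct the secret from at least `t` shares via Lagrange interpolation at x = 0.
func reconstruct(from shares: [(x: Int, y: Int)]) -> Int {
    var secret = 0
    for (i, si) in shares.enumerated() {
        var num = 1, den = 1
        for (j, sj) in shares.enumerated() where j != i {
            num = num * ((p - sj.x) % p) % p          // (0 - x_j) mod p
            den = den * ((si.x - sj.x + p) % p) % p   // (x_i - x_j) mod p
        }
        let lagrange = num * modPow(den, p - 2, p) % p  // num * den^(-1) mod p
        secret = (secret + si.y * lagrange) % p
    }
    return secret
}

// Example: with a threshold of 3, two shares are useless, three recover the secret.
let shares = makeShares(secret: 123_456, t: 3, n: 10)
print(reconstruct(from: Array(shares.prefix(3))))  // 123456
```

Roughly speaking, each safety voucher in Apple's design carries one such share, so with fewer than the threshold number of matches the server simply cannot assemble enough shares to decrypt anything.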

Apple says its threshold secret sharing technology is highly accurate, with an extremely low chance of incorrectly flagging an account.

What Happens When the Threshold Is Crossed?

A complete account block.

Apple analysts will manually review and analyze each report to confirm that the flagged images match known CSAM and then send the report to NCMEC. This happens only when a user’s iCloud Photos collection matches known CSAM beyond the threshold.

Users can still file for reactivation if they think their account has been mistakenly blocked.

Apple will provide law enforcement agencies with information about CSAM detected in iCloud Photos using its cryptographic matching.

For instance, CSAM detection would be used to report incidents to the National Center for Missing & Exploited Children (NCMEC), an organization that identifies CSAM and works in collaboration with law enforcement agencies across the US.


The company also confirmed that the CSAM detection features will become available later this year, as part of updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey.

