
Apple reveals more details about its child safety photo scanning technologies

Apple has been the target of criticism since it revealed that it will be introducing child safety features into its ecosystem that enable scanning for Child Sexual Abuse Material (CSAM). An open letter demanding that Apple halt the deployment of this technology already has thousands of signatories. The firm had internally acknowledged that some people are worried about the new features, but said that this is due to misunderstandings that it would address in due course. Today, it has made good on that promise.


In a six-page FAQ document that you can view here, Apple has emphasized that its photo scanning technology is split into two distinct use cases.

The first has to do with detecting sexually explicit photos sent or received by children via the Messages app. This capability uses on-device machine learning to automatically blur problematic images, inform children that they do not have to view the content, offer them guidance, and notify their parents if children aged 12 or younger still opt to view such images. Children aged 13-17 will receive similar guidance, but their parents will not be notified. In order for this flow to function, child accounts need to be set up in Family Sharing on iCloud, the feature must be explicitly enabled, and parental notifications need to be turned on for children.

No other entity, including Apple or law enforcement, is informed if a child sends or receives sexually explicit images. As such, this does not break any existing privacy assurances or end-to-end encryption. Apple has emphasized that the feature applies to Messages only, which means that if a child is being abused, they can still reach out for help via text or other communication channels.
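To make that age-based decision flow concrete, here is a minimal Swift sketch of the logic as Apple describes it in the FAQ. The types, property names, and function are illustrative assumptions for readability, not Apple's actual implementation.

```swift
import Foundation

// Hypothetical sketch of the decision flow described above; Apple has
// not published the actual implementation.

struct ChildAccount {
    let age: Int
    let communicationSafetyEnabled: Bool   // opt-in via Family Sharing
    let parentalNotificationsEnabled: Bool
}

enum MessageAction {
    case deliverNormally
    case blurAndWarn(notifyParentsOnView: Bool)
}

/// Decide what to do with an image the on-device classifier has
/// flagged as sexually explicit.
func handleFlaggedImage(for account: ChildAccount) -> MessageAction {
    guard account.communicationSafetyEnabled else {
        return .deliverNormally
    }
    // Children 12 and under: warn, and notify parents if the child views
    // the image anyway (and notifications are enabled). Ages 13-17: warn only.
    let notifyParents = account.age <= 12 && account.parentalNotificationsEnabled
    return .blurAndWarn(notifyParentsOnView: notifyParents)
}
```

Note that everything in this flow happens on the device itself, which is why Apple says no outside party learns about the exchange.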

The second prong of Apple's child safety approach is about keeping CSAM off iCloud Photos. In this case, hashes of iCloud images will be compared against hashes of known CSAM images, and Apple is alerted if a match is detected. This feature does not apply to photos stored only on the device, or to accounts with iCloud Photos disabled.

The firm has emphasized that it does not download any CSAM images onto your device to compare against. Instead, it computes hashes of your images and compares them against hashes of known CSAM content to determine a match. Apple went on to say that:

One of the significant challenges in this space is protecting children while also preserving the privacy of users. With this new technology, Apple will learn about known CSAM photos being stored in iCloud Photos where the account is storing a collection of known CSAM. Apple will not learn anything about other data stored solely on device. Existing techniques as implemented by other companies scan all user photos stored in the cloud. This creates privacy risk for all users. CSAM detection in iCloud Photos provides significant privacy benefits over those techniques by preventing Apple from learning about photos unless they both match to known CSAM images and are included in an iCloud Photos account that includes a collection of known CSAM.
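The core idea is that only image hashes are ever compared, never the images themselves. The Swift sketch below illustrates that idea in the simplest possible form; it is a conceptual stand-in, since Apple's system actually uses a perceptual hash (NeuralHash) and a cryptographic private set intersection protocol rather than a plain SHA-256 digest and a local lookup table.

```swift
import Foundation
import CryptoKit

// Conceptual sketch only: compare an image's *hash* against a set of
// known hashes, without ever handling the flagged images themselves.
// SHA-256 and the Set are stand-ins for Apple's NeuralHash and its
// private set intersection protocol.

/// Returns true if the image's digest appears in the known-hash set.
func matchesKnownHash(imageData: Data, knownHashes: Set<String>) -> Bool {
    let digest = SHA256.hash(data: imageData)
    let hex = digest.map { String(format: "%02x", $0) }.joined()
    return knownHashes.contains(hex)
}
```

In Apple's described design, the matching outcome is additionally hidden from the device and only becomes readable to Apple under the conditions quoted above.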

The company has also revealed other details about its end-to-end process for detecting CSAM images. It has stated that the system is designed to work only on CSAM media, as even the possession of such images is illegal in many countries. That said, authorities are not informed automatically; if there is a match, Apple first conducts a human review before notifying them.

The Cupertino tech giant has bluntly stated that it will not add non-CSAM images to its repository for comparison, even if there is pressure from certain governments. In the same vein, Apple itself does not add hashes beyond the set of known CSAM images, and because the same hash database is stored at the operating-system level on every device, specific individuals cannot be targeted through misuse of the technology.

Finally, Apple has boasted that its system is extremely accurate, with less than a one in one trillion chance per year of incorrectly flagging a given account. Even in the worst case, there is a human reviewer in place as a safety net who performs a manual review of a flagged account before it is reported to the National Center for Missing and Exploited Children (NCMEC).
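That safety net can be pictured as a simple gate: nothing leaves Apple unless a human reviewer confirms the flagged content. The Swift sketch below shows that gating; the types and the reporting call are hypothetical placeholders, as Apple has not published implementation details.

```swift
import Foundation

// Minimal sketch of the safety net described above: a flagged account is
// reported to NCMEC only after a human reviewer confirms the match.

struct FlaggedAccount {
    let accountID: String
    let matchedImageHashes: [String]
}

/// Gate the report on the outcome of a manual review.
func handle(_ account: FlaggedAccount, reviewerConfirmedCSAM: Bool) {
    guard reviewerConfirmedCSAM else {
        // A false positive stops here; nothing is reported.
        return
    }
    reportToNCMEC(accountID: account.accountID)
}

func reportToNCMEC(accountID: String) {
    // Placeholder for the actual reporting pipeline.
    print("Reporting account \(accountID) to NCMEC")
}
```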
