Apple recently announced that it is introducing new child safety features to its ecosystem, including the ability to scan photos uploaded to iCloud using on-device machine learning, comparing their hashes against known images of child sexual abuse material (CSAM) from the National Center for Missing and Exploited Children's (NCMEC) repository. A separate feature can also notify parents if their child, provided they are under 13 years old, shares or receives sexually explicit content.
The move has drawn criticism from many tech experts and organizations, including the head of WhatsApp, the Electronic Frontier Foundation (EFF), and Edward Snowden, who call it a breach of privacy despite its good intentions. Apple is fully aware of the debate it has created, as an internal company memo shows.
The document in question was obtained by 9to5Mac and contains remarks from Apple Software VP Sebastien Marineau-Mes. An excerpt from the memo reads:
Keeping children safe is such an important mission. In true Apple fashion, pursuing this goal has required deep cross-functional commitment, spanning Engineering, GA, HI, Legal, Product Marketing and PR. What we announced today is the product of this incredible collaboration, one that delivers tools to protect children, but also maintain Apple’s deep commitment to user privacy.
We’ve seen many positive responses today. We know some people have misunderstandings, and more than a few are worried about the implications, but we will continue to explain and detail the features so people understand what we’ve built. And while a lot of hard work lays ahead to deliver the features in the next few months, I wanted to share this note that we received today from NCMEC. I found it incredibly motivating, and hope that you will as well.
The attached note from NCMEC congratulates Apple for its efforts and says that "we know that the days to come will be filled with the screeching voices of the minority".
Overall, it's clear that Apple knows it has waded into a difficult topic, since the system involves scanning its users' photos, computing hashes, and comparing them against CSAM databases. While the company says it is doing this in a privacy-protective manner using on-device machine learning, many are understandably concerned about the potential for misuse. Marineau-Mes does say that the firm "will continue to explain and detail the features" as it works to deliver them "in the next few months", so it will be interesting to see whether Apple can address the concerns being raised by the public.
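To make the hash-matching idea concrete, here is a deliberately simplified sketch. Apple's actual system reportedly uses a perceptual "NeuralHash" combined with cryptographic matching and a match threshold, none of which is reproduced here; this toy version uses exact SHA-256 digests, and the names (`image_hash`, `count_matches`, `known_hashes`) are hypothetical, not Apple's.

```python
import hashlib

def image_hash(data: bytes) -> str:
    """Stand-in for a perceptual image hash; here, a plain SHA-256 digest.

    Apple's real system uses NeuralHash, which tolerates small image
    edits; an exact digest like this one does not.
    """
    return hashlib.sha256(data).hexdigest()

def count_matches(photos: list[bytes], known_hashes: set[str]) -> int:
    """Count how many photos hash into the known-bad set.

    In Apple's described design, an account is only flagged once the
    match count crosses a threshold, reducing false-positive risk.
    """
    return sum(1 for photo in photos if image_hash(photo) in known_hashes)

# Hypothetical data for illustration only.
known_hashes = {image_hash(b"known-bad-image-bytes")}
photos = [b"holiday-photo", b"known-bad-image-bytes"]
print(count_matches(photos, known_hashes))  # 1
```

The privacy debate centers on exactly this step: even though the comparison runs on-device, it still means user photos are checked against an externally supplied database.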