
Apple Says Its iCloud Child Abuse Material Scanning System Won’t Trigger Alerts Until It Detects At Least 30 Images


Apple has provided additional information about its upcoming plans to scan iCloud Photos for child sexual abuse material (CSAM) using on-device processing on users' iPhones and iPads. The company has published a new paper detailing the safeguards it hopes will boost user confidence in the initiative. Among them is a rule to only flag images found in multiple child safety databases tied to different governments, which is meant to prevent any one country from adding non-CSAM content to the system.

Apple's upcoming iOS and iPadOS releases will automatically check photos in US-based iCloud Photos accounts against a list of known CSAM image hashes compiled by child safety organizations. While many companies scan cloud storage services remotely, some cryptography and privacy experts have slammed Apple's device-based strategy.
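In rough terms, the on-device check boils down to computing a perceptual hash of each photo and testing it against the shipped hash list. The Swift sketch below is a deliberately simplified illustration: perceptualHash is a hypothetical stand-in for Apple's NeuralHash, which is not public, and the real protocol wraps match results in encrypted safety vouchers rather than returning them directly.

```swift
import Foundation

/// Hypothetical stand-in for Apple's NeuralHash; the real algorithm is not
/// public and is designed to be robust to resizing and recompression.
func perceptualHash(of imageData: Data) -> String {
    imageData.base64EncodedString() // placeholder only, not a real perceptual hash
}

/// Known-CSAM hash list shipped with the OS (illustrative values only).
let shippedHashList: Set<String> = ["hash-a", "hash-b", "hash-c"]

/// A photo "matches" when its perceptual hash appears in the shipped list.
/// In the real protocol the result is hidden inside an encrypted safety
/// voucher rather than exposed on the device like this.
func matchesKnownCSAM(_ imageData: Data) -> Bool {
    shippedHashList.contains(perceptualHash(of: imageData))
}
```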

The paper, titled "Security Threat Model Review of Apple's Child Safety Features," aims to allay privacy and security concerns around the rollout. It builds on an interview Apple software engineering chief Craig Federighi gave to the Wall Street Journal this morning, in which he outlined some of these details.

Apple says in the document that it will not rely on a single government-linked database, such as that of the National Center for Missing and Exploited Children (NCMEC) in the United States, to locate CSAM. Instead, it will only match hashes that appear in databases from at least two groups operating under different national jurisdictions. Because a hash present in only one database would never produce a match, no single government would be able to secretly insert unrelated content for censorship purposes.
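To make the rule concrete, here is a minimal Swift sketch of that intersection requirement, using invented database names and hash values; it illustrates the stated policy, not Apple's implementation.

```swift
import Foundation

// Illustrative hash databases from child safety groups operating under
// different national jurisdictions (names and values are invented).
let databases: [String: Set<String>] = [
    "NCMEC-US":  ["hash-a", "hash-b", "hash-c"],
    "GroupB-EU": ["hash-b", "hash-c", "hash-d"],
    "GroupC-UK": ["hash-c", "hash-e"]
]

/// Keep only hashes supplied by at least `minimumSources` distinct databases;
/// anything unique to a single government-linked source never enters the list.
func buildOnDeviceList(from databases: [String: Set<String>],
                       minimumSources: Int = 2) -> Set<String> {
    var counts: [String: Int] = [:]
    for hashes in databases.values {
        for hash in hashes {
            counts[hash, default: 0] += 1
        }
    }
    return Set(counts.filter { $0.value >= minimumSources }.keys)
}

// Prints ["hash-b", "hash-c"] (order may vary): only overlapping entries survive.
print(buildOnDeviceList(from: databases))
```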

Apple has mentioned the possibility of using multiple child safety databases, but it hasn't explained the overlap system until now. Apple told reporters in a conference call that it is only naming NCMEC because it hasn't finalized agreements with other groups.

The paper backs up a point Federighi made: initially, Apple will flag an iCloud account only once it detects at least 30 CSAM images. According to the paper, this threshold was chosen to provide a "drastic safety margin" against false positives, and as the system's real-world performance is evaluated, "we may change the threshold."
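In effect, the threshold is a simple gate on the number of matched images per account. The toy Swift sketch below illustrates that gate using the 30-image figure from the paper; the real system counts cryptographic safety vouchers rather than plain match results.

```swift
// Threshold taken from Apple's threat-model paper; the company says it may
// adjust the number as it evaluates real-world performance.
let matchThreshold = 30

/// An account is flagged for human review only once its count of matched
/// images reaches the threshold; below that, no alert is triggered.
func shouldFlagAccount(matchedImageCount: Int, threshold: Int = matchThreshold) -> Bool {
    matchedImageCount >= threshold
}

print(shouldFlagAccount(matchedImageCount: 29)) // false: inside the safety margin
print(shouldFlagAccount(matchedImageCount: 30)) // true: review threshold reached
```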

It also includes more details on the auditing system mentioned by Federighi. Apple's list of known CSAM hashes will be baked into all versions of iOS and iPadOS, though the scanning system will initially only work in the United States.

Apple will publish a complete list of hashes that auditors can check against child safety databases, another way to ensure it isn't secretly matching additional images. It also says it will "refuse all requests" for moderators to report "anything other than CSAM materials" for flagged accounts, pushing back on concerns that the system could be repurposed for other types of surveillance.
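The audit Apple describes amounts to verifying that every hash it ships can be traced back to the published child safety databases, so that nothing extra has been slipped into the list. Below is a hedged Swift sketch of that check, again with invented names and values rather than anything from Apple's actual tooling.

```swift
import Foundation

/// Returns true when every hash in the OS-shipped list can be traced back to
/// at least `minimumSources` distinct published child safety databases, which
/// is the property an outside auditor would want to confirm.
func auditShippedList(_ shipped: Set<String>,
                      against databases: [String: Set<String>],
                      minimumSources: Int = 2) -> Bool {
    shipped.allSatisfy { hash in
        databases.values.filter { $0.contains(hash) }.count >= minimumSources
    }
}

// Invented values: "hash-z" appears in no published database, so a shipped
// list that quietly matched extra images would fail the audit.
let published: [String: Set<String>] = [
    "NCMEC-US":  ["hash-a", "hash-b"],
    "GroupB-EU": ["hash-b", "hash-c"]
]
print(auditShippedList(["hash-b"], against: published))           // true
print(auditShippedList(["hash-b", "hash-z"], against: published)) // false
```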

Apple created "confusion" with its announcement last week, according to Federighi. Still, the company has defended the update, telling reporters that, while it is still finalizing and iterating on details, it hasn't changed its launch plans in response to the recent criticism.