Apple walks a privacy tightrope to spot child abuse in iCloud

Child safety groups also immediately welcomed Apple’s moves, arguing that they strike the necessary balance and bring “us one step closer to justice for survivors whose most traumatic moments are spread across the internet,” Julie Cordua, CEO of the child safety advocacy group Thorn, wrote in a statement to WIRED.

Other cloud storage providers, from Microsoft to Dropbox, already scan images uploaded to their servers. But by adding any kind of image analysis to consumer devices, some privacy critics argue, Apple has taken a step toward a worrying new form of surveillance and weakened its historically strong privacy stance in the face of pressure from law enforcement.

“I do not defend child abuse. But this whole idea that your personal device is constantly and locally scanning and monitoring you based on some criteria for objectionable content and conditionally reporting it to the authorities is a very, very slippery slope,” says Nadim Kobeissi, a cryptographer and founder of the Paris-based cryptographic software firm Symbolic Software. “I will definitely switch to an Android phone if this continues.”

Apple’s new system does not simply scan users’ images, either on the company’s devices or on its iCloud servers. Instead, it is a clever, and complex, new form of image analysis designed to prevent Apple from ever seeing those photos unless they have already been determined to be part of a collection of multiple CSAM images uploaded by a user. The system takes a hash of every image the user uploads to iCloud, converting the file into a string of characters derived uniquely from that image. Then, like older CSAM detection systems such as PhotoDNA, it compares those hashes against the vast collection of known CSAM image hashes that NCMEC provides to find matches.
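
To make the matching step concrete, here is a minimal sketch of hash-based lookup against a list of known hashes. It uses SHA-256 purely as a stand-in; Apple’s NeuralHash is a perceptual hash and the NCMEC list is not public, so every value and name below is a placeholder.

```python
# Minimal sketch of hash-based matching against a known-hash list.
# SHA-256 is only a stand-in here: NeuralHash is a perceptual hash,
# and the real NCMEC hash list is not public. All values are placeholders.
import hashlib

known_hashes = {
    # In practice this set would be supplied by NCMEC; this is a placeholder.
    hashlib.sha256(b"example-known-image-bytes").hexdigest(),
}

def image_hash(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash of the image contents."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_hash(image_bytes: bytes) -> bool:
    """Return True if the image's hash appears in the known-hash set."""
    return image_hash(image_bytes) in known_hashes

print(matches_known_hash(b"example-known-image-bytes"))   # True
print(matches_known_hash(b"some-unrelated-photo-bytes"))  # False
```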

Apple also uses a new form of hashing called NeuralHash, which the company says can match images despite alterations such as cropping or recoloring. Just as important for preventing evasion, the system never actually downloads those NCMEC hashes to a user’s device. Instead, it uses some cryptographic tricks to convert them into a so-called blind database that is downloaded to the user’s phone or computer and contains only seemingly meaningless strings of characters derived from those hashes. This blinding prevents any user from obtaining the hashes and using them to skirt the system’s detection.
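
The blinding step can be illustrated with a toy example. The sketch below assumes an HMAC under a server-held secret as the blinding transform; Apple’s actual protocol uses elliptic-curve blinding inside its private set intersection scheme, so this only shows why the table a device receives looks like meaningless strings.

```python
# Sketch of "blinding" a hash list before shipping it to devices, assuming
# an HMAC under a server-held secret as the blinding step. Apple's real
# scheme uses elliptic-curve blinding; this only illustrates why the
# on-device table reveals nothing about the underlying hashes.
import hashlib
import hmac
import secrets

server_secret = secrets.token_bytes(32)  # never leaves the server

def blind(image_hash: bytes) -> bytes:
    """Transform a known hash into an opaque token the device cannot invert."""
    return hmac.new(server_secret, image_hash, hashlib.sha256).digest()

known_hashes = [b"\x01" * 32, b"\x02" * 32]          # placeholder hash values
blinded_db = sorted(blind(h) for h in known_hashes)  # what the device receives

# A user inspecting blinded_db sees only random-looking bytes and cannot
# recover or test the underlying hash list without the server's secret.
print([token.hex()[:16] for token in blinded_db])
```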

The system then compares that blind hash database against the hashed images on the user’s device. The results of those comparisons are uploaded to Apple’s servers in what the company calls a “safety voucher,” which is encrypted in two layers. The first layer of encryption uses a cryptographic technique known as private set intersection, so it can be decrypted only if the hash comparison produces a match. No hash data about non-matching images is revealed.
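
A toy version of that first layer might look like the following, where the device derives the voucher’s encryption key from its own image hash, so the server’s attempt to decrypt succeeds only when it holds a matching known hash. This is a drastic simplification of private set intersection, not Apple’s actual construction; the functions, payload, and key derivation are all illustrative.

```python
# Toy illustration of the first "safety voucher" layer: a payload the server
# can decrypt only if it holds the same image hash the device used. Real
# private set intersection uses elliptic-curve cryptography; this sketch only
# demonstrates the match-gated decryption property.
import hashlib
import hmac
import os

def derive_key(image_hash: bytes) -> bytes:
    return hashlib.sha256(b"voucher-key" + image_hash).digest()

def seal(image_hash: bytes, payload: bytes) -> tuple[bytes, bytes]:
    """Device side: encrypt a short payload (<= 32 bytes) under a key
    derived from the image hash."""
    key = derive_key(image_hash)
    nonce = os.urandom(16)
    stream = hashlib.sha256(key + nonce).digest()
    ciphertext = bytes(p ^ s for p, s in zip(payload, stream))
    tag = hmac.new(key, nonce + ciphertext, hashlib.sha256).digest()
    return nonce + ciphertext, tag

def try_open(known_hash: bytes, sealed: bytes, tag: bytes):
    """Server side: decryption succeeds only if known_hash matches the
    hash the device used; otherwise the voucher stays opaque."""
    key = derive_key(known_hash)
    nonce, ciphertext = sealed[:16], sealed[16:]
    expected = hmac.new(key, nonce + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None  # no match: nothing is revealed
    stream = hashlib.sha256(key + nonce).digest()
    return bytes(c ^ s for c, s in zip(ciphertext, stream))

h = hashlib.sha256(b"some image").digest()
sealed, tag = seal(h, b"metadata")
print(try_open(h, sealed, tag))                                  # b'metadata'
print(try_open(hashlib.sha256(b"other").digest(), sealed, tag))  # None
```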

The second layer of encryption is designed so that matches can be decrypted only once a certain number of them accumulate. Apple says this is meant to avoid false positives and to ensure it is detecting entire collections of CSAM, not individual images. The company declined to name the threshold number of CSAM images it requires; in fact, it will likely adjust that threshold over time to tune its system and keep its false-positive rate below one in a trillion. Those safeguards, Apple argues, will prevent any possible abuse of its iCloud CSAM detection mechanism for surveillance, allowing it to identify collections of child exploitation images without ever seeing any other images users upload to iCloud.
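
The threshold property can be sketched with ordinary Shamir secret sharing: an inner key protecting the matched vouchers is split into shares, one per voucher, so the server can reconstruct it only after collecting a threshold number of matches. Apple has not published its actual scheme or its threshold, so the construction and the value 3 below are purely illustrative.

```python
# Minimal sketch of the threshold idea behind the second voucher layer,
# using plain Shamir secret sharing over a prime field as a stand-in for
# Apple's (unpublished) threshold scheme. The threshold of 3 is arbitrary.
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a toy secret

def make_shares(secret: int, threshold: int, count: int):
    """Split `secret` into `count` shares; any `threshold` of them recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, count + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % PRIME
                den = den * (xj - xm) % PRIME
        total = (total + yj * num * pow(den, -1, PRIME)) % PRIME
    return total

inner_key = random.randrange(PRIME)                     # protects matched vouchers
shares = make_shares(inner_key, threshold=3, count=10)  # one share per voucher

print(recover(shares[:2]) == inner_key)  # False: below the threshold
print(recover(shares[:3]) == inner_key)  # True: threshold reached
```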

This immensely technical process represents a strange series of hoops to jump through, given that Apple does not end-to-end encrypt iCloud Photos and could simply perform its CSAM checks on images hosted on its servers, as many other cloud storage providers do. Apple has argued that the process it is introducing, which splits the check between the device and the server, is less invasive of privacy than straightforward mass scanning of server-side images.

But critics like Johns Hopkins University cryptographer Matthew Green suspect more complex motives behind Apple’s approach. He points out that the great technical lengths Apple has gone to in order to check images on the user’s device, despite the privacy protections built into that process, only really make sense if images are encrypted before they leave a user’s phone or computer, making server-side detection impossible. And he fears this means Apple will extend the detection system to photos on users’ devices that are never uploaded to iCloud, a kind of on-device image scanning that would represent a new form of intrusion into users’ offline storage.


