Apple’s child abuse detection tools threaten privacy

Photo: STR/AFP (Getty Images)

Apple’s plan to introduce new features aimed at combating child sexual abuse material (CSAM) on its platforms has caused considerable controversy.

The company is essentially trying to tackle a problem that has vexed both law enforcement and technology companies in recent years: the large, ongoing crisis of CSAM proliferation on major internet platforms. As recently as 2018, tech firms reported the existence of as many as 45 million photos and videos of child sexual abuse material, a frighteningly large number.

Yet while this crisis is very real, critics fear that Apple’s new features, which involve algorithmic scanning of users’ devices and messages, violate privacy and, more worryingly, could one day be repurposed to search for types of material other than CSAM. Such a shift could open the door to new forms of widespread surveillance and serve as a potential workaround for encrypted communication, one of privacy’s last, best hopes.

To understand these concerns, we should take a quick look at the specifics of the proposed changes. First, the company will introduce a new tool to scan photos uploaded to iCloud from Apple devices for signs of child sexual abuse material. According to a technical summary released by Apple, the new feature uses a “neural matching function” called NeuralHash to assess whether images on a user’s iPhone match known “hashes,” or unique digital fingerprints, of CSAM. It does this by comparing the images being shared with iCloud against a large database of CSAM imagery compiled by the National Center for Missing and Exploited Children (NCMEC). If enough images are detected, they are flagged for review by human operators, who then alert NCMEC (which presumably reports to the FBI).
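At a high level, the matching step behaves like a membership-and-threshold check against a list of known fingerprints. The short Swift sketch below is a conceptual illustration only, with made-up names and values (`knownCSAMHashes`, `reviewThreshold`, string hashes); Apple’s actual system derives perceptual NeuralHash values on-device and uses cryptographic techniques so that matches are not revealed to anyone until a threshold is crossed.

```swift
import Foundation

// Hypothetical stand-ins: real NeuralHash values are perceptual image hashes,
// not arbitrary strings, and the real database is supplied by NCMEC.
typealias PerceptualHash = String

let knownCSAMHashes: Set<PerceptualHash> = ["hash-a", "hash-b", "hash-c"]

// Number of matches required before an account is surfaced for human review.
// The real threshold is not public; this value is invented for illustration.
let reviewThreshold = 30

/// Counts how many photos queued for iCloud upload match known hashes.
func matchCount(for uploadHashes: [PerceptualHash]) -> Int {
    uploadHashes.filter { knownCSAMHashes.contains($0) }.count
}

/// Flags the account for human review only once the threshold is reached.
func shouldFlagForHumanReview(uploadHashes: [PerceptualHash]) -> Bool {
    matchCount(for: uploadHashes) >= reviewThreshold
}
```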

Some people have expressed concern that their phones might contain pictures of their own children in the bathtub or running naked through a sprinkler. But according to Apple, you don’t have to worry about that. The company has stressed that it does not “learn anything about images that do not match [those in] the known CSAM database,” so it is not simply browsing your photo albums and looking at whatever it wants.

Meanwhile, Apple will also be rolling out a new iMessage feature designed to “warn children and their parents when [a child is] receiving or sending sexually explicit photos.” Specifically, the feature alerts children when they are about to send or receive an image that the company’s algorithm has deemed sexually explicit. The child receives a notification explaining that they are about to view a sexual image and assuring them that it is okay not to look at the photo (the incoming image remains blurred until the user agrees to view it). If a child under the age of 13 proceeds past the notification to send or receive the image, a notice is subsequently sent to the child’s parent alerting them to the incident.
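That decision flow can be summarized in a few lines. The Swift sketch below is a simplification with invented types and names, not Apple’s actual API: the explicit-image classifier is assumed, the image stays blurred until the child opts in, and a parent alert is generated only when a child under 13 chooses to proceed.

```swift
// Hypothetical, simplified model of the iMessage warning flow.
struct ChildAccount {
    let age: Int
}

enum ImagePresentation {
    case blurredWithWarning   // shown until the child explicitly opts in
    case visible
}

struct ScanOutcome {
    let presentation: ImagePresentation
    let notifyParent: Bool
}

/// `isExplicit` stands in for the on-device classifier's verdict;
/// `childChoseToView` is the child's response to the warning prompt.
func handleIncomingImage(for account: ChildAccount,
                         isExplicit: Bool,
                         childChoseToView: Bool) -> ScanOutcome {
    guard isExplicit else {
        return ScanOutcome(presentation: .visible, notifyParent: false)
    }
    if childChoseToView {
        // Parents are alerted only when a child under 13 proceeds anyway.
        return ScanOutcome(presentation: .visible, notifyParent: account.age < 13)
    }
    return ScanOutcome(presentation: .blurredWithWarning, notifyParent: false)
}
```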

Suffice it to say that civil liberties advocates have not warmly welcomed news of either update, both of which will roll out later this year with the release of iOS 15 and iPadOS 15. The concerns vary, but in essence critics worry that deploying such powerful new technology poses a number of privacy threats.

As for the iMessage update, the concern centers on how encryption works, the protection it is supposed to provide, and the way the update effectively sidesteps that protection. Encryption protects the contents of a user’s message by scrambling it into unreadable ciphertext before it is sent, making interception pointless because the message cannot be read. Because of the way Apple’s new feature is set up, however, communications involving child accounts will be scanned for sexually explicit material before a message is encrypted. Again, that doesn’t mean Apple is free to read a child’s text messages; it is only looking for what its algorithm considers inappropriate images.
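The crux of the critics’ argument is one of ordering: the scan runs on the plaintext at the endpoint, before encryption, so the encrypted channel itself is never broken yet the content is still inspected. The Swift sketch below illustrates that ordering only; the classifier, encryption function, and warning hook are hypothetical placeholders, not Apple’s implementation.

```swift
import Foundation

/// Illustrative only: the message is examined in plaintext on the device,
/// and only afterwards encrypted and handed to the transport layer.
func sendMessage(_ plaintext: Data,
                 isChildAccount: Bool,
                 looksExplicit: (Data) -> Bool,   // stand-in classifier
                 encrypt: (Data) -> Data,         // stand-in E2E encryption
                 showWarning: () -> Void) -> Data {
    if isChildAccount && looksExplicit(plaintext) {
        // The warning flow happens here, before anything leaves the device.
        showWarning()
    }
    // Encryption still protects the message in transit as before.
    return encrypt(plaintext)
}
```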

However, the precedent set by such a change is potentially worrying. In a statement released Thursday, the Center for Democracy and Technology took aim at the iMessage update, calling it an erosion of the privacy provided by Apple’s end-to-end encryption: “The mechanism that will enable Apple to scan images in iMessages is not an alternative to a back door – it is a back door,” the Center said. “Client-side scanning on one ‘end’ of the communication breaks the security of the transmission, and informing a third party (the parent) about the content of the communication undermines its privacy.”

The iCloud photo-scanning plan has similarly alarmed privacy advocates. Jennifer Granick, surveillance and cybersecurity counsel for the ACLU’s Speech, Privacy, and Technology Project, told Gizmodo via email that she was concerned about the potential implications of the photo scans: “However altruistic its motives, Apple has built an infrastructure that could be subverted for widespread surveillance of the conversations and information we keep on our phones,” she said. “The CSAM scanning capability could be repurposed for censorship or to identify and report content that is not illegal, depending on which hashes the company chooses, or is forced, to include in the matching database. For this and other reasons, it is also susceptible to abuse by autocrats abroad, by overzealous government officials at home, and even by the company itself.”

Even Edward Snowden weighed in.

The concern here is clearly not Apple’s mission to combat CSAM but the tools it is using to do so, which critics fear represent a slippery slope. In a post published Thursday, the privacy-focused Electronic Frontier Foundation noted that scanning capabilities similar to Apple’s tools could eventually be repurposed to make its algorithms hunt for other kinds of images or text, which would effectively amount to a workaround for encrypted communication, one designed to police private interactions and personal content. According to the EFF:

All it would take to widen the narrow back door that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan not just children’s, but anyone’s accounts. That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change.

Such concerns become especially pronounced when it comes to rolling the features out in other countries, with some critics warning that Apple’s tools could be abused and subverted by corrupt foreign governments. In response to these concerns, Apple confirmed to MacRumors on Friday that it plans to expand the features on a country-by-country basis. When it considers distribution in a given country, it will first carry out a legal evaluation, the outlet reported.

In a phone call with Gizmodo on Friday, India McKinney, the EFF’s director of federal affairs, raised another concern: the fact that neither tool is auditable means it is impossible to independently verify that they are working the way they are supposed to.

“There’s no way for outside groups like ours, or anyone else, researchers, to look under the hood to see how well it’s working, whether it’s accurate, whether it’s doing what it’s supposed to be doing, how many false positives there are,” she said. “Once they roll this system out and start putting it on phones, who’s to say they won’t respond to government pressure to start including other things: terrorism content, memes depicting political leaders in distasteful ways, all kinds of other material.” Relatedly, in its post on Thursday, the EFF noted that one of the technologies originally built to scan and hash child sexual abuse imagery was recently repurposed to create a database run by the Global Internet Forum to Counter Terrorism (GIFCT), a similar hash database that now helps online platforms search for and moderate or ban “terrorist” content centered on violence and extremism.

Because of all these concerns, a group of privacy advocates and security experts has written an open letter to Apple asking the company to reconsider its new features. As of Sunday, the letter had over 5,000 signatures.

However, it is unclear whether any of this will have an impact on the tech giant’s plans. In an internal company memo leaked Friday, Apple vice president Sebastien Marineau-Mes acknowledged that “some people have misunderstandings, and more than a few are worried about the implications” of the new rollout, but that the company will “continue to explain and detail the features so people understand what we’ve built.” Meanwhile, NCMEC sent a letter to Apple’s staff internally in which it referred to the program’s critics as “the screeching voices of the minority” and championed Apple’s efforts.





