
EU plan to force messaging apps to scan for CSAM risks generating millions of false positives, experts warn

A controversial push by European Union lawmakers to legally require messaging platforms to scan citizens’ private communications for child sexual abuse material (CSAM) could lead to millions of false positives per day, hundreds of security and privacy experts warned Thursday in an open letter.

Concern over the EU proposal has grown since the Commission put forward the CSAM-scanning plan two years ago, with independent experts, lawmakers in the European Parliament and even the bloc’s own data protection supervisor among those sounding the alarm.

The EU proposal would not only require messaging platforms that receive a CSAM detection order to scan for known CSAM; they would also have to use unspecified detection technologies to try to pick up unknown CSAM and identify grooming activity as it takes place, leading to accusations that lawmakers are indulging in magical-thinking levels of technosolutionism.

Critics argue that the proposal asks for the technologically impossible and will not achieve its stated goal of protecting children from abuse. Instead, they say, it will wreak havoc on Internet security and web users’ privacy by forcing platforms to deploy blanket surveillance of all their users through risky and unproven technologies, such as client-side scanning.

Experts say there is no technology capable of achieving what the law demands without causing far more harm than good. Yet the EU is pressing ahead regardless.

The latest open letter addresses amendments to the draft CSAM regulation recently proposed by the European Council which signatories say fail to address the plan’s fundamental flaws.

Signatories to the letter – numbering 270 at the time of writing – include hundreds of academics, among them well-known security experts such as Professor Bruce Schneier of the Harvard Kennedy School and Dr. Matthew D. Green of Johns Hopkins University, as well as a handful of researchers working for technology companies such as IBM, Intel and Microsoft.

A previous open letter, last July, signed by 465 academics, warned that the detection technologies the bill would force platforms to adopt are “deeply flawed and vulnerable to attack”, and would lead to a significant weakening of the life-saving protections provided by end-to-end encrypted (E2EE) communications.

Little traction for counterproposals

Last fall, MEPs in the European Parliament united to push back with a substantially revised approach – one that would limit scanning to individuals and groups already suspected of child sexual abuse; limit it to known and unknown CSAM, removing the requirement to scan for grooming; and remove any risk to E2EE by restricting it to platforms that are not end-to-end encrypted. But the European Council, the other co-legislative body involved in crafting EU law, has yet to take a position on the issue, and where it lands will influence the final shape of the law.

The latest amendment on the table was put forward in March by the Belgian Council presidency, which is leading discussions on behalf of representatives of EU member state governments. But in the open letter, the experts warn that this proposal still fails to address the fundamental flaws of the Commission’s approach, arguing that the revisions would still create “unprecedented capabilities for surveillance and control of Internet users” and would “undermine… a secure digital future for our society and can have enormous consequences for democratic processes in Europe and beyond”.

Changes up for discussion in the Council’s amended proposal include a suggestion that detection orders could be made more targeted by applying risk categorization and risk mitigation measures, and that cybersecurity and encryption could be protected by ensuring platforms are not required to create access to decrypted data and by having detection technologies vetted. But the 270 experts suggest this amounts to fiddling around the edges of a security and privacy disaster.

From a “technical standpoint, to be effective, this new proposal will also completely undermine the security of communications and systems,” they warn. Relying on “flawed detection technology” to determine cases of interest so that more targeted detection orders can be issued will not reduce the risk of the law ushering in a dystopian era of “mass surveillance” of Internet users’ messages, in their analysis.

The letter also addresses a Council proposal to limit the risk of false positives by defining a “person of interest” as a user who has already shared CSAM or attempted to groom a child – which is envisaged to be done via automated assessment; such as waiting for one hit for known CSAM, or two for unknown CSAM/grooming, before the user is officially flagged as a suspect and reported to the EU Centre, which would handle CSAM reports.

Billions of users, millions of false positives

Experts warn that this approach still risks leading to a large number of false alarms.

“It is very unlikely that the number of false positives due to detection errors will be significantly reduced unless the number of repetitions is so large that detection ceases to be effective. Given the large volume of messages sent on these platforms (on the order of billions), we can expect a very large number of false alarms (on the order of millions),” they write, stressing that platforms likely to be slapped with a detection order can have millions or even billions of users, such as Meta-owned WhatsApp.

“Given that there has been no public information on the performance of detectors that could be used in practice, let us imagine we had a detector for CSAM and grooming, as stated in the proposal, with a false positive rate of just 0.1% (i.e., one in a thousand times it incorrectly classifies non-CSAM as CSAM), which is far lower than any currently known detector.

“Given that WhatsApp users send 140 billion messages per day, even if only 1 in 100 messages were tested by such detectors, there would be 1.4 million false positives every day. To get the false positives down to the hundreds, at least five repetitions would statistically need to be identified, using different, statistically independent images or detectors. And that is only for WhatsApp: considering other messaging platforms, including email, the number of repetitions required would increase significantly, to the point where CSAM-sharing capabilities would not be effectively reduced.”
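The back-of-the-envelope arithmetic behind that 1.4 million figure is easy to reproduce. Here is a minimal sketch using only the numbers quoted in the letter – 140 billion messages per day, 1 in 100 messages scanned and a hypothetical 0.1% false positive rate; the variable names and the script itself are purely illustrative, not anything prescribed by the proposal.

```python
# Reproducing the open letter's back-of-the-envelope false positive estimate.
# All figures come from the letter itself; this is illustrative arithmetic only.

messages_per_day = 140_000_000_000   # WhatsApp messages sent per day (letter's figure)
scan_fraction = 1 / 100              # assume only 1 in 100 messages is scanned
false_positive_rate = 0.001          # hypothetical detector error rate of 0.1%

scanned_messages = messages_per_day * scan_fraction
false_positives_per_day = scanned_messages * false_positive_rate

print(f"Messages scanned per day: {scanned_messages:,.0f}")        # 1,400,000,000
print(f"False positives per day:  {false_positives_per_day:,.0f}")  # 1,400,000
```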

Another Council proposal, to limit detection orders to messaging apps deemed “high risk”, is a useless revision in the signatories’ view, as they say it would likely still “indiscriminately affect a massive number of people”. Here they point out that only standard features, such as image sharing and text chat, are required to exchange CSAM – features that are widely supported by many service providers – meaning a high-risk categorization “will undoubtedly have an impact on many services”.

They also point out that E2EE adoption is increasing, which they believe will increase the likelihood that services that deploy it will be classified as high risk. “This number could increase further with interoperability requirements introduced by the Digital Markets Act, which will result in messages flowing between low-risk and high-risk services. As a result, almost all services could be classified as high risk,” they claim. (NB: Message interoperability is an essential element of the EU DMA.)

A backdoor for the backdoor

As for protecting encryption, the letter reiterates the message that security and privacy experts have been repeatedly shouting at lawmakers for years: “Detection in end-to-end encrypted services, by definition, undermines encryption protection.”

“The new proposal aims to ‘protect cybersecurity and encrypted data, while keeping services using end-to-end encryption under detection orders’. As we explained previously, this is an oxymoron,” they point out. “The protection afforded by end-to-end encryption means that no one other than the intended recipient of a communication is able to obtain information about the content of such a communication. Enabling detection capabilities, whether for encrypted data or for data before it is encrypted, violates the very definition of privacy provided by end-to-end encryption.”

In recent weeks, police chiefs across Europe have drafted their own joint statement – raising concerns about the expansion of E2EE and calling on platforms to design their security systems in such a way that they can still identify illegal activity and send reports on message content to law enforcement.

The intervention is widely seen as an attempt to pressure lawmakers to pass laws such as the CSAM regulation.

Police chiefs deny they want encryption backdoored, but they do not explain exactly what technical solutions they want platforms to adopt to enable the “legal access” they are seeking. That leaves the task of squaring this circle squarely back in lawmakers’ court.

If the EU continues on its current path – assuming the Council does not change course, as MEPs have urged – the consequences will be “catastrophic”, the letter’s signatories warn. “This sets a precedent for Internet filtering and prevents people from using some of the few tools available to protect their right to privacy in the digital space; it will have a chilling effect, particularly on adolescents who rely heavily on online services for their interactions. It will change the way digital services are used around the world and is likely to have a negative impact on democracies across the globe.”

An EU source close to the Council was unable to provide insight into ongoing discussions between member states, but confirmed that a working party meeting was taking place on May 8 at which the proposed regulation to combat child sexual abuse would be discussed.

TechCrunch
