Google’s call-scanning AI could enable censorship by default, privacy experts warn

A feature Google demoed yesterday at its I/O conference, which uses its generative AI technology to scan voice calls in real time for conversational patterns associated with financial scams, has sent a collective shiver down the spines of privacy and security experts, who warn that the feature represents the thin end of the wedge: once client-side scanning is baked into mobile infrastructure, they argue, it could usher in an era of centralized censorship.

Google’s demo of the scam call detection feature, which the tech giant said would be built into a future version of its Android operating system – estimated to run on some three-quarters of the world’s smartphones – is powered by Gemini Nano, the smallest of its current generation of AI models, which is intended to run entirely on-device.

This is essentially client-side scanning: a nascent technology that has generated enormous controversy in recent years in relation to efforts to detect child sexual abuse material (CSAM) or even grooming activity on messaging platforms.
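
To make the term concrete, the pattern critics are describing can be sketched in a few lines of code. The Kotlin below is a minimal, hypothetical illustration, not Google’s implementation: the OnDeviceModel class and its keyword heuristic are invented stand-ins for a local model such as Gemini Nano, but they capture the property that defines client-side scanning, and the repurposing worry that follows from it.

```kotlin
// Purely illustrative sketch of the client-side scanning pattern.
// Nothing here reflects Google's actual implementation: OnDeviceModel and
// its keyword heuristic are hypothetical stand-ins for a local AI model
// such as Gemini Nano.

class OnDeviceModel {
    // Toy cue list standing in for learned model weights.
    private val scamCues = listOf("gift card", "wire transfer", "act now", "tax office")

    // Scores a locally transcribed snippet for scam-like patterns.
    fun scamScore(transcript: String): Double {
        val text = transcript.lowercase()
        val hits = scamCues.count { text.contains(it) }
        return hits.toDouble() / scamCues.size
    }
}

fun main() {
    val model = OnDeviceModel()
    val snippet = "Buy a gift card and act now, or your account will be closed"

    // The defining property of client-side scanning: the content is
    // analyzed on the device itself and, in Google's stated design,
    // the audio never leaves it.
    val score = model.scamScore(snippet)
    if (score >= 0.5) {
        println("On-device alert: possible scam pattern (score=$score)")
    }
    // The critics' concern is the flag itself: whoever ships the model
    // decides which patterns are scored, and the same hook could warn,
    // block or report on any category of content.
}
```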

Apple abandoned plans to roll out client-side scanning for CSAM in 2021 after a huge privacy backlash. However, policymakers have continued to pressure the tech industry to find ways to detect illegal activity taking place on their platforms. Any industry move to build out on-device scanning infrastructure could therefore pave the way for all manner of content scanning by default, whether government-led or tied to a particular commercial agenda.

Responding to Google’s call-scanning demo in a post on X, Meredith Whittaker, president of the US-based encrypted messaging app Signal, warned: “This is incredibly dangerous. It lays the path for centralized client-side scanning at the device level.

“It’s a short step from detecting ‘scams’ to detecting ‘patterns commonly associated with seeking reproductive care’ or ‘commonly associated with providing LGBTQ resources’ or ‘commonly associated with tech worker whistleblowing.’”

Cryptography expert Matthew Green, a professor at Johns Hopkins, also took to X to raise the alarm. “In the future, AI models will run inference on your text messages and voice calls to detect and report illicit behavior,” he warned. “To get your data to pass through service providers, you’ll need to attach a zero-knowledge proof that scanning was conducted. This will block open clients.”

Green suggested that this dystopian future of censorship by default is only a few years away from being technically possible. “We’re still a little way from this technology being efficient enough to implement, but it’s only a few years out. A decade at most,” he suggested.

European privacy and security experts were quick to object.

Reacting to the Google demo in a post on X, Lukasz Olejnik, a Poland-based independent researcher and consultant on privacy and security issues, welcomed the company’s anti-scam feature but warned that the infrastructure could be repurposed for social surveillance. “[T]his also means that technical capabilities have already been, or are being, developed to monitor calls or the creation and drafting of texts or documents, for example in search of illegal, harmful, hateful, or otherwise undesirable or iniquitous content – with respect to someone’s standards,” he wrote.

“Going further, such a model could, for example, display a warning. Or block the ability to continue,” Olejnik continued with emphasis. “Or report it somewhere. Technological modulation of social behavior, or the like. This is a major threat to privacy, but also to a range of basic values and freedoms. The capabilities are already there.”

Elaborating on his concerns, Olejnik told TechCrunch: “I haven’t seen the technical details, but Google assures that detection would be done on-device. This is great for user privacy. However, there is much more at stake than privacy. This shows how AI/LLMs integrated into software and operating systems can be used to detect or control various forms of human activity.

“So far, fortunately, things have gone for the better. But what happens if the technical capability exists and is built in? Such powerful capabilities signal potential future risks related to the ability to use AI to control the behavior of societies at scale, or selectively. That’s probably among the most dangerous information technology capabilities ever developed. And we are nearing that point. How do we govern this? Are we going too far?”

Michael Veale, associate professor of technology law at UCL, also raised the chilling specter of function creep flowing from Google’s conversation-scanning AI, warning in a reaction post on X that it “builds infrastructure for on-device client-side scanning for purposes beyond this, which regulators and legislators will want to abuse.”

European privacy experts have particular reason to be concerned: since 2022, the European Union has had a controversial message-scanning legislative proposal on the table which critics – including the bloc’s own data protection supervisor – warn represents a tipping point for democratic rights in the region, as it would force platforms to scan private messages by default.

Although the current legislative proposal claims to be technology agnostic, it is widely expected that such a law would lead platforms to deploy client-side scanning in order to respond to a so-called “detection order” requiring them to detect both known and unknown CSAM and to pick up grooming activity in real time.

Earlier this month, hundreds of privacy and security experts penned an open letter warning that the plan could lead to millions of false positives per day, because the client-side scanning technologies likely to be deployed by platforms in response to a legal order are unproven, deeply flawed and vulnerable to attack.

Google was contacted for a response to concerns that its conversation-scanning AI could erode people’s privacy, but it had not responded by press time.

Read more about Google I/O 2024 on TechCrunch
