India, grappling with election misinfo, weighs up labels and its own AI safety coalition

India, long in the tooth when it comes to co-opting technology to persuade the public, has become a global hotspot for how AI is used and abused in political discourse, and particularly in the democratic process. Tech companies, which built the tools in the first place, are coming to the country to offer solutions.

Earlier this year, Andy Parsons, the Adobe senior director who oversees its involvement in the cross-industry Content Authenticity Initiative (CAI), entered the whirlwind with a trip to India to meet the country’s media and technology organizations and to promote tools that can be integrated into content workflows to identify and flag AI content.

“Instead of detecting what is fake or manipulated, we as a society, and this is an international concern, should start declaring authenticity, that is, saying that if something is generated by AI, that should be known to consumers,” he said in an interview.

Parsons added that some Indian companies – which are currently not part of the Munich agreement on AI election security signed by OpenAI, Adobe, Google and Amazon in February – had plans to build a similar alliance in the country.

“Legislation is a very tricky thing. It is hard to assume that any jurisdiction’s government will legislate well and quickly enough. It is better for the government to take a very steady approach and take its time,” he said.

Detection tools are notoriously inconsistent, but they’re a start to solving some problems, or so the argument goes.

“The concept is already well understood,” he said during his trip to Delhi. “What I bring is awareness that the tools are also ready. It’s not just an idea. This is something that is already deployed.”

Andy Parsons, Senior Director at Adobe. Image credits: Adobe

The CAI – which promotes open, royalty-free standards for identifying whether digital content was generated by a machine or a human – predates the current hype around generative AI: it was founded in 2019 and now counts 2,500 members, including Microsoft, Meta, Google, the New York Times, the Wall Street Journal and the BBC.

Just as an industry is growing around leveraging AI to create media, a smaller one is emerging to try to correct some of its more harmful applications.

So, in February 2021, Adobe went a step further and co-founded one of these standards bodies itself, the Coalition for Content Provenance and Authenticity (C2PA), with ARM, BBC, Intel, Microsoft and Truepic. The coalition aims to develop an open standard that leverages the metadata of images, videos, text and other media to highlight their provenance and inform people about a file’s origins, the place and time of its generation, and whether it was modified before it reached the user. The CAI works with the C2PA to promote the standard and make it accessible to the general public.
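To make the provenance idea concrete, here is a minimal Python sketch of the kind of signed record such a standard describes. Everything in it – the field names, the `make_provenance_manifest` helper and the HMAC signature – is a simplified assumption for illustration; the actual C2PA specification embeds a binary manifest in the file itself and signs it with X.509 certificate chains rather than a shared key.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key for this sketch only; real Content Credentials
# are signed with X.509 certificates, not a shared secret.
SIGNING_KEY = b"demo-key-not-a-real-credential"

def make_provenance_manifest(asset_bytes: bytes, generator: str, edits: list[str]) -> dict:
    """Build a simplified provenance record for a media asset."""
    claim = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),  # binds the record to the file
        "generator": generator,                   # e.g. a camera model or an AI model
        "created_at": datetime.now(timezone.utc).isoformat(),
        "edit_history": edits,                    # what changed before delivery
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

manifest = make_provenance_manifest(b"<image bytes>", "ExampleAI/1.0", ["resize"])
print(json.dumps(manifest, indent=2))
```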

Today, the CAI is actively engaging with governments like India’s to expand the adoption of this standard, to highlight the provenance of AI content, and to work with authorities on guidelines for the advancement of AI.

Adobe has nothing but also everything to lose by playing an active role in this game. It is not – yet – in the business of acquiring or building its own large language models, but as the home of applications like Photoshop and Lightroom, it is the market leader in tools for the creative community, and so it is not only building new products like Firefly to generate AI content natively but also infusing existing products with AI. If the market grows as some believe it will, AI will be a must if Adobe wants to stay on top. If regulators (or common sense) have their way, Adobe’s future may well depend on its ability to ensure that what it sells does not contribute to the mess.

Regardless, the overall situation in India is indeed a mess.

Google has focused on India as a test bed for how it will bar its generative AI tool Gemini from being used on election content; parties are wielding AI to create memes featuring opponents; and Meta has set up a deepfake “helpline” for WhatsApp, such is the messaging platform’s popularity for spreading AI-powered missives. At a time when countries appear increasingly alarmed about the safety of AI and what they must do to ensure it, we will have to see what impact the Indian government’s decision in March to relax the rules on how new AI models are built, tested and deployed will have. In any case, it is certainly intended to stimulate more AI activity.

Through its open standard, the C2PA has developed a digital nutrition label for content called Content Credentials. CAI members are working to deploy this digital watermarking on their content to let users know its origin and whether it was generated by AI. Adobe has Content Credentials across its creative tools, including Photoshop and Lightroom, and automatically attaches them to AI content generated by Adobe’s Firefly AI model. Last year, Leica launched a camera with Content Credentials built in, and Microsoft added Content Credentials to all AI-generated images created with Bing Image Creator.

Content Credentials information on an AI-generated image

Image credits: Content Credentials
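Continuing the simplified sketch above (and its assumed `SIGNING_KEY` and `manifest`), a consumer-side check of such a label amounts to recomputing the file’s hash and verifying the signature; the real Content Credentials flow instead validates a certificate chain and parses the manifest embedded in the file.

```python
# Continues the earlier sketch: hashlib, hmac, json, SIGNING_KEY and
# manifest are defined there. This check is illustrative, not the C2PA flow.

def verify_provenance_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Check that the record matches the asset and was not tampered with."""
    claim = manifest["claim"]
    # 1. The asset must hash to the value recorded in the claim.
    if hashlib.sha256(asset_bytes).hexdigest() != claim["asset_sha256"]:
        return False
    # 2. The claim itself must carry a valid signature.
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

print(verify_provenance_manifest(b"<image bytes>", manifest))   # True
print(verify_provenance_manifest(b"altered bytes", manifest))   # False
```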

Parsons told TechCrunch that the CAI is talking with governments around the world in two areas: helping to promote the standard as an international one, and encouraging its adoption.

“In an election year, it is especially important for candidates, parties, incumbent offices and administrations that constantly release material to the media and the public to ensure that if something is released from the office of Prime Minister (Narendra) Modi, it actually comes from Prime Minister Modi’s office. There have been many incidents where this is not the case. So it is very important for consumers, fact-checkers, platforms and intermediaries to understand that something is truly authentic,” he said.

India’s large population and vast linguistic and demographic diversity make misinformation difficult to combat, he added, a vote in favor of simple labels to cut through it.

“It’s a small ‘CR’ mark … it’s two Western letters, like most Adobe tools, but it indicates there’s more context to show,” he said.

Controversy continues to center on tech companies’ real reasons for supporting any kind of AI safety measure: is it truly an existential concern, or simply a seat at the table to give the appearance of existential concern while ensuring their interests are protected in the rule-making process?

“This is generally not controversial among the companies involved, and all the companies that signed the recent Munich agreement, including Adobe, came together and set aside competitive pressure because these ideas are something we all have to do,” he said in defense of the work.
