
Google’s Reimagine AI tool works well, maybe too well, making it easy to misuse

Google’s new Pixel 9 phones hit the market this month, two months ahead of schedule. It’s almost as if Google couldn’t wait to show off all the AI built into these devices. By launching early, the company got a head start on the Apple Intelligence features coming to the iPhone 16. In its haste, however, Google may have opened a Pandora’s box, one that could backfire in spectacular fashion.

One of the Pixel 9’s most notable features, the Reimagine tool, is already drawing flak from critics. Part of Google Photos’ Magic Editor, it lets you type a description of how you want a photo to look and applies that vision to the image. While it seems designed for innocuous edits, like turning a sunny day into a snowy scene or adding and removing people and objects, it has a darker side.

The Verge tested the tool and found it surprisingly effective, perhaps too effective. The site discovered it can easily be used to insert objectionable or disturbing content into images: car crashes, smoking bombs in public places, sheets that appear to cover bloody corpses, and drug paraphernalia.


In one example, the testers altered a real photo of a person in a lounge to make it appear as if they were taking drugs.

For decades, people have faked photos with editing software to manipulate public opinion or for other malicious purposes. Until now, though, making a convincing fake required considerable skill and time. Reimagine makes it trivially easy for anyone with a Pixel 9 to create similar images.

The Verge envisions a scenario in which malicious actors rapidly produce fake but credible images of events such as scandals, wars, or disasters, spreading misinformation in real time before the truth has a chance to emerge. The outlet even suggests that “the default assumption about a photo is about to become that it is fake,” since creating realistic and credible fake photos is now a trivial task.

To be clear, The Verge isn’t calling the Pixel 9 a malicious tool designed to produce disinformation on a massive scale. However, it does serve as an example of how easily things can get out of control. While Google will likely work to address these issues with Reimagine, just as it did with Gemini’s image generator, other companies offering similar tools may not be as diligent about implementing safeguards.

Unfortunately, the Pixel 9’s AI issues don’t stop there. The phone also includes a new Pixel Studio app that lets users generate fully synthetic images using AI, and it seems to lack adequate safeguards.

Digital Trends has demonstrated that the app can generate images of copyrighted characters in offensive scenarios, such as SpongeBob SquarePants depicted as a Nazi, Mickey Mouse as a slave owner, and Paddington Bear on a crucifix, a combination of copyright and content problems. More worryingly, the images the app produces do not appear to carry a clear watermark indicating that they were artificially created.

While it is commendable that Google is innovating and pushing the boundaries of AI, these examples show significant gaps, despite the company’s claims that strong safeguards are in place.

Photo credit: The Verge, Digital Trends
