
Here’s What I Learned While Testing Photoshop’s New Generative AI Tool


Adobe has integrated generative AI capabilities into its flagship image-editing software, releasing a beta version of Photoshop on Tuesday that greatly expands what artists and photo editors can do. This decision promises to unleash a new torrent of creativity, even if it gives us all a new reason to pause and wonder if that sensational, scary or inspiring photo you see on the Internet is actually real.

In my testing, detailed below, I found the tool to be impressive but flawed. Adding it directly to Photoshop is a big deal, allowing creators to experiment in the software tool they likely already use without excursions to Midjourney, Stability AI’s Stable Diffusion, or other outside generative AI tools.

With Adobe’s Firefly family of generative AI technologies coming to Photoshop, you’ll be able to let the AI fill in a selected part of the image with what it deems most appropriate – for example, replacing cracks in a road with smooth pavement. You can also specify the imagery you want with a text prompt, such as adding a double yellow line to the road.

Firefly in Photoshop can also enlarge an image, adding new backdrops beyond the frame based on what’s already in the frame or what you suggest with text. Want more sky and mountains in your landscape photo? A bigger crowd at the rock concert? Photoshop will oblige, without today’s difficulties in finding source material and patching it together.

The feature, called Generative Fill and slated to come out of beta testing in the second half of 2023, can be powerful. In Adobe’s live demo, the tool was often able to match a photo’s tones, blend AI-generated imagery in seamlessly, infer geometric detail from perspective even in highlights, and extrapolate the position of the sun from shadows and the haze of the sky.

Such technologies have emerged over the past year as Stable Diffusion, Midjourney, and OpenAI’s Dall-E have captured the imaginations of artists and creative professionals. Now it’s built directly into the software they’re likely already using, simplifying what can be a tedious editing process.

“It really puts the power and control of generative AI in the hands of the creator,” said Maria Yap, vice president of digital imaging at Adobe. “You can really have fun. You can explore some ideas. You can create ideas. You can create without ever necessarily getting into the deep tools of the product, very quickly.”

But you can’t sell anything yet. With Firefly technology, including what’s produced by Photoshop’s generative fill, “you may not use the output for commercial purposes,” Adobe’s Generative AI Beta Rules state.

Photoshop’s Flawed-But-Useful Firefly AI

During my tests, I encountered frequent problems, many of which probably stemmed from the limited range of the training imagery. When I tried to insert a fish on a bike into an image, Firefly only added the bike. I failed to get Firefly to add a kraken emerging from San Francisco Bay. A musk ox looked like a panda-moose hybrid.

Less fanciful subject matter presents problems, too. Generated text looks like the script of an alien race. Shadows, lighting, perspective and geometry weren’t always right.

People are tricky, too. Upon closer inspection, their faces were oddly distorted, and humans added to shots could be positioned too high in the frame or blended in unconvincingly.

Still, Firefly is remarkable for what it can accomplish, especially with landscape shots. I could add mountains, oceans, skies and hills to landscapes. A white delivery van in a night scene got an appropriately yellowish cast to match the sodium-vapor streetlights. If you don’t like the trio of results Firefly presents, you can click the “generate” button to get another batch.

Given the pace of AI developments, I expect Firefly in Photoshop to improve.

Retraining large AI models is difficult and expensive, requiring a data center full of costly hardware to process the data, which can take weeks for larger models. But Adobe plans relatively frequent updates to Firefly. “Expect [about] monthly updates for general improvements, and retraining every few months in all likelihood,” Adobe chief product officer Scott Belsky tweeted on Tuesday.

Automation of image manipulation

For years, “Photoshop” hasn’t referred just to Adobe’s software. It’s also a verb for photo manipulations like slimming models’ waistlines or concealing missile launch failures. AI tools are automating not only fun flights of fancy but also fake images like an alleged explosion at the Pentagon or a convincing photo of the Pope in a puffy jacket, to pick two recent examples.

With AI, expect much more subtle editing than the easily recognizable extra smoke that was digitally added to photos of an Israeli attack on Lebanon in 2006.

This is a reflection of the double-edged sword that is generative AI. The technology is undeniably useful in many situations, but it also blurs the line between what is true and what is merely plausible.

For its part, Adobe tries to limit the problems. It doesn’t allow prompts to create images of many political figures, and it blocks you for “security issues” if you try to create an image of black smoke in front of the White House. And its AI usage guidelines prohibit images involving violence, pornography, and “misleading, fraudulent, or deceptive content that could cause real-world harm,” among other categories. The guidelines warn: “We disable accounts that engage in misleading or harmful behavior.”

Firefly is also designed to ignore prompts asking for a particular artist’s style, the kind of request that has drawn serious complaints from artists unhappy to see their style of art reproduced by a data center. And it supports the Content Authenticity Initiative’s content credentials technology, which can be used to label an image as having been generated by AI.

Today, generative AI images created with Adobe’s Firefly website get content credentials by default, along with a visual watermark. When the Photoshop feature leaves beta testing and ships later this year, images will automatically include content credentials, Adobe said.

People trying to fake images can circumvent this technology. But in the long run, it will be part of how we all evaluate images, Adobe believes.

“Content credentials give people who want to trust a way to trust. It’s an open-source technology that allows anyone to attach metadata to their images to show they’ve created an image, when and where it was created, and what changes have been made along the way,” Adobe said. “Once it becomes standard for important news to come with content credentials, people will be skeptical when they see images that don’t.”
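As an illustration of what provenance metadata of that kind involves, here is a minimal, hypothetical sketch in Python using only the standard library. It is not the actual Content Credentials (C2PA) format, which embeds cryptographically signed manifests in the file itself; it only shows the sort of information such a record carries: a fingerprint of the image bytes, who created it, when, and which edits were applied. The file name and field names are invented for the example.

```python
# Hypothetical, simplified provenance record. This is NOT the real C2PA /
# Content Credentials format; it only illustrates the kind of metadata involved.
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(image_path, creator, edits):
    """Build a small provenance record for an image file."""
    with open(image_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()  # fingerprint of the file bytes
    return {
        "asset_sha256": digest,
        "creator": creator,
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "edits": edits,  # which tools, including generative AI, touched the image
    }

# "landscape.jpg" is an invented file name for the example.
record = make_provenance_record(
    "landscape.jpg",
    creator="Jane Photographer",
    edits=["crop", "generative fill: extend sky"],
)
print(json.dumps(record, indent=2))
```

The real system goes further, signing the record so that tampering can be detected, but the basic idea is the same: the provenance travels with the image rather than relying on viewers to guess.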

Generative AI for photos

Adobe’s Firefly family of generative AI tools started with a website that turns a text prompt like “modern chair made out of old tires” into an image. It’s added a few other options since, and Creative Cloud subscribers will also be able to try out a lite version of the Photoshop interface on the Firefly site.

When OpenAI’s Dall-E brought this technology to everyone who signed up for it in 2022, it helped move generative artificial intelligence from a technological curiosity into mainstream awareness. Now there’s a mix of excitement and anxiety, as even AI’s creators worry about what the technology will bring now and in the more distant future.

Generative AI is a relatively new form of artificial intelligence technology. AI models can be trained to recognize patterns in large amounts of data – in this case, tagged images from Adobe’s stock art business and other licensed sources – and then create new images based on this source data.

Generative AI leapt into mainstream awareness with language models used in tools like OpenAI’s ChatGPT chatbot, Google’s Gmail and Google Docs, and Microsoft’s Bing search engine. When it comes to generating images, Adobe uses an AI image-generation technique called diffusion, which is also behind OpenAI’s Dall-E, Stable Diffusion, Midjourney and Google’s Imagen.
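To give a rough sense of how diffusion works, here is a toy sketch in Python. It is not Adobe’s actual Firefly model, whose architecture isn’t public; it only illustrates the core loop, in which generation starts from pure noise and is repeatedly “denoised,” with a simple placeholder standing in for the large trained neural network that would normally predict what to remove at each step.

```python
# Toy sketch of the diffusion idea: start from pure noise and repeatedly
# "denoise" toward an image. The denoiser below is a stand-in; a real model
# like the one behind Firefly uses a large trained neural network instead.
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(noisy_image, step, total_steps):
    """Placeholder denoiser: nudge pixels toward a flat gray 'image'.

    A trained diffusion model would instead predict the noise to remove,
    guided by the text prompt and, for inpainting, the surrounding pixels.
    """
    target = np.full_like(noisy_image, 0.5)   # stand-in for the model's prediction
    blend = 1.0 / (total_steps - step)        # small corrections early, larger ones late
    return noisy_image + blend * (target - noisy_image)

# Generation: begin with random noise, then iteratively denoise it.
total_steps = 50
image = rng.normal(loc=0.5, scale=1.0, size=(64, 64, 3))
for step in range(total_steps):
    image = denoise_step(image, step, total_steps)

image = np.clip(image, 0.0, 1.0)  # clamp to a valid pixel range
print("pixel range after denoising:", image.min(), image.max())
```

In a real diffusion model, the placeholder is a network conditioned on the text prompt and, for Generative Fill, on the pixels surrounding the selection, which is what lets the generated patch blend into the rest of the photo.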

Adobe calls Firefly for Photoshop a “co-pilot” technology, positioning it as a creative aid, not a substitute for humans. Yap acknowledges that some creators fear being replaced by AI. Adobe prefers to see it as a technology capable of amplifying and accelerating the creative process, spreading creative tools to a wider population.

“I think the democratization we’ve gone through, and having more creativity, is a positive thing for all of us,” Yap said. “It’s the future of Photoshop.”

Editors’ note: CNET uses an AI engine to create personal finance explanations that are edited and verified by our editors. To learn more, see this post.



