
Adobe Firefly: What are Adobe's new AI features?

By Ryan Kane · June 2, 2023
Hero image with the Adobe logo

My experience working with generative image AI tools has been mixed. Stable Diffusion, Midjourney, and DALL·E 2 are incredible tools for design and rapid ideation. But each one has drawbacks: DALL·E 2's output isn't very photorealistic, Midjourney is hampered by its clunky Discord-based user interface, and Stable Diffusion has a bit of a learning curve.

But there's a new player in town. Adobe recently launched Firefly, and it has a unique set of advantages: an intuitive user interface, a designer-focused set of features, and a massive, built-in audience. Firefly's launch is a mainstream moment for AI image generation. But here's the crucial question: is it any good? 

I took it for a spin, and in this article, I'll walk you through what makes Adobe Firefly different from other AI tools, highlight its capabilities, and help you decide if you should consider using Firefly over the slew of other art generators out there.

What makes Adobe Firefly different?

The leading generative AI tools aren't keen to reveal precisely how they train their models. But it's an open secret that, as Midjourney founder David Holz said in a September 2022 interview, most of them built their datasets with "a big scrape of the internet" without seeking consent from copyright holders. The result? Artists are suing over alleged copyright infringement.

Adobe hopes to avoid that fate. Unlike most AI tools, Firefly was trained using only licensed images and public domain content. This makes it a brand-safe option for corporations and commercial design work.

Adobe also tags the images it generates with Content Authenticity Initiative metadata, which lets people know the image was generated with AI and provides attribution for content used in the image's creation. Even with these precautions, Adobe is only allowing Firefly-generated images to be used for non-commercial purposes during the tool's beta.

Apart from its copyright-friendly (and arguably more ethical) approach, Adobe's key advantage is its massive existing foothold in the design industry. Over 90% of creative professionals use Adobe Photoshop. And indeed, in the first month after Firefly's release in March 2023, users generated more than 70 million images. Meanwhile, Adobe is in talks with Google to integrate Firefly directly into Google Bard, Google's AI chatbot.

Adobe is rolling out Firefly to millions of Photoshop users and will eventually offer generative AI features in popular products like Illustrator, Premiere Pro, InDesign, and Adobe Express. With features like vector recoloring, text to vector, text to brush, and sketch to image, Adobe is tailoring Firefly to the needs of professional designers—while keeping it accessible to everyone.

What can Adobe Firefly do?

Text to image


Text-to-image is the bread and butter of any generative AI image tool—and Firefly nails it. It's also got advantages over other tools, particularly when it comes to the user interface.

Here's an example of what Adobe Firefly can do with the prompt "a girl staring at her reflection through a shop window."

The image generated with Adobe Firefly
Generated with Adobe Firefly. Prompt: "a girl staring at her reflection through a shop window"

This composition style is popular in the world of premium stock photos. Adobe Firefly, which pulled much of its training data from Adobe's stock library, executes it at a level comparable to Midjourney or Stable Diffusion—and, I'd argue, much better than DALL·E 2.

Like other AI tools, Firefly can sometimes generate odd image artifacts. In particular, it struggles with creating realistic human hands. Otherwise, it competes nicely on the text-to-image front and doesn't suffer from anything like DALL·E's tendency to output soulless-looking human features.

Even more than its output quality, though, Firefly's advantage is its user interface. It has the best UI I've seen in this space—hands down.

When you first click the text-to-image button, you're presented with a "wall of inspiration" that shows you what's possible with different prompts and styles, from hyper-realistic dogs playing guitar to interior design renders.

The wall of inspiration in Adobe Firefly

After clicking on an image, you can tweak the prompt and settings until you get what you want. If you're struggling to get an output you like, just go back to the wall of examples for visual inspiration. Or, if you already know what you want, you can type in the prompt directly.

My problem with the existing crop of generative AI image tools is that they tend to be overly technical. You have to start from scratch with a prompt that describes not only the content of your image, but also the technical parameters of your output: elements like style, aspect ratio, content type, lighting, composition, and color and tone.

Refreshingly, Firefly has built-in settings for all of this. You can use your prompt to describe the content of your image and refine your settings later. This workflow is more efficient and has a gentler learning curve than anything else I've seen.

The Adobe Firefly interface, with specific fields for each part of an AI art prompt

Generative fill

Generative fill was rolled out directly to Adobe Photoshop beta users in May 2023, and it's also available in the Firefly web beta. As a 20-year veteran of Photoshop, I've always wished for a feature like this: a way to highlight something I don't want and cleanly replace it with something I do want.

To test this feature, I uploaded a photo of myself looking out over the Blue Ridge Mountains in North Carolina. Then, I selected the entire background and instructed Adobe Firefly to change it to "watching the sunset over the ocean."

The before and after photos
Original photo edited with Adobe Firefly. Prompt: "watching the sunset over the ocean"

Cool, right? But if your experience is like mine, you won't get results like this on your first try. Here's the secret to using this tool in its current beta form: highlight broad swaths of your image, then give Firefly as much creative liberty as you can.

Theoretically, you should be able to use generative fill to superimpose new objects onto your photo. But when I asked Firefly to generate objects like eagles, blimps, or airplanes in the sky, it created comically fake-looking outputs. Generative fill also struggles mightily with anything related to humans—except removing them from your photo. That said, when you let it do what it's best at, the results are impressive, and it's a blast to play around with.

Text effects

Many of the features in Adobe Firefly's arsenal are highly specific to the needs of designers rather than casual users. Text effects is one of them. This typography tool lets you turn text into stunning artwork using prompts.

I entered the word "jungle" and, for my prompt, instructed Firefly to fill the text with "jungle vine and animals."

The word jungle in all lowercase with jungle imagery inside the letters

You can also use text effects to add textures. Here's what the prompt "dripping honey" looks like.

The word Honey on a yellow background, with honey dripping down each letter

For designers, this is an immense time-saver. It also opens the door to a new world of creative branding possibilities. The only drawbacks: as of now, there are only a few fonts available, and the length of your text is limited to 20 characters.

Vector recoloring

Vector recoloring is another Adobe Firefly feature tailor-made for designers. Vector files can be scaled to any size without losing quality, so they're often used for commercial print work and are especially critical for large assets like billboards, signs, and vehicle wraps.

In the past, you might have used Adobe Illustrator to manually fill in the colors of a vector image. But with vector recoloring, you can experiment with different color combinations quickly.

Using one of Firefly's stock vector images, I used the recoloring prompt "80s style bold" and got a suitably Fresh Prince of Bel-Air-esque output.

The output from Ryan's recoloring prompt

Firefly vs. DALL·E 2, Midjourney, and Stable Diffusion

I ran a head-to-head comparison of Adobe Firefly against three other popular generative AI tools: DALL·E 2, Midjourney v5, and Stable Diffusion XL.

For each tool, I used the same prompt: "a photo of a smiling old man, hyperrealistic."

The four outputs

I'll let you make your own decisions here—since a lot of it is subjective—but I'd say Adobe did pretty well. There's often a certain "posed stock photo" quality to Firefly's default output, perhaps as a result of having been trained on Adobe's stock library rather than the broader datasets the other tools use. That said, generative art requires experimentation. I've seen each of the leading apps outdo one another in certain instances, so with more sophisticated prompting, Adobe Firefly can run with the big kids.

Firefly's roadmap for new features

Most of Adobe Firefly's upcoming features will cater to its core audience of professional designers, including:

  • Text to vector: Create editable vectors using text prompts.

  • Text to pattern: Generate patterns using text prompts.

  • Text to brush: Use text prompts to create custom brushes for Adobe tools like Photoshop.

  • Sketch to image: Turn your drawings into full-color images.

  • Text to template: Use text prompts to generate editable templates.

A screenshot from Adobe of upcoming features

Adobe is also working on some innovative generative AI features that will be useful to casual users and hobbyists:

  • Extend image: A kind of "reverse crop," this feature can turn your 1:1 aspect ratio image into a 16:9 widescreen image.

  • Personalized results: Influence Firefly's output by training it on your own images.

  • 3D to image: Generate images from 3D elements.

And Firefly isn't stopping with static images. Adobe will also incorporate Firefly-powered generative AI into its video, audio, animation, and motion graphics apps, making it possible to create multimedia faster than ever.

As I mentioned, Google Bard also plans to add Adobe Firefly's text-to-image capability later this year. Users will be able to type image prompts directly into Bard. Then, they'll be given the option to edit the image in Adobe Express.

Ready to experiment with Adobe Firefly?

Mark Zuckerberg once described his early approach at Facebook as "move fast and break things." That's the approach most generative AI tools are taking today—they'd rather ask for forgiveness than permission.

Adobe Firefly is taking the opposite approach. Rather than ignoring artists' worries about attribution and brands' concerns about copyright and liability, Adobe has designed a tool catered to the needs of both. Given the quality of Firefly's output, its ease of use, its ethical design, and the fact that its features are rolling out to Adobe's built-in audience of millions of users, it's a worthy addition to the generative AI space.

If you've already got an Adobe subscription, you'll see these features start to be incorporated into the tools you already use. Otherwise, sign up for the web-based beta to try Adobe Firefly for yourself.
