
Adobe launches major Firefly update featuring new image generation models and more

Adobe Firefly

Adobe Firefly is a suite of generative artificial intelligence (AI) models designed to assist creators. Last February, Adobe made its Firefly Video Model available for anyone to try through a public beta. Now, the company has announced the latest release of Firefly, bringing a host of updates to its image and video models, plus brand-new capabilities and a redesigned web app.

This new release aims to unify AI-powered tools for image, video, audio, and vector generation in one platform. The star of the show for image generation is the debut of Firefly Image Model 4 and the even more advanced Image Model 4 Ultra, both designed for creative professionals who want a higher level of control and accuracy. Building on the previous Image Model 3, which was already adept at interpreting prompts, Model 4 offers rapid ideation and efficient image generation for everyday creative needs. Adobe says it is well suited to quick visuals like simple illustrations or icons, and covers about 90% of typical requirements quickly and affordably.

For projects demanding more detail and realism, Image Model 4 Ultra is positioned as the go-to model. It is intended for highly complex needs, excelling at rendering photorealistic scenes, human portraits, and small groups of people with precision and clarity.

Firefly Image Model 4 in action

Users also gain enhanced controls for fine-tuning text-to-image prompts, with options like aesthetic filters, specific styles, and composition matching. According to Adobe:

The latest release sets a new standard for visual content generation, with Firefly Image Model 4 delivering unmatched definition and realism for high-resolution images, while the Firefly Video Model enables dynamic, commercially safe video creation.
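To give a rough sense of how such prompt controls might be driven programmatically, here is a minimal sketch of a text-to-image request in the style of Adobe's Firefly Services API. The endpoint path, header names, and payload fields (including the style preset and strength values) are assumptions for illustration only and should be checked against Adobe's current API documentation.

```python
# Minimal sketch: a text-to-image request with style controls via a REST call.
# Endpoint, headers, and field names are assumptions modelled on Adobe's
# Firefly Services API and may differ from the shipping API.
import requests

API_KEY = "your-client-id"          # hypothetical credentials
ACCESS_TOKEN = "your-oauth-token"   # obtained via Adobe's OAuth flow

payload = {
    "prompt": "flat-design icon of a paper plane, pastel colours",
    "numVariations": 2,
    "size": {"width": 2048, "height": 2048},
    # Aesthetic/style controls mentioned in the article (names assumed)
    "style": {"presets": ["vector_look"], "strength": 60},
}

response = requests.post(
    "https://firefly-api.adobe.io/v3/images/generate",  # assumed endpoint
    headers={
        "x-api-key": API_KEY,
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=60,
)
response.raise_for_status()

# Each output is expected to carry a presigned URL to the rendered image.
for image in response.json().get("outputs", []):
    print(image.get("image", {}).get("url"))
```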

The Firefly Video Model, which only recently became available for public testing as mentioned above, has now officially moved out of beta. The model remains commercially safe and focuses on generating video clips from text or image prompts. Adobe says it has made significant improvements over the beta version, particularly in photorealism, text rendering, and transition effects. It can create videos up to five seconds long at 1080p resolution, supports various aspect ratios including 16:9, 9:16, and 1:1, and offers camera controls. Image-to-video generations are also said to retain more detail from the original image. Users on the Firefly Premium plan now get unlimited access to the Video Model.
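For the video side, the sketch below shows what a request body covering the parameters mentioned above (clip length, 1080p output, aspect ratio, and a camera-control hint) might look like. The field names here are assumptions, not Adobe's documented schema.

```python
# Sketch of an image-to-video request body illustrating the parameters above.
# All field names are assumptions for illustration purposes.
import json

video_request = {
    "prompt": "slow dolly-in on a lighthouse at dusk, gentle waves",
    "image": {"source": {"url": "https://example.com/reference.jpg"}},  # image-to-video input
    "size": {"width": 1920, "height": 1080},   # 1080p, 16:9; 9:16 and 1:1 also supported
    "videoSettings": {
        "durationSeconds": 5,                  # current maximum clip length
        "cameraMotion": "camera dolly in",     # camera-control hint (assumed name)
    },
}

print(json.dumps(video_request, indent=2))
```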

A brand new addition is the Text to Vector module. This feature lets users generate fully editable vector graphics, like icons and patterns, simply by typing out descriptions. According to Adobe, this can speed up design workflows for logos, social media graphics, or custom brand patterns.

Text to Vector module

On-the-go creation is also getting a boost, as Adobe announced that a new Firefly mobile app is coming soon to both iOS and Android devices. The company has also brought Firefly Boards, previously known as Project Concept, to the Firefly web app. This is described as a multiplayer canvas for developing and exploring creative ideas visually.

Adobe Firefly

Perhaps one of the most interesting announcements is Adobe's decision to integrate non-Adobe AI models directly into the Creative Cloud ecosystem, starting with Firefly. While Adobe continues to push its own Firefly models as commercially safe and IP-friendly for final production use, it says this move is a response to community feedback and aims to give users more choice and flexibility during the concept phase.

Users will now be able to select and generate content using models from partners, including Google's Imagen 3 and Veo 2, OpenAI's GPT image generation, and Black Forest Labs' Flux 1.1 Pro. Adobe is also working to bring in models from fal.ai, Runway, Pika, Luma, and Ideogram soon.

The goal is to let users easily compare outputs from different models, each with its own distinct aesthetic, to find the best fit for their project needs. Adobe says switching between models will be seamless, and it will always be transparent about which model was used. Content Credentials will also be attached to all AI-generated content, indicating which model created or edited it.

These new capabilities are tied together by tighter integration across Creative Cloud applications like Photoshop Web and Adobe Express.
