
OpenAI adds watermarks to AI-generated images created with DALL-E 3

A banana created with DALL-E 3

As AI tools become more sophisticated, tech companies are working to ensure that the public can distinguish computer-generated content from human creations. OpenAI just announced updates to its image generator, DALL-E, that aim to provide more transparency.

OpenAI has incorporated specifications from the Coalition for Content Provenance and Authenticity (C2PA) into DALL-E 3. All images generated with DALL-E 3, whether through ChatGPT or the OpenAI API, will now include a watermark identifying them as AI-generated.
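For context, generating such an image programmatically looks roughly like the sketch below, which uses the openai Python library. The prompt and output file name are placeholders, not OpenAI example code; the point is that the C2PA provenance data described here is embedded in the downloaded file itself.

```python
# Minimal sketch (hypothetical prompt and file name, not OpenAI's example code):
# generate an image with DALL-E 3 and save the returned file, which is where
# the C2PA metadata would be embedded. Assumes OPENAI_API_KEY is set.
import urllib.request

from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-3",
    prompt="a photorealistic banana on a white table",
    n=1,
    size="1024x1024",
)

# Download the generated image; the provenance metadata travels with the file.
image_url = response.data[0].url
urllib.request.urlretrieve(image_url, "dalle_output.png")
```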

The watermark will include details such as the date the image was created, along with the C2PA logo in the upper-left corner, making it clear to users whether an image was created by a human or by AI. OpenAI claims the watermark will not affect image quality or generation speed, but it can increase file sizes by 3-5 percent for images created via the API and by around 32 percent for images generated through ChatGPT.

DALL-E AI watermark

However, there are still ways for users to remove an image's AI provenance data. According to OpenAI, cropping or screenshotting a DALL-E output can strip this provenance data, and manipulating the image pixels can also tamper with the embedded watermark.

As OpenAI puts it: "Metadata like C2PA is not a silver bullet to address issues of provenance. It can easily be removed either accidentally or intentionally. For example, most social media platforms today remove metadata from uploaded images, and actions like taking a screenshot can also remove it. Therefore, an image lacking this metadata may or may not have been generated with ChatGPT or our API."
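To illustrate how fragile embedded metadata is, here is a minimal Python sketch using Pillow (assumed file names, not OpenAI tooling). Re-saving an image keeps only the few metadata fields Pillow recognizes, which is roughly what a screenshot or a re-encoding upload pipeline does to provenance data.

```python
# Minimal sketch (hypothetical file names): show that a plain re-encode discards
# embedded metadata, which is why screenshots and many upload pipelines break
# C2PA provenance.
from PIL import Image

original = Image.open("dalle_output.png")
# Pillow only exposes metadata blocks it understands; a C2PA manifest may not
# even appear here, and unrecognized chunks are silently dropped on save.
print("before:", sorted(original.info.keys()))

# Re-save the decoded pixel data; only the handful of fields Pillow recognizes
# (such as an ICC profile or DPI) survive the round trip.
original.save("reencoded.png")

reencoded = Image.open("reencoded.png")
print("after:", sorted(reencoded.info.keys()))  # typically empty or much smaller
```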

Microsoft is also applying the C2PA specification to Bing Image Creator output, saying, "AI-generated images created by Bing Image Creator now include an invisible, digital watermark that complies with the C2PA specification."

Meanwhile, Meta announced yesterday that it will begin labeling content uploaded to Facebook, Instagram, and Threads if it was created using AI. The move is part of an ongoing effort to develop industry standards for the transparent labeling of AI content.
