Google on Tuesday introduced an imperceptible, permanent watermark for photos that identifies them as AI-generated, an effort to help curb the spread of false information.
The watermark is embedded in images made by Imagen, one of Google’s latest text-to-image generators, using a technology called SynthID. The AI-generated label survives editing, whether the image has filters applied or its colours changed.
The SynthID tool can also analyse incoming photos for the watermark to judge whether they were likely created by Imagen, reporting one of three levels of certainty: detected, not detected, or probably detected.
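Google has not published how the detector works internally, but conceptually it maps a confidence signal to one of those three labels. A minimal illustrative sketch in Python follows; the score, thresholds, and function name are entirely hypothetical and stand in for whatever Google's detector actually computes:

```python
# Hypothetical sketch of a three-level watermark verdict like SynthID's.
# The confidence score and thresholds are invented for illustration;
# Google has not disclosed its detector's internals.

def watermark_verdict(score: float) -> str:
    """Map a detector confidence score in [0, 1] to a three-level label."""
    if score >= 0.9:
        return "detected"            # strong evidence of a watermark
    if score >= 0.5:
        return "probably detected"   # suggestive but not conclusive
    return "not detected"            # no meaningful watermark signal

# Example usage with made-up scores:
for s in (0.95, 0.6, 0.1):
    print(s, "->", watermark_verdict(s))
```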
Google stated in a blog post on Tuesday: “While this technology isn’t perfect, our internal testing shows that it’s accurate against many common image manipulations.”
A test version of SynthID is now available to some customers of Vertex AI, Google’s generative-AI platform for developers. SynthID was built by Google’s DeepMind division in collaboration with Google Cloud, and the company says it will continue to evolve and may expand to other Google products or to third parties.
False photos and edited images
Tech companies are racing to find a reliable way to recognize and flag manipulated content as deepfakes and edited photos and videos become more convincing. In recent months, AI-generated images of Pope Francis wearing a puffer jacket and of former US President Donald Trump being detained were widely shared, the latter before he had actually been charged.
In June, Vera Jourova, vice president of the European Commission, urged signatories to the EU Code of Practice on Disinformation, a group that includes Google, Meta, Microsoft, and TikTok, to “put in place technology to recognize such content and clearly label this to users.”
With the introduction of SynthID, Google joins a growing list of Big Tech firms and startups looking for answers. Some of these businesses bear names like Truepic and Reality Defender, hinting at the stakes of the effort to safeguard our sense of what is real and what is fake.
Tracking the source of content
While Google has mostly pursued its own strategy, the Coalition for Content Provenance and Authenticity (C2PA), an Adobe-backed group, has been at the forefront of digital watermarking efforts.
In May, Google introduced a feature called About this image that lets users of its search engine see when an image was first indexed, where it may have first appeared, and where else it appears online.
The company also said that every AI-generated image produced by Google will carry markup in the original file to “give context” if the image is encountered on another website or platform.
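As an illustration of this metadata half of the approach (distinct from the pixel-level SynthID watermark), a file’s embedded markup can be inspected with standard tooling. A minimal sketch using the Pillow library; the marker key checked here is hypothetical, since Google has not specified the exact field names it writes:

```python
# Minimal sketch: inspect an image file's embedded metadata for an
# AI-provenance marker. Requires Pillow (pip install Pillow). The key
# name "ai_generated" is hypothetical; Google has not published the
# exact markup fields it embeds in generated files.
from PIL import Image

def inspect_provenance(path: str) -> None:
    with Image.open(path) as img:
        # img.info holds format-specific metadata (e.g. PNG text chunks)
        for key, value in img.info.items():
            print(f"{key}: {value!r}")
        if "ai_generated" in img.info:  # hypothetical marker key
            print("File declares itself AI-generated.")

inspect_provenance("example.png")  # placeholder path
```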
But it’s unclear whether such technical fixes can fully address the problem, as AI technology is developing faster than humans can keep up. Earlier this year, OpenAI, the creator of DALL-E and ChatGPT, acknowledged that its own attempt at detecting AI-generated writing, rather than visuals, is “imperfect.”