How can you tell if an image is AI-generated? Soon, there'll likely be a watermark.
- Telling whether digital content was produced by a human or by AI is a tough task these days.
- But seven Big Tech and AI companies just partnered with the White House to step up efforts to flag such content.
Chances are that you have either been fooled by — or at least done a double take at — AI-generated content in the last few months.
But the days of deception may be numbered. A coalition of tech giants and startups pledged Friday to watermark content produced by AI.
The group — made up of Big Tech pillars Google, Microsoft, Meta, and Amazon, as well as generative AI bastions OpenAI, Anthropic, and Inflection — made "voluntary commitments" to the Biden administration in an effort to make their products safer and tamp down on the technology's tendency to perpetuate biases and produce misinformation, according to a statement from the White House on Friday.
The pledge included commitments for the companies to develop "robust systems" that identify or watermark content produced by their AI tools. These identifiers or watermarks would indicate which AI service was used to generate the content while omitting any information that could be used to identify the original user.
Since OpenAI's release of ChatGPT last November, generative AI tools have dazzled users with their ability to conjure up text and images when prompted. But the emerging technology's power to produce cogent text and photorealistic images has already been used to disseminate false information.
In May, markets briefly dipped after a fake image of the Pentagon shrouded in smoke circulated on social media. The picture was never confirmed to have been created with AI, but it contained many of the unrealistic elements that occasionally crop up in AI-generated images, such as physical objects blending into one another.
A political ad released Tuesday in support of Florida Governor Ron DeSantis reportedly used AI to replicate Donald Trump's voice, producing a soundbite of remarks he never made. It came a month after another DeSantis ad featured an image of Trump hugging and kissing Anthony Fauci, the White House's former chief medical advisor, which experts told AFP was likely AI-generated.
A study published in June found that a majority of people were unable to tell whether a tweet was written by a human or by ChatGPT. Participants even found the ChatGPT-generated tweets more convincing than those written by humans.
When contacted for comment by Insider, several of the companies that made the agreement with the White House pointed to their recent statements on the partnership, many of which referenced upcoming collaborations that would allow them to follow through on the commitments. Meta and Inflection did not immediately respond to Insider's request for comment.
In a blog post on the agreement, Google directly referenced its ongoing efforts to integrate watermarking, as well as to develop its "About this image" tool, which would allow users to identify where an image originated online and whether it has been featured on fact-checking sites or in news publications.
Inflection, the studio behind the personal AI chatbot named "Pi," said in a blog post about the agreement that "the project of making truly safe and trustworthy AI is still only in its earliest phase."
"We see this as simply a springboard and catalyst for doing more," Inflection added.
The growing anxiety around how to spot fake content has incentivized many AI startups to develop tools that detect whether a piece of content was produced by a human or by AI. But in studies and trials, many of these programs have demonstrated bias or produced poor results.