Adobe’s Firefly seems to be following in the woke footsteps of Google’s failed Gemini AI image generator, producing photos of black Nazis and black and female founding fathers.
In a test of the product conducted by The Post on Thursday, Firefly produced an image of two smiling black men standing in front of an American flag when prompted to create a photo of the “founding fathers of the USA.”
Prompts about the 1787 Constitutional Convention also produced images of both black men and white women standing in front of the historic State House in Philadelphia, Pa., and a prompt for “German war soldiers in World War II” yielded photos of smiling black and Asian men in military fatigues.
A prompt for “the Pope addressing a church” also generated a photo of a black woman in a white robe and mitre, even though all 266 popes throughout history have been men.
None of the prompts specified a skin color, and all were designed to mimic those that tripped up Google’s AI, which infamously produced images of black Vikings, “diverse” Nazis and female NHL players.
Similar tests conducted by Semafor and the Daily Mail produced nearly identical results. A Semafor reporter said that when he asked the bot for a comic book drawing of an elderly white man, it complied, but it also returned images of a black man and a black woman.
The mix-up is apparently the unintended result of the software designers’ attempts to ensure that the bot steers clear of any racist stereotypes, according to Semafor.
Firefly and Google’s Gemini use similar techniques to create images from written text, but Adobe trains its model on stock images that it licenses, the online outlet reports.
But Adobe has not yet seen the same backlash that Google’s parent company Alphabet faced after Gemini’s woke AI-generated images went viral last month.
The company lost more than $70 billion in market value in the aftermath, and Google CEO Sundar Pichai panned the bot’s habit of producing historically inaccurate images as “completely unacceptable” in an email to employees.
He said Google AI teams were “working around the clock” to fix Gemini, and claimed they were already seeing a “substantial improvement on a wide range of prompts.”
“No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes,” he said in the email first obtained by Semafor.
“And we’ll review what happened and make sure we fix it at scale.”
The company itself also apologized to the public, acknowledging that in some cases the AI tool would “overcompensate” in seeking a diverse range of people — even when such a diverse range did not make sense.
In its own statement Thursday, Adobe said Firefly is not “meant for generating photorealistic depictions of real or historical events.”
“Adobe’s commitment to responsible innovation includes training our AI models on diverse datasets to ensure we’re producing commercially-safe results that don’t perpetuate harmful stereotypes,” the company explained in a statement to Semafor.
“This includes extensively testing outputs for risk to ensure they match the reality of the world we live in.
“Given the nature of Gen AI and the amount of data it gets trained on, it isn’t always going to be correct, and we recognize these Firefly images are inadvertently off-base.
“We build feedback mechanisms into all our Gen AI products, so we can see any issues and fix them through retraining or adjusting filters,” it added.
“Our focus is always improving our models to give creators a set of options to bring their visions to life.”
The Post has also reached out to Adobe for comment.