‘Embarrassing and wrong’: Google admits it lost control of image-generating AI

Image Credits: Adobe Firefly generative AI / composite by TechCrunch

Google has apologized (or come very close to apologizing) for another embarrassing AI blunder this week, an image-generating model that injected diversity into pictures with a farcical disregard for historical context. While the underlying issue is perfectly understandable, Google blames the model for “becoming” oversensitive. But the model didn’t make itself, guys.

The AI system in question is Gemini, the company’s flagship conversational AI platform, which when asked calls out to a version of the Imagen 2 model to create images on demand.

Recently, however, people found that asking it to generate imagery of certain historical circumstances or people produced laughable results. For instance, the Founding Fathers, whom we know to have been white slave owners, were rendered as a multicultural group that included people of color.

This embarrassing and easily replicated issue was quickly lampooned by commentators online. It was also, predictably, roped into the ongoing debate about diversity, equity, and inclusion (currently at a reputational local minimum), and seized by pundits as evidence of the woke mind virus further penetrating the already liberal tech sector.

Image Credits: An image generated by Twitter user Patrick Ganley.

It’s DEI gone mad, shouted conspicuously concerned citizens. This is Biden’s America! Google is an “ideological echo chamber,” a stalking horse for the left! (The left, it must be said, was also suitably perturbed by this weird phenomenon.)

But as anyone with any familiarity with the tech could tell you, and as Google explains in its rather abject little apology-adjacent post today, this problem was the result of a quite reasonable workaround for systemic bias in training data.

Say you want to use Gemini to create a marketing campaign, and you ask it to generate 10 pictures of “a person walking a dog in a park.” Because you don’t specify the type of person, dog, or park, it’s dealer’s choice — the generative model will put out what it is most familiar with. And in many cases, that is a product not of reality, but of the training data, which can have all kinds of biases baked in.

What kinds of people, and for that matter dogs and parks, are most common in the thousands of relevant images the model has ingested? The fact is that white people are over-represented in a lot of these image collections (stock imagery, rights-free photography, etc.), and as a result the model will default to white people in a lot of cases if you don’t specify.

That’s just an artifact of the training data, but as Google points out, “because our users come from all over the world, we want it to work well for everyone. If you ask for a picture of football players, or someone walking a dog, you may want to receive a range of people. You probably don’t just want to only receive images of people of just one type of ethnicity (or any other characteristic).”

Illustration of a group of people recently laid off and holding boxes.
Imagine asking for an image like this — what if it was all one type of person? Bad outcome! Image Credits: Getty Images / victorikart

Nothing wrong with getting a picture of a white guy walking a golden retriever in a suburban park. But if you ask for 10, and they’re all white guys walking goldens in suburban parks? And you live in Morocco, where the people, dogs, and parks all look different? That’s simply not a desirable outcome. If someone doesn’t specify a characteristic, the model should opt for variety, not homogeneity, despite how its training data might bias it.

This is a common problem across all kinds of generative media. And there’s no simple solution. But in cases that are especially common, sensitive, or both, companies like Google, OpenAI, Anthropic, and so on invisibly include extra instructions for the model.

I can’t stress enough how commonplace this kind of implicit instruction is. The entire LLM ecosystem is built on implicit instructions — system prompts, as they are sometimes called, where things like “be concise,” “don’t swear,” and other guidelines are given to the model before every conversation. When you ask for a joke, you don’t get a racist joke — because despite the model having ingested thousands of them, it has also been trained, like most of us, not to tell those. This isn’t a secret agenda (though it could do with more transparency), it’s infrastructure.
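To make the mechanism concrete, here is a minimal sketch of how a system prompt frames every conversation before the user says a word. The message format mirrors the widely used chat-completion convention of role-tagged messages; the guideline text itself is illustrative, not any vendor's actual prompt.

```python
# A hidden system prompt is prepended to every conversation.
# The wording below is invented for illustration only.
SYSTEM_PROMPT = (
    "You are a helpful assistant. Be concise. Do not use profanity. "
    "Decline requests for offensive or discriminatory jokes."
)

def build_messages(user_input: str) -> list[dict]:
    """Prepend the invisible system prompt to the user's message."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

msgs = build_messages("Tell me a joke.")
```

The user only ever types the second message; the first one rides along silently on every request, which is exactly the "infrastructure" described above.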

Where Google’s model went wrong was that it failed to have implicit instructions for situations where historical context was important. So while a prompt like “a person walking a dog in a park” is improved by the silent addition of “the person is of a random gender and ethnicity” or whatever they put, “the U.S. Founding Fathers signing the Constitution” is definitely not improved by the same.
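The failure mode can be sketched in a few lines. This is a hypothetical illustration, not Google's implementation: the suffix wording and the keyword check are assumptions, and a real system would use something far more sophisticated than substring matching. The point is the difference between augmenting every prompt and augmenting only when context allows it.

```python
# Hypothetical silent prompt augmentation. The suffix text and the
# keyword list are invented for illustration, not Google's actual code.
DIVERSITY_SUFFIX = ", depicting people of a range of genders and ethnicities"

# Crude stand-in for detecting historically specific prompts.
HISTORICAL_HINTS = ("founding fathers", "medieval", "viking", "1800s")

def augment_naive(prompt: str) -> str:
    """What went wrong: the suffix is appended unconditionally."""
    return prompt + DIVERSITY_SUFFIX

def augment_guarded(prompt: str) -> str:
    """The missing check: skip augmentation when historical context matters."""
    if any(hint in prompt.lower() for hint in HISTORICAL_HINTS):
        return prompt
    return prompt + DIVERSITY_SUFFIX
```

Under this toy model, "a person walking a dog in a park" picks up the suffix while "the U.S. Founding Fathers signing the Constitution" passes through unchanged, which is the distinction Gemini's tuning failed to draw.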

As the Google SVP Prabhakar Raghavan put it:

First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive.

These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong.

I know how hard it is to say “sorry” sometimes, so I forgive Raghavan for stopping just short of it. More important is some interesting language in there: “The model became way more cautious than we intended.”

Now, how would a model “become” anything? It’s software. Someone — Google engineers in their thousands — built it, tested it, iterated on it. Someone wrote the implicit instructions that improved some answers and caused others to fail hilariously. When this one failed, if someone could have inspected the full prompt, they likely would have found the thing Google’s team did wrong.

Google blames the model for “becoming” something it wasn’t “intended” to be. But they made the model! It’s like they broke a glass, and rather than saying “we dropped it,” they say “it fell.” (I’ve done this.)

Mistakes by these models are inevitable, certainly. They hallucinate, they reflect biases, they behave in unexpected ways. But the responsibility for those mistakes does not belong to the models — it belongs to the people who made them. Today that’s Google. Tomorrow it’ll be OpenAI. The next day, and probably for a few months straight, it’ll be X.AI.

These companies have a strong interest in convincing you that AI is making its own mistakes. Don’t let them.
