Generative AI keeps stumbling over diversity and representation, especially when it comes to love and sex.
According to one report from The Verge, Meta’s AI image generator repeatedly refused to generate images of an Asian man with a white woman as a couple. When it finally produced one of an Asian woman and a white man, the man was significantly older than the woman.
Meanwhile, Wired found that various AI image generators routinely depict LGBTQ people with purple hair. And when a prompt doesn't specify ethnicity, these systems tend to default to showing white people.
Generative AI tools are only as good as their underlying training data. If that data is biased, say, by overrepresenting queer people with highlighter-colored hair, the models will consistently depict them that way. It's incumbent on the proprietors of these models both to improve their training data, by buying or assembling more comprehensive datasets, and, more importantly, to tune the outputs so they're more inclusive and less stereotypical. That requires extensive testing, along with consultation with real people from diverse backgrounds about the harms of such representation failures.
But they need to be wary of overcorrection, too: Google was recently condemned after its model generated Black and Asian Nazi soldiers as well as Native Americans in Viking garb. AI can't yet grasp the complexities of these things, but the humans in charge need to.