AI struggles with gender and race
Generative AI keeps messing up on important issues about diversity and representation — especially when it comes to love and sex.
According to one report from The Verge, Meta’s AI image generator repeatedly refused to generate images of an Asian man with a white woman as a couple. When it finally produced one of an Asian woman and a white man, the man was significantly older than the woman.
Meanwhile, Wired found that different AI image generators routinely depict LGBTQ individuals with purple hair. And when a prompt doesn’t specify ethnicity, these systems tend to default to showing white people.
Generative AI tools are no better than their underlying training data. If that data is biased — say, by overrepresenting queer people with highlighter-colored hair — the model will consistently reproduce the stereotype. It’s incumbent on the proprietors of these models both to improve their training data, by buying or accumulating more comprehensive datasets, and, more importantly, to tweak the outputs so they are more inclusive and less stereotypical. That requires extensive testing, and consultation with real people from diverse backgrounds about the harms of such representation failures.
But they need to be wary of overcorrection too: Google was recently condemned for generating Black and Asian Nazi soldiers as well as Native Americans in Viking garb. AI can’t understand the complexities of these things yet — but the humans in charge need to.
Midjourney quiets down politics
Everything is political for GZERO, but AI image generator Midjourney would rather avoid the drama. The company has begun blocking the creation of images featuring President Joe Biden and former President Donald Trump in the run-up to the US presidential election in November.
“I don’t really care about political speech,” said Midjourney CEO David Holz in an event with users last week. “That’s not the purpose of Midjourney. It’s not that interesting to me. That said, I also don’t want to spend all of my time trying to police political speech. So we’re going to have to put our foot down on it a bit.”
Holz’s statement comes just weeks after the Center for Countering Digital Hate issued a report showing it was able to use popular AI image generators to create election disinformation in 41% of its attempts. Midjourney performed worst of all the tools the group tested, with researchers able to generate such images 65% of the time.
Examples included images of Joe Biden sick in a hospital bed, Donald Trump in a jail cell, and a box of thrown-out ballots in a dumpster. GZERO tried to generate a simple image of Biden and Trump shaking hands and received an error message: “Sorry! Our AI moderator thinks this prompt is probably against our community standards.”
It seems Midjourney simply doesn’t want to be in the business of policing which political speech is acceptable and which isn’t — so it’s taking the easy way out and turning the nozzle off entirely. OpenAI’s tools have long been hesitant to wade into political waters, and Microsoft and Google have drawn sharp criticism for their failures around historical accuracy and offensive imagery. Why would Midjourney take that risk?
Smile! Say cheese for your new AI headshot
My headshots are woefully out of date. They’re from just before the pandemic — the last time I really went into an office. Luckily for me, technology has advanced to help me catch up: Enter the AI-generated headshot.
A bevy of new artificial intelligence tools now promise to generate professional-quality headshots — for a new job, a dating profile, etc. All you need is to upload clear photos of yourself, and the software spits out a series of realistic images for you.
Inspired by The Washington Post, I decided to try two services: Aragon.ai and Try It On. The functionality of the two websites was nearly identical: I uploaded some photos of myself that I thought were nice, paid a small fee ($35 for Aragon, $21 for Try It On), and waited about a half hour for each website to do its magic. When my photos were ready, I clicked and instantly laughed. It gave me full glam.
Both services gave me a lot of photos. I downloaded 100 from each — all fake versions of myself in different outfits (suits, cardigans, t-shirts) and against different backgrounds (office buildings, nature scenes, streetscapes). Some were moody noir shots, others casual pics primed for a dating app, and some very traditional and corporate — ready for a LinkedIn profile.
The quality was variable: Some were very good, some were not. My wife and I flipped through, with her commenting “That looks like you!” and “Oh my god that's terrible” and even “Oh, he's cute. I like him.” The software generally had a slimming effect on me, and gave me a squarer jawline and pronounced cheekbones. I was most impressed that it seemed to understand the contours of my hair.
Most of my headshots were still noticeably artificial — not quite at the level where we should worry about deception. Some of the “good” ones looked either heavily Photoshopped or like I’d gotten a lot of plastic surgery. There’s a term from art and science fiction, the “uncanny valley,” for the uniquely eerie sensation of seeing a face that’s almost, but not quite, human — like recognizing alien life. For now, my AI-generated headshots live squarely in the uncanny valley.
While there were a few headshots I could use, the technology has a ways to go to avoid alarming folks into thinking I’m a cyborg.