Maybe you've seen images like these floating around social media this week: photos of people with lime-green boxes around their heads and funny, odd or in some cases super-offensive labels applied.
What's happening: They're from an interactive art project about AI image recognition that doubles as commentary on the social and political baggage built into AI systems.
Why it matters: This experiment — which will only be accessible for another week — shows one way that AI systems can end up delivering biased or racist results, which is a recurring problem in the field.
- The experiment scans uploaded photos for faces and sends them to an AI object-recognition program trained on ImageNet, the gold-standard dataset for such programs.
- The program matches the face with the closest label from WordNet, a project that started in the 1980s to map out word relationships throughout the English language, and applies it to the image. (A rough sketch of this pipeline follows below.)
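For the technically curious, here's a rough sketch of what a pipeline like that can look like. This is not the project's actual code: the face detector (OpenCV's bundled Haar cascade), the classifier (a Keras ResNet50 pretrained on ImageNet) and the file name are illustrative assumptions, and an off-the-shelf model covers ImageNet's 1,000 object classes rather than the person categories the project drew its labels from.

```python
# Illustrative sketch only (not ImageNet Roulette's code): find a face,
# classify the crop with an ImageNet-trained model, and report the label,
# which comes back paired with its WordNet synset ID.
import cv2
import numpy as np
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions)

def label_face(image_path: str) -> str:
    img = cv2.imread(image_path)  # BGR pixel array
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Step 1: detect a face (Haar cascade shipped with OpenCV).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "no face found"
    x, y, w, h = faces[0]
    face = img[y:y + h, x:x + w]

    # Step 2: run the face crop through a classifier trained on ImageNet.
    face = cv2.resize(face, (224, 224))
    face = cv2.cvtColor(face, cv2.COLOR_BGR2RGB).astype("float32")
    batch = preprocess_input(np.expand_dims(face, axis=0))
    preds = ResNet50(weights="imagenet").predict(batch)

    # Step 3: the top prediction is a WordNet synset ID plus its human-written label.
    wnid, label, score = decode_predictions(preds, top=1)[0][0]
    return f"{label} ({wnid}, {score:.0%})"

print(label_face("selfie.jpg"))  # "selfie.jpg" is a placeholder path
```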
"The point of the project is to show how a lot of things in machine learning that are conceived of as technical operations or mathematical models are actually deeply social and deeply political," says Trevor Paglen, the MacArthur-winning artist who co-developed the project with Kate Crawford of the AI Now Institute.
- The experiment and accompanying essay reveal the assumptions that go into building AI systems.
- Here, the system depends on judgment calls from the people who originally labeled the images — some straightforward, like "chair"; others completely unknowable from the outside, like "bisexual."
- From those image–label pairs, AI systems can learn to label new photos they've never seen before, as in the simplified training sketch below.
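Here's a similarly hedged, toy-scale sketch of that learning step, nothing like ImageNet's actual pipeline: the folder names, model and hyperparameters are made up, and each image is assumed to sit in a folder named for its human-chosen label.

```python
# Toy illustration of supervised learning from image-label pairs.
# Assumed folder layout: labeled_photos/chair/..., labeled_photos/lamp/..., etc.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "labeled_photos/",          # hypothetical directory of human-labeled images
    image_size=(224, 224),
    batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(len(train_ds.class_names), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# The trained model can now label photos it has never seen, but only with
# the categories (and the judgment calls) baked into its training data.
```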
But, but, but: This is an art project, not an academic takedown of ImageNet, which is mostly intended to detect objects rather than people. Some AI experts have criticized the demonstration for giving a false impression of the dataset.
This week, the team behind ImageNet responded to the project, which Paglen says is currently being accessed more than 1 million times per day.
- The ImageNet team says it's making changes to person-related image labels, in part by removing 600,000 potentially sensitive or offensive images — more than half of the images of people in the dataset.
Bonus: When Axios' Erica Pandey uploaded a photo of herself, the ImageNet experiment classified her as a "flibbertigibbet," which is disrespectful but a great word.