This article originally appeared in Yale Engineering Magazine.
Shortly after arriving at Yale, Theodore Kim was invited to give a guest lecture on the history of computer-generated imagery (CGI) for a Film and Media Studies class.
“While I was assembling the materials, a pattern became clear to me,” he said: computer graphics technology is biased toward the features of white people. He met with the course’s professor, John MacKay, who confirmed that the pattern was real, and deep-seated in film history.
“He introduced me to the book ‘White’ by Richard Dyer, which described how similar biases pervaded film technology in the analog era,” said Kim, associate professor of computer science. “From there, it became clear that there was a whole body of scholarship on this topic, but its coverage of the digital age is still ongoing, especially with movie CGI.”
In recent decades, the technology of computer graphics has made remarkable progress. However, when it comes to matters of race, and the means to depict characters of different ethnic backgrounds, the field remains very much in the past. Kim, who co-leads the Yale Computer Graphics Group, is among those who are trying to change that. He’s seen the issue from the perspectives of both the industry and academia. Before coming to Yale, he was a senior research scientist at Pixar, where his work can be seen in such movies as Cars 3, Coco, Incredibles 2, and Toy Story 4.
From the ground up, computer graphics technology has been developed with the notion that the skin and hair of white people are the default when it comes to depicting humans. For instance, articles in computer graphics journals often include only computer-generated images of white people when discussing skin rendering, even when the topic is broadly claimed as “humans.” And many of the lighting techniques used in computer graphics are based on guidelines for film lighting developed before the 1940s — long before the modern computer — and specifically designed for white skin.
It’s a problem that severely limits what computer graphics artists can do, and how wide an audience they can reach.
“We’re supposed to be the leaders in storytelling,” said Kim. “There are lots of stories out there and we haven’t told a bunch of them, so let’s go tell these stories.”
Thanks in part to the efforts of Kim and others in the field, there’s more awareness about the issue. Because racial bias is so deeply baked into the technology, though, there’s no quick fix. For instance, early in the development of models for human features, computer graphics turned to the medical literature for guidance. Despite having the imprimatur of “hard science,” Kim notes, it turned out that much of the literature was made with the same biases, with Caucasian skin and hair being treated as the standard.
“We thought we were doing the right thing by going to the medical literature, but instead we inherited all the same things,” Kim said. “Everybody needs to be more careful about this stuff and think a lot harder about what we’re doing. We’re trying to develop technology that we claim is for all of humanity.”
The first step is getting a substantive discussion going in the community.
“At the very least, we’re locating people who actually care about it,” he said. “From there, it’s a community-building exercise. There are people who care about it, but we need to form a community.”
A big step toward that goal happened after Kim published an article about the issue in Scientific American magazine in 2020. From that article:
Today’s moviemaking technology has been built to tell white stories, because researchers working at the intersection of art and science have allowed white flesh and hair to insidiously become the only form of humanity considered worthy of in-depth scientific inquiry. Going forward, we need to ask whose stories this technology is furthering. What cases have been treated as “normal,” and which are “special?” How many humans reside in those cases, and why?
That article got the attention of many others in the field with similar concerns. Kim and an “all-star cast” of co-authors that includes fellow Yale computer science professors Julie Dorsey and Holly Rushmeier submitted an extended abstract to SIGGRAPH 2021, a prestigious conference for computer graphics and interactive techniques hosted by the Association for Computing Machinery.
“You never know what’s going to happen when issues are this controversial or hot-button,” Kim said. “And what happened was we got seven reviews, which is unusual. Five were extremely positive, one was neutral. And one was virulently negative, and in fact contained coded racist messages, and this person forced it to get rejected.”
But Kim was invited to be the opening speaker for the event’s Diversity, Equity, and Inclusion Summit to give his talk, “Anti-Racist Graphics Research.” He and Rushmeier also led a town hall-style gathering, otherwise known as a Birds of a Feather, titled “Countering Racial Bias in Computer Graphics Requires Structural Change.” The goal was to get others interested in joining them in submitting a broad range of extended abstracts for SIGGRAPH 2022. The rejection of the previous submission, Kim said, made it clear that it was a “numbers game.”
In his talk, Kim discussed a common lighting technique in computer graphics known as subsurface scattering, which creates a glowing effect. It adds realism to white skin, but is much less important in darker tones. While there are ways to add pigment to the default white skin to make darker skin, details are lost in the process. The technique is even codified in elaborate mathematical equations, creating the sense that rigorous science is behind it.
“We carved out the piece of physics that’s most important to white skin,” he said. “This is not all skin.”
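The physics Kim describes can be seen in a back-of-the-envelope calculation. In classical diffusion models of subsurface scattering, melanin raises the skin’s absorption coefficient, which shrinks the distance light travels beneath the surface before dying out, and with it the soft “glow.” The sketch below uses the standard effective transport coefficient from diffusion theory; the material coefficients are illustrative assumptions, not measured skin values:

```python
import math

def transport_coefficient(sigma_a, sigma_s):
    """Effective transport coefficient from classical diffusion theory.

    sigma_a: absorption coefficient (1/mm); melanin raises this sharply.
    sigma_s: scattering coefficient (1/mm).
    """
    return math.sqrt(3.0 * sigma_a * (sigma_a + sigma_s))

def glow_radius_mm(sigma_a, sigma_s):
    """Rough radius over which subsurface light spreads before fading."""
    return 1.0 / transport_coefficient(sigma_a, sigma_s)

# Illustrative (assumed) coefficients, not measured data:
light = glow_radius_mm(sigma_a=0.03, sigma_s=1.0)  # low absorption
dark = glow_radius_mm(sigma_a=0.5, sigma_s=1.0)    # higher melanin absorption

print(round(light, 2), "mm vs", round(dark, 2), "mm")
```

Even with these rough numbers, the scattering radius shrinks several-fold as absorption rises, which is why simply tinting a model tuned for low-absorption skin loses the detail that matters for darker tones.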
The notion of bias built into technology can be particularly distressing to people in the field who are used to thinking that “math is math.”
“That’s what attracted many of us to research to begin with,” he said. “We get to look at these clean, neutral problems all day and don’t get all tangled up in the ugly politics of the real world.”
Raqi Syed, one of Kim’s co-authors, said she noticed the problem while working on a project in 2018, and “trying to make a character look like me.”
“I became aware that if I wanted to tell stories that reflect my experience and use the tools that I understand from working in visual effects, then that’s going to be really challenging, because these tools aren’t designed to do that,” she said.
A.M. Darke, another co-author of the paper, encountered the results of anti-Black bias in graphics technology while creating a virtual reality space called “In Passing,” a 3D media project about how people navigate public spaces. When developing the avatars for Black characters, she found only a very limited range of hairstyles available. This prompted Darke to create the Open Source Afro Hair Library, which gives users a much wider range of hairstyles for their characters. When Darke tweeted about an award for the library, the post went viral, a sign that this issue resonates well outside the computer graphics community.
“The response was really positive because this was something that had already been understood tacitly in a non-specialist community,” said Darke, an assistant professor at the University of California, Santa Cruz, in the department of Performance, Play and Design.
Spreading that awareness to the community of specialists is the next important step.
“The way we solve these issues is collectively, by opening up a dialogue,” Darke said. “The aim of what we did at SIGGRAPH was to encourage others in this community to write and research and go down this line of inquiry, so that this knowledge and expertise can be made available, and so this community can be amplified and heard.”