Inequality and Racism in A.I.

Jonathan Logan

S&E Editor

 

Timnit Gebru, a highly respected artificial intelligence (A.I.) ethicist, finished her Ph.D. thesis and was promptly hired by Google as part of its campaign to increase algorithmic scrutiny. Google, along with many Big Tech companies, has long pushed inclusive narratives. However, in December of 2020, Gebru was abruptly fired from her A.I. ethics research position. The company cited a paper in which she took issue with its minority hiring practices and language models, both of which, she argued, lead to discriminatory biases. Many critics and fellow researchers cite this incident as further evidence that Big Tech companies do not truly care for underrepresented peoples, and that the hiring of researchers like Gebru is merely a front.

Gebru became a S.T.E.M. celebrity when she posted a well-written opinion piece on Facebook in response to exclusionary practices she witnessed while attending a conference in Barcelona, Spain. It reads: “I’m not worried about machines taking over the world. I’m worried about groupthink, insularity and arrogance in the A.I. community. The people creating the technology are a big part of the system. If many are actively excluded from its creation, this technology will benefit the few while harming a great many.”

This post and the responses it received led Gebru to start a group called “Black in A.I.” A prominent ethics researcher already working at Google took Gebru under her wing there, and Gebru was eventually hired.

Beneath these public relations and front-page stories are actual examples of artificial intelligence drawing conclusions or carrying out its tasks with abhorrent racist results. For example, Google Photos, one of the company’s many apps, can sort through pictures you upload to the service. If you have 50 pictures of flowers in your upload folder, Google Photos will sort those 50 flower pictures into a folder labeled “flowers.” In 2015, a Brooklyn, N.Y. resident used the service to sort some photos a friend had sent their way. There was an entire folder labeled “gorillas.” This person opened the folder only to find 80 pictures of a Black friend.

Neural networks analyze huge amounts of data (such as the pictures you might upload to Google Photos) and determine what those photos depict based on training data. That training data is how the neural network learns to classify a picture of a flower as a flower. However, machines, especially artificial intelligence systems, are a direct reflection of the humans who created them.
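The failure mode described above can be illustrated with a toy sketch (the feature vectors and labels here are entirely hypothetical, not Google's actual system): a classifier can only assign labels that appear in its training data, so anything outside that data is forced into the nearest category its creators happened to include.

```python
# Toy 1-nearest-neighbor classifier. Each training example is a pair of
# (feature vector, label). Real image classifiers are far more complex,
# but share this property: the output label set is fixed by the training data.

def nearest_label(sample, training_data):
    """Return the label of the training example closest to `sample`."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(training_data, key=lambda pair: squared_distance(sample, pair[0]))
    return closest[1]

# Hypothetical training set that only covers two categories.
training_data = [
    ((0.9, 0.1), "flower"),
    ((0.8, 0.2), "flower"),
    ((0.1, 0.9), "gorilla"),
    ((0.2, 0.8), "gorilla"),
]

# A sample the model was never trained to recognize still gets forced into
# one of the known labels -- there is no "unknown" option unless the
# designers build one in.
print(nearest_label((0.3, 0.7), training_data))  # prints "gorilla"
```

Because the designers never included the right category (or a way to say “I don’t know”), the model confidently produces a wrong, and here offensive, answer. Choices made by the people assembling the training data are baked directly into the system’s behavior.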

Scientists and engineers frequently fail to communicate the complexities of the work they do. However, when it comes to blatantly racist technology, we cannot just accept “technical difficulties” as the reason. We have to hold scientists and engineers to a higher level of accountability on these issues, and even more so when the issues come to light. The groupthink that the mostly white, male workforces at Google and other tech companies engage in is directly reflected in their work. It was for confronting this that Timnit Gebru was fired by Google.

Ethics matter, especially in a field driven by scientific advancements. The cases of Gebru and Google Photos are merely drops in an ocean of misconduct by artificial intelligence experts and the companies that develop the technology. The bias that permeates artificial intelligence systems and the undercurrent of exclusion that runs through tech companies must be addressed. These are not isolated events, nor are they confined to Silicon Valley. We cannot write off inequity as technical difficulty or allow the developers of artificial intelligence to suppress the likes of Timnit Gebru.