Google’s Unethical Development of Facial Recognition Technology
Since its founding in 1998, Google's mission has been “to organize the world’s information and make it universally accessible and useful” (Bock, Work Rules!). This mission has propelled Google to become one of the largest and most influential technology companies in the world. Google's impact is so profound that the term "to Google" has become part of everyday language. With innovations like Gmail, which counts roughly one billion active users, and Google Maps, which serves over a billion users monthly, the algorithms underpinning Google's widely used services face few serious competitors.
However, the same algorithms that drive Google's success are not without flaws. Google’s algorithms have repeatedly produced racist results across its products, including search, mapping, image recognition, and facial recognition technology. Google’s search algorithm, for example, ranks results using signals such as keyword frequency and user interaction. Because rankings are tuned in part to what users already click, biased engagement patterns feed back into future results: the system does not merely reflect societal biases, it amplifies them.
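To see how engagement-driven ranking can amplify bias, consider a deliberately simplified scorer. This is a hypothetical sketch, not Google’s actual algorithm (which uses hundreds of signals); the names and weights here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    text: str
    clicks: int       # historical clicks this result received for the query
    impressions: int  # times this result was shown for the query

def score(result: Result, query: str) -> float:
    """Toy relevance score: keyword frequency plus an engagement boost."""
    terms = query.lower().split()
    words = result.text.lower().split()
    keyword_freq = sum(words.count(t) for t in terms) / max(len(words), 1)
    click_rate = result.clicks / max(result.impressions, 1)
    # The engagement term dominates: whatever users already click rises,
    # whether or not it is fair, accurate, or representative.
    return keyword_freq + 2.0 * click_rate

def rank(results: list[Result], query: str) -> list[Result]:
    return sorted(results, key=lambda r: score(r, query), reverse=True)
```

Under a scorer like this, a stereotyped result that attracts clicks keeps climbing, which is one mechanism by which societal bias becomes algorithmic output.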
Instances of Google's algorithms producing racist outcomes are well-documented. For example, searches for "three Black teenagers" once yielded mugshots, while "three white teenagers" returned stock photos of happy youths. Similarly, searches for "Black girls" resulted in pornography, while searches for "white girls" did not. During President Barack Obama’s tenure, Google Maps redirected searches for racist terms to the White House, highlighting the algorithm’s vulnerability to manipulation. Moreover, Google Photos' image recognition once categorized photos of Black people as "gorillas," prompting Google to remove the gorilla label entirely—a solution that avoided addressing the underlying issue.
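The reported fix is worth making concrete: it filters the classifier’s output rather than correcting the classifier. Below is a minimal sketch of what such a patch amounts to, with the interface invented for illustration; the blocked terms follow The Verge’s reporting on which labels Google suppressed.

```python
# Labels The Verge reported Google suppressed in Photos after the incident.
BLOCKED_LABELS = {"gorilla", "chimp", "chimpanzee", "monkey"}

def filter_labels(predictions: dict[str, float]) -> dict[str, float]:
    """Drop blocked labels from a {label: confidence} prediction map.

    This hides the symptom only: the underlying model still makes the
    same misclassification; users simply never see the label.
    """
    return {label: conf for label, conf in predictions.items()
            if label.lower() not in BLOCKED_LABELS}
```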
Google's development of facial recognition technology further exemplifies ethical lapses. Through Google Photos, users could label faces, allowing the AI to recognize and tag individuals in future uploads. As Google expanded into devices like Nest security cameras and Pixel phones, the need for a robust facial recognition algorithm became critical. However, Google's methods for gathering the necessary data were deeply problematic.
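To make the tagging mechanism concrete: systems of this kind typically map each face to a numeric embedding and assign the user-applied label of the nearest labeled face. The sketch below assumes a pretrained embed() model, a stand-in for what the real system would provide. A model trained on unrepresentative data yields unreliable embeddings for the faces it has rarely seen, which is precisely why Google sought scans of darker-skinned faces.

```python
import math

def embed(face_image) -> list[float]:
    """Stand-in for a pretrained model that maps a face crop to a vector
    in which faces of the same person land close together."""
    raise NotImplementedError("replace with a real face-embedding model")

def distance(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def tag_face(face_image, labeled: dict[str, list[float]],
             threshold: float = 0.6) -> str | None:
    """Return the closest user-applied label, or None if none is near enough."""
    vec = embed(face_image)
    best_label, best_dist = None, float("inf")
    for label, ref in labeled.items():
        d = distance(vec, ref)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= threshold else None
```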
In 2019, to improve the Pixel 4’s facial recognition capabilities, particularly for Black and darker-skinned users, contract workers operating on Google's behalf solicited facial scans in exchange for $5 gift cards. The workers were reportedly instructed to target homeless people and college students, groups considered less likely to scrutinize the process or the consent forms, and participants were often misled about the purpose of the collection, with some led to believe the scans were for a different application, such as Snapchat.
This approach raises serious ethical concerns. Google exploited vulnerable populations, offering minimal compensation for valuable personal data, and its lack of transparency and respect for participants shows a troubling disregard for dignity and consent. Such practices extend a long pattern of devaluing and exploiting Black people, who have historically been marginalized and discriminated against.
Ethically, Google's actions are indefensible. From a Kantian perspective, the company's deceptive practices violate the categorical imperative, which demands treating individuals as ends in themselves, never merely as means to an end. Nor could the practice be universalized: if every company collected data through deception, the trust that makes informed consent meaningful would erode.
A utilitarian might counter that the benefits of more accurate facial recognition for the many outweigh the harm to the few. That calculus, however, discounts the profound harm done to those exploited and the broader cost of normalizing deceptive data collection.
To atone for these practices and reconcile with affected communities, Google should take several steps:
Delete Unethically Gathered Data: Removing data collected under false pretenses respects participants' autonomy and rights.
Compensate Participants Fairly: Providing adequate compensation acknowledges the value of the data and the wrongs committed.
Issue a Public Apology: A formal apology is a necessary step toward accountability and rebuilding trust.
Implement Ethical Guidelines: Establishing and enforcing robust ethical standards for data collection will prevent future abuses.
Advocate for Industry Standards: Google should lead in promoting ethical practices across the tech industry.
While Google's mission is to "organize the world’s information and make it universally accessible and useful," it should equally strive to do so ethically. By innovating in ethical data practices, Google can set a precedent for the tech industry, ensuring technology development respects and protects individual rights and dignity.
References
Bock, Laszlo. Work Rules!: Insights from Inside Google That Will Transform How You Live and Lead. Twelve, 2015, p. 33.
ShuttleCloud. “The Most Popular Email Providers in the U.S.A.” Email Migration Blog, 11 Aug. 2016, blog.shuttlecloud.com/the-most-popular-email-providers-in-the-u-s-a/.
“Google Maps.” Wikipedia, Wikimedia Foundation, 23 June 2020, en.wikipedia.org/wiki/Google_Maps.
Google. “How Search Algorithms Work.” Google Search, Google, 2020, www.google.com/search/howsearchworks/algorithms/.
Noble, Safiya Umoja. “Google's Algorithm: History of Racism Against Black Women.” Time, 26 Mar. 2018, time.com/5209144/google-search-engine-algorithm-bias-racism/.
Griffin, Andrew. “Google Has Apologised for One of the Most Racist Maps Ever Made.” The Independent, 21 May 2015, www.independent.co.uk/life-style/gadgets-and-tech/news/nigga-house-racist-search-term-forces-google-to-apologise-for-taking-users-to-white-house-10266388.html.
Vincent, James. “Google ‘Fixed’ Its Racist Algorithm by Removing Gorillas from Its Image-Labeling Tech.” The Verge, 12 Jan. 2018, www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai.
Bohn, Dieter. “Google's ‘Field Research’ Offered People $5 to Scan Their Faces for the Pixel 4.” The Verge, 29 July 2019, www.theverge.com/2019/7/29/8934804/google-pixel-4-face-scanning-data-collection.
Otis, Ginger Adams, and Nancy Dillon. “Google Using Dubious Tactics to Target People with ‘Darker Skin’ in Facial Recognition Project: Sources.” New York Daily News, 2 Oct. 2019, www.nydailynews.com/news/national/ny-google-darker-skin-tones-facial-recognition-pixel-20191002-5vxpgowknffnvbmy5eg7epsf34-story.html.
Nicas, Jack. “Atlanta Asks Google Whether It Targeted Black Homeless People.” The New York Times, 4 Oct. 2019, www.nytimes.com/2019/10/04/technology/google-facial-recognition-atlanta-homeless.html.