TA Garduno

Preventing Tech-Fueled Political Violence

The recent report "Preventing Tech-Fueled Political Violence: What Online Platforms Can Do to Ensure They Do Not Contribute to Election-Related Violence" offers critical and timely insight into the urgent need for online platforms to adopt stringent measures so that their services are not exploited to incite political violence during elections.

The report highlights the alarming trend of far-right extremist militias organizing on social media platforms like Facebook in anticipation of the 2024 U.S. presidential election, and it underscores how important it is for platforms to proactively address and mitigate threats to the peaceful conduct of elections and the orderly transfer of power. The recommendations, developed by a working group of experts, offer practical steps platforms can take to strengthen their role in safeguarding democratic processes globally.

Key Recommendations

  1. Prepare for Threats and Violence

    • Robust Standards: Platforms must develop comprehensive threat assessment and crisis planning standards, involving multidisciplinary expertise and transparent engagement with external stakeholders.

    • Scenario Planning: Engaging in detailed scenario planning and crisis training is essential to prepare for potential violence.

  2. Develop and Enforce Policies

    • Content Moderation: Clear and enforceable content moderation policies should be implemented year-round to address election integrity and prevent incitement to violence.

    • Uniform Enforcement: High-value users, including politicians, should not receive special treatment and should be subject to the same rules as all users.

  3. Resource Adequately

    • Scaling Up Teams: Platforms should increase resources for teams focused on election integrity, content moderation, and countering violent extremism, ensuring rapid response capabilities.

    • Transparent Investment: Public transparency regarding the level of investment in safety measures is crucial.

  4. Transparency on Content Moderation Decisions

    • Clear Communication: Platforms must explain content moderation decisions clearly and promptly, especially during election periods, to maintain public trust.

    • Crisis Communication Strategy: A robust crisis communication strategy should be in place to handle high-stakes situations effectively.

  5. Collaborate with Researchers, Civil Society, and Government

    • Data Access: Maximizing data access for independent research during election periods can support timely analysis and mitigation of false claims.

    • Counter False Claims: Platforms should proactively counter misinformation about their collaborations with researchers and civil society groups.

  6. Develop Industry Standards

    • Threat Assessment Capabilities: Establishing industry-wide standards for threat assessment and policy enforcement can help protect democratic processes.

    • Ongoing Monitoring: Continuous monitoring of threats to democratic processes and political violence is necessary beyond election periods alone.

The report emphasizes the critical role online platforms play in maintaining the integrity of elections and preventing political violence. By adopting these recommendations, platforms can contribute significantly to a safer and more democratic digital landscape.

As we move forward, it is imperative for all stakeholders, including technology companies, researchers, and civil society organizations, to collaborate and ensure that the digital tools we create and use do not undermine the very foundations of democracy.

TA Garduno

Enhancing Fairness and Accuracy in Tenant Screening: Navigating the Challenges and Opportunities of AI-Driven Processes

Tenant screening processes have increasingly relied on artificial intelligence (AI) algorithms to review and evaluate potential renters. These algorithms analyze various data points, including criminal background checks, eviction histories, credit scores, and rental payment history. As highlighted in "How Automated Background Checks Freeze Out Renters" by Lauren Kirchner and Matthew Goldstein in the New York Times, the rapid expansion of the tenant screening industry—now valued at $1 billion—has been fueled by the growing data economy and the rise of American rentership since the 2008 financial crisis. According to a 2022 Consumer Financial Protection Bureau report, around 68% of renters pay application fees for tenant screening, which significantly influences the U.S. rental market.

The rise of corporate landlords and centralized property management systems has led to a reliance on AI-driven tenant screening services that prioritize efficiency over accuracy. However, this technological shift raises significant concerns about the fairness and accuracy of the reports produced. AI-driven background checks often contain errors, such as incorrect criminal records, which can unjustly disqualify potential renters from securing housing. These errors lead to significant life disruptions, forcing individuals into substandard living conditions or homelessness. Moreover, these AI systems disproportionately affect marginalized groups, particularly people of color, by exacerbating existing societal biases and reinforcing racial disparities in housing access. The lack of stringent regulation overseeing these AI tools compounds these issues.

Despite longstanding laws such as the Fair Housing Act of 1968 and the Fair Credit Reporting Act of 1970, and oversight by agencies including the Federal Trade Commission (FTC), the Consumer Financial Protection Bureau (CFPB), and the Department of Housing and Urban Development (HUD), discrimination remains rampant in the housing market. These laws and agencies do not adequately protect consumers from the consequences of errors on tenant reports caused by AI data-gathering practices. Furthermore, the tenant screening industry lacks standardized procedures to ensure data accuracy.

The use of AI algorithms in housing necessitates regulatory laws by state and local governments to promote transparency and accuracy in tenant screening processes and safeguard tenants' rights to correct any incorrect data. This paper explores best practices from relevant literature to mitigate the policy problems associated with AI-driven tenant screening, aiming to improve accuracy, fairness, and transparency.

Tenant screening processes commonly rely on AI algorithms to evaluate potential renters, using various data points, including criminal background checks, eviction histories, credit scores, and rental payment history. The growing reliance on automated tenant screening services, particularly with the rise of large corporate landlords and centralized property management systems, has pushed the industry to prioritize efficiency over accuracy. The use of AI in tenant screening warrants scrutiny for several reasons:

  • Errors in Reports: AI-driven background checks often contain errors, including incorrect criminal records, which can unjustly disqualify potential renters from securing housing. This leads to significant life disruptions and can force individuals into substandard living conditions or homelessness.

  • Disproportionate Impact on Marginalized Groups: These systems disproportionately affect marginalized groups, particularly people of color. AI algorithms can worsen existing societal biases, reflecting and reinforcing racial disparities in housing access. For example, Abby Boshart notes that "Black women were more likely to have an eviction filing that ultimately resulted in dismissal," yet these records still appear on their rental histories. Kathryn A. Sabbeth elaborates that even dismissed cases remain visible on rental records, further disadvantaging affected individuals.

Cleo Bluthenthal’s report, "The Disproportionate Burden of Eviction on Black Women," highlights that 68% of Black mothers are the sole breadwinners of their households. The impact of evictions and housing instability on children is profound, leading to academic decline, emotional trauma, and long-term health implications.

The lack of stringent regulation overseeing the accuracy and use of these AI tools exacerbates these issues. Current laws do not sufficiently protect consumers from the consequences of mistaken reports, and the industry lacks standardized procedures to ensure information accuracy.

Rebecca Burns, in her article “Artificial Intelligence Is Driving Discrimination in the Housing Market,” quotes Hannah Holloway of the TechEquity Collaborative: “We don’t know what their data sources are, or how often they’re scrubbing that information and updating it.” And while some characteristics may be fairly objective, Holloway noted, “If I’m a tenant, I have no idea how they’re using that information to come up with a prediction about whether I’ll damage the property or miss a payment.”

The problem of AI in tenant screening is substantial, growing, and critical due to its broad impacts on fairness, equality, and access to housing. The lack of regulation and increasing reliance on this technology make it an urgent area for policy intervention and reform.

To improve accuracy, fairness, and transparency in tenant screening processes, several best practices are recommended:

  1. Implement Stricter Regulations: Mandate accurate, multi-field data matching in tenant screening reports and guarantee tenants the right to dispute and correct errors; a minimal sketch of how loose matching fails follows this list.

  2. Increase Transparency: Enhance tenants' rights to access and contest their data. Cities like San Francisco and Newark have implemented "ban the box" regulations that restrict certain types of data in tenant screening, reducing discriminatory impacts.

  3. Standardize Screening Guidelines: Develop and enforce standardized guidelines for tenant screening that prioritize data relevance and minimize reliance on potentially discriminatory information.

  4. Enhance Oversight: Conduct regular audits of tenant screening agencies and hold them accountable for errors. Increased oversight can ensure adherence to fairness standards.

  5. Support Tenant Education: Implement programs to help tenants understand their rights and navigate disputes about tenant screening.

  6. Employ Advanced Data Analytics: Use advanced data analytics to improve the accuracy of tenant screening, ensuring that only relevant and verified data influence rental decisions.
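
To make the data-matching problem concrete, here is a minimal, hypothetical sketch (invented names and records, standard-library Python only) of how name-only "wildcard" matching can attach a stranger's court record to an applicant, and how requiring a second identifier such as date of birth filters those false positives out. It illustrates the failure mode described above; it is not a depiction of any particular screening vendor's system.

```python
# Hypothetical illustration: why loose, name-only matching produces false positives.
# All names and records below are invented for the example.
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Crude string similarity between two full names (0.0-1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

applicant = {"name": "Maria L. Hernandez", "dob": "1990-04-12"}

court_records = [
    {"name": "Maria Hernandez",    "dob": "1973-11-02", "record": "eviction filing (dismissed)"},
    {"name": "Mario L. Hernandez", "dob": "1988-07-30", "record": "misdemeanor"},
    {"name": "Maria L. Hernandez", "dob": "1990-04-12", "record": None},  # the actual applicant
]

# Loose matching: name similarity alone, no second identifier.
loose_hits = [r for r in court_records
              if name_similarity(applicant["name"], r["name"]) > 0.8 and r["record"]]

# Stricter matching: the record must also carry the applicant's date of birth.
strict_hits = [r for r in loose_hits if r["dob"] == applicant["dob"]]

print("Name-only matching flags:", [r["record"] for r in loose_hits])  # two strangers' records
print("With a DOB check:", [r["record"] for r in strict_hits])         # empty: neither record is the applicant's
```

Real screening pipelines are far more elaborate, but the underlying point holds: when regulations require multi-field identity verification rather than name similarity alone, the cheapest matching shortcuts stop producing reports that misattribute other people's histories.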

Implementing these best practices involves legislative action, industry standards, and community support. Together, these strategies can address current deficiencies in tenant screening processes, making them fairer and more transparent, ultimately improving housing access for all potential renters.

The integration of AI in tenant screening processes presents significant policy challenges due to inaccuracies and biases that can unjustly affect potential renters, especially marginalized groups. Current regulations do not sufficiently address the complexities of AI-driven data inaccuracies in tenant screening, perpetuating inequality. Implementing best practices in tenant screening not only improves fairness and accuracy but also stabilizes rental markets, benefiting both landlords and tenants.

The negative impacts of AI algorithms on housing and data protection require urgent regulatory intervention by state and local governments. These interventions should enhance transparency, ensure accuracy in tenant screening processes, and safeguard tenants' rights to correct mistaken data. By adopting these best practices and strategic steps, policymakers can create a fairer housing system, ensuring that the benefits of technology in the rental market are realized without sacrificing fairness or accuracy.

References

Consumer Financial Protection Bureau. (2022). CFPB reports highlight problems with tenant background checks. Retrieved from https://www.consumerfinance.gov/about-us/newsroom/cfpb-reports-highlight-problems-with-tenant-background-checks

TechEquity. (2022). Tech, Bias, and housing initiative: Tenant screening. Retrieved from https://techequitycollaborative.org/2022/02/23/tech-bias-and-housing-initiative-tenant-screening/

Kirchner, L., & Goldstein, M. (2020). How automated background checks freeze out renters. Retrieved from https://www.nytimes.com/2020/05/28/business/renters-background-checks.html

Sabbeth, K. A. (2021). Erasing the “Scarlet e” of eviction records. Retrieved from https://theappeal.org/the-lab/report/erasing-the-scarlet-e-of-eviction-records/

Boshart, A., Tajo, M., & Choi, J. H. (2022). How tenant screening services disproportionately exclude renters of color from housing. Retrieved from https://housingmatters.urban.org/articles/how-tenant-screening-services-disproportionately-exclude-renters-color-housing

Bluthenthal, C. (2023). The disproportionate burden of eviction on Black Women. Retrieved from https://www.americanprogress.org/article/the-disproportionate-burden-of-eviction-on-black-women/

Wu, C. C., Nelson, A., Kuehnhoff, A., Cohn, C., Sharpe, S., & Cabañez, N. (2023). Digital denials. Retrieved from https://www.nclc.org/wp-content/uploads/2023/09/202309_Report_Digital-Denials.pdf

Burns, R. (2023). Artificial Intelligence is driving discrimination in the housing market. Retrieved from https://jacobin.com/2023/06/artificial-intelligence-corporate-landlords-tenants-screening-crime-racism


TA Garduno

Privacy and Ethical Considerations for the Use of Evolving Technology within Government

As technology evolves, we continuously integrate it into various aspects of our lives, from personal assistants like Siri, Alexa, and Google Home to drones, AI, and virtual reality. These technologies offer convenience and efficiency, making them attractive investments across industries. Today, it's rare to find someone without a smartphone, while traditional home phones have become almost obsolete. However, the rapid evolution of technology often outpaces our laws and ethical considerations, raising significant concerns about its use.

At all levels of government, from federal to local, technology has been integrated to streamline operations, improve efficiency, and reduce costs. Yet, many of these technologies were initially developed for the private sector, leading to potential issues and abuses when adapted for public use. For instance, in cities like New Orleans, city-wide surveillance aims to enhance public safety. But what happens when a private company managing this surveillance implements an algorithm to predict crime? This raises critical questions about privacy, access to technology, and the data being collected. When federal agencies use private databases like Salesforce for controversial activities, it prompts us to consider who oversees these technologies and establishes ethical standards for their use.

Consent and transparency are also major concerns. Do visitors to New Orleans consent to being surveilled? Does it matter if they are accused of a crime based on predictive algorithms? These scenarios, which may seem like science fiction, are becoming our reality.

Currently, there is a lack of comprehensive laws governing the ethical use of technology by government agencies. This new and uncharted territory requires ongoing discussions about privacy, ethics, and the limitations of private companies contracted by the government. In our pursuit of technological advancement, we must prioritize the well-being of individuals and communities.

Current Issues and Considerations

Several privacy issues related to technology and government use require our attention:

  • Search Engine Privacy: Should the government have the authority to subpoena your search history from Google or Yahoo? What about during a lawsuit?

  • NSA Surveillance: Should the NSA be allowed to track your cell phone location for two years? If so, should this information be accessible to the public?

  • Dynamic Pricing: Should personal information be used to determine the prices you see online? Some companies already use AI for this, raising concerns about discrimination based on race or gender.

  • Drone Surveillance: Should companies like Google, which use drones for deliveries, be subject to the Freedom of Information Act? With extensive mapping capabilities, shouldn't the public have access to recorded data?

  • DNA Collection: Should police be able to collect DNA from discarded trash and compare it against public genealogy DNA databases to solve crimes? The case of the Golden State Killer highlights the ethical dilemmas of such methods.

These examples illustrate the pressing need for clear laws and ethical guidelines. Government agencies might lack the expertise to fully understand the ethical implications of the technologies they deploy, leading to potential abuses and erosion of freedoms.

The Need for Ethical Conversations

Ethical discussions about technology are rare within tech companies. Salesforce was a pioneer in hiring an ethics officer, and Google followed with a focus on AI ethics. However, ethical considerations should extend beyond AI. Many current ethical issues arise from private companies not factoring in ethics during technology development. Implicit biases of technologists can seep into their creations, leading to unintended negative consequences.

As technology moves into the public sector, we must assume the best intentions while preparing for the worst outcomes. For example, when New Orleans adopted surveillance technology, the intention was to support police work. However, the contracted company, Palantir, used the technology to predict crimes, raising privacy concerns. Both the government and private companies should consider the broader implications of their technological initiatives.

Actions to Take

To address these issues, we need to:

  • Increase Awareness: Educate ourselves about how technology infringes on our rights and advocate for change.

  • Demand Transparency: Government agencies should be transparent about the technologies they use and how they are implemented.

  • Foster Community Dialogue: Encourage community discussions about the impact of technology on our lives, similar to public meetings about parks, schools, and voting.

  • Consider Ethical Implications: Always evaluate technology from a perspective of potential discrimination or misuse.

By involving the community in these conversations, we can ensure that technology serves us, not the other way around. We must collectively address privacy and ethical concerns to create a future where technology enhances our lives without compromising our values and freedoms.


References

Cheney, Catherine. “Taking the Conversation on Ethics and Technology beyond Silicon Valley.” Devex, 17 Jan. 2019, www.devex.com/news/taking-the-conversation-on-ethics-and-technology-beyond-silicon-valley-93943.

Chappell, Bill. “FAA Certifies Google's Wing Drone Delivery Company To Operate As An Airline.” NPR, 23 Apr. 2019, www.npr.org/2019/04/23/716360818/faa-certifies-googles-wing-drone-delivery-company-to-operate-as-an-airline.

Ghaffary, Shirin. “Marc Benioff Defends Salesforce's Contract with Customs and Border Protection.” Vox, 19 Nov. 2018, www.vox.com/2018/11/18/18097398/salesforce-contract-customs-border-protection-marc-benioff-immigration.

Green, Jemma. “As Facebook And Government Erode Our Online Privacy And Trust Decentralized Technology Is Responding.” Forbes, 20 Dec. 2018, www.forbes.com/sites/jemmagreen/2018/12/20/as-facebook-and-government-erode-our-online-privacy-and-trust-decentralized-technology-is-responding/#3168649a4ecc.

Kelly, Makena. “Immigration Nonprofit Refuses $250,000 Salesforce Donation over Its Contract with US Government.” The Verge, 19 July 2018, www.theverge.com/2018/7/19/17590240/immigration-non-profit-raices-refuses-salesforce-donation.

Komando, Kim. “How to Protect Your Privacy on Google.” USA Today, 17 May 2013, www.usatoday.com/story/tech/columnist/komando/2013/05/17/google-maps-duckduckgo-web-history/2155759/.

“Privacy.” Electronic Frontier Foundation, www.eff.org/issues/privacy.

“Tech Ethics: Putting People Ahead of Technology.” YouTube, 27 Apr. 2018, youtu.be/JBC57Pqhg5o.

Valentino-DeVries, Jennifer, et al. “Websites Vary Prices, Deals Based on Users' Information.” The Wall Street Journal, 24 Dec. 2012, www.wsj.com/articles/SB10001424127887323777204578189391813881534.

“Websites That Charge Different Customers Different Prices.” FindLaw, supreme.findlaw.com/legal-commentary/websites-that-charge-different-customers-different-prices.html.

Winston, Ali. “Palantir Has Secretly Been Using New Orleans to Test Its Predictive Policing Technology.” The Verge, 27 Feb. 2018.

TA Garduno

Google’s Unethical Development of Facial Recognition Technology

Since its founding in 1998, Google has pursued the mission “to organize the world’s information and make it universally accessible and useful” (Bock, Work Rules!). This mission has propelled Google to become one of the largest and most influential technology companies globally; its impact is so profound that the phrase "to Google" has become part of everyday language. Innovations like Gmail, which boasts one billion active users, and Google Maps, with over a billion users monthly, all rest on Google's algorithms, and few competitors can match their reach.

However, the same algorithms that drive Google's success are not without flaws. Google’s algorithms have repeatedly produced racist results across various applications, including search, mapping, image searches, and facial recognition technology. Google’s search algorithm, for example, ranks results based on factors like keyword frequency and user interaction. While this system aims to deliver relevant results, it often reflects and amplifies societal biases.
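
The amplification mechanism is easier to see with a deliberately simplified model. The sketch below is a toy, assumption-laden simulation (it is not Google's actual ranking system, and the numbers are invented): a ranker that feeds user engagement back into its scores lets a small initial skew in click behavior compound, so two results of equal underlying quality end up ranked very differently.

```python
# Toy feedback-loop ranker (illustrative only, not any real search engine's algorithm).
# Results that get clicked rise in the ranking, earn more exposure, and so get
# clicked even more: a small initial skew in user behavior compounds over time.
import random

random.seed(0)

scores = {"result_A": 1.0, "result_B": 1.0}        # equal starting quality
click_bias = {"result_A": 0.55, "result_B": 0.45}  # slight skew in user clicks

for _ in range(200):
    # The top-scored result is displayed more prominently each round.
    top = max(scores, key=scores.get)
    for doc in scores:
        exposure = 1.0 if doc == top else 0.5      # top slot gets twice the views
        if random.random() < click_bias[doc] * exposure:
            scores[doc] += 1                       # engagement feeds back into the score

print(scores)  # the slightly favored result ends up ranked far above its equal-quality rival
```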

Instances of Google's algorithms producing racist outcomes are well-documented. For example, searches for "three Black teenagers" once yielded mugshots, while "three white teenagers" returned stock photos of happy youths. Similarly, searches for "Black girls" resulted in pornography, while searches for "white girls" did not. During President Barack Obama’s tenure, Google Maps redirected searches for racist terms to the White House, highlighting the algorithm’s vulnerability to manipulation. Moreover, Google Photos' image recognition once categorized photos of Black people as "gorillas," prompting Google to remove the gorilla label entirely—a solution that avoided addressing the underlying issue.

Google's development of facial recognition technology further exemplifies ethical lapses. Through Google Photos, users could label faces, allowing the AI to recognize and tag individuals in future uploads. As Google expanded into devices like Nest security cameras and Pixel phones, the need for a robust facial recognition algorithm became critical. However, Google's methods for gathering the necessary data were deeply problematic.
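
As a rough sketch of the general technique (a generic illustration of how face tagging commonly works, not Google's proprietary pipeline), such a system reduces each face to an embedding vector and assigns a new face the label of the closest face the user has already named. Its accuracy therefore depends directly on whose faces are well represented in the labeled and training data.

```python
# Generic sketch of embedding-based face tagging (illustrative assumption, not Google's system).
import numpy as np

def embed(face_pixels: np.ndarray) -> np.ndarray:
    """Stand-in for a learned face-embedding model; real systems use a neural network."""
    v = face_pixels.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

rng = np.random.default_rng(42)
# Gallery of faces the user has already labeled (random arrays stand in for photos).
gallery = {name: embed(rng.random((8, 8))) for name in ("Alice", "Bea", "Chris")}

def tag(new_face: np.ndarray, threshold: float = 0.9) -> str:
    """Tag a new face with the closest labeled face, or 'unknown' if nothing is close enough."""
    q = embed(new_face)
    best_name, best_sim = max(((n, float(q @ g)) for n, g in gallery.items()),
                              key=lambda kv: kv[1])
    return best_name if best_sim >= threshold else "unknown"

print(tag(rng.random((8, 8))))  # usually 'unknown' here, since these stand-in "faces" are random noise
```

A model trained and evaluated mostly on lighter-skinned faces produces poorer embeddings, and therefore poorer matches, for darker-skinned faces, which is the accuracy gap the data-collection effort described next was meant to close.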

In 2019, to improve the Pixel 4’s facial recognition capabilities, particularly for Black and darker-skinned people, Google employees solicited facial scans in exchange for $5 gift cards. Workers were instructed to target homeless people and college students, groups less likely to scrutinize the process or consent forms. Participants were often misled about the purpose of the data collection, believing it was for a different application like Snapchat.

This approach raises significant ethical concerns. Google exploited vulnerable populations, providing minimal compensation for valuable personal data. The lack of transparency and respect for participants underscores a troubling disregard for dignity and consent. Such practices continue a pattern of devaluing and exploiting Black individuals, historically marginalized and discriminated against.

Ethically, Google's actions are indefensible. From a Kantian perspective, the company's deceptive practices violate the categorical imperative, which demands treating individuals as ends in themselves, not merely as means to an end. If all companies adopted Google's approach, societal trust and ethical standards would erode.

From a utilitarian perspective, one might argue that the benefits of advanced facial recognition technology for many outweigh the harm to a few. However, this perspective overlooks the profound impact on those exploited and the broader implications of normalizing unethical practices.

To atone and reconcile with affected communities, Google should take several steps:

  1. Delete Unethically Gathered Data: Removing data collected under false pretenses respects participants' autonomy and rights.

  2. Compensate Participants Fairly: Providing adequate compensation acknowledges the value of the data and the wrongs committed.

  3. Issue a Public Apology: A formal apology is a necessary step toward accountability and rebuilding trust.

  4. Implement Ethical Guidelines: Establishing and enforcing robust ethical standards for data collection will prevent future abuses.

  5. Advocate for Industry Standards: Google should lead in promoting ethical practices across the tech industry.

While Google's mission is to "organize the world’s information and make it universally accessible and useful," it should equally strive to do so ethically. By innovating in ethical data practices, Google can set a precedent for the tech industry, ensuring technology development respects and protects individual rights and dignity.

References

Bock, Laszlo. Work Rules!: Insights from Inside Google That Will Transform How You Live and Lead. Twelve, 2015, p. 33.

ShuttleCloud. “The Most Popular Email Providers in the U.S.A.” Email Migration Blog, 11 Aug. 2016, blog.shuttlecloud.com/the-most-popular-email-providers-in-the-u-s-a/.

“Google Maps.” Wikipedia, Wikimedia Foundation, 23 June 2020, en.wikipedia.org/wiki/Google_Maps.

Google. “How Search Algorithms Work.” Google Search, Google, 2020, www.google.com/search/howsearchworks/algorithms/.

Noble, Safiya Umoja. “Google's Algorithm: History of Racism Against Black Women.” Time, 26 Mar. 2018, time.com/5209144/google-search-engine-algorithm-bias-racism/.

Griffin, Andrew. “Google Has Apologised for One of the Most Racist Maps Ever Made.” The Independent, Independent Digital News and Media, 21 May 2015, www.independent.co.uk/life-style/gadgets-and-tech/news/nigga-house-racist-search-term-forces-google-to-apologise-for-taking-users-to-white-house-10266388.html.

Vincent, James. “Google 'Fixed' Its Racist Algorithm by Removing Gorillas from Its Image-Labeling Tech.” The Verge, The Verge, 12 Jan. 2018, www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai.

Bohn, Dieter. “Google's 'Field Research' Offered People $5 to Scan Their Faces for the Pixel 4.” The Verge, The Verge, 29 July 2019, www.theverge.com/2019/7/29/8934804/google-pixel-4-face-scanning-data-collection.

Otis, Ginger Adams, and Nancy Dillon. “Google Using Dubious Tactics to Target People with 'Darker Skin' in Facial Recognition Project: Sources.” New York Daily News, 2 Oct. 2019, www.nydailynews.com/news/national/ny-google-darker-skin-tones-facial-recognition-pixel-20191002-5vxpgowknffnvbmy5eg7epsf34-story.html.

Nicas, Jack. “Atlanta Asks Google Whether It Targeted Black Homeless People.” The New York Times, The New York Times, 4 Oct. 2019, www.nytimes.com/2019/10/04/technology/google-facial-recognition-atlanta-homeless.html.
