Meet CBI Faculty Host: Sauvik Das

Sauvik Das

Aug 16, 2023

Today, AI primarily benefits a few powerful actors–governments, financial institutions, and big tech–while its costs are primarily borne by "the people": the masses of individuals subject to ubiquitous, expansive, and impersonal surveillance. Ubiquitous surveillance is everywhere–not just on one's phone or web browser, but in the home, in the car, and on the street. Expansive surveillance goes beyond the shallow–not just clickstreams or which websites one visits, but inferences about one's politics, sexuality, driving habits, and influence on one's friends. Impersonal surveillance operates at scale–not just carried out by a specific human analyst on a person of interest, but dispassionately on all people at all times.

Facial recognition, for example, has been used by law enforcement in the U.S. to find the online presence of BLM protestors[1] and in China to track and control Uyghur Muslims.[2] Language models have been used to help online content moderators automatically detect hate speech and sift through toxic content,[3] but these systems disproportionately flag content by queer individuals[4] and have been used to censor LGBTQ content in places like Russia.[4] More recently, researchers have explored the use of machine learning to classify an individual as gay or straight based on their dating profile images[5] and brain activity as recorded by an EEG,[6] ushering in a new age of AI-facilitated physiognomy. In short, the existing ethos of a lot of AI research is to construct an automated algorithmic surveillance infrastructure in the name of enhanced profits, security, and even "social good."

This ubiquitous, expansive, and impersonal algorithmic surveillance can produce widespread chilling effects that stifle free expression and exacerbate systemic inequities. In the U.S., for example, over 60% of internet users believe their online activity is monitored by the government.[7] Moreover, as the examples above illustrate, this surveillance disproportionately affects historically oppressed populations.[8] In short, advances in AI, to date, have been instrumental in maintaining existing power structures. But there are alternative trajectories for AI that can subvert, rather than enhance, the power of surveillance capitalists and intelligence agencies.

Exploring these alternative trajectories is a key focus of my lab–the SPUD (Security, Privacy, Usability, and Design) Lab at the Human-Computer Interaction Institute. The Carnegie Bosch Institute postdoctoral fellowship piqued my interest because of its focus on cybersecurity and AI. As conversations about AI harms, ethics, and safety take center stage, research at the intersection of security and AI will be of utmost importance. Without critical reflection, however, too much of this research will center on technical approaches like differential privacy, federated learning, and adversarial robustness that further privilege the powerful over the powerless, the watchers over the surveilled. Indeed, that sort of research is far easier to fund. The CBI fellowship program looked to me like a rare opportunity to recruit a trained researcher who was interested in and passionate about doing research at this critical nexus between AI and cybersecurity from a more subversive, human-centered perspective: what I call subversive AI.[9]

To that end, I am very excited to introduce the (soon-to-be-Dr.) William Agnew, one of the inaugural cohort of Carnegie Bosch Institute postdoctoral fellows, to the SPUD Lab to help further this work. I first became aware of Agnew and his work when writing a vision paper for the Resistance AI workshop at NeurIPS 2020.[9] Agnew was one of the student organizers who helped put together a blockbuster workshop exploring critical perspectives on how AI can be used to reinforce entrenched power inequities and optimistic perspectives on how AI can be used to disrupt and address those inequities. His involvement in organizing that workshop is indicative of his desire to do not just technically interesting AI work, but technically interesting work that addresses societal problems. Agnew approached me to discuss how we might create technologies that protect marginalized–particularly queer–populations from algorithmic surveillance. What stood out to me was that he was not proposing to simply create a new algorithm or tool, as many AI-first researchers might be inclined to do: Agnew wanted to start by understanding people. He suggested running mutually beneficial workshops where queer folks could learn how to counter algorithmic surveillance, and where he could ask them questions about their needs to better develop models of queer data ownership.

There is a large group of individuals who care about community building and the societal impacts of AI, and there is a large group of individuals who have strong technical AI skills, but there is only a small overlap between the two. Agnew is in that rare overlap, which should uniquely position him to have a substantive impact on improving equity in the development and deployment of AI systems. His CV already shows significant recognition and impact, including two best paper awards at the ACM FAccT conference. I am very excited to host him at CMU for the next two years as a CBI fellow!

References

  1. Rihl, J. (2021). Emails show Pittsburgh police officers accessed Clearview facial recognition after BLM protests. PublicSource. https://www.publicsource.org/pittsburgh-police-facial-recognition-blm-protests-clearview/
  2. Buckley, C., & Mozur, P. (2019). How China uses high-tech surveillance to subdue minorities. The New York Times. https://www.nytimes.com/2019/05/22/world/asia/china-surveillance-xinjiang.html
  3. Schmidt, A., & Wiegand, M. (2017). A survey on hate speech detection using natural language processing. In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media (pp. 1-10).
  4. Dias Oliva, T., Antonialli, D. M., & Gomes, A. (2021). Fighting hate speech, silencing drag queens? Artificial intelligence in content moderation and risks to LGBTQ voices online. Sexuality & Culture, 25(2), 700-732.
  5. Wang, Y., & Kosinski, M. (2018). Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology, 114(2), 246.
  6. Ziogas, A., Mokros, A., Kawohl, W., de Bardeci, M., Olbrich, I., Habermeyer, B., ... & Olbrich, S. (2023). Deep learning in the identification of electroencephalogram sources associated with sexual orientation. Neuropsychobiology, 1-12.
  7. Auxier, B., Rainie, L., Anderson, M., Perrin, A., Kumar, M., & Turner, E. (2019). Americans and privacy: Concerned, confused and feeling lack of control over their personal information. Pew Research Center.
  8. Browne, S. (2015). Dark matters: On the surveillance of blackness. Duke University Press.
  9. Das, S. (2020). Subversive AI: Resisting automated algorithmic surveillance with human-centered adversarial machine learning. In Resistance AI Workshop at NeurIPS (p. 4).