Meet CBI fellow William Agnew: Applying technology to empower marginalized communities

William Agnew

Aug 23, 2023

Picture of William Agnew

My name is William Agnew (he/they), and I'm thrilled to be one of the new CBI Fellowship postdocs at CMU. I received my Ph.D. from the University of Washington, where I was advised by Sidd Srinivasa and worked on AI ethics, critical AI, and robotics. I also helped found Queer in AI. In my free time, I love backpacking, climbing, mountaineering, biking, running, cooking, board games, and D&D.

I am interested in developing and sharing tools and ideas that go beyond participatory design and allow marginalized individuals and communities to own and meaningfully control their data and the models derived from it. Building on ideas from usable security and privacy, usage licenses, and Indigenous data sovereignty, I want to contribute to data and AI futures in which individuals and communities know where their data is and can remove, add, or change their data across datasets.

I will develop concepts of model ownership in which both the people whose data a model is trained on and the people about whom it performs inference have control over that model, including the ability to change inferences or opt out of inference altogether. By shifting ownership and control of datasets and models from large corporations and small groups of experts to the data and model subjects, much broader groups of people will be able to participate in reducing bias, combating harms, controlling if and how data is shared, and aligning AI to their needs.
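To make the opt-out idea concrete, here is a toy sketch, in Python, of what subject-level control over inference might look like: a consent registry that an inference service must consult before running a model. This is not a description of any existing system; the `ConsentRegistry` and `run_inference` names, and the registry design itself, are hypothetical.

```python
# Toy sketch (all names hypothetical): an inference service that consults a
# subject-controlled consent registry before running a model, so subjects can
# opt out of inference entirely or override specific outputs.
from dataclasses import dataclass, field


@dataclass
class ConsentRegistry:
    """Maps subject IDs to their inference preferences."""
    opted_out: set[str] = field(default_factory=set)
    overrides: dict[str, str] = field(default_factory=dict)

    def opt_out(self, subject_id: str) -> None:
        self.opted_out.add(subject_id)

    def override(self, subject_id: str, label: str) -> None:
        self.overrides[subject_id] = label


def run_inference(model, subject_id: str, features, registry: ConsentRegistry):
    """Run the model only if the subject permits it."""
    if subject_id in registry.opted_out:
        return None  # subject has withdrawn consent; no inference performed
    if subject_id in registry.overrides:
        return registry.overrides[subject_id]  # subject-corrected output
    return model(features)


# Example usage with a stand-in "model":
registry = ConsentRegistry()
registry.opt_out("alice")
registry.override("bob", "prefers-not-to-say")

print(run_inference(lambda x: "predicted-label", "alice", [0.1], registry))  # None
print(run_inference(lambda x: "predicted-label", "bob", [0.2], registry))    # prefers-not-to-say
print(run_inference(lambda x: "predicted-label", "carol", [0.3], registry))  # predicted-label
```

Of course, a sketch like this elides the hard parts: verifying who a subject is, propagating revocations into models already trained on their data, and making such controls usable for broad, non-expert communities.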

The CBI fellowship is an incredible opportunity to focus on new questions at the intersection of AI, security, and human-computer interaction (HCI). CMU has an excellent community of people working on these questions, including my mentor Sauvik Das and the wider CMU HCI and computer science communities. I want to empower people to take control of their data and models. This includes not just technical questions, such as how one can control how one's data is used in datasets and models, but also HCI questions around building broadly usable tools and building knowledge together with impacted communities.

There are several technical obstacles. For example, AI defenses such as Glaze, which protects art styles from theft, need to work against novel models and architectures, not just the target models they are tuned for (a toy illustration of this challenge appears at the end of this post). Another important challenge is building connections with impacted communities to ensure both that we are working on the right problems and that our work is actually helpful. I am extremely excited to join CMU and to learn from and collaborate with the many communities working on AI ethics.
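To illustrate the transferability challenge mentioned above, here is a minimal sketch of the general idea behind Glaze-style cloaking, not Glaze's actual method: using PyTorch and a hypothetical stand-in surrogate network, we optimize a small, bounded perturbation that pushes the image's embedding away from the original. Because the perturbation is tuned against one surrogate, nothing guarantees it will fool a model with a different architecture, which is precisely the open problem.

```python
# Minimal sketch of surrogate-based cloaking (not Glaze's actual method):
# optimize a small L-infinity-bounded perturbation so that a surrogate
# feature extractor's embedding of the image moves away from the original.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in surrogate feature extractor (hypothetical; real defenses tune
# against models thought to resemble those used for style mimicry).
surrogate = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
)
surrogate.eval()

image = torch.rand(1, 3, 64, 64)               # the artwork to protect
epsilon, step, n_steps = 8 / 255, 2 / 255, 20  # perturbation budget, step size

with torch.no_grad():
    target_feat = surrogate(image)             # embedding to move away from

# Random start inside the budget, then projected signed-gradient steps.
delta = ((torch.rand_like(image) * 2 - 1) * epsilon).requires_grad_()
for _ in range(n_steps):
    feat = surrogate((image + delta).clamp(0, 1))
    loss = -((feat - target_feat) ** 2).sum()  # minimizing = maximizing distance
    loss.backward()
    with torch.no_grad():
        delta -= step * delta.grad.sign()      # signed gradient step
        delta.clamp_(-epsilon, epsilon)        # project back into the budget
    delta.grad.zero_()

protected = (image + delta).detach().clamp(0, 1)
with torch.no_grad():
    print("embedding shift:", (surrogate(protected) - target_feat).norm().item())
```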