My contributions span uncertainty estimation, privacy and security of generative-AI systems, distributed training, hyperparameter optimization, and model selection.
I also work in tech policy and law, and spend a lot of time finding effective ways to communicate the capabilities and limits of AI/ML to interdisciplinary audiences and the public.
My research has received spotlights, orals, and best-paper accolades at top AI/ML and computing venues, including NeurIPS, ICML, AAAI, and AIES.
My law collaborations on copyright and generative AI have been lauded as "landmark" work by technology-law scholars and in the popular press.
I have interned at Microsoft Research and Google Research, and was named a "Rising Star in EECS" by MIT.
My doctoral work was generously supported by the John D. and Catherine T. MacArthur Foundation through the Artificial Intelligence, Policy, and Practice (AIPP) initiative.
Prior to my research career, I worked for several years as a software engineer at companies both (really) big and (really) small. I specialized in designing, building, and monitoring large-scale backend data-processing systems.
(I am not recruiting students for Fall 2025; my appointment at Yale starts in 2026.)
A. Feder Cooper*, Christopher A. Choquette-Choo*, Miranda Bogen*, Matthew Jagielski*, Katja Filippova*, Ken Ziyu Liu* et al. "Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice." arXiv preprint, 2024. [arxiv]
A. Feder Cooper*, Katherine Lee*, and James Grimmelmann*. "Talkin' 'Bout AI Generation: Copyright and the Generative-AI Supply Chain." Journal of the Copyright Society, 2024. [ssrn | arxiv | journal]
Nicholas Carlini, Daniel Paleka, Krishnamurthy Dj Dvijotham, Thomas Steinke, Jonathan Hayase, A. Feder Cooper et al. "Stealing Part of a Production Language Model." ICML 2024 (Best Paper Award). [arxiv | proceedings]
A. Feder Cooper et al. "Arbitrariness and Social Prediction: The Confounding Role of Variance in Fair Classification." AAAI 2024 (Best Student Paper Honorable Mention). [arxiv | proceedings]
Aaron Gokaslan, A. Feder Cooper et al. "CommonCanvas: Open Diffusion Models Trained on Creative-Commons Images." CVPR 2024. [arxiv | proceedings]
Milad Nasr*, Nicholas Carlini*, Jonathan Hayase, Matthew Jagielski, A. Feder Cooper et al. "Scalable Extraction of Training Data from (Production) Language Models." arXiv preprint, 2023. [arxiv]
A. Feder Cooper*, Wentao Guo*, Khiem Pham* et al. "Coordinating Distributed Example Orders for Provably Accelerated Training." NeurIPS 2023. [arxiv | proceedings]
A. Feder Cooper et al. "Hyperparameter Optimization Is Deceiving Us, and How to Stop It." NeurIPS 2021. [arxiv | proceedings]
A. Feder Cooper*, Ruqi Zhang*, and Christopher De Sa. "Asymptotically Optimal Exact Minibatch Metropolis-Hastings." NeurIPS 2020 (Spotlight). [arxiv | proceedings]