Incoming Assistant Professor, Yale CS
Postdoctoral Researcher, Microsoft Research
Postdoctoral Affiliate, Stanford
afedercooper [AT] gmail [DOT] com || acoo [AT] microsoft [DOT] com
My research develops metrics for ML capabilities, and ensures that these capabilities can be measured effectively and reliably at scale and in practice.
My contributions span uncertainty estimation, privacy and security of generative-AI systems, distributed training, hyperparameter optimization, and model selection.
I also work in tech policy and law, and spend a lot of time finding effective ways to communicate the capabilities and limits of AI/ML to interdisciplinary audiences and the public.
In the past I interned at Microsoft Research and at Google Research, and was named a "Rising Star in EECS" by MIT.
My doctoral work was generously supported by the John D. and Catherine T. MacArthur Foundation through AIPP.
Prior to my research career, I worked for several years as a software engineer at companies both (really) big and (really) small. I specialized in designing, building, and monitoring large-scale backend data-processing systems.
(I am not recruiting students for Fall 2025 -- my appointment at Yale starts in 2026.)
A. Feder Cooper*, Katherine Lee*, and James Grimmelmann*. "Talkin’ ‘Bout AI Generation: Copyright and the Generative-AI Supply Chain." Journal of the Copyright Society, 2024. [ssrn | arxiv | journal]
Nicholas Carlini, Daniel Paleka, Krishnamurthy Dj Dvijotham, Thomas Steinke, Jonathan Hayase, A. Feder Cooper et al. "Stealing Part of a Production Language Model." ICML 2024. [arxiv | proceedings] Best Paper Award
A. Feder Cooper et al. "Arbitrariness and Social Prediction: The Confounding Role of Variance in Fair Classification." AAAI 2024. [arxiv | proceedings] Best Student Paper Honorable Mention
Aaron Gokaslan, A. Feder Cooper et al. "CommonCanvas: Open Diffusion Models Trained on Creative-Commons Images." CVPR 2024. [arxiv | proceedings]
Milad Nasr*, Nicholas Carlini*, Jonathan Hayase, Matthew Jagielski, A. Feder Cooper et al. "Scalable Extraction of Training Data from (Production) Language Models." Preprint, 2023. [arxiv]
A. Feder Cooper*, Wentao Guo*, Khiem Pham* et al. "Coordinating Distributed Example Orders for Provably Accelerated Training." NeurIPS 2023. [arxiv | proceedings]
A. Feder Cooper*, Emanuel Moss* et al. "Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning." FAccT 2022. [arxiv | proceedings]
A. Feder Cooper et al. "Hyperparameter Optimization Is Deceiving Us, and How to Stop It." NeurIPS 2021. [arxiv | proceedings]
A. Feder Cooper*, Ruqi Zhang*, and Christopher De Sa. "Asymptotically Optimal Exact Minibatch Metropolis-Hastings." NeurIPS 2020. [arxiv | proceedings] Spotlight