I am a machine-learning (ML) researcher working on reliable, scalable measurement and evaluation of ML systems. I am a co-founder of the GenLaw Center, a CS Ph.D. candidate at Cornell University, and an Affiliate at the Berkman Klein Center for Internet & Society at Harvard University.
I am an incoming postdoctoral researcher at Microsoft Research, and will be affiliated with Stanford HAI, RegLab, and CRFM working with Percy Liang and Dan Ho on research and policy questions regarding the governance of foundation models.
My research develops quality metrics for ML capabilities and ensures that these metrics can be measured effectively at scale and in practice.
My contributions span uncertainty estimation, privacy and security of generative-AI systems, distributed training, hyperparameter optimization, and model selection.
I also work in tech policy and law, and spend much of my time finding ways to communicate the capabilities and limits of AI/ML effectively to interdisciplinary audiences and the public.
Prior to my research career, I worked for several years as a software engineer at companies both (really) big and (really) small. I specialized in designing, building, and monitoring large-scale backend data-processing systems.
A. Feder Cooper*, Katherine Lee*, and James Grimmelmann*. "Talkin’ ‘Bout AI Generation: Copyright and the Generative-AI Supply Chain." Forthcoming, Journal of the Copyright Society, 2024. [ssrn | arxiv]
Milad Nasr*, Nicholas Carlini*, Jonathan Hayase, Matthew Jagielski, A. Feder Cooper, Daphne Ippolito, Christopher A. Choquette-Choo, Eric Wallace, Florian Tramèr, and Katherine Lee. "Scalable Extraction of Training Data from (Production) Language Models." 2023. [arxiv]
Nicholas Carlini, Daniel Paleka, Krishnamurthy Dj Dvijotham, Thomas Steinke, Jonathan Hayase, A. Feder Cooper, Katherine Lee, Matthew Jagielski, Milad Nasr, Arthur Conmy, Eric Wallace, David Rolnick, and Florian Tramèr. "Stealing Part of a Production Language Model." 2024. [arxiv]
Aaron Gokaslan, A. Feder Cooper, Jasmine Collins, Landan Seguin, Austin Jacobson, Mihir Patel, Jonathan Frankle, Cory Stephenson, and Volodymyr Kuleshov. "CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images." CVPR 2024. [arxiv]
A. Feder Cooper*, Ruqi Zhang*, and Christopher De Sa. "Asymptotically Optimal Exact Minibatch Metropolis-Hastings." NeurIPS 2020 (Spotlight). [arxiv | proceedings]
A. Feder Cooper, Yucheng Lu, Jessica Zosa Forde, and Christopher De Sa. "Hyperparameter Optimization Is Deceiving Us, and How to Stop It." NeurIPS 2021. [arxiv | proceedings]
A. Feder Cooper, Katherine Lee, Madiha Choksi, Solon Barocas, Christopher De Sa, James Grimmelmann, Jon Kleinberg, Siddhartha Sen, and Baobao Zhang. "Arbitrariness and Social Prediction: The Confounding Role of Variance in Fair Classification." AAAI 2024 (Best Student Paper Honorable Mention). [arxiv]
A. Feder Cooper*, Wentao Guo*, Khiem Pham*, Tiancheng Yuan, Charlie F. Ruan, Yucheng Lu, and Christopher De Sa. "Coordinating Distributed Example Orders for Provably Accelerated Training." NeurIPS 2023. [arxiv | proceedings]