I am a machine-learning (ML) researcher working on reliable, scalable measurement and evaluation of ML models and systems. I am a CS Ph.D. candidate at Cornell University, an Affiliate at the Berkman Klein Center for Internet & Society at Harvard University, a lead organizer of GenLaw, and an incoming student researcher at Google AI Research.
My research involves developing nuanced quality metrics for ML behaviors, and ensuring that we can effectively measure these metrics at scale and in practice.
My contributions span distributed training, hyperparameter optimization, uncertainty estimation, model selection, and generative modeling.
To make sure that our evaluation metrics can meaningfully measure what we want ML to do in the world, I engage in related research in tech policy and law. I also spend a lot of time working to effectively communicate the capabilities and limits of machine learning to a wider audience.
In the past, I interned at Microsoft Research and was named a "Rising Star in EECS" by MIT. My work has been generously supported by the John D. and Catherine T. MacArthur Foundation through Cornell AIPP.
Prior to my research career, I worked for several years as a software engineer at companies both (really) big and (really) small. I specialized in designing, building, and monitoring large-scale backend data-processing systems.
*Equal contribution; full list available here.
- A. Feder Cooper*, Katherine Lee*, and James Grimmelmann*. "Talkin’ ‘Bout AI Generation: Copyright and the Generative-AI Supply Chain." Forthcoming, Journal of the Copyright Society, 2024. [ssrn | arxiv]
- Milad Nasr*, Nicholas Carlini*, Jonathan Hayase, Matthew Jagielski, A. Feder Cooper, Daphne Ippolito, Christopher A. Choquette-Choo, Eric Wallace, Florian Tramèr, and Katherine Lee. "Scalable Extraction of Training Data from (Production) Language Models." 2023. [arxiv]
- A. Feder Cooper*, Wentao Guo*, Khiem Pham*, Tiancheng Yuan, Charlie F. Ruan, Yucheng Lu, and Christopher De Sa. "CD-GraB: Coordinating Distributed Example Orders for Provably Accelerated Training." Forthcoming, NeurIPS 2023. [arxiv | proceedings]
- Aaron Gokaslan, A. Feder Cooper, Jasmine Collins, Landan Seguin, Austin Jacobson, Mihir Patel, Jonathan Frankle, Cory Stephenson, and Volodymyr Kuleshov. "CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images." Workshop on ML for Creativity and Design at NeurIPS 2023. [arxiv]
- A. Feder Cooper*, Katherine Lee*, James Grimmelmann, and Daphne Ippolito. "AI and Law: The Next Generation (An explainer series)." 2023. [blog | ssrn]
- A. Feder Cooper, Katherine Lee, Madiha Choksi, Solon Barocas, Christopher De Sa, James Grimmelmann, Jon Kleinberg, Siddhartha Sen, and Baobao Zhang. "Arbitrariness and Prediction: The Confounding Role of Variance in Fair Classification." Workshop on Algorithmic Fairness through the Lens of Time at NeurIPS 2023. [arxiv]
- A. Feder Cooper*, Emanuel Moss*, Benjamin Laufer, and Helen Nissenbaum. "Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning." FAccT 2022. [arxiv | proceedings]
- A. Feder Cooper, Yucheng Lu, Jessica Zosa Forde, and Christopher De Sa. "Hyperparameter Optimization Is Deceiving Us, and How to Stop It." NeurIPS 2021. [arxiv | proceedings]
- A. Feder Cooper*, Ruqi Zhang*, and Christopher De Sa. "Asymptotically Optimal Exact Minibatch Metropolis-Hastings." NeurIPS 2020, Spotlight. [arxiv | proceedings]
- Ruqi Zhang, A. Feder Cooper, and Christopher De Sa. "AMAGOLD: Amortized Metropolis Adjustment for Efficient Stochastic Gradient MCMC." AISTATS 2020. [arxiv | proceedings]