I am a researcher in scalable machine learning who works at the intersection of computer science and law. Currently, I am a CS Ph.D. candidate at Cornell University, an Affiliate at the Berkman Klein Center for Internet & Society at Harvard University, a lead co-organizer of GenLaw, and an incoming student researcher at Google AI Research.
My Ph.D. work centers on how to draw more reliable conclusions when using machine-learning methods at scale and in practice. My contributions span distributed training, hyperparameter optimization, uncertainty estimation, model selection, and generative AI (in particular, open and scalable text-to-image modeling). I also engage in related research in tech policy and law, and I spend a lot of time working to communicate the capabilities and limits of machine learning effectively to a wider audience.
In the past, I interned at Microsoft Research and was named a "Rising Star in EECS" by MIT. I am also an alum of Cornell's initiative on Artificial Intelligence, Policy, and Practice (AIPP), which has very generously supported my work through funding from the John D. and Catherine T. MacArthur Foundation.
Prior to my research career, I worked for several years as a software engineer at companies both (really) big and (really) small. I specialized in designing, building, and monitoring large-scale backend data-processing systems.
Selected papers and blog posts
- Katherine Lee*, A. Feder Cooper*, and James Grimmelmann*. "Talkin’ ‘Bout AI Generation: Copyright and the Generative-AI Supply Chain." Under submission, 2023. [ssrn | arxiv]
- A. Feder Cooper*, Wentao Guo*, Khiem Pham*, Tiancheng Yuan, Charlie F. Ruan, Yucheng Lu, and Christopher De Sa. "CD-GraB: Coordinating Distributed Example Orders for Provably Accelerated Training." Forthcoming, NeurIPS 2023. [arxiv | proceedings]
- Katherine Lee*, A. Feder Cooper*, James Grimmelmann, and Daphne Ippolito. "AI and Law: The Next Generation (An explainer series)." 2023. [blog | ssrn]
- A. Feder Cooper, Katherine Lee, Madiha Zahrah Choksi, Solon Barocas, Christopher De Sa, James Grimmelmann, Jon Kleinberg, Siddhartha Sen, and Baobao Zhang. "Is My Prediction Arbitrary? Confounding Effects of Variance in Fair Classification." Under submission, 2023. [arxiv]
- A. Feder Cooper, Jonathan Frankle, and Christopher De Sa. "Non-Determinism and the Lawlessness of Machine Learning Code." CSLAW 2022. [arxiv | proceedings]
- A. Feder Cooper*, Emanuel Moss*, Benjamin Laufer, and Helen Nissenbaum. "Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning." FAccT 2022. [arxiv | proceedings]
- A. Feder Cooper, Yucheng Lu, Jessica Zosa Forde, and Christopher De Sa. "Hyperparameter Optimization Is Deceiving Us, and How to Stop It." NeurIPS 2021. [arxiv | proceedings]
- Ruqi Zhang*, A. Feder Cooper*, and Christopher De Sa. "Asymptotically Optimal Exact Minibatch Metropolis-Hastings." NeurIPS 2020, Spotlight. [arxiv | proceedings]
- Ruqi Zhang, A. Feder Cooper, and Christopher De Sa. "AMAGOLD: Amortized Metropolis Adjustment for Efficient Stochastic Gradient MCMC." AISTATS 2020. [arxiv | proceedings]