I'm a New York City-based researcher studying the overlap between machine learning and theoretical computer science. My research is broadly motivated by a desire to improve the interpretability, transparency, and accountability of neural networks by understanding their mathematical properties. My work is funded by an NSF GRFP fellowship, which I received in March 2021.

Before starting my PhD at Columbia, I studied applied math and computer science as an undergrad at Brown and worked as a data scientist at LinkedIn. I've completed internships at LinkedIn (data science), Lumi Labs (engineering at a 15-person startup), Allen Institute for AI (climate modeling research), and Microsoft Research (transformer research). I am currently part-time at Google Research.

I am currently seeking full-time machine learning research positions in NYC to start after I graduate in Spring 2024. I've studied the fundamental capabilities and limitations of neural networks (including transformers) and applied ML and mathematical techniques across a wide range of domains. If you think I might be a good fit for your organization, please get in touch.

My work

Most of my research consists of mathematical results about the capabilities and limitations of neural networks and other machine learning algorithms. In particular, I study the expressive power, generalization behavior, and learnability of these models; the publications listed below give a representative sample.

I applied machine learning techniques to climate modeling research during an internship at the Allen Institute for AI (AI2) in Summer 2022. My contributions there were recognized with an Outstanding Intern award.

As an undergraduate, I completed a variety of research projects on mathematical modeling (as a winner of the COMAP Interdisciplinary Contest for Modeling), dynamical systems (with Bjorn Sandstede), molecular biology (with William Fairbrother), and machine learning theory (with Eli Upfal).

I primarily code in Python and am experienced with core data science and deep learning packages (e.g. PyTorch, TensorFlow, scikit-learn, and pandas). I programmed in Java and Scala during my undergraduate years and while working at LinkedIn and Lumi Labs. The climate modeling repository I contributed to while interning at AI2 is publicly available, and my Microsoft Research code will be published once we upload our preprint.

Teaching experience

I TA'd five courses at Brown: accelerated intro to CS, discrete math, CS theory, intro to dynamical systems, and algorithms (for which I was Head TA).

I was a graduate TA for courses on computational learning theory with Rocco Servedio, natural and artificial neural networks with Christos Papadimitriou and John Morrison, and ML and climate with Alp Kucukelbir. Natural and artificial neural networks was a new course, for which I developed a lab component from scratch in collaboration with Sam Deng; all materials are available online.

As a graduate student, I coordinated undergraduate seminars on deep learning theory during Summer 2021, Fall 2021, and Spring 2023.

Departmental and academic service

I served as a reviewer for ICLR 2024, NeurIPS 2023, JMLR, SODA 2023, and STOC 2022.

I am currently a PhD representative alongside the great Tim Randolph. We represent computer science PhD student interests and concerns to CS faculty and administrators.

In the Columbia CS theory group, I coordinated the student retreat in Fall 2021 and Fall 2022 and started (but no longer run) the CS Theory Student Seminar.

I led qSTEM, an organization for LGBTQ+ students in the Columbia School of Engineering and Applied Science.

Find me online:

[linkedin] [github] [arxiv] [google scholar] [dblp]

I am indebted to many fantastic mentors over the years among my teachers, professors, advisors, coworkers, and friends. I am always happy to pay it forward and chat with anyone interested in learning more about PhD programs in ML and/or theory, Columbia CS, NYC living, or niche neighborhood concerns about Morningside Heights.

Machine Learning Theory


[SHT24] Clayton Sanford, Daniel Hsu, Matus Telgarsky. "Transformers, parallel computation, and logarithmic depth." Preprint. [arxiv]

[SHT23] Clayton Sanford, Daniel Hsu, Matus Telgarsky. "Representational Strengths and Limitations of Transformers." NeurIPS 2023. [paper] [arxiv] [UCSD seminar slides] [Google NYC algorithms seminar slides] [Columbia StatML workshop poster] [Columbia StatML workshop slides]

[AHS23] Navid Ardeshir*, Daniel Hsu*, Clayton Sanford*. "Intrinsic dimensionality and generalization properties of the R-norm inductive bias." COLT 2023. [paper] [arxiv] [blog post] [COLT poster]

[BBSS22] Alberto Bietti*, Joan Bruna*, Clayton Sanford*, Min Jae Song*. "Learning single-index models with shallow neural networks." NeurIPS 2022. [paper] [NeurIPS poster] [arxiv]

[CPSS22] Vaggos Chatziafratis*, Ioannis Panageas*, Clayton Sanford*, Stelios Stavroulakis*. "On Scrambling Phenomena for Randomly Initialized Recurrent Networks." NeurIPS 2022. [paper] [arxiv]

[HSSV22] Daniel Hsu*, Clayton Sanford*, Rocco Servedio*, Emmanouil-Vasileios Vlatakis-Gkaragkounis*. "Near-Optimal Statistical Query Lower Bounds for Agnostically Learning Intersections of Halfspaces with Gaussian Marginals." COLT 2022. [paper] [arxiv] [NYU seminar slides] [conference talk]

[SC22] Clayton Sanford, Vaggos Chatziafratis. "Expressivity of Neural Networks via Chaotic Itineraries beyond Sharkovsky's Theorem." AISTATS 2022. [paper] [arxiv] [conference talk]

[ASH21] Navid Ardeshir*, Clayton Sanford*, Daniel Hsu. "Support vector machines and linear regression coincide with very high-dimensional features." NeurIPS 2021. [paper] [arxiv] [blog post] [reviews] [conference talk] [Brown seminar slides] [UCSC seminar slides]

[HSSV21] Daniel Hsu*, Clayton Sanford*, Rocco Servedio*, Emmanouil-Vasileios Vlatakis-Gkaragkounis*. "On the Approximation Power of Two-Layer Networks of Random ReLUs." COLT 2021. [paper] [arxiv] [blog post] [conference talks] [Columbia DSI poster session] [MIT and BU seminar slides] [UW seminar slides]


ML + Climate


[SKW+23] Clayton Sanford, Anna Kwa, Oliver Watt-Meyer, Spencer Clark, Noah Brenowitz, Jeremy McGibbon, Christopher Bretherton. "Improving the reliability of ML-corrected climate models with novelty detection." To appear in the Journal of Advances in Modeling Earth Systems (JAMES). [journal submission] [NeurIPS workshop paper arxiv] [NeurIPS workshop slides] [AMS workshop abstract] [AMS workshop slides]


Undergraduate Research


[CRSSCS22] Tracy Chin*, Jacob Ruth*, Clayton Sanford*, Rebecca Santorella*, Paul Carter*, Bjorn Sandstede*. "Enabling equation-free modeling via diffusion maps." Journal of Dynamics and Differential Equations, 2022. [journal] [arxiv]

[S18] Clayton Sanford. "Applying Rademacher-Like Bounds to Combinatorial Samples and Function Selection." Honors Thesis, Brown Department of Computer Science, 2018. [thesis]

[CSF17] Kamil Cygan*, Clayton Sanford*, William Fairbrother. "Spliceman2 - A Computational Web Server That Predicts Sequence Variations in Pre-mRNA Splicing." Bioinformatics 33 (18), 2017. [paper]

[GSK16] Julia Gross*, Clayton Sanford*, Geoff Kocks*. "Projected Water Needs and Intervention Strategies in India." Undergraduate Mathematics and its Applications 37 (2), 2016. [paper] [article]

A few random things...