The broad motivation of my research is to understand mathematically why neural networks and other machine learning models work so well in practice, and to apply those mathematical foundations to make ML a more equitable tool. To that end, my research focuses on the representational capacities of neural networks (see HSSV21 and SC22) and the generalization properties of over-parameterized ML models (see ASH21). I intend to keep working in these areas, and I am also interested in more applied projects at the intersection of AI and climate.
As an undergraduate, I did a variety of research projects on mathematical modeling, dynamical systems, molecular biology, and ML under the supervision of Bjorn Sandstede, William Fairbrother, and Eli Upfal.
In my "gap year" between Brown and Columbia, I lived in San Francisco, where I worked as a data scientist at LinkedIn and as a software engineering intern at Lumi Labs.
I love teaching. At Brown, I TA'd five computer science and applied math classes: CSCI 190 (accelerated intro to CS), CSCI 220 (discrete math), CSCI 1010 (CS theory), APMA 1360 (intro to dynamical systems), and CSCI 1570 (algorithms), for which I was Head TA. At Columbia, I was a graduate TA for COMS 4252 (computational learning theory) in spring 2021 with Rocco Servedio. I am currently co-teaching a supplemental lab with Sam Deng for Natural and Artificial Neural Networks, which is taught by Christos Papadimitriou and John Morrison. We developed the lab from scratch, and all of its materials are available.
I started the CS Theory Student Seminar at Columbia (which is now coordinated by Shivam Nadimpalli) and co-organized the CS theory retreat for fall 2021 with Tim Randolph. I was a mentor for the Undergrad TCS Student Seminar on ML theory during summer and fall 2021.
I am grateful for funding from an NSF GRFP fellowship, which I received in March 2021.
[SAH22] Clayton Sanford*, Navid Ardeshir*, Daniel Hsu. "Intrinsic dimensionality and generalization properties of the R-norm inductive bias." Preprint. [arxiv]
[HSSV22] Daniel Hsu*, Clayton Sanford*, Rocco Servedio*, Emmanouil-Vasileios Vlatakis-Gkaragkounis*. "Near-Optimal Statistical Query Lower Bounds for Agnostically Learning Intersections of Halfspaces with Gaussian Marginals." COLT 2022. [arxiv] [NYU seminar slides]
[SC22] Clayton Sanford, Vaggos Chatziafratis. "Expressivity of Neural Networks via Chaotic Itineraries beyond Sharkovsky's Theorem." AISTATS 2022. [arxiv]
[ASH21] Navid Ardeshir*, Clayton Sanford*, Daniel Hsu. "Support vector machines and linear regression coincide with very high-dimensional features." NeurIPS 2021. [paper] [arxiv] [blog post] [reviews] [conference talk] [Brown seminar slides]
[HSSV21] Daniel Hsu*, Clayton Sanford*, Rocco Servedio*, Emmanouil-Vasileios Vlatakis-Gkaragkounis*. "On the Approximation Power of Two-Layer Networks of Random ReLUs." COLT 2021. [paper] [arxiv] [blog post] [conference talks] [Columbia DSI poster session] [MIT and BU seminar slides]
[CRSSCS22] Tracy Chin*, Jacob Ruth*, Clayton Sanford*, Rebecca Santorella*, Paul Carter*, Bjorn Sandstede*. "Enabling equation-free modeling via diffusion maps." Journal of Dynamics and Differential Equations, 2022. [journal] [arxiv]
[Sanford18] Clayton Sanford. "Applying Rademacher-Like Bounds to Combinatorial Samples and Function Selection." Honors Thesis, Brown Department of Computer Science, 2018. [thesis]
[CSF17] Kamil Cygan*, Clayton Sanford*, William Fairbrother. "Spliceman2 - A Computational Web Server That Predicts Sequence Variations in Pre-mRNA Splicing." Bioinformatics 33 (18), 2017. [paper]