
It is difficult to stop the impulse to reveal secrets in conversation, as if information had the desire to live and the power to multiply. – Nassim Taleb, The Bed of Procrustes

About

I am a fifth-year applied mathematics Ph.D. student at UCLA working under the supervision of Guido Montúfar. In the past I studied primarily the training/optimization process of neural networks, seeking to understand how the parameterization and algorithm influence the properties of the network over the course of training and at convergence. Currently, I focus more on developing scalable machine learning methods that respect individuals' data usage and privacy rights.

Before UCLA I studied computational mathematics at Penn State. Outside of work I enjoy coffee, cooking, and cocktails.

News

02/2023 Our paper À-la-carte Prompt Tuning (APT): Combining Distinct Data Via Composable Prompting was accepted to CVPR 2023

01/2023 Our paper Characterizing the Spectrum of the NTK via a Power Series Expansion was accepted to ICLR 2023

06/2022 Our paper Spectral Bias Outside the Training Set for Deep Networks in the Kernel Regime was accepted to NeurIPS 2022

06/2022 This summer I will be an Applied Scientist Intern at Amazon (AWS)

01/2022 Our paper Implicit Bias of MSE Gradient Optimization in Underparameterized Neural Networks was accepted to ICLR 2022. (If you are interested in this work, our more recent work offers significant improvements.)

06/2021 I will be spending the summer in Leipzig, Germany as a summer researcher at the Max Planck Institute for Mathematics in the Sciences working on training dynamics of neural networks