I mostly work with sound and language.
My interests span multimodal AI, cognitive science, signal processing, musical instrument design, jazz, and poetry.
I am a Hertz Fellow and Steve Jobs Archive Fellow.
I'm currently a PhD student at Stanford.
I went to college at MIT, where I studied computer science, math, music, and literature.
At Stanford, I split my time among the Cognitive Tools Lab, Maneesh Agrawala's group, and Stanford CCRMA.
At the MIT Music Technology Lab, I developed machine listening systems under Eran Egozy.
At MIT CSAIL, I researched how people communicate sounds using their voices.
I was advised by Josh Tenenbaum and Jonathan Ragan-Kelley, and mentored by Kartik Chandra and Karima Ma.
At Apple, I developed multimodal LLM systems and bioinformatics algorithms.
Some of my work lives on in Apple Intelligence and the Health app.