
 Alexa Tartaglini

I’m a Research Scientist @ the NYU Human & Machine Learning Lab, where I currently think about how machines think. My primary research interest is deep neural network interpretability with an emphasis on vision. How do neural networks see the world, and how can we improve and better align computer vision systems?

About



Prior to my current position, I was an undergraduate at NYU’s Courant Institute of Mathematical Sciences, where I completed a double B.A. in mathematics and computer science (2018-2023). I joined the Human & Machine Learning Lab in 2019 as an Undergraduate Researcher under the supervision of Brenden Lake and Wai Keen Vong. During this time, I worked on a number of projects that aimed to make progress on the following questions: (1) what do deep neural networks actually learn from training on ImageNet?, (2) what are the limitations of using pre-trained ImageNet models as off-the-shelf “eyes” for downstream tasks?, and (3) how can we design benchmarks that enable truly informative and "species-fair" comparisons between human and machine intelligence? My honors thesis, "Human-Machine Perceptual Divergence: Two Investigations on How Neural Networks See the World," was the recipient of the NYU Minds, Brains, and Machines Initiative's Robert J. Glushko Prize.


In addition to this work, I was selected as a trainee for the NIH-affiliated Training Program in Computational Neuroscience at NYU’s Center for Neural Science under the mentorship of Wei Ji Ma (2020-2021). I learned a lot about the methods used to understand human visual intelligence, as well as the various strengths and failure modes of the primate visual system; this work now serves as a source of inspiration for some of my current ideas. In particular, I'm interested in applying neuroscience frameworks and methodologies to the study of machine intelligence, as well as studying tasks that are easy for biological systems yet difficult for computers.


In my current position (2023-), I've been working with Ellie Pavlick and Brown University's Language Understanding and Representation Lab on teaching Vision Transformers to understand abstract visual relations, and on figuring out why they're so good at it.


In the future, I hope to pivot my research towards mechanistic interpretability for {vision, language, vision + language} models and topics related to representational alignment. If you'd like to collaborate in these areas (or exchange ideas about anything), please reach out!
