I am a second-year PhD student at Berkeley and a member of Berkeley AI Research. I am advised by Alyosha Efros.

I am broadly interested in computational models of perception, embodied cognition, and intelligence.

Currently, I am interested in self-supervised learning, continual learning, and autonomous skill acquisition.

Previously, I was a Research Engineer at Facebook AI Research.

Before that, I studied Computer Science at Princeton (B.S., 2015).

I am gratefully funded by the PD Soros Fellowship and BAIR.

Google Scholar


Learning Correspondence from the Cycle-Consistency of Time. (Oral). CVPR 2019
X Wang*, A Jabri*, A Efros.
New paper on learning general representations of visual correspondence from unlabeled video. [ paper and project page coming soon ]
I was selected as a PD Soros Fellow for the class of 2018!
I met so many inspiring and brilliant peers throughout the interview process, and I am honored to have been selected.
Read more here.
Universal Planning Networks. ICML 2018
A Srinivas, A Jabri, P Abbeel, S Levine, C Finn.
New paper on learning representations for planning that capture the semantics of visuomotor tasks! [ project page ]
CommAI: Evaluating the first steps towards a useful general AI. ICLR 2017 Workshop
M Baroni, A Joulin, A Jabri, G Kruszewski, A Lazaridou, K Simonic, T Mikolov.
A short paper on the nature of tasks we are studying in the CommAI project.
Learning Visual N-Grams from Web Data. ICCV 2017
A Li, A Jabri, A Joulin, L van der Maaten.
In which we show that a recursive smoothing loss allows us to learn visual representations grounded in compositional phrases, at scale.
I co-organized the Machine Intelligence Workshop at NIPS 2016, where I also gave a talk about CommAI-env.
Fellow organizers: M Baroni, A Joulin, T Mikolov, G Kruszewski, A Lazaridou, K Simonic.
We open-sourced CommAI-env, a platform for developing AI systems as described in "A Roadmap towards Machine Intelligence".
Joint work with M Baroni, A Joulin, T Mikolov, G Kruszewski, A Lazaridou, K Simonic.
Laurens van der Maaten and I were frustrated with available visualization tools, so we made visdom at an FB hackathon.
We open-sourced it. Hope it is useful for others!
Revisiting Visual Question Answering Baselines. ECCV 2016
A Jabri, A Joulin, L van der Maaten.
In which we show that SOTA VQA models may not be learning what we think they are, and that there are even easier ways to cheat. #datasetbias
Learning Visual Features from Large Weakly Supervised Data. ECCV 2016
A Joulin, L van der Maaten, A Jabri, N Vasilache.
In which we show that one can learn strong visual features without explicit labels.

ajabri at gmail