I am a PhD student at Berkeley and a member of Berkeley AI Research, advised by Alexei A. Efros.

I am broadly interested in computational models of perception, embodied cognition, and intelligence. In particular, I study how sensorimotor representations and skills can be acquired and adapted with as little supervision as possible.

Previously, I was a Research Engineer at Facebook AI Research in New York. Earlier still, I studied Computer Science at Princeton (B.S. 2015). I am grateful to be funded by the PD Soros Fellowship and BAIR.

Publications

See my Google Scholar page.
Space-Time Correspondence as a Contrastive Random Walk.
NeurIPS 2020, Oral Presentation.
A Jabri, A Owens, A Efros.

Dense representation learning from unlabeled video, by learning to walk on a space-time graph.
[ paper ] [ project page ] [ code ]
Unsupervised Curricula for Visual Meta-Reinforcement Learning.
NeurIPS 2019, Spotlight Presentation.
A Jabri, K Hsu, B Eysenbach, A Gupta, S Levine, C Finn.

Unsupervised discovery and meta-learning of visuomotor skills, by deep clustering of the agent's own trajectories.
[ paper ] [ project page ]
Towards Practical Multi-Object Manipulation using Relational Reinforcement Learning.
ICRA 2020.
R Li, A Jabri, T Darrell, P Agrawal.

Training a graph neural net policy with a simple curriculum leads to task decomposition that generalizes to new configurations.
[ paper ] [ project page ] [ code ]
Learning Correspondence from the Cycle-Consistency of Time.
CVPR 2019, Oral Presentation.
X Wang*, A Jabri*, A Efros.

Learn a generic representation for visual correspondence from unlabeled video, using cycle consistency in time.
[ paper ] [ project page ] [ code ]
Universal Planning Networks.
ICML 2018.
A Srinivas, A Jabri, P Abbeel, S Levine, C Finn.

Learn a visual representation that captures task semantics by differentiating through model-based planning.
[ paper ] [ project page ] [ code ]
CommAI: Evaluating the first steps towards a useful general AI.
ICLR 2017 Workshop.
M Baroni, A Joulin, A Jabri, G Kruszewski, A Lazaridou, K Simonic, T Mikolov.

A short paper on the nature of the tasks we study in the CommAI project.
Learning Visual N-Grams from Web Data.
ICCV 2017.
A Li, A Jabri, A Joulin, L van der Maaten.

A smoothed n-gram loss for learning visual representations from compositional phrases, at scale.
[ paper ]
Revisiting Visual Question Answering Baselines.
ECCV 2016.
A Jabri, A Joulin, L van der Maaten.

SOTA VQA models may not be learning what we think they are... #datasetbias
[ paper ]
Learning Visual Features from Large Weakly Supervised Data.
ECCV 2016.
A Joulin, L van der Maaten, A Jabri, N Vasilache.

Learn strong visual features from large-scale hashtag data, with interesting byproducts such as translation via visual grounding.
[ paper ]

ajabri at gmail