I am a second-year PhD student at Berkeley and a member of Berkeley AI Research.
I am broadly interested in computational models of perception, embodied cognition, and intelligence.
Currently, I am interested in self-supervised learning, continual learning, and autonomous skill acquisition.
Previously, I was a Research Engineer at Facebook AI Research.
Before that, I studied Computer Science at Princeton (B.S., 2015).
I am gratefully funded by the PD Soros Fellowship and BAIR.
News
I met so many inspiring and brilliant peers throughout the interview process, and I am honored to have been selected.
Read more here.
A Srinivas, A Jabri, P Abbeel, S Levine, C Finn.
New paper on learning representations for planning that capture the semantics of visuomotor tasks! [ project page ]
M Baroni, A Joulin, A Jabri, G Kruszewski, A Lazaridou, K Simonic, T Mikolov.
A short paper on the nature of the tasks we are studying in the CommAI project.
A Li, A Jabri, A Joulin, L van der Maaten.
In which we show that a recursive smoothing loss allows us to learn visual representations grounded in compositional phrases, at scale.
Fellow organizers: M Baroni, A Joulin, T Mikolov, G Kruszewski, A Lazaridou, K Simonic.
Joint work with M Baroni, A Joulin, T Mikolov, G Kruszewski, A Lazaridou, K Simonic.
We open-sourced it. Hope it is useful for others!
A Jabri, A Joulin, L van der Maaten.
In which we show that SOTA VQA models may not be learning what we think they are, and that there are even easier ways to cheat. #datasetbias
ajabri at gmail