Deep Reinforcement Learning
tomzahavy (at) gmail (dot) com
I am a research scientist at DeepMind, working on Reinforcement Learning. I come from a small town in 🇮🇱 on the Mediterranean Sea. I currently live in London 🇬🇧 and have spent some time in the 🇺🇸. My family comes from 🇩🇪🇮🇩🇱🇺, and by DNA I am 🇮🇩🇮🇹🇮🇷 (50/30/20). I am married to Gili, a singer-songwriter from 🇮🇩🇲🇦🇮🇱. I love spending my free time outdoors: camping, hiking, 4x4 driving, mountaineering, skiing, and scuba diving. At home, my hobbies are running, basketball, and reading science fiction. I completed my Ph.D. at the Technion, where I was advised by Shie Mannor, and interned at Microsoft, Walmart, Facebook, and Google.
My high-level research goal is to build artificial intelligence via Reinforcement Learning. In my Ph.D. I studied aspects of scalability, structure discovery, hierarchy, abstraction, and exploration in deep RL. Since joining the Discovery team @DeepMind, I have focused on two topics:
Building reinforcement learning algorithms that discover an internal knowledge base (hyperparameters, loss functions, options, rewards) in order to solve the original problem better.
A Self-Tuning Actor-Critic Algorithm, NeurIPS 2020
Tl;dr: We propose a self-tuning actor-critic algorithm (STACX) that adapts all the differentiable hyperparameters of IMPALA, including those of auxiliary tasks, and achieves significant performance gains on Atari and DM Control.
Tom Zahavy, Zhongwen Xu, Vivek Veeriah, Matteo Hessel, Junhyuk Oh, Hado van Hasselt, David Silver, Satinder Singh
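The core self-tuning idea can be illustrated with a toy sketch: differentiate the post-update loss with respect to a hyperparameter of the inner update and adjust it by gradient descent. This is not the actual STACX implementation; the 1-D quadratic inner loss and hand-derived gradients are illustrative assumptions chosen to keep the example self-contained.

```python
# Toy sketch of meta-gradient self-tuning (illustrative assumptions only;
# the real algorithm tunes the differentiable hyperparameters of IMPALA).

def meta_step(theta, eta, meta_lr=0.01):
    """One inner SGD step on L(theta) = theta^2, then a meta-update of eta."""
    grad = 2.0 * theta                    # dL/dtheta
    theta_new = theta - eta * grad        # inner update, differentiable in eta
    # Meta-gradient: d L(theta_new) / d eta = 2 * theta_new * (-grad)
    meta_grad = 2.0 * theta_new * (-grad)
    return theta_new, eta - meta_lr * meta_grad

theta, eta = 5.0, 0.05                    # poorly chosen initial learning rate
for _ in range(50):
    theta, eta = meta_step(theta, eta)
# eta self-tunes towards the rate that minimises the post-update loss
```

After a few dozen steps the learning rate settles near the value that drives the inner loss to its minimum in a single update, which is the behaviour self-tuning aims for.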
Bootstrapped Meta Learning, ICLR 2022 (outstanding paper award)
Tl;dr: We propose a novel meta-learning algorithm that first bootstraps a target from the meta-learner, then optimises the meta-learner by minimising the distance to that target. When applied to STACX, it achieves state-of-the-art results on Atari.
Sebastian Flennerhag, Yannick Schroecker, Tom Zahavy, Hado van Hasselt, David Silver, Satinder Singh
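The bootstrapping step above can be sketched in a toy setting: run a few extra inner updates to produce a target (treated as a constant), then meta-update towards it. The 1-D quadratic loss and the hand-derived meta-gradient are assumptions for illustration; the paper applies this to the meta-learner of a full RL agent.

```python
# Toy sketch of bootstrapped meta-learning (illustrative assumptions only).

def bootstrapped_meta_step(theta, eta, bootstrap_steps=2, meta_lr=0.005):
    grad = 2.0 * theta                    # gradient of L(theta) = theta^2
    theta1 = theta - eta * grad           # one inner step, differentiable in eta
    # Bootstrap a target: a few more inner steps, treated as a constant
    target = theta1
    for _ in range(bootstrap_steps):
        target = target - eta * 2.0 * target
    # Meta-update: minimise the squared distance from theta1 to the target
    meta_grad = 2.0 * (theta1 - target) * (-grad)
    return theta1, eta - meta_lr * meta_grad

theta, eta = 5.0, 0.1
for _ in range(40):
    theta, eta = bootstrapped_meta_step(theta, eta)
```

The appeal of the bootstrapped target is that it conveys where further optimisation would lead without having to differentiate through the extra steps.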
Discovery of Options via Meta-Learned Subgoals, NeurIPS 2021
Tl;dr: We use meta-gradients to discover subgoals in the form of intrinsic rewards, use these subgoals to learn options, and control the options with a hierarchical RL (HRL) policy.
Vivek Veeriah, Tom Zahavy, Matteo Hessel, Zhongwen Xu, Junhyuk Oh, Iurii Kemaev, Hado van Hasselt, David Silver, Satinder Singh
Meta Gradients in Non-Stationary Environments, CoLLAs 2022 (Oral)
Tl;dr: We study meta-gradients in non-stationary RL environments.
Jelena Luketina, Sebastian Flennerhag, Yannick Schroecker, David Abel, Tom Zahavy, Satinder Singh
Nonlinear RL problems
RL objectives expressed as nonlinear functions of the state occupancy, and algorithms for solving them.
Reward is enough for convex MDPs, NeurIPS 2021 (spotlight)
Tl;dr: We study nonlinear and unsupervised objectives that are defined over the state occupancy of an RL agent in an MDP. These include apprenticeship learning, diverse skill discovery, constrained MDPs, and pure exploration. We show that maximising the gradient of such an objective, used as an intrinsic reward, solves the problem efficiently. We also propose a meta-algorithm and show that many existing algorithms in the literature can be explained as instances of it.
Tom Zahavy, Brendan O'Donoghue, Guillaume Desjardins, Satinder Singh
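A minimal sketch of the intrinsic-reward idea, under simplifying assumptions: we optimise directly over the state-occupancy simplex, a greedy "best response" vertex stands in for an RL oracle, and maximum-entropy exploration serves as the convex objective (the paper treats general convex MDPs).

```python
# Frank-Wolfe-style sketch: the intrinsic reward is the negative gradient of
# a convex objective over state occupancies (illustrative assumptions only).
import math

def entropy_objective_grad(d):
    # f(d) = sum_i d_i log d_i (negative entropy, convex); grad_i = log d_i + 1
    return [math.log(p) + 1.0 for p in d]

def solve(d, iters=500):
    for k in range(iters):
        # Intrinsic reward: negative gradient of the convex objective
        reward = [-g for g in entropy_objective_grad(d)]
        best = max(range(len(d)), key=lambda i: reward[i])  # greedy "policy"
        alpha = 2.0 / (k + 3.0)                             # diminishing step
        d = [(1 - alpha) * p + (alpha if i == best else 0.0)
             for i, p in enumerate(d)]
    return d

d = solve([0.8, 0.1, 0.1])  # occupancy converges towards uniform
```

Each iteration rewards the least-visited states, so the mixture of greedy responses drives the occupancy towards the entropy-maximising (uniform) distribution.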
Discovering a set of policies for the worst case reward, ICLR 2021 (spotlight)
Tl;dr: We propose a method for discovering a set of policies that perform well with respect to the worst-case reward when composed together.
Discovering Diverse Nearly Optimal Policies with Successor Features
Tl;dr: We propose a method for discovering policies that are diverse in the space of Successor Features, while ensuring that they are near-optimal using a constrained MDP.
Tom Zahavy, Brendan O'Donoghue, Andre Barreto, Volodymyr Mnih, Sebastian Flennerhag, Satinder Singh