In this work, the authors successfully use Multi-Task Learning (MTL) to boost the training of Reinforcement Learning models (policy learning). Directly applying MTL to RL can suffer from several problems: 1) the gradients from different tasks might conflict with each other, and 2) an easy task might dominate training since it receives rewards more easily and earlier than other tasks. To utilize MTL properly, they learn a distilled (averaged) policy $\pi_0$ that is used to regularize the task-specific policies $\pi_i$.
More specifically, the objective is:

$$
\begin{aligned}
J\big(\pi_0, \{\pi_i\}_{i=1}^n\big) &= \sum_i \mathbb{E}_{\pi_i}\Big[\sum_{t\ge 0}\gamma^t r_i(a_t,s_t) - c_{\mathrm{KL}}\,\gamma^t \log\frac{\pi_i(a_t|s_t)}{\pi_0(a_t|s_t)} - c_{\mathrm{Ent}}\,\gamma^t \log \pi_i(a_t|s_t)\Big] \\
&= \sum_i \mathbb{E}_{\pi_i}\Big[\sum_{t\ge 0}\gamma^t r_i(a_t,s_t) + \frac{\gamma^t\alpha}{\beta}\log \pi_0(a_t|s_t) - \frac{\gamma^t}{\beta}\log \pi_i(a_t|s_t)\Big],
\end{aligned}
\tag{1}
$$

where $\gamma$ is the discount factor, and $c_{\mathrm{KL}}$ and $c_{\mathrm{Ent}}$ are hyperparameters that balance the two regularization terms: the KL term asks the task-specific policies to stay consistent with the distilled policy, while the entropy term prevents each policy from collapsing to a greedy policy that locally maximizes expected return. The second line is just a transformation of the first, with $\alpha = c_{\mathrm{KL}}/(c_{\mathrm{KL}}+c_{\mathrm{Ent}})$ and $\beta = 1/(c_{\mathrm{KL}}+c_{\mathrm{Ent}})$.
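To make this concrete, here is a minimal sketch (my own, not from the paper) of one sampled term of objective (1) for a discrete action space; `pi_i` and `pi_0` are hypothetical probability vectors over the actions available at $s_t$:

```python
import numpy as np

def distral_step_term(r_t, pi_i, pi_0, a_t, t, gamma, c_kl, c_ent):
    """One sampled per-transition term of the Distral objective (eq. 1).

    r_t   : reward received on task i at time t
    pi_i  : task-i action probabilities at s_t, shape (num_actions,)
    pi_0  : distilled-policy action probabilities at s_t, shape (num_actions,)
    a_t   : index of the action actually taken
    """
    disc = gamma ** t
    log_ratio = np.log(pi_i[a_t]) - np.log(pi_0[a_t])  # sample of log(pi_i / pi_0), the KL term
    log_pi_i = np.log(pi_i[a_t])                       # sample of log pi_i, the entropy term
    return disc * (r_t - c_kl * log_ratio - c_ent * log_pi_i)
```

Summing such terms over time and tasks (in expectation under $\pi_i$) recovers the objective above; note that both penalties are discounted along with the reward.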
Not limited to optimizing equation (1) alternately over $\pi_0$ and $\{\pi_i\}$, which resembles EM, they also propose an improved parameterization that trains $\theta_0$ and $\{\theta_i\}$ simultaneously. They use the form of the optimal Boltzmann policy to build the framework. The estimated distilled policy is parameterized by

$$\hat{\pi}_0(a_t|s_t) = \frac{\exp\big(h_{\theta_0}(a_t|s_t)\big)}{\sum_{a'}\exp\big(h_{\theta_0}(a'|s_t)\big)},$$

the soft advantage on task $i$ is estimated via $f_{\theta_i}(a_t|s_t)$, and the resulting task policy takes the form

$$\hat{\pi}_i(a_t|s_t) = \frac{\hat{\pi}_0(a_t|s_t)^{\alpha}\,\exp\big(\beta f_{\theta_i}(a_t|s_t)\big)}{\sum_{a'}\hat{\pi}_0(a'|s_t)^{\alpha}\,\exp\big(\beta f_{\theta_i}(a'|s_t)\big)}.$$

This two-column design resembles traditional multi-task methods, where a shared component is adjusted by a task-specific component. Diving into the gradient of $J$ w.r.t. $\theta_0$,

$$\frac{\partial J}{\partial \theta_0} = \sum_i \mathbb{E}_{\hat{\pi}_i}\Big[\sum_{t\ge 0}\frac{\gamma^t\alpha}{\beta}\sum_{a'}\big(\hat{\pi}_i(a'|s_t) - \hat{\pi}_0(a'|s_t)\big)\,\nabla_{\theta_0} h_{\theta_0}(a'|s_t)\Big],$$

we can see that the optimal solution for $\hat{\pi}_0$ is the centroid of all task policies $\hat{\pi}_i$: the gradient vanishes when $\hat{\pi}_0$ matches the task policies averaged over the visited states. Compared with other multi-task methods, which transfer knowledge in the space of parameters (by sharing or regularizing parameters), the proposed method operates in the space of policies.
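For a discrete action space, the two-column combination can be sketched numerically as follows (function and variable names are my own; in the paper $h_{\theta_0}$ and $f_{\theta_i}$ are outputs of neural networks):

```python
import numpy as np

def softmax(x):
    z = x - x.max()              # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distilled_policy(h0_logits):
    """pi_0_hat(.|s_t) computed from the shared column's logits h_theta0(.|s_t)."""
    return softmax(h0_logits)

def task_policy(h0_logits, f_i, alpha, beta):
    """Two-column policy: pi_i_hat proportional to pi_0_hat^alpha * exp(beta * f_i)."""
    pi0 = distilled_policy(h0_logits)
    combined_logits = alpha * np.log(pi0) + beta * f_i  # shared column adjusted by task column
    return softmax(combined_logits)
```

Setting $\alpha = 0$ recovers an independent softmax policy per task, while larger $\alpha$ pulls each task policy more strongly toward the distilled prior.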
Compared with the baselines, Distral converges much faster and reaches better final performance. The ablation experiments are comprehensive, and the comparison is threefold: 1) KL regularization vs. entropy regularization, 2) alternating training vs. joint optimization, and 3) separate policies vs. the two-column distilled policy.
[DR013]: Directions of future research are already given in the paper:
- Combining Distral with techniques which use auxiliary losses [12, 15, 14]
- Exploring use of multiple distilled policies or latent variables in the distilled policy to allow for more diversity of behaviours
- Exploring settings for continual learning where tasks are encountered sequentially
- And exploring ways to adaptively adjust the KL and entropy costs to better control the amounts of transfer and exploration.
Apart from those, I think there are several other detailed perspectives we could investigate:
- The Distral model distills the policy on every state $s_t$, which is sub-optimal. We should pay greater attention to those states where the policy is agreed upon by more than $K$ tasks, where $K$ is a threshold. This is because on states where different task policies disagree with each other, the optimal $\pi_0$ is a misleading policy that fails to standardize the diverse policies (see the first sketch after this list).
- The distilled policy should interact not only with the current task-specific policy $\pi_i(a_t|s_t)$ but also with the context, namely the recent history $(s_{t-n+1}, a_{t-n+1}, \ldots, a_{t-1}, s_t)$. I don't know the best way to formulate this. One possible idea is to change the regularization term from
  $$c_{\mathrm{KL}}\,\gamma^t\log\frac{\pi_i(a_t|s_t)}{\pi_0(a_t|s_t)}$$
  to
  $$c_{\mathrm{KL}}\,\gamma^t\log\frac{\pi_i(a_t|s_t)}{\pi_0(a_t|s_{t-n+1}, a_{t-n+1}, \ldots, a_{t-1}, s_t)}.$$
  It is like an N-gram model in NLP in that we build the shared policy over multiple steps. You can achieve this by including the previous positions/actions in the definition of "state" (see the second sketch after this list).
- Transferring across different environments is more meaningful and convincing than transferring between different tasks in the same environment, since in the latter setting we might gain improvement simply because multiple agents explore the same map simultaneously. It would be more general to design experiments where the input spaces differ across tasks (but the action space might be the same); the shared policy could then take an intermediate feature representation as input.
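A purely illustrative sketch of the consensus gate suggested in the first bullet (my proposal, not part of Distral), using greedy-action agreement as the consensus measure and a hypothetical threshold `K`:

```python
import numpy as np

def should_distill(task_policies_at_s, K):
    """Return True if at least K task policies agree on this state.

    task_policies_at_s : array of shape (num_tasks, num_actions) holding each
                         task-specific policy's action probabilities at the same state.
    Agreement is measured here by matching greedy actions; a pairwise-KL
    threshold would be a smoother alternative.
    """
    greedy_actions = task_policies_at_s.argmax(axis=1)
    votes = np.bincount(greedy_actions, minlength=task_policies_at_s.shape[1])
    return votes.max() >= K

# The distillation (KL) loss would then only be applied on states where
# should_distill(...) is True, so that pi_0 is not dragged toward a
# misleading average on states where the tasks genuinely disagree.
```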
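And a toy sketch of the history-augmented state from the second bullet, where the distilled policy conditions on a window of the last $n$ (observation, action) pairs rather than the current observation alone (again my own illustration, analogous to an N-gram context):

```python
from collections import deque

class HistoryAugmentedState:
    """Maintain an n-step window of (observation, previous_action) pairs
    that the distilled policy pi_0 would condition on."""

    def __init__(self, n):
        self.window = deque(maxlen=n)

    def reset(self, obs):
        self.window.clear()
        self.window.append((obs, None))   # no previous action at the start of an episode
        return self.state()

    def step(self, obs, prev_action):
        self.window.append((obs, prev_action))
        return self.state()

    def state(self):
        # The "state" seen by pi_0: the whole recent window, not just the last observation.
        return tuple(self.window)
```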