
Multi-task Learning by Pareto Optimality

Nicosia G.
2019-01-01

Abstract

Deep Neural Networks (DNNs) are often criticized for their inability to learn more than one task at a time; Multitask Learning is an emerging research area that aims to overcome this limitation. In this work, we introduce the Pareto Multitask Learning framework as a tool for showing how effectively a DNN learns a shared representation common to a set of tasks. We also show experimentally that the optimization process can be extended so that a single DNN simultaneously learns to master two or more Atari games: using a single weight parameter vector, our network obtains sub-optimal results on up to four games.
Year: 2019
ISBN: 978-3-030-37598-0; 978-3-030-37599-7
Keywords: Atari 2600 Games; Deep artificial neural networks; Deep neuroevolution; Evolution Strategy; Hypervolume; Kullback-Leibler Divergence; Multitask learning; Neural and evolutionary computing
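
The record carries no associated files or code. Purely as an illustration of the mechanism the abstract describes, a single shared weight vector evaluated against one objective per task and selected under Pareto dominance, the Python sketch below implements a toy multi-objective evolution strategy. Every name here (evolve, evaluate, the population settings) is hypothetical and not drawn from the paper, and the hypervolume indicator mentioned in the keywords is simplified to a plain non-dominated filter with a summed-score tie-break.

    import numpy as np

    def dominates(a, b):
        """True if per-task score vector a Pareto-dominates b (maximization)."""
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        return bool(np.all(a >= b) and np.any(a > b))

    def pareto_front(scores):
        """Indices of candidates whose score vectors are non-dominated."""
        return [i for i, s in enumerate(scores)
                if not any(dominates(t, s) for j, t in enumerate(scores) if j != i)]

    def evolve(evaluate, dim, pop_size=32, sigma=0.05, generations=200, seed=0):
        """Toy evolution strategy over a single shared weight vector.

        evaluate(w) must return one score per task; each generation keeps a
        non-dominated candidate as the next parent, so no task's objective
        improves the parent at the cost of strictly worse scores elsewhere.
        """
        rng = np.random.default_rng(seed)
        theta = 0.1 * rng.standard_normal(dim)           # shared parameters
        for _ in range(generations):
            pop = theta + sigma * rng.standard_normal((pop_size, dim))
            scores = [evaluate(w) for w in pop]          # one vector per candidate
            front = pareto_front(scores)
            # Tie-break inside the front with the summed score (a crude
            # stand-in for hypervolume-based selection).
            theta = pop[max(front, key=lambda i: sum(scores[i]))]
        return theta

    if __name__ == "__main__":
        # Two synthetic "tasks": reward grows as the weights approach two
        # different target vectors (stand-ins for per-game returns).
        t1, t2 = np.ones(10), -np.ones(10)
        theta = evolve(lambda w: (-np.linalg.norm(w - t1),
                                  -np.linalg.norm(w - t2)), dim=10)
        print("final per-task scores:",
              -np.linalg.norm(theta - t1), -np.linalg.norm(theta - t2))

Because the two synthetic objectives pull the weights toward opposite targets, the surviving parent settles between them: a single parameter vector that trades the tasks off rather than mastering either one, loosely mirroring the sub-optimal shared-representation outcome the abstract reports.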
Files for this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/20.500.11769/414408
Citations
  • Scopus: 2
  • Web of Science: 2