
Learning value functions in interactive evolutionary multiobjective optimization

GRECO, Salvatore;
2015

Abstract

This paper proposes an interactive multiobjective evolutionary algorithm (MOEA) that attempts to learn a value function capturing the user's true preferences. At regular intervals, the user is asked to rank a single pair of solutions. This information is used to update the algorithm's internal value function model, and the model is used in subsequent generations to rank solutions that are incomparable according to the dominance relation. This speeds up evolution toward the region of the Pareto front that is most desirable to the user. We adopt the most general additive value function as the preference model, and we empirically compare different ways to identify the value function most representative of the given preference information, different types of user preferences, and different ways to use the learned value function within the MOEA. Results on a number of scenarios suggest that the proposed algorithm works well across a range of benchmark problems and types of user preference.
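The interactive loop sketched in the abstract can be illustrated with a toy example. This is not the paper's method: the paper learns a general additive value function, while the sketch below simplifies it to a linear value v(f) = w·f1 + (1−w)·f2 with a single weight, and the "user" is simulated by a hidden weight `W_TRUE` — both are assumptions made purely for the demo. At each query the model narrows the interval of weights consistent with all pairwise answers so far, then uses a representative weight to rank dominance-incomparable solutions.

```python
# Minimal sketch, under the simplifying assumptions stated above: a linear
# value function stands in for the paper's general additive model, and the
# decision maker is simulated by a hidden weight W_TRUE.
import random

def dominates(a, b):
    """a dominates b (minimization): no worse in all objectives, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def learned_value(w, f):
    """Simplified linear value function (smaller is better)."""
    return w * f[0] + (1 - w) * f[1]

def update_weight_interval(lo, hi, preferred, other):
    """Shrink [lo, hi] to the weights for which `preferred` gets a smaller
    value than `other`: w*d1 + (1-w)*d2 < 0, with d_i = preferred_i - other_i."""
    d1 = preferred[0] - other[0]
    d2 = preferred[1] - other[1]
    a = d1 - d2
    if abs(a) < 1e-12:
        return lo, hi              # this pair carries no information about w
    bound = -d2 / a
    return (lo, min(hi, bound)) if a > 0 else (max(lo, bound), hi)

random.seed(1)
W_TRUE = 0.4                       # hidden preference of the simulated user

def user_prefers(a, b):
    """Simulated decision maker: ranks the pair by the true (hidden) value."""
    return a if learned_value(W_TRUE, a) < learned_value(W_TRUE, b) else b

# Mutually nondominated candidates on the convex front f2 = (1 - f1)^2,
# i.e. solutions that dominance alone cannot rank.
front = [(i / 10, (1 - i / 10) ** 2) for i in range(11)]
assert not any(dominates(a, b) for a in front for b in front)

lo, hi = 0.0, 1.0
for _ in range(8):                 # "at regular intervals" ask about one pair
    a, b = random.sample(front, 2)
    pref = user_prefers(a, b)
    lo, hi = update_weight_interval(lo, hi, pref, b if pref is a else a)

w_hat = (lo + hi) / 2              # a representative consistent weight
ranked = sorted(front, key=lambda f: learned_value(w_hat, f))
```

The learned model then ranks otherwise-incomparable solutions, which is what steers selection toward the user's preferred region of the Pareto front in subsequent generations.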

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11769/18130
Citations
  • PMC: ND
  • Scopus: 67
  • Web of Science: 50