Learning value functions in interactive evolutionary multiobjective optimization
GRECO, Salvatore;
2015-01-01
Abstract
This paper proposes an interactive multiobjective evolutionary algorithm (MOEA) that attempts to learn a value function capturing the user's true preferences. At regular intervals, the user is asked to rank a single pair of solutions. This information is used to update the algorithm's internal value function model, and the model is used in subsequent generations to rank solutions that are incomparable according to dominance. This speeds up evolution toward the region of the Pareto front that is most desirable to the user. We take the most general additive value function as a preference model, and we empirically compare different ways to identify the value function that seems most representative of the given preference information, different types of user preferences, and different ways to use the learned value function in the MOEA. Results on a number of different scenarios suggest that the proposed algorithm works well over a range of benchmark problems and types of user preference.
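The core loop described in the abstract — elicit a pairwise comparison from the user, update the value function model, then use the model to rank dominance-incomparable solutions — can be illustrated with a minimal sketch. This is not the paper's procedure: the paper uses a general additive value function, whereas for brevity this sketch uses the simpler linear (weighted-sum) special case with a perceptron-style weight update; all function names are hypothetical, and minimization of all objectives is assumed.

```python
def dominates(a, b):
    """True if solution a Pareto-dominates solution b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_weights(weights, preferred, other, lr=0.1):
    """Perceptron-style update (illustrative, not the paper's method):
    nudge weights so the user-preferred solution scores better, i.e.
    obtains a lower weighted sum, than the other one."""
    new = list(weights)
    for i, (p, o) in enumerate(zip(preferred, other)):
        new[i] += lr * (o - p)        # reward objectives where 'preferred' is better
        new[i] = max(new[i], 0.0)     # keep weights non-negative
    total = sum(new) or 1.0
    return [w / total for w in new]   # renormalize so weights sum to 1

def value(weights, sol):
    """Linear value function: weighted sum of objective values."""
    return sum(w * f for w, f in zip(weights, sol))

# Two solutions that are incomparable under dominance:
a, b = (1.0, 4.0), (3.0, 2.0)
assert not dominates(a, b) and not dominates(b, a)

w = [0.5, 0.5]                                 # start from uniform weights
w = update_weights(w, preferred=a, other=b)    # user ranks a above b
ranked = sorted([a, b], key=lambda s: value(w, s))  # a now ranks first
```

In an MOEA, a ranking like `ranked` would then steer selection among mutually non-dominated individuals toward the user's preferred region of the Pareto front.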
File: Learning-value-functions-in-interactive-evolutionary-multiobjective-optimization2015IEEE-Transactions-on-Evolutionary-Computation.pdf
Access: repository administrators only
Type: Publisher's version (PDF)
Size: 1.78 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.