Sensitivity Analysis for Deep Learning: Ranking Hyper-parameter Influence

Nicosia G.
2021-01-01

Abstract

We present a novel approach to ranking Deep Learning (DL) hyper-parameters through the application of Sensitivity Analysis (SA). DL hyper-parameter tuning is crucial to model accuracy; however, choosing optimal values for each parameter is time- and resource-intensive. SA provides a quantitative measure by which hyper-parameters can be ranked in terms of their contribution to model accuracy. Learning rate decay was ranked highest, with model performance being sensitive to this parameter regardless of architecture or dataset. The influence of a model's initial learning rate was found to be low, contrary to the literature. Additionally, the importance of a parameter is closely linked to model architecture: shallower models were susceptible to hyper-parameters affecting the stochasticity of the learning process, whereas deeper models were sensitive to hyper-parameters affecting convergence speed. Furthermore, the complexity of the dataset can affect the margin of separation between the sensitivity measures of the most and least influential parameters, making the most influential hyper-parameter an ideal candidate for tuning.
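The abstract does not name the specific SA method used to compute the sensitivity measures. The sketch below illustrates the general idea only: ranking hyper-parameters by a variance-based (Sobol) total-order sensitivity index, assuming Python with the SALib library. The hyper-parameter names, their bounds, and the `surrogate_accuracy` stand-in for actually training a network are all illustrative assumptions, not details from the paper.

```python
# Minimal sketch: rank hyper-parameters by Sobol total-order index (SALib).
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical hyper-parameter space; names and bounds are assumptions.
problem = {
    "num_vars": 4,
    "names": ["learning_rate", "lr_decay", "momentum", "batch_size"],
    "bounds": [[1e-4, 1e-1], [0.0, 0.1], [0.5, 0.99], [16, 256]],
}

def surrogate_accuracy(x):
    """Toy stand-in for train-then-evaluate; a real study would train the
    model at these hyper-parameter values and return validation accuracy."""
    lr, decay, momentum, batch = x
    # Arbitrary smooth response: strongest dependence on decay, weaker on lr;
    # momentum and batch_size are ignored here, so they should rank lowest.
    return 0.9 - 2.0 * (decay - 0.05) ** 2 - 0.1 * abs(np.log10(lr) + 2) / 3

# Saltelli sampling for Sobol analysis; recent SALib expects a power-of-2 N.
X = saltelli.sample(problem, 256)
Y = np.array([surrogate_accuracy(x) for x in X])
Si = sobol.analyze(problem, Y)

# Rank hyper-parameters by total-order sensitivity index S_T (descending).
ranking = sorted(zip(problem["names"], Si["ST"]), key=lambda t: -t[1])
for name, st in ranking:
    print(f"{name:>14s}  S_T = {st:.3f}")
```

In a real experiment each sample point requires a full training run, so the sample budget (and hence the choice of SA method) dominates the cost; screening methods such as Morris are a common cheaper alternative to full Sobol analysis.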
ISBN: 978-1-6654-0898-1
Deep Learning
Hyper-parameter Influence
Hyper-parameter rank
Hyperparameter Tuning
Sensitivity Analysis
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11769/521678
Citations
  • PMC: not available
  • Scopus: 12
  • Web of Science (ISI): 11