Computational studies with equivalent degrees of freedom in neural networks

INGRASSIA, Salvatore;
2009-01-01

Abstract

The notion of equivalent number of degrees of freedom (e.d.f.) has recently been proposed in the context of neural network modeling for small data sets. This quantity is much smaller than the number of parameters in the network, and it does not depend on the number of input variables. In this paper we present numerical studies, on both real and simulated data sets, that support the validity of the e.d.f. in a general framework. The results confirm that, for analyzing and comparing neural models, the e.d.f. performs more reliably than the total number W of adaptive parameters, which common statistical software usually takes as the number of degrees of freedom of the model. The numerical studies also show that the e.d.f. works well for estimating the error variance and for constructing approximate confidence intervals. We then compare several model selection criteria; the results show that, for neural networks, GCV performs slightly better than the others. Finally, we present a simple forward procedure, easy to implement, for automatically selecting a neural model with a good trade-off between learning error and generalization properties.
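For readers who want to see where the e.d.f. enters such computations, the following is a minimal sketch based on the usual regression-style formulas, with the e.d.f. substituted for the raw parameter count W; the substitution is shown here only for illustration and is not quoted from the paper. RSS denotes the residual sum of squares over the n training cases:

    \hat{\sigma}^2 = \mathrm{RSS} / (n - \mathrm{e.d.f.}),
    \qquad
    \mathrm{GCV} = (\mathrm{RSS}/n) / \left(1 - \mathrm{e.d.f.}/n\right)^2

The forward procedure mentioned in the abstract can likewise be sketched as a greedy loop that grows the hidden layer while the GCV score keeps improving. The trainer fit_network and the estimator edf_of below are hypothetical placeholders for the user's own routines, not functions defined in the paper:

    # Sketch (assumption): forward selection over the number of hidden units,
    # scoring each fitted network by GCV computed with the e.d.f. in place of W.
    import numpy as np

    def gcv(residuals, edf):
        """GCV score using the e.d.f. as the measure of model complexity."""
        n = residuals.size
        rss = float(np.sum(residuals ** 2))
        return (rss / n) / (1.0 - edf / n) ** 2

    def forward_select(x, y, fit_network, edf_of, max_hidden=20):
        """Add hidden units while the GCV score keeps decreasing."""
        best = None
        for h in range(1, max_hidden + 1):
            model = fit_network(x, y, hidden_units=h)    # user-supplied trainer
            residuals = y - model.predict(x)
            score = gcv(residuals, edf_of(model))        # user-supplied e.d.f. estimate
            if best is None or score < best[0]:
                best = (score, h, model)
            else:
                break  # stop at the first deterioration of GCV (greedy rule)
        return best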
Keywords: neural models; nested test; degrees of freedom

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11769/5114