Analyzing networks-on-chip based deep neural networks

Ascia G.; Catania V.; Monteleone S.; Palesi M.; Patti D.
2019-01-01

Abstract

One of the most promising architectures for performing deep neural network inference on resource-constrained embedded devices is based on massively parallel, specialized cores interconnected by means of a Network-on-Chip (NoC). In this paper, we extensively evaluate NoC-based deep neural network accelerators by exploring the design space spanned by several architectural parameters. We show that latency is dominated mainly by on-chip communication, whereas energy consumption is accounted for mainly by memory accesses (both on-chip and off-chip).
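To give a rough flavor of the kind of design-space exploration the abstract describes, the following is a minimal first-order sketch in Python. It is not the evaluation framework used in the paper: every cost constant, parameter name, and workload figure below is an assumption invented for illustration.

```python
from itertools import product

# Illustrative per-event cost constants: assumptions for this sketch,
# not measured values from the paper.
E_HOP_PJ    = 0.9    # energy per flit-hop on the NoC (pJ)
E_SRAM_PJ   = 5.0    # energy per on-chip SRAM access (pJ)
E_DRAM_PJ   = 640.0  # energy per off-chip DRAM access (pJ)
CYC_PER_HOP = 2      # router + link traversal per hop (cycles)

def evaluate(mesh_dim, sram_kb, workload):
    """First-order latency/energy estimate for one inference on a
    mesh_dim x mesh_dim NoC; `workload` is an assumed traffic summary."""
    avg_hops = 2 * mesh_dim / 3  # mean Manhattan distance on a square mesh
    noc_cycles = workload["flits"] * avg_hops * CYC_PER_HOP
    noc_energy_pj = workload["flits"] * avg_hops * E_HOP_PJ
    # Crude reuse model: the fraction of the working set that fits in
    # on-chip SRAM is served locally; the remainder spills to DRAM.
    hit = min(1.0, sram_kb / workload["working_set_kb"])
    mem = workload["mem_accesses"]
    mem_energy_pj = mem * hit * E_SRAM_PJ + mem * (1 - hit) * E_DRAM_PJ
    return noc_cycles, noc_energy_pj + mem_energy_pj

# Sweep a small slice of the design space: mesh size x SRAM budget.
workload = {"flits": 4.0e6, "mem_accesses": 2.0e6, "working_set_kb": 512}
for mesh_dim, sram_kb in product((4, 8, 16), (128, 256, 512)):
    cycles, energy_pj = evaluate(mesh_dim, sram_kb, workload)
    print(f"{mesh_dim}x{mesh_dim} mesh, {sram_kb} kB SRAM: "
          f"{cycles:.3g} cycles, {energy_pj / 1e6:.2f} uJ")
```

Even this toy model shows the qualitative trend the abstract reports: growing the mesh inflates the communication term that drives latency, while the DRAM term dominates energy until the on-chip SRAM budget covers the working set.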
Year: 2019
ISBN: 9781450367004
Keywords: Deep Neural Network; Design space exploration; Network-on-Chip; Performance and energy evaluation
Files in this item:

File: 2019_NOCS2019.pdf (access restricted to archive administrators)
Type: Publisher's version (PDF)
Size: 582.97 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11769/382964
Citations
  • Scopus: 6