Distillo Framework: A Novel Knowledge Distillation Platform for Advanced Intelligent Solution Deployment over Automotive-Grade Devices
Castagnolo G.; Battiato S.; Rundo F.
2025-01-01
Abstract
The automotive industry is rapidly evolving with the integration of AI-powered functionalities to enhance safety, driving experience, and operational efficiency. However, deploying deep learning models on embedded automotive hardware is challenging due to power, latency, and computational constraints. Knowledge Distillation (KD) emerges as a key solution: it transfers knowledge from large teacher models to compact student models, optimizing computational efficiency while maintaining performance. This study introduces DISTILLO, an advanced knowledge distillation framework designed for image classification in embedded automotive systems. The framework proposes two configurations: DISTILLO-Classical, which employs a dynamically weighted KD strategy, and DISTILLO-Ensemble, which combines multiple distilled models through a voting mechanism to improve accuracy. Experimental results on the CIFAR-100 dataset with VGG-11 and ResNet-18 student models showed significant accuracy improvements over traditional KD, from 62.03% to 74.65% for the former and from 62.67% to 73.46% for the latter. The framework stands out for its scalability and computational efficiency, making it suitable for deployment in resource-constrained embedded automotive systems.
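The two configurations can be illustrated with a minimal, framework-agnostic sketch. The NumPy snippet below is not the paper's implementation: `kd_loss` shows the standard distillation objective (hard-label cross-entropy blended with a temperature-softened KL term scaled by T²), where the blending factor `alpha` is a plain parameter here but is what a dynamically weighted strategy like DISTILLO-Classical would adjust during training (the abstract does not specify the schedule); `majority_vote` sketches the kind of per-sample voting a configuration like DISTILLO-Ensemble could apply across several distilled students.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-softened softmax; subtract the max for numerical stability.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.5):
    # Hard-label term: cross-entropy of the student's ordinary softmax.
    p_student = softmax(student_logits)
    ce = -np.log(p_student[label])
    # Soft-label term: KL divergence between temperature-softened
    # teacher and student distributions, scaled by T^2 as in standard KD.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)))
    return alpha * ce + (1.0 - alpha) * (T ** 2) * kl

def majority_vote(predictions):
    # predictions: per-model predicted class indices for one sample.
    counts = np.bincount(np.asarray(predictions))
    return int(counts.argmax())
```

For example, `kd_loss` with `alpha=1.0` reduces to plain cross-entropy, and with identical teacher and student logits the KL term vanishes; `majority_vote([1, 2, 1, 0])` returns `1`.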