
Intelligent Real-Time Deep System for Robust Objects Tracking in Low-Light Driving Scenario

Francesco Rundo
2021-01-01

Abstract

The detection of moving objects, animals, or pedestrians, as well as of static objects such as road signs, is one of the fundamental tasks for assisted and self-driving vehicles. The task becomes even more difficult in low-light conditions, such as driving at night or inside road tunnels. Since the objects present in the driving scene represent a significant collision risk, the aim of this scientific contribution is to propose an innovative pipeline for real-time tracking of salient objects in low-light driving scenarios. By combining time-transient cellular non-linear networks with deep architectures based on self-attention, the proposed solution performs real-time enhancement of the low-light driving-scene frames. A downstream deep network learns from the brightness-enhanced frames to identify and segment salient objects through a bounding-box-based approach. The proposed algorithm is currently being ported to a hybrid architecture consisting of an embedded system with an SPC5x Chorus MCU integrated with an automotive-grade system based on an STA1295 MCU core. The performance obtained in the experimental validation phase (accuracy of about 90% and a correlation coefficient of about 0.49) confirms the effectiveness of the proposed method.
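The enhancement stage mentioned in the abstract relies on time-transient cellular non-linear networks (CNNs in the Chua-Yang sense). The paper's actual templates and dynamics are not reported in this record, so the following is only an illustrative sketch of how such a network can brighten a dark frame: the Euler discretization, the templates `A` and `B`, the bias `z`, and the step counts are assumptions for illustration, not the authors' configuration.

```python
import numpy as np

def conv3x3(img, k):
    """3x3 template coupling with edge replication at the frame border."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def cnn_enhance(u, A, B, z, steps=30, dt=0.1):
    """Euler-integrated Chua-Yang cellular non-linear network (sketch).

    u: input frame scaled to [-1, 1]; A: 3x3 feedback template;
    B: 3x3 control template; z: scalar bias.
    State equation: dx/dt = -x + A*y + B*u + z, with the
    piecewise-linear output y = 0.5 * (|x + 1| - |x - 1|).
    """
    x = u.copy()                # state initialized with the input frame
    Bu = conv3x3(u, B)          # control term, constant over the transient
    for _ in range(steps):
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))
        x = x + dt * (-x + conv3x3(y, A) + Bu + z)
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))
```

With a zero feedback template the state simply relaxes toward the biased input `u + z`, i.e. a brightness lift clipped to [-1, 1] by the piecewise-linear output; non-trivial templates add spatial coupling between neighbouring cells, which is what makes the time-transient behaviour useful for frame enhancement.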
Keywords: Assisted driving; Deep learning; Intelligent driving scenario understanding; Low-light driving saliency detection; Low-light self-driving
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/20.500.11769/706651
Citations
  • PMC: not available
  • Scopus: 5
  • Web of Science: 4