Multi-Sensor Data Fusion / Rotondo, Tiziana. - (2020 Feb 17).

Multi-Sensor Data Fusion

ROTONDO, TIZIANA
2020-02-17

Abstract

Nowadays, vast amounts of information, such as images, videos, audio signals, sensor data, etc., can be collected by many devices. The idea of multi-sensor data fusion is to combine the data coming from different sensors to provide more accurate information than a single sensor alone could. Multi-sensor systems have been proposed to emulate the human capability of combining all the senses to capture information. The goal of Multimodal Learning is therefore to create models able to process semantically related information from different modalities, building a shared representation that achieves higher accuracy than could be obtained from a single input. In other words, the challenge is to construct an embedding space in which correlated objects lie close to each other. Humans can also anticipate future actions, because the brain decodes all the information it receives in order to understand future occurrences and make decisions. The overall design of machines that can anticipate future actions is still an open issue in Computer Vision. To contribute to ongoing research in this area, the goal of this thesis is to analyse how to build a shared representation of data coming from different domains, such as images, audio signals, heart rate, acceleration, etc., in order to anticipate the daily activities of a user wearing multimodal sensors. To our knowledge, there are no results in the state of the art on action anticipation from multimodal data, so the prediction accuracy of the tested models is compared against classic action classification, which is taken as a baseline. Results demonstrate that the presented system is effective in predicting activities from an unknown observation and suggest that multimodality improves both classification and prediction in some cases. This confirms that data from different sensors can be exploited to enhance the representation of the surrounding context, similarly to what happens for human beings, who process information coming from their eyes, ears, skin, etc. to obtain a global and more reliable view of the surrounding world.
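As a rough illustration of the shared-embedding idea described in the abstract (this sketch is not taken from the thesis; the encoder sizes, margin value, feature dimensions and contrastive loss are all illustrative assumptions), two modality-specific encoders can be trained so that embeddings of correlated image and wearable-sensor samples end up close to each other in a common space:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Maps one modality (e.g. visual or wearable-sensor features) into the shared space."""
    def __init__(self, in_dim, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

def contrastive_loss(a, b, margin=0.5):
    """Pull matched (same-row) pairs together; push mismatched (shifted) pairs at least `margin` apart."""
    pos = (a - b).pow(2).sum(dim=-1)                  # distances of correlated pairs
    neg = (a - b.roll(1, dims=0)).pow(2).sum(dim=-1)  # distances of uncorrelated pairs
    return (pos + F.relu(margin - neg)).mean()

# Toy usage: 512-dim image features and 64-dim sensor features (illustrative sizes).
img_enc, sen_enc = ModalityEncoder(512), ModalityEncoder(64)
opt = torch.optim.Adam(list(img_enc.parameters()) + list(sen_enc.parameters()), lr=1e-3)

img_feats = torch.randn(32, 512)  # synthetic mini-batch of paired samples
sen_feats = torch.randn(32, 64)
loss = contrastive_loss(img_enc(img_feats), sen_enc(sen_feats))
opt.zero_grad()
loss.backward()
opt.step()

Once such a space is learned, classification or anticipation of an unseen observation can be carried out, for example, by nearest-neighbour search or a classifier operating on the fused embeddings.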
Keywords: multimodal learning, action anticipation
Files in this item:
Tesi di dottorato - ROTONDO TIZIANA 20191130102922.pdf
Type: Doctoral thesis
Access: open access
Licence: PUBLIC - Public with copyright
Size: 5 MB
Format: Adobe PDF


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11769/581298