Recognizing Personal Locations From Egocentric Videos

Antonino Furnari; Giovanni Maria Farinella; Sebastiano Battiato
2017-01-01

Abstract

Contextual awareness in wearable computing makes it possible to build intelligent systems that interact with the user in a more natural way. In this paper, we study how personal locations arising from the user's daily activities can be recognized from egocentric videos. We assume that only a few training samples are available for learning purposes. Considering the diversity of the devices available on the market, we introduce a benchmark dataset containing egocentric videos of 8 personal locations acquired by a user with 4 different wearable cameras. To make our analysis useful in real-world scenarios, we propose a method to reject negative locations, i.e., those not belonging to any of the categories of interest for the end user. We assess the performance of the main state-of-the-art representations for scene and object classification on the considered task, as well as the influence of device-specific factors such as the Field of View (FOV) and the wearing modality. Concerning the device-specific factors, our experiments show that the best results are obtained with a head-mounted, wide-angle device. Our analysis shows the effectiveness of representations based on Convolutional Neural Networks (CNNs), combined with basic transfer learning techniques and an entropy-based rejection algorithm.
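The entropy-based rejection mentioned in the abstract can be illustrated with a minimal sketch: a frame is assigned to one of the known personal locations only when the classifier's posterior distribution is sufficiently peaked, and is rejected as a negative location otherwise. The threshold value and the per-frame decision rule below are illustrative assumptions, not the exact procedure described in the paper.

    import numpy as np

    def entropy(probs, eps=1e-12):
        # Shannon entropy (in nats) of a class-probability vector.
        probs = np.clip(probs, eps, 1.0)
        return -np.sum(probs * np.log(probs))

    def classify_with_rejection(probs, threshold=1.0):
        # Hypothetical rejection rule: if the posterior is too flat
        # (high entropy), reject the frame as a negative location;
        # otherwise return the index of the most likely location.
        if entropy(probs) > threshold:
            return None
        return int(np.argmax(probs))

    # A peaked posterior is accepted; a near-uniform one is rejected.
    print(classify_with_rejection(np.array([0.90, 0.05, 0.03, 0.02])))  # -> 0
    print(classify_with_rejection(np.array([0.25, 0.25, 0.25, 0.25])))  # -> None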
Files in this record:

Recognizing personal locations from egocentric videos.pdf
  Access: archive administrators only
  Type: Publisher's version (PDF)
  License: Creative Commons
  Size: 1.82 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11769/21891
Citations
  • PMC: ND
  • Scopus: 39
  • Web of Science (ISI): 27