Using the eyes to "see" the objects

Spampinato, Concetto; Palazzo, S.; Giordano, Daniela
2015-01-01

Abstract

This paper investigates how to exploit eye gaze data for understanding visual content. In particular, we propose a human-in-the-loop approach for object segmentation in videos, where humans provide significant cues on spatio-temporal relations between object parts (i.e., superpixels in our approach) by simply looking at video sequences. Such constraints, together with object appearance properties, are encoded into an energy function so as to tackle the segmentation problem as a labeling one. The proposed method uses gaze data from only two people and was tested on two challenging visual benchmarks: 1) SegTrack v2 and 2) FBMS-59. The achieved performance shows that our method outperforms more complex video object segmentation approaches, while reducing the effort needed to collect human feedback.
ISBN: 978-145033459-4
Human-in-the-loop; Gaze data; Object segmentation
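To make the abstract's formulation concrete, the following is a minimal, hypothetical sketch of casting segmentation as energy minimization over superpixel labels: a unary term favors labeling heavily fixated superpixels as object, and a pairwise term discourages label disagreement between similar-looking neighbors. All names (features, gaze, LAMBDA), the toy chain adjacency, and the iterated-conditional-modes optimizer are illustrative assumptions, not the paper's actual formulation or solver.

    import numpy as np

    # Hypothetical toy setup: N superpixels, each with an appearance feature
    # (e.g., mean color) and a gaze score (normalized fixation density).
    rng = np.random.default_rng(0)
    N = 12
    features = rng.random((N, 3))                   # stand-in appearance descriptors
    gaze = rng.random(N)                            # stand-in fixation densities in [0, 1]
    neighbors = [(i, i + 1) for i in range(N - 1)]  # toy adjacency (a chain)

    LAMBDA = 0.5  # weight of the pairwise smoothness term (assumed value)

    def unary(i, label):
        """Cost of giving superpixel i `label` (1 = object, 0 = background).
        Heavily fixated superpixels are cheap to label as object."""
        return (1.0 - gaze[i]) if label == 1 else gaze[i]

    def pairwise(i, j, li, lj):
        """Penalize label disagreement between similar-looking neighbors."""
        if li == lj:
            return 0.0
        return np.exp(-np.linalg.norm(features[i] - features[j]))

    def energy(labels):
        e = sum(unary(i, labels[i]) for i in range(N))
        e += LAMBDA * sum(pairwise(i, j, labels[i], labels[j])
                          for i, j in neighbors)
        return e

    # Minimize with iterated conditional modes, a simple local optimizer
    # used here purely for illustration.
    labels = (gaze > 0.5).astype(int)  # initialize from gaze alone
    for _ in range(10):
        for i in range(N):
            costs = []
            for l in (0, 1):
                trial = labels.copy()
                trial[i] = l
                costs.append(energy(trial))
            labels[i] = int(np.argmin(costs))

    print("final labels:", labels, "energy:", round(energy(labels), 3))

In this sketch the gaze data plays the role the abstract describes for human feedback: it seeds the labeling and biases the unary costs, while the appearance-based pairwise term propagates those cues to neighboring superpixels.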
File attached to this record:
File: using the eyes-ACM2015.pdf
Format: Adobe PDF
Size: 8.13 MB
Access: restricted (archive administrators only)
License: not specified

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11769/73330
Citations
  • PMC: not available
  • Scopus: 2
  • Web of Science: 2