Using the eyes to "see" the objects
Spampinato, Concetto; Palazzo, S.; Giordano, Daniela
2015-01-01
Abstract
This paper investigates how to exploit eye gaze data for understanding visual content. In particular, we propose a human-in-the-loop approach for object segmentation in videos, where humans provide significant cues on spatio-temporal relations between object parts (i.e. superpixels in our approach) by simply looking at video sequences. Such constraints, together with object appearance properties, are encoded into an energy function so as to tackle the segmentation problem as a labeling one. The proposed method uses gaze data from only two people and was tested on two challenging visual benchmarks: 1) SegTrack v2 and 2) FBMS-59. The achieved performance showed that our method outperformed more complex video object segmentation approaches, while reducing the effort needed for collecting human feedback.
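The labeling formulation described above can be illustrated with a minimal sketch. This is not the authors' exact formulation: the energy below is a generic unary-plus-pairwise (Potts-style) cost over superpixel labels, where the unary term stands in for the object appearance properties and the pairwise weights stand in for gaze-derived spatio-temporal affinities between superpixels. All numbers and names (`unary`, `gaze_weight`, the toy instance) are illustrative assumptions.

```python
from itertools import product

def energy(labels, unary, edges, gaze_weight):
    """Energy of a binary labeling over superpixels.

    labels      : tuple of 0/1 labels, one per superpixel
                  (0 = background, 1 = object)
    unary       : per-superpixel appearance costs, unary[i][l]
    edges       : (i, j) pairs of related superpixels
    gaze_weight : (i, j) -> affinity suggested by gaze data
    """
    # Appearance (unary) term: cost of assigning label l_i to superpixel i.
    e = sum(unary[i][labels[i]] for i in range(len(labels)))
    # Pairwise Potts term: penalize giving different labels to pairs
    # that the gaze-derived constraints link strongly.
    e += sum(gaze_weight[(i, j)] for (i, j) in edges
             if labels[i] != labels[j])
    return e

# Toy instance: 3 superpixels.
unary = [[0.2, 0.8],   # superpixel 0 looks like background
         [0.9, 0.1],   # superpixel 1 looks like object
         [0.7, 0.3]]   # superpixel 2 leans object
edges = [(0, 1), (1, 2)]
gaze_weight = {(0, 1): 0.5, (1, 2): 2.0}  # gaze ties 1 and 2 together

# Exhaustive search over the 2^3 labelings; real systems would use
# graph cuts or a similar solver instead.
best = min(product((0, 1), repeat=3),
           key=lambda L: energy(L, unary, edges, gaze_weight))
print(best)  # → (0, 1, 1): superpixels 1 and 2 join the object
```

Note how the strong gaze link between superpixels 1 and 2 pulls superpixel 2 into the object label even though its appearance cost alone is ambiguous; this is the role the gaze-derived constraints play alongside appearance in the energy.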