A crowdsourcing approach to support video annotation

Giordano, Daniela; Kavasidis, I.
2013-01-01

Abstract

In this paper we present an innovative approach to support efficient large-scale video annotation by exploiting crowdsourcing. In particular, we collect large amounts of noisy annotations through an online Flash game in which players take photos of objects appearing throughout the game levels. The data gathered from the game, suitably processed, is then used to drive image segmentation methods, namely Region Growing and GrabCut, which allow us to derive meaningful annotations. A comparison against hand-labeled ground truth data showed that the proposed approach constitutes a valid alternative to existing video annotation approaches and allows reliable and fast collection of large-scale ground truth data for performance evaluation in computer vision. © 2013 ACM.
Keywords: ground truth generation; image segmentation; online game
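To make the pipeline concrete, the sketch below illustrates one way the noisy click points gathered from the game could seed an off-the-shelf GrabCut implementation (here OpenCV's cv2.grabCut). This is a minimal sketch under assumptions: the helper names (clicks_to_rect, segment_from_clicks) and the padded-bounding-box initialisation are hypothetical and need not match the processing actually used in the paper.

# Hypothetical sketch: seeding GrabCut with crowd-collected click points.
# Names and the rectangle-based initialisation are illustrative, not taken from the paper.
import numpy as np
import cv2

def clicks_to_rect(clicks, margin=30, shape=None):
    """Enclose the crowd's click points in a padded bounding box (x, y, w, h)."""
    xs, ys = zip(*clicks)
    x0, y0 = max(min(xs) - margin, 0), max(min(ys) - margin, 0)
    x1, y1 = max(xs) + margin, max(ys) + margin
    if shape is not None:                      # clip to the frame borders
        x1, y1 = min(x1, shape[1] - 1), min(y1, shape[0] - 1)
    return (x0, y0, x1 - x0, y1 - y0)

def segment_from_clicks(frame, clicks, iters=5):
    """Run GrabCut initialised from the click-derived rectangle on a BGR frame."""
    mask = np.zeros(frame.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)        # background GMM buffer required by OpenCV
    fgd = np.zeros((1, 65), np.float64)        # foreground GMM buffer required by OpenCV
    rect = clicks_to_rect(clicks, shape=frame.shape)
    cv2.grabCut(frame, mask, rect, bgd, fgd, iters, cv2.GC_INIT_WITH_RECT)
    # Keep definite and probable foreground pixels as the object annotation.
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)

In such a setup the binary mask returned per frame would serve as the derived annotation, and aggregating clicks from many players would reduce the influence of individual noisy inputs.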
Files in this record:
File: VIGTA13.pdf
Type: Publisher's version (PDF)
License: Not specified
Size: 601.8 kB
Format: Adobe PDF
Access: restricted to archive administrators

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11769/96742
Citations
  • PMC: not available
  • Scopus: 7
  • Web of Science (ISI): not available