Current methods for Image-Based Localization (IBL) require collecting and labelling large datasets of images or videos for training purposes. This procedure generally demands significant effort, involving dedicated sensors or structure-from-motion techniques. To overcome the difficulty of acquiring a dataset suitable for training IBL models, we propose a tool to generate simulated egocentric data starting from 3D models of real indoor environments. The generated data are automatically associated with the 3D camera pose information to be exploited during training, hence avoiding the need for “manual” labelling. To assess the effectiveness of the proposed tool, we generate and release a large dataset of egocentric images using a 3D model of a real environment belonging to the S3DIS dataset. We also perform a benchmark study on 3 Degrees of Freedom (3DoF) indoor localization, considering an IBL pipeline based on image retrieval and a triplet network. Results highlight that the generated dataset is useful for studying the IBL problem and makes it simple to run experiments under different settings.
Title: Image based localization with simulated egocentric navigations
Publication date: 2019
Type: 4.1 Contribution in Conference Proceedings
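The abstract mentions an IBL pipeline based on image retrieval and a triplet network, without giving implementation details. As a hedged sketch, the triplet margin loss such networks are commonly trained with can be illustrated on plain embedding vectors as follows (the function name, margin value, and toy embeddings are illustrative, not taken from the paper):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet margin loss on embedding vectors.

    Encourages the anchor to lie closer to the positive (e.g. an image
    taken from a nearby camera pose) than to the negative (an image from
    a distant pose) by at least `margin`.
    """
    d_pos = np.linalg.norm(anchor - positive)  # anchor-positive distance
    d_neg = np.linalg.norm(anchor - negative)  # anchor-negative distance
    return max(0.0, d_pos - d_neg + margin)

# Toy 2D embeddings: the negative is already far enough, so the loss is zero.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])
n = np.array([1.0, 0.0])
print(triplet_loss(a, p, n))

# A negative too close to the anchor yields a positive loss.
print(triplet_loss(a, p, np.array([0.15, 0.0])))
```

At retrieval time, a query image would be embedded with the trained network and matched against the simulated images via nearest-neighbour search, inheriting the camera pose of the best match.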