
Fusing Visuals with Magnetic Signals to Improve Indoor Localization Using Vision Transformer

Rafique H. (Investigation); Patti D. (Supervision); Palesi M. (Supervision); Gaetano Carmelo La Delfa
Published: 2024-01-01

Abstract

Sensor fusion-based indoor localization is an evolving application that combines information from multiple sensors to determine the location of smartphone users. However, sensor heterogeneity across smartphones significantly degrades the accuracy of localization algorithms. This paper therefore introduces MH-ViL, an infrastructure-free and calibration-free framework built on top of the Vision Transformer neural network. MH-ViL seamlessly integrates magnetic field signals (MFS) and visual images for localization tasks. A novel magnetic feature projection (MFP) model maps MFS onto visual image features, enhancing positional accuracy within the self-attention mechanism. Real-time experiments demonstrate that MH-ViL surpasses alternative models, reaching 92% accuracy, with a 95% confidence interval and an error below 0.5 meters. Code: https://github.com/Hamaad1/MH-ViL.git
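The fusion scheme described in the abstract can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the names, dimensions, and the linear-projection form of the MFP are all assumptions. The idea shown is that a small learned projection lifts a 3-axis magnetic field reading into the same embedding space as the Vision Transformer's image-patch tokens, and the projected token is prepended so that self-attention can mix both modalities.

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 64   # ViT token embedding size (illustrative)
N_PATCHES = 16   # number of image-patch tokens (illustrative)
MAG_DIM = 3      # magnetic field reading: (Bx, By, Bz)

# Hypothetical magnetic feature projection (MFP): a learned linear map
# from a 3-D magnetic reading into the ViT token embedding space.
W_mfp = rng.standard_normal((MAG_DIM, EMBED_DIM)) * 0.02
b_mfp = np.zeros(EMBED_DIM)

def magnetic_feature_projection(mfs):
    """Project a magnetic field sample onto the token embedding space."""
    return mfs @ W_mfp + b_mfp

def fuse_tokens(patch_tokens, mfs):
    """Prepend the projected magnetic token to the image-patch tokens,
    so a standard transformer's self-attention attends across both."""
    mag_token = magnetic_feature_projection(mfs)[None, :]
    return np.concatenate([mag_token, patch_tokens], axis=0)

patch_tokens = rng.standard_normal((N_PATCHES, EMBED_DIM))
mfs = np.array([22.1, -5.4, 41.7])  # example reading in microtesla
fused = fuse_tokens(patch_tokens, mfs)
print(fused.shape)  # (17, 64): one magnetic token + 16 patch tokens
```

In a real model the fused token sequence would then pass through standard transformer encoder layers, with the projection weights trained end to end alongside the localization head.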
Year: 2024
Keywords: Fingerprinting; Indoor Localization; Neural Networks; Transformers; Vision Transformers
Files in this record:
File: Rafique_Fusing_Visuals_with_Magnetic_Signals_to_Improve_Indoor_Localization_Using_Vision_Transformer 2024.pdf (repository managers only)
Type: Editorial Version (PDF)
License: NOT PUBLIC - Private/restricted access
Size: 2.13 MB, Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11769/711343
Citations
  • PMC: n/a
  • Scopus: 3
  • Web of Science: 3