A comparison of visual navigation approaches based on localization and reinforcement learning in virtual and real environments
Rosano M.; Furnari A.; Farinella G. M.
2020-01-01
Abstract
Visual navigation algorithms allow a mobile agent to sense the environment and autonomously find its way to reach a target (e.g. an object in the environment). While many recent approaches have tackled this task using reinforcement learning, which neglects any prior knowledge about the environment, more classic approaches strongly rely on self-localization and path planning. In this study, we compare the performance of single-target and multi-target visual navigation approaches based on the reinforcement learning paradigm, and simple baselines which rely on image-based localization. Experiments performed on discrete-state environments of different sizes, comprising both real and virtual images, show that the two paradigms tend to achieve complementary results, hence suggesting that a combination of the two approaches to visual navigation may be beneficial.
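To illustrate the kind of localization-based baseline mentioned in the abstract, the following is a minimal sketch (not the paper's implementation): the agent localizes itself by matching its current observation against stored reference features for each discrete state, then plans a shortest path to the target state over the state graph. All names, feature values, and the toy environment are hypothetical.

```python
# Hypothetical sketch of a localization-plus-planning baseline on a
# discrete-state environment; not the method evaluated in the paper.
from collections import deque

import numpy as np


def localize(observation, reference_features):
    """Return the state whose stored feature vector is closest to the observation."""
    distances = np.linalg.norm(reference_features - observation, axis=1)
    return int(np.argmin(distances))


def plan_shortest_path(adjacency, start, goal):
    """Breadth-first search over the discrete state graph."""
    queue = deque([start])
    parents = {start: None}
    while queue:
        state = queue.popleft()
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        for neighbor in adjacency[state]:
            if neighbor not in parents:
                parents[neighbor] = state
                queue.append(neighbor)
    return None  # goal not reachable


if __name__ == "__main__":
    # Toy 4-state environment arranged in a line: 0 - 1 - 2 - 3.
    adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    # One reference feature vector per state (made-up values).
    reference_features = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
    observation = np.array([1.1, -0.05])  # noisy view taken near state 1
    current_state = localize(observation, reference_features)
    print(plan_shortest_path(adjacency, current_state, goal=3))  # [1, 2, 3]
```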