Performance Evaluation of Reinforcement Learning Algorithms for Navigation in Unstructured Environments
Cancelliere F., Sutera G., Guastella D. C., Palazzo S., Muscato G., Spampinato C.
2025-01-01
Abstract
Autonomous navigation in unstructured outdoor environments presents significant challenges due to the complexity and variability of terrain and obstacles. Traditional navigation methods struggle with adaptability, while learning-based approaches, particularly deep Reinforcement Learning (RL), have shown promise but face difficulties in generalization and data efficiency. To address these challenges, we evaluated the performance of several deep RL algorithms for point-goal navigation on MIDGARD, our photorealistic simulation platform built on Unreal Engine. We compare PPO, A2C, SAC, and TD3, evaluating their effectiveness based on success rates and reward progression. Our results indicate that PPO outperforms the other algorithms, achieving the highest success rate, while off-policy methods struggle due to inefficient exploration and policy updates.
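The abstract's headline metric is the success rate over point-goal navigation episodes. As a minimal sketch of how such a comparison can be tallied (this is not the paper's code, and the episode outcomes below are hypothetical placeholders, not MIDGARD results):

```python
def success_rate(outcomes):
    """Fraction of evaluation episodes in which the agent reached the goal."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

# Hypothetical evaluation logs: True = goal reached within the step budget.
# Real numbers would come from rollouts of each trained policy in MIDGARD.
eval_outcomes = {
    "PPO": [True, True, False, True, True],
    "A2C": [True, False, False, True, False],
    "SAC": [False, False, True, False, False],
    "TD3": [False, False, False, True, False],
}

# Rank algorithms by success rate, best first.
ranking = sorted(eval_outcomes,
                 key=lambda algo: success_rate(eval_outcomes[algo]),
                 reverse=True)
for algo in ranking:
    print(f"{algo}: {success_rate(eval_outcomes[algo]):.0%}")
```

With the placeholder logs above, the ranking mirrors the paper's qualitative finding: the on-policy PPO tops the list, while the off-policy SAC and TD3 trail.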