
Reinforcement Learning-Based Mapless Navigation for Ground Vehicles in Dense Forests Using 3D LiDAR

Cancelliere F.; Guastella D.; Sutera G.; Palazzo S.; Muscato G.; Spampinato C.
2025-01-01

Abstract

Navigating autonomous ground robots through dense and unstructured environments, such as forests, remains a major challenge due to the complexity of natural terrains and the computational burden of traditional navigation pipelines. Conventional approaches often rely on detailed prior maps, rigid rule-based systems, and computationally heavy sensor fusion techniques, which tend to lack generalization across varying environments. In this work, we present a reinforcement learning (RL) strategy for mapless navigation of ground robots in dense forested areas, relying solely on 3D LiDAR data and the positions of the agent and the target. The agent is trained using Proximal Policy Optimization (PPO) within a photorealistic simulation framework tailored for outdoor navigation tasks. We adopt a curriculum learning scheme that incrementally increases obstacle density during training. Experimental results in simulation show that the trained agent is capable of navigating challenging forest scenarios effectively, consistently reaching target locations even under high obstacle density. Evaluation based on Success Rate and Success weighted by Path Length (SPL) metrics highlights the robustness and adaptability of the learned policy, underscoring its potential for real-world deployment on resource-constrained robotic platforms.
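The SPL metric cited in the abstract has a standard definition (Anderson et al., 2018): the fraction of successful episodes, each weighted by the ratio of the shortest-path length to the path length actually traveled. The snippet below is an illustrative sketch of that computation; the episode representation and function name are ours, not taken from the paper.

```python
def success_weighted_path_length(episodes):
    """SPL over a list of (success, shortest_len, actual_len) episodes.

    Each successful episode contributes shortest / max(actual, shortest),
    so a path no longer than the optimum scores 1.0; failures score 0.
    """
    total = 0.0
    for success, shortest, actual in episodes:
        if success:
            total += shortest / max(actual, shortest)
    return total / len(episodes)

# Three hypothetical navigation episodes (lengths in meters):
episodes = [
    (True, 10.0, 12.5),   # reached goal with a 25% longer path -> 0.8
    (True, 8.0, 8.0),     # reached goal along the shortest path -> 1.0
    (False, 15.0, 4.0),   # failed episode contributes 0
]
print(success_weighted_path_length(episodes))  # 0.6
```

Because failures contribute zero, SPL jointly penalizes both unreached goals and inefficient detours, which is why it is a common complement to plain Success Rate in navigation benchmarks.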
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11769/712529
Citations
  • Scopus 0