
Deep Q-Learning for Job Offloading Orchestration in a Fleet of MEC UAVs in 5G Environments

Grasso C.; Raftopoulos R.; Schembra G.
2021-01-01

Abstract

The fifth generation (5G) of mobile networks aims to provide ultra-high-speed access everywhere and to connect a massive number of devices in an ultra-reliable and affordable way. However, in many environments considered strategic for 5G applications, a structured network is not available. A solution to extend the features provided by Multi-access Edge Computing (MEC), one of the main enablers of 5G, to these contexts is to use fleets of MEC UAVs, each equipped with a computing element (CE) and organized in Flying Ad-hoc Networks (FANETs). In this paper, we propose a FANET platform with 'horizontal' offload from the most overloaded UAVs to the least overloaded ones, aimed at balancing the load among UAVs. A decision policy called UAV Smart Offloading (USO), based on Deep Reinforcement Learning, is also defined to optimize performance in terms of the delay perceived by the ground devices connected to the FANET. A numerical analysis is presented to evaluate the performance achieved by the proposed platform.
2021
978-1-6654-0522-5
5G
Deep Reinforcement Learning
Edge Computing
Markov Models
UAVs
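The 'horizontal' offloading decision described in the abstract can be illustrated with a deliberately simplified sketch. This is not the paper's USO policy, which uses Deep Reinforcement Learning over a richer FANET state; it is a toy tabular Q-learning agent over a hypothetical one-dimensional state (the local queue length), with illustrative rewards and parameters, showing only the general shape of the decision problem: keep an arriving job on the local UAV's computing element, or offload it to the least-loaded neighbour.

```python
import random

# Toy sketch (NOT the paper's actual USO algorithm): a tabular
# Q-learning agent deciding, per arriving job, whether to process it
# on the local UAV's computing element (action 0) or offload it
# "horizontally" to the least-loaded neighbour (action 1).
# States, rewards, and all constants below are illustrative.

MAX_Q = 10                      # queue-length cap bounding the state space
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(MAX_Q + 1) for a in (0, 1)}

def choose(state):
    """Epsilon-greedy action selection over the two actions."""
    if random.random() < EPS:
        return random.choice((0, 1))
    return max((0, 1), key=lambda a: Q[(state, a)])

def step(local_q, neigh_q, action):
    """Hypothetical environment: reward is a negative delay proxy
    (the queue the job joins); each UAV serves one job per slot."""
    if action == 0:               # keep the job locally
        reward = -local_q
        local_q = min(local_q + 1, MAX_Q)
    else:                         # offload to the least-loaded neighbour
        reward = -(neigh_q + 1)   # +1 models the transfer overhead
        neigh_q = min(neigh_q + 1, MAX_Q)
    return max(local_q - 1, 0), max(neigh_q - 1, 0), reward

def train(episodes=2000, horizon=20):
    random.seed(0)
    for _ in range(episodes):
        local_q = random.randint(0, MAX_Q)
        neigh_q = random.randint(0, MAX_Q)
        for _ in range(horizon):
            a = choose(local_q)
            nl, nn, r = step(local_q, neigh_q, a)
            best_next = max(Q[(nl, 0)], Q[(nl, 1)])
            Q[(local_q, a)] += ALPHA * (r + GAMMA * best_next - Q[(local_q, a)])
            local_q, neigh_q = nl, nn

train()
```

After training, the learned values favour offloading when the local queue is long (e.g. `Q[(MAX_Q, 1)] > Q[(MAX_Q, 0)]`), which is the load-balancing behaviour the platform targets. The paper's approach replaces this toy table with a deep Q-network and a far richer state observed across the fleet.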
Files in this record:
  • Deep_Q-Learning_for_Job_Offloading_Orchestration_in_a_Fleet_of_MEC_UAVs_in_5G_Environments.pdf
    Type: Publisher's version (PDF)
    License: NOT PUBLIC - Private/restricted access (authorized users only)
    Size: 820.29 kB
    Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11769/524646
Citations
  • Scopus: 9