
Design of a 5G Network Slice Extension with MEC UAVs Managed with Reinforcement Learning

Faraci G.; Grasso C.; Schembra G.
2020-01-01

Abstract

Network slices for delay-constrained applications in 5G systems require computing facilities at the edge of the network to guarantee ultra-low latency in processing the data flows generated by connected devices, which becomes challenging as data volumes grow and devices lie farther from the network edge. To address this challenge, we propose to extend 5G network slices with Unmanned Aerial Vehicles (UAVs) equipped with multi-access edge computing (MEC) facilities. However, onboard computing elements (CEs) consume the UAV's battery power, thus reducing its flight duration. We propose a framework in which a System Controller (SC) can turn a UAV's CEs on and off, with the possibility of offloading jobs to other UAVs, in order to maximize an objective function defined in terms of power consumption, job loss, and incurred delay. Management of this framework is achieved through reinforcement learning. A Markov model of the system is introduced to enable reinforcement learning and to provide guidelines for the selection of system parameters. A use case is considered to demonstrate the gain achieved by the proposed framework and to discuss numerical results.
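The control loop described in the abstract can be illustrated with a toy sketch. This is not the authors' implementation: it is a minimal tabular Q-learning example, assuming an invented state (number of active CEs, queue occupancy), three actions (switch one CE off, keep, switch one CE on), and made-up weights in a reward that penalizes power consumption, job loss, and queueing delay, mirroring the objective function described above.

```python
import random

# Hypothetical parameters (not from the paper): capacity of the UAV's
# onboard CEs and job buffer, and assumed reward weights.
MAX_CE, MAX_Q = 3, 5
ACTIONS = (-1, 0, 1)                          # change in active CEs
W_POWER, W_LOSS, W_DELAY = 1.0, 10.0, 0.5     # assumed weights

def step(state, action, arrivals):
    """Apply the SC's action, serve jobs, and return (next_state, reward)."""
    ces, queue = state
    ces = min(MAX_CE, max(0, ces + action))   # toggle CEs within bounds
    backlog = queue + arrivals
    served = min(backlog, ces)                # each active CE serves one job/slot
    backlog -= served
    lost = max(0, backlog - MAX_Q)            # jobs exceeding the buffer are lost
    queue = backlog - lost
    # Reward: penalize power (active CEs), job loss, and delay (queue length).
    reward = -(W_POWER * ces + W_LOSS * lost + W_DELAY * queue)
    return (ces, queue), reward

def train(episodes=2000, horizon=50, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over the toy CE on/off MDP."""
    rng = random.Random(seed)
    q = {}
    for _ in range(episodes):
        state = (1, 0)
        for _ in range(horizon):
            if rng.random() < eps:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
            nxt, reward = step(state, action, arrivals=rng.randint(0, 2))
            best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q

q_table = train()
```

The learned greedy policy trades the power cost of keeping CEs active against the loss and delay penalties of letting the queue build up, which is the trade-off the SC manages in the proposed framework; offloading to other UAVs would add further actions to the same decision structure.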
Keywords: 5G; Markov decision processes (MDP); network slicing; reinforcement learning; UAVs
Files in this item:
R64.pdf — Publisher's version (PDF), Adobe PDF, 2.03 MB, authorized users only
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11769/496986
Citations
  • PMC: not available
  • Scopus: 76
  • Web of Science: 60