Design of a 5G Network Slice Extension with MEC UAVs Managed with Reinforcement Learning
Faraci G.; Grasso C.; Schembra G.
2020-01-01
Abstract
Network slices for delay-constrained applications in 5G systems require computing facilities at the edge of the network to guarantee ultra-low latency in processing the data flows generated by connected devices; this becomes challenging as data volumes grow and the distance to the network edge increases. To address this challenge, we propose to extend 5G network slices with Unmanned Aerial Vehicles (UAVs) equipped with multi-access edge computing (MEC) facilities. However, onboard computing elements (CEs) consume a UAV's battery power, thus shortening its flight duration. We propose a framework in which a System Controller (SC) can switch a UAV's CEs on and off, with the possibility of offloading jobs to other UAVs, so as to maximize an objective function defined in terms of power consumption, job loss, and incurred delay. The framework is managed through reinforcement learning. A Markov model of the system is introduced to enable reinforcement learning and to provide guidelines for the selection of system parameters. A use case is presented to demonstrate the gain achieved by the proposed framework and to discuss numerical results.
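The abstract describes the control loop only at a high level; the sketch below is a purely illustrative, minimal tabular Q-learning controller of the kind the abstract suggests, where the SC's actions switch CEs on or off or offload a job, and the reward penalizes power consumption, job loss, and delay. The state space, action names, weights (`W_POWER`, `W_LOSS`, `W_DELAY`), and the toy `step` dynamics are all assumptions made for this example; in the paper the transition dynamics come from the proposed Markov model of the system, not from this toy environment.

```python
import random
from collections import defaultdict

# Hypothetical state: (active CEs on this UAV, queue occupancy level).
# Hypothetical actions available to the System Controller.
ACTIONS = ["keep", "ce_on", "ce_off", "offload"]

# Assumed weights of the objective (power, job loss, delay);
# placeholders, not the paper's calibrated values.
W_POWER, W_LOSS, W_DELAY = 1.0, 5.0, 2.0

def reward(power, loss, delay):
    # Higher power draw, loss probability, and queueing delay all lower the reward.
    return -(W_POWER * power + W_LOSS * loss + W_DELAY * delay)

def step(state, action):
    """Toy environment transition; stands in for the paper's Markov model."""
    ces, queue = state
    if action == "ce_on":
        ces = min(ces + 1, 3)          # at most 3 onboard CEs (assumption)
    elif action == "ce_off":
        ces = max(ces - 1, 0)
    served = min(queue, ces)           # each active CE serves one job per slot
    arrivals = random.randint(0, 2)    # toy arrival process
    overflow = queue - served + arrivals > 5
    new_queue = min(queue - served + arrivals, 5)
    if action == "offload" and new_queue > 0:
        new_queue -= 1                 # one job shipped to a peer UAV
    r = reward(power=0.5 * ces,        # power grows with active CEs
               loss=1.0 if overflow else 0.0,
               delay=0.3 * new_queue)  # delay proxied by queue length
    return (ces, new_queue), r

# Standard epsilon-greedy tabular Q-learning.
Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.95, 0.1

state = (1, 0)
for _ in range(50_000):
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    nxt, r = step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
    state = nxt
```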
File | Type | Size | Format | Access
---|---|---|---|---
R64.pdf | Published version (PDF) | 2.03 MB | Adobe PDF | Authorized users only
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.