An Adaptive Closed-Loop Encoding VNF for Virtual Reality Applications

Caruso A.;Grasso C.;Raftopoulos R.;Schembra G.
2024-01-01

Abstract

In the last few years, Virtual Reality (VR) has assumed a prominent role and is recognized as a pivotal technology in various sectors. However, the transmission of immersive videos produced by real-time 360-degree cameras or stored on remote servers to reproduce 3D environments, or streamed by video games accessible through headsets, requires substantial network bandwidth that, in many cases, is either unavailable or too expensive to obtain. In this paper, we leverage the network softwarization provided by new 5G&B networks and introduce an Adaptive Closed-loop Encoding VNF named 360-ST for adaptive compression of 360-degree video streams. This VNF applies a hierarchical compression that takes into account both the bandwidth currently available in the network and the user viewport. The agent in charge of deciding the compression ratios at runtime uses Deep Reinforcement Learning to optimize a reward function and adapt to changes in the network bandwidth, the end-to-end latency, the user movements within the scene, and the video content. The results indicate that our proposed method consistently outperforms state-of-the-art algorithms by an average of 8% to 46% in terms of achieved Peak Signal-to-Noise Ratio (PSNR).
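The closed-loop idea summarized above (favor the tiles inside the user's viewport, degrade the rest until the stream fits the currently available bandwidth) can be sketched as follows. This is a minimal illustrative heuristic, not the paper's 360-ST VNF or its DRL policy: the compression levels, per-tile bitrates, and reward shape below are all assumptions made for the sake of the example.

```python
# Illustrative sketch only: a greedy stand-in for the viewport- and
# bandwidth-aware compression decision described in the abstract.
# The real system uses a Deep Reinforcement Learning agent; here a
# simple heuristic plays that role. LEVELS, bitrates, and the reward
# weighting are hypothetical values, not taken from the paper.

LEVELS = {0: 8.0, 1: 4.0, 2: 2.0, 3: 1.0}  # level -> Mbps per tile (0 = best quality)

def choose_levels(tiles_in_viewport, total_tiles, bandwidth_mbps):
    """Start with best quality everywhere, then degrade background
    tiles first until the total bitrate fits the bandwidth budget."""
    levels = [0] * total_tiles
    background = [i for i in range(total_tiles) if i not in tiles_in_viewport]
    order = background + sorted(tiles_in_viewport)  # viewport degraded last

    def bitrate(ls):
        return sum(LEVELS[l] for l in ls)

    for i in order:
        while bitrate(levels) > bandwidth_mbps and levels[i] < 3:
            levels[i] += 1
    return levels

def reward(levels, tiles_in_viewport):
    """Toy reward: quality proxy (3 - level), with viewport tiles
    weighted 3x over background tiles."""
    return sum((3.0 if i in tiles_in_viewport else 1.0) * (3 - l)
               for i, l in enumerate(levels))

# Example: 8 tiles, viewport covers tiles 0 and 1, 30 Mbps available.
levels = choose_levels({0, 1}, 8, 30.0)
```

In this toy run the viewport tiles keep the best quality while background tiles absorb the compression, which is the intuition behind the hierarchical scheme; the actual VNF learns this trade-off online via the DRL reward rather than using a fixed ordering.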
2024
Virtual Reality
360 degrees video encoding
Softwarized Networks
VNF
Deep Reinforcement Learning
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/20.500.11769/677576
Citations
  • PMC: ND
  • Scopus: 3
  • ISI Web of Science: 1