
AMUSE: A Multi-Armed Bandit Framework for Energy-Efficient Modulation Adaptation in Underwater Acoustic Networks

F. Busacca; L. Galluccio; S. Palazzo; R. Raftopoulos
2025-01-01

Abstract

UnderWater (UW) Acoustic networks face unique challenges due to limited bandwidth, high latency, and dynamic channel conditions, necessitating adaptive communication protocols to optimize performance under strict energy constraints. Modulation schemes play a crucial role in determining the efficiency and reliability of these networks; dynamically adjusting the modulation according to channel conditions can significantly enhance network performance. While Machine Learning algorithms offer valuable solutions for real-time adaptation, many existing methods are based on deep learning, which often demands computational resources beyond the capabilities of typical UW devices. In contrast, Multi-Armed Bandit (MAB) algorithms offer a simpler yet effective solution, well suited to environments with limited computational resources. In this paper, we present AMUSE, a scalable and efficient framework that leverages the MAB approach for dynamic modulation selection while enabling the optimization of several key performance metrics. Specifically, to illustrate the flexibility of AMUSE in addressing multi-objective optimization, here we focus on the trade-off between Packet Error Rate (PER) and energy consumption under changing channel conditions, so that both reliability and energy efficiency drive the modulation adaptation decision-making process. Through extensive simulations in the DESERT simulator, we evaluate the performance of AMUSE against state-of-the-art algorithms based on Deep Reinforcement Learning (DRL). Despite its simple design, AMUSE proves to be more efficient and responsive than the baselines, making it a powerful solution for improving UW communication performance. The results show that, in spite of its lightweight nature, AMUSE outperforms the DRL baselines, improving the network PER by up to 23.64% and energy savings by up to 80.65%.
2025
Underwater communications, modulation adaptation, machine learning, multi-armed bandit
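The record does not include any code, but the abstract describes a concrete technique: a Multi-Armed Bandit choosing among modulation schemes using a reward that balances PER and energy consumption. The sketch below is a minimal, hypothetical illustration of that idea using a standard epsilon-greedy bandit; the modulation set, energy costs, reward weights, and simulated channel feedback are all illustrative assumptions, not the actual AMUSE design.

```python
import random

# Hypothetical epsilon-greedy MAB for modulation selection, in the spirit of
# the approach the abstract describes. Arm names, energy costs, and reward
# weighting are illustrative assumptions, NOT the actual AMUSE framework.

MODULATIONS = {          # arm -> assumed energy cost per packet (arbitrary units)
    "BPSK":  0.8,
    "QPSK":  1.0,
    "8PSK":  1.3,
    "16QAM": 1.7,
}
EPSILON = 0.1                  # exploration probability
W_PER, W_ENERGY = 0.7, 0.3     # assumed weights for the PER/energy trade-off
E_MAX = max(MODULATIONS.values())

counts = {m: 0 for m in MODULATIONS}    # times each arm was played
values = {m: 0.0 for m in MODULATIONS}  # running mean reward per arm


def select_modulation() -> str:
    """Explore with probability EPSILON, otherwise exploit the best arm."""
    if random.random() < EPSILON:
        return random.choice(list(MODULATIONS))
    return max(values, key=values.get)


def reward(per: float, energy: float) -> float:
    """Scalarized multi-objective reward: favor low PER and low energy."""
    return W_PER * (1.0 - per) + W_ENERGY * (1.0 - energy / E_MAX)


def update(mod: str, per: float) -> None:
    """Incremental mean update after observing a transmission outcome."""
    counts[mod] += 1
    r = reward(per, MODULATIONS[mod])
    values[mod] += (r - values[mod]) / counts[mod]


def observed_per(mod: str) -> float:
    """Stand-in for real channel feedback (e.g., from ACKs): a fixed
    per-modulation PER plus noise, purely for this toy example."""
    base = {"BPSK": 0.05, "QPSK": 0.10, "8PSK": 0.20, "16QAM": 0.35}[mod]
    return min(1.0, max(0.0, base + random.gauss(0.0, 0.02)))


for _ in range(1000):
    m = select_modulation()
    update(m, observed_per(m))

print({m: round(v, 3) for m, v in values.items()})
```

In a real deployment the PER estimate would come from acknowledgment feedback rather than a simulated function, and a non-stationary variant (e.g., a discounted or sliding-window bandit) would better track the dynamic channel conditions the abstract emphasizes.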
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/20.500.11769/678649