Abstract
In tactical networks, Software Defined Coalitions (SDC) are often subject to internal fragmentation due to dynamic coalition formation, damage to infrastructure, and unit movements. This makes the deployment of centralised SDC controller-based resource management challenging, because parts of the network may not be accessible. To address this, we propose a novel state representation for more decentralised decisions, where individual network nodes use reinforcement learning to learn efficient decision policies over varying numbers of reachable nodes, given their local perception of the network state. We also extend existing multi-agent reinforcement learning algorithms to the SDC setting and show that nodes can share their experiences to achieve faster convergence.