Abstract
Future military coalition operations will increasingly rely on machine learning (ML) methods to improve situational understanding. The coalition context presents unique challenges for ML: the tactical environment imposes significant computing and communications limitations while also exposing systems to an adversarial presence. Further, coalition operations must function in a distributed manner while coping with the constraints of the operational environment. Envisioned ML deployments in military assets must be resilient to these challenges. Here, we focus on the susceptibility of distributed ML models to poisoning during training. We present results from investigations into model poisoning attacks on distributed learning systems without a central parameter aggregation node (peer-to-peer learning). This paper is a summary of research originally published at SPIE, 2019.