Abstract
Policies are critical for governing the operations and decisions of coalitions. Specifying and enforcing proper policies for most dynamic coalitions, however, is very challenging, as they often operate in constantly evolving environments. The use of machine learning (ML) techniques is a novel approach that allows policies to be learned and enforced quickly by training and using ML models. However, a common issue with such an approach is the scarcity of labeled training data. In this paper, we propose using domain adaptation (DA) to reduce the amount of labeled data required for training a deep neural network (DNN) model. DA is a transfer learning technique that allows one to transfer knowledge from a source domain with adequate training data to a different but similar target domain with minimal new training data. For example, a DNN object detection model prepared for identifying vehicle movements by training on a labeled dataset containing only images of vehicles in rural areas might not perform well in urban areas. However, the source model can be re-purposed to operate in the target environment by using DA and only a small labeled set of images of vehicles in urban areas. Other relevant examples include a model trained to recognize adversary 1's vehicles that needs to be re-purposed to recognize adversary 2's vehicles. An even more important example is a model trained in a synthetic environment that needs to be adapted for use in real-world settings. In our work, we use a specific form of DA, referred to as adversarial DA, which leverages generative adversarial networks (GANs) to create a domain-invariant mapping of the source and target datasets. We demonstrate that our proposed approach can create highly accurate deep learning classification models even when the number of labeled samples in the target dataset is very small.