A Comparison Between Statistical and Symbolic Learning Approaches for Generative Policy Models

Abstract Generative Policy Models (GPMs) have been proposed as a method for future autonomous decision making in a distributed, coalition environment. To learn a GPM, previous policy examples that contain policy features and the corresponding policy decisions are used. Recently, GPMs have been constructed using both symbolic and statistical learning algorithms. In either case, the goal of the learning process is to create a model across a wide range of contexts from which specific policies may be generated in a given context. Empirically, we expect each learning approach to provide certain advantages over the other. This paper assesses the relative performance of each learning approach in order to examine these advantages and disadvantages. Several carefully prepared data sets are used to train a variety of models across different learning algorithms, with the models for each learning algorithm trained on varying amounts of labelled examples. The performance of each model is evaluated across a variety of metrics, indicating the strength of each learning algorithm for the different scenarios presented and the amount of training data provided. Finally, future research directions are outlined to fully realise GPMs in a distributed, coalition environment.
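The evaluation protocol described in the abstract, training each learner on increasing amounts of labelled policy examples and scoring the resulting models on held-out data, can be illustrated with a minimal sketch. The snippet below is not the paper's actual setup; it assumes scikit-learn, a synthetic data set, and a decision tree as a stand-in for the statistical learner (a symbolic learner such as an ILP system would slot into the same loop).

```python
# Hypothetical sketch of the evaluation loop described in the abstract:
# train a model on increasing numbers of labelled examples and record
# metrics on a fixed held-out test set. The data set, learner and
# metrics are illustrative stand-ins, not those used in the paper.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, f1_score

# Synthetic "policy examples": feature vectors with a binary policy decision.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Vary the amount of labelled training data available to the learner.
for n_labelled in (50, 100, 500, len(X_train)):
    model = DecisionTreeClassifier(random_state=0)  # statistical-learner stand-in
    model.fit(X_train[:n_labelled], y_train[:n_labelled])
    preds = model.predict(X_test)
    print(f"{n_labelled:5d} labelled examples: "
          f"accuracy={accuracy_score(y_test, preds):.3f}, "
          f"F1={f1_score(y_test, preds):.3f}")
```

The same outer loop applies to both learning approaches; only the model construction step changes, which is what allows the comparison to isolate the effect of training-set size on each algorithm.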
Authors
  • Graham White (IBM UK)
  • Daniel Cunnington (IBM UK)
  • Mark Law (Imperial)
  • Alessandra Russo (Imperial)
  • Elisa Bertino (Purdue)
Date Sep-2019
Venue Annual Fall Meeting of the DAIS ITA, 2019