Neural network classifiers are typically trained to estimate the class membership probability of a given data point. The Evidential Learning framework extends this by allowing the network to also report its uncertainty in this probability output, based on how dissimilar the input is to the training data. Here we investigate an extension of the Evidential Learning framework to recurrent neural network models for classifying time series. This requires modifying the data generation portion of the generative Evidential Learning approach so that it captures the structure of time series. We describe a new approach to achieve this and present preliminary results on synthetic data.
- Pablo Ortega San Miguel (Imperial)
- Richard Tomsett (IBM UK)
- Murat Sensoy
- Lance Kaplan (ARL)
4th Annual Fall Meeting of the DAIS ITA, 2020
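The uncertainty mechanism the abstract describes can be illustrated with the standard subjective-logic formulation of evidential classification, in which the network outputs non-negative evidence per class and these are mapped to Dirichlet parameters. The sketch below is a minimal illustration of that formulation, not the paper's recurrent extension; the ReLU evidence mapping and the function name are illustrative assumptions.

```python
import numpy as np

def evidential_output(logits):
    """Map raw network outputs to class probabilities plus an
    uncertainty mass, following the standard subjective-logic
    evidential classification formulation (a sketch, not the
    paper's time-series method).
    """
    # Non-negative evidence per class; ReLU is one common choice
    # (an assumption here, other positive mappings are possible).
    evidence = np.maximum(logits, 0.0)
    alpha = evidence + 1.0          # Dirichlet concentration parameters
    strength = alpha.sum()          # Dirichlet strength S
    k = alpha.size                  # number of classes
    probs = alpha / strength        # expected class probabilities
    uncertainty = k / strength      # vacuity: high when evidence is scarce
    return probs, uncertainty
```

With zero evidence for every class (an input unlike the training data), the expected probabilities are uniform and the uncertainty mass is maximal (1.0); as evidence accumulates for a class, the probabilities sharpen and the uncertainty shrinks.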