DeepSQA: Understanding Sensor Data via a Deep Learning Approach to Sensory Question Answering

Abstract In recent years, there has been an explosion in the use of mobile, wearable, and IoT devices, which generate huge volumes of sensory data. With the advancement of deep learning techniques, neural networks can analyze these data and make various inferences effectively. However, deep learning models are usually trained to perform predefined tasks, and new models must be trained when new tasks are introduced. Furthermore, they cannot provide much information to users beyond a limited set of high-level labels. This makes it necessary to find a way for human decision-makers to understand and explain the answers produced by such networks. We introduce DeepSQA, the first Sensory Question Answering model that aims to enable natural language questions about raw sensory data in heterogeneous IoT networks. Given the sensory data context and natural language questions about the data, the task is to provide the correct natural language answer. Unlike other QA tasks (e.g., VQA), questions and answers on raw sensory data are less deterministic due to label uncertainty and, thus, cannot rely on gathering crowd-sourced data. So, as a first step, we propose an SQA data generation tool that takes a labeled source sensory dataset as input and outputs a realistic SQA dataset, which is then used to train SQA models. We evaluate DeepSQA across several state-of-the-art QA models, laying the foundation for future SQA research and identifying its challenges.
  • Tianwei Xing (UCLA)
  • Marc Roig Vilamala (Cardiff)
  • Luis Garcia (UCLA)
  • Federico Cerutti
  • Lance Kaplan (ARL)
  • Alun Preece (Cardiff)
  • Mani Srivastava (UCLA)
Date Sep-2020
Venue 4th Annual Fall Meeting of the DAIS ITA, 2020