Using machine learning to emulate human hearing for predictive maintenance of equipment

Abstract Current interfaces between humans and machines use only a limited subset of the senses that humans possess. Interaction between humans and computers could become far more intuitive and effective if we used more of these senses and created additional modes of communication between the two. New machine learning technologies can make this type of interaction a reality. In this paper, we present a framework for holistic communication between humans and machines that uses all of the senses, and discuss how a subset of this capability can allow machines to talk to humans to indicate their health, enabling tasks such as predictive maintenance.
  • Dinesh Verma (IBM US)
  • Graham Bent (IBM UK)
Date Apr-2017
Venue SPIE Defense + Commercial Sensing 2017