Abstract
Neural-symbolic inference and learning is attracting increasing attention in response to a growing awareness of the limitations of purely neural machine learning methods. Solving tasks that require complex forms of reasoning, commonsense knowledge and default assumptions is not currently achievable with a purely neural approach. This form of reasoning is typical of human decision making, and symbolic inference naturally captures it. The challenge is how to combine these two forms of computation to enable effective decisions while supporting autonomous, context-aware situation understanding. Recent approaches have been proposed for performing symbolic reasoning in continuous vector space, but these are limited to simple relations over objects learned during the training process. In this short paper we lay out the theoretical foundations for exact computation of non-monotonic semantics in continuous vector space by means of a gradient-based search algorithm. This semantics underpins normal logic programs, which are often used to capture commonsense reasoning. The proposed method relies on a vector representation of interpretations and a matrix representation of the program reduct, and is proved to preserve the semantics of the original program under appropriate conditions. Experiments demonstrate the feasibility of the approach and ways to improve its convergence. This is a first stepping stone towards an innovative solution for integrating symbolic representations of background knowledge into differentiable computations.
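To make the abstract's core idea concrete, the following is a minimal NumPy sketch of searching for models of a normal logic program as fixed points in vector space. The matrix `M`, bias `b`, the sigmoid surrogate for the threshold, and the numerical gradient are all illustrative assumptions, not the paper's exact reduct encoding or search procedure.

```python
import numpy as np

# Illustrative sketch only: a hypothetical matrix encoding of the
# immediate-consequence operator T_P for the two-rule program
#     p :- not q.      q :- not p.
# Atoms are indexed [p, q]; an interpretation is a vector x in [0, 1]^2.

M = np.array([[0.0, -1.0],     # p is derived iff q is false
              [-1.0, 0.0]])    # q is derived iff p is false
b = np.array([1.0, 1.0])

def smooth_step(z, gamma=10.0):
    """Differentiable surrogate for the exact 0/1 threshold."""
    return 1.0 / (1.0 + np.exp(-gamma * (z - 0.5)))

def loss(x):
    """Squared fixed-point residual: zero at (relaxed) supported models."""
    r = smooth_step(M @ x + b) - x
    return 0.5 * np.sum(r ** 2)

def grad(x, eps=1e-6):
    """Central-difference gradient; an analytic form would be used in practice."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (loss(x + d) - loss(x - d)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
x = rng.random(2)              # random initial interpretation
for _ in range(5000):          # plain gradient descent on the residual
    x -= 0.1 * grad(x)

# Typically converges near {p} or {q} (the program's two stable models);
# random restarts help escape the spurious symmetric fixed point at (0.5, 0.5).
print(np.round(x, 2))
```

The need for restarts in this toy example mirrors the convergence issues the paper's experiments investigate: gradient-based search can stall at relaxed fixed points that do not correspond to models of the original program.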
Authors |
- Yaniv Aspis (Imperial)
- Krysia Broda (Imperial)
- Alessandra Russo (Imperial)
- Jorge Lobo (Imperial)
- Elisa Bertino (Purdue)
- Supriyo Chakraborty (IBM US)
|
Date |
Sep-2020 |
Venue |
4th Annual Fall Meeting of the DAIS ITA, 2020 |
|