neuralRank: Searching and Ranking Deep Neural Network Model Repositories

Abstract Widespread application of deep learning has led to a plethora of pre-trained neural network models for common tasks. Such models are often adapted from other models via transfer learning and may differ in their training sets, training algorithms, network architectures, and hyperparameters. For a given application, which model in a model repository is the most suitable? This question is critical for practical deployments, yet it has received little attention. This paper introduces the novel problem of searching and ranking models by their suitability for a target dataset and proposes a ranking algorithm called neuralRank. The key idea is to base model suitability on the discriminating power of a model, measured with a novel metric. With experimental results on the MNIST, Fashion, and CIFAR10 datasets, we demonstrate that (1) neuralRank is independent of the domain, the training set, and the network architecture, and (2) models ranked highly by neuralRank tend to achieve higher accuracy in practice.
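The abstract does not specify the paper's metric, but the ranking idea it describes can be sketched with a stand-in discriminating-power score. The sketch below is a hypothetical illustration, not the paper's neuralRank metric: each candidate model maps the target dataset to embeddings, models are scored by how well their embeddings separate the target classes (between-class centroid distance over within-class spread), and the repository is ranked by that score. The function names and the score definition are assumptions for illustration only.

```python
import numpy as np

def discrimination_score(embeddings, labels):
    """Hypothetical proxy for a model's discriminating power on a target
    dataset: mean distance between class centroids divided by mean
    within-class spread. Higher means better-separated classes."""
    classes = np.unique(labels)
    centroids = np.array([embeddings[labels == c].mean(axis=0) for c in classes])
    within = np.mean([
        np.linalg.norm(embeddings[labels == c] - centroids[i], axis=1).mean()
        for i, c in enumerate(classes)
    ])
    between = np.mean([
        np.linalg.norm(centroids[i] - centroids[j])
        for i in range(len(classes)) for j in range(i + 1, len(classes))
    ])
    return between / (within + 1e-12)

def rank_models(models, target_x, target_y):
    """Rank a repository of models (name -> embedding function) by the
    score above, most suitable first."""
    scores = {name: discrimination_score(embed(target_x), target_y)
              for name, embed in models.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

A usage example: given a target dataset with two well-separated classes, a model whose embedding preserves the separation outranks one that collapses all inputs.

```python
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 4))
y = np.array([0] * 50 + [1] * 50)
x[y == 1] += 5.0  # make the two classes separable
models = {"good": lambda z: z, "bad": lambda z: np.zeros_like(z)}
print(rank_models(models, x, y)[0])  # the "good" model ranks first
```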
  • Wei-Han Lee
  • Sebastian Stein (Southampton)
  • Jae-Wook Ahn (IBM US)
  • Linsong Chu (IBM US)
  • Nirmit Desai (IBM US)
  • Raghu Ganti (IBM US)
  • Mudhakar Srivatsa (IBM US)
Date Sep-2020
Venue 4th Annual Fall Meeting of the DAIS ITA, 2020