Deep neural networks are often ignorant about what they do not know and overconfident when they make uninformed predictions. Some recent approaches quantify classification uncertainty directly by training the model to output high uncertainty for data samples close to class boundaries or from outside the training distribution. These approaches use an auxiliary data set during training to represent out-of-distribution samples. However, selecting or creating such an auxiliary data set is non-trivial, especially for high-dimensional data such as images. In this work, we develop a novel neural network model that expresses both aleatoric and epistemic uncertainty to distinguish decision-boundary and out-of-distribution regions of the feature space. To this end, variational autoencoders and generative adversarial networks are incorporated to automatically generate out-of-distribution exemplars for training. Through extensive analysis, we demonstrate that the proposed approach provides better uncertainty estimates for in-distribution, out-of-distribution, and adversarial samples on well-known data sets than state-of-the-art approaches, including recent Bayesian neural network methods and anomaly detection methods.
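The general idea mentioned above — training a classifier to output high uncertainty on samples from outside the training distribution — can be sketched as a two-term objective: a standard cross-entropy loss on in-distribution data plus a penalty pushing predictions on out-of-distribution exemplars toward the uniform distribution. The minimal NumPy sketch below illustrates that generic objective only; it is not the specific loss proposed in this work, and the function names and the `lam` weight are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    # mean negative log-likelihood of the true class
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def kl_to_uniform(probs):
    # KL(p || U) over K classes: zero when predictions are
    # maximally uncertain (uniform), log(K) when fully confident
    k = probs.shape[-1]
    return np.mean(np.sum(probs * np.log(probs * k + 1e-12), axis=-1))

def combined_loss(logits_in, labels, logits_ood, lam=1.0):
    # classification loss on in-distribution data, plus a term that
    # penalizes confident predictions on (generated) OOD exemplars
    p_in = softmax(logits_in)
    p_ood = softmax(logits_ood)
    return cross_entropy(p_in, labels) + lam * kl_to_uniform(p_ood)
```

In this framing, the `logits_ood` batch would come from samples produced by the generative models (VAE/GAN) rather than from a hand-picked auxiliary data set, which is the selection problem the abstract highlights.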