We present a significant improvement to a methodology described in several earlier articles by Barr et al., in which we demonstrated a workflow that classifies the source code of large open-source projects for vulnerability. Whereas in the past we addressed the dearth of minority-class examples with upsampling and simulation techniques, the present approach demonstrates that a careful choice of cost function, without any upsampling, yields excellent performance that surpasses our previous results. In this iteration a feed-forward neural network classifier was trained with the Area Under Min(FP, FN) (AUM) loss described by Hillman & Hocking. As in earlier work, to overcome the out-of-vocabulary challenge, an intermediate Byte-Pair Encoding step 'compresses' the data; a long short-term memory (LSTM) network then embeds the resulting tokens, from which we assemble an embedding for each labeled function. The result is a 128-dimensional embedding which, together with additional 'interpretable', heuristics-based features, is used to classify CVEs. The labeled dataset is extremely imbalanced, with the minority class comprising roughly 0.5% of the total. The classifier's performance demonstrates that the AUM cost function is undeterred by this scarcity of minority examples.
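To make the AUM criterion concrete, the sketch below shows a minimal batch-level AUM loss for binary labels in PyTorch, following the definition in Hillman & Hocking: scores are sorted, false-positive and false-negative totals are computed on each interval between consecutive thresholds, and the area under their pointwise minimum is accumulated. This is an illustration under our own naming and shape assumptions, not the implementation used in this work.

```python
import torch

def aum_loss(pred, label):
    """Sketch of the Area Under Min(FP, FN) (AUM) surrogate loss.

    pred  : 1-D tensor of real-valued scores (larger => more likely positive)
    label : 1-D tensor of {0, 1} labels, same length as pred
    """
    # Sort scores ascending; gradients flow back through the sorted values.
    sorted_pred, order = torch.sort(pred)
    sorted_label = label[order].float()

    # For a threshold lying between sorted_pred[k-1] and sorted_pred[k]:
    #   FN = positives with score <= threshold (positives seen so far)
    #   FP = negatives with score  > threshold (negatives still above)
    cum_pos = torch.cumsum(sorted_label, dim=0)
    cum_neg = torch.cumsum(1.0 - sorted_label, dim=0)
    total_neg = cum_neg[-1]

    fn = cum_pos[:-1]              # FN on each interior interval
    fp = total_neg - cum_neg[:-1]  # FP on each interior interval

    # Width of each interval between consecutive sorted scores.
    interval = sorted_pred[1:] - sorted_pred[:-1]

    # Area under the pointwise minimum of the FP and FN step functions.
    return torch.sum(torch.minimum(fp, fn) * interval)
```

In a training loop this would simply take the place of a cross-entropy term, e.g. `loss = aum_loss(model(x).squeeze(-1), y)` followed by the usual `loss.backward()`; the loss is piecewise linear in the predicted scores, so standard autograd yields usable subgradients.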