[D] OOD and Spandrels, or What you should know about EBM.


This article compares EBMs to multi-layered perceptrons and addresses a lingering question: whether EBMs are simply an "equivalent reformulation" of traditional MLPs trained with gradient descent. Given the same training data and the same parameter count, do EBMs simply converge to what a traditional MLP trained by gradient descent would produce?

It turns out the answer is no. EBMs differ most sharply from MLPs in how they categorize OOD points that lie near the boundary of the training data. Below are some diagrams that best demonstrate this difference.

Energy-Based Models (EBMs) capture dependencies by associating a scalar energy (a measure of compatibility) to each configuration of the variables. Inference, i.e., making a prediction or decision, consists in setting the value of observed variables and finding values of the remaining variables that minimize the energy. Learning consists in finding an energy function that associates low energies to correct values of the remaining variables, and higher energies to incorrect values.
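As a concrete toy illustration of this framing, here is a minimal sketch. The energy function, the search range, and the grid-search inference procedure are all invented for illustration; real EBMs use learned energies and gradient-based minimization.

```python
import math

# Hypothetical toy energy function: low energy when y ~= sin(x),
# i.e. (x, y) pairs lying on the sine curve are "compatible".
def energy(x, y):
    return (y - math.sin(x)) ** 2

# Inference: clamp the observed variable x and search over the free
# variable y for the value that minimizes the energy.
def infer_y(x, lo=-2.0, hi=2.0, steps=4001):
    best_y, best_e = lo, float("inf")
    for i in range(steps):
        y = lo + (hi - lo) * i / (steps - 1)
        e = energy(x, y)
        if e < best_e:
            best_y, best_e = y, e
    return best_y

print(round(infer_y(0.5), 2))  # close to sin(0.5), i.e. about 0.48
```

The point of the sketch is only the shape of the computation: observed variables are clamped, and prediction is an energy minimization over the remaining ones.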

Spandrels

Training sets were sampled IID from three functions in 2 dimensions:

  • split circle (no noise)

  • twist (no noise)

  • kissing pyramids (with noise)

A ReLU-MLP and an EBM of equivalent size were then trained on the same data, and both competing models were queried very densely in a box around the training data. Each query produced a density scalar, and these scalars were plotted and color-coded.

  • Brown and white indicate the model believes the query point does not belong to the true distribution.

  • Blue and green indicate the model believes the query point is very likely part of the true distribution underlying the training set.
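The dense-querying procedure above can be sketched as follows. The `score` interface, grid resolution, and box bounds are assumptions for illustration, not the authors' actual code:

```python
# Query a model's scalar score on a dense n x n grid over a box,
# producing the density scalars that get color-coded in the figures.
def dense_query(score, xmin, xmax, ymin, ymax, n=100):
    grid = []
    for i in range(n):
        x = xmin + (xmax - xmin) * i / (n - 1)
        row = []
        for j in range(n):
            y = ymin + (ymax - ymin) * j / (n - 1)
            row.append(score(x, y))
        grid.append(row)
    return grid  # n x n density scalars, ready to color-code

# Example with a stand-in score: a bump centered at the origin.
g = dense_query(lambda x, y: 1.0 / (1.0 + x * x + y * y),
                -1.0, 1.0, -1.0, 1.0, n=5)
```

In the real experiment `score` would be the trained EBM's (negated) energy or the MLP's output at the query point.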

The following figure shows the results of dense querying, where (a), (b), and (c) show the behavior of the EBM on split circle, twist, and kissing pyramids respectively, and (d), (e), and (f) show the corresponding queries to the ReLU-MLP.

https://i.imgur.com/J15lquv.png

The thing that immediately pops out here is the profusion of "spandrels" in the ReLU-MLP's out-of-distribution regions. This is starkly contrasted with the complete lack of such "spandrels" in the behavior of the EBM.

So what are these spandrels in the OOD regions? They are artifacts that result from a key weakness of ReLU-MLPs: the MLP will often perform linear extrapolation of the piecewise-linear portion of the model nearest to the edge of the training data domain. This spandrel-forming is most intense when the distribution has (genuine) discontinuities. We find that the MLP carries an intrinsic assumption that the distribution it samples "must" be continuous, even when it is not. Or worse, that the distribution "must" be linear, when it is not. This is the reason why the kissing pyramids were used as an example set.
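To make the piecewise-linear point concrete, here is a minimal sketch with hand-picked (untrained) weights. Beyond the last ReLU kink, every hidden unit is frozen on or off, so the network is exactly affine and any extrapolation is a straight line:

```python
# A tiny one-hidden-layer ReLU network with hand-picked weights,
# used only to show that ReLU nets are affine past their last kink.
def relu(v):
    return max(0.0, v)

def tiny_mlp(x):
    h1 = relu(1.0 * x + 0.0)   # kink at x = 0
    h2 = relu(-1.0 * x + 1.0)  # kink at x = 1
    return 2.0 * h1 - 1.0 * h2 + 0.5

# Past x = 1 both kinks are behind us, so finite differences of the
# output are constant: the extrapolation is exactly linear.
d1 = tiny_mlp(3.0) - tiny_mlp(2.0)
d2 = tiny_mlp(4.0) - tiny_mlp(3.0)
print(d1, d2)  # equal slopes outside the "training" domain
```

Whatever affine piece happens to sit at the boundary of the training data is what gets extended outward, which is exactly how the spandrels form.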

The EBM, however, makes no such assumptions.

Discontinuous distributions

Next we want to see how far we can push the EBM when the sampled distribution is suggestive of continuity, but the connecting region itself is never sampled during training. To do so, we prepare training sets sampled from piecewise-linear functions whose pieces meet near a kink, with the kink itself withheld from sampling. The same procedure as above was repeated for the competing EBM and ReLU-MLP, and the resulting behavior is shown in the figure below.
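The data-generation step described above can be sketched like this. The specific function, gap width, and sample count are invented for illustration:

```python
import random

# A piecewise-linear function: two linear pieces meeting at a kink
# at x = 0. (Stand-in for the functions used in the experiment.)
def kinked(x):
    return abs(x)

# Sample IID from the function, but withhold the region around the
# kink so the model never sees the pieces actually meet.
def sample_without_kink(n=200, gap=0.2, seed=0):
    rng = random.Random(seed)
    data = []
    while len(data) < n:
        x = rng.uniform(-1.0, 1.0)
        if abs(x) > gap:  # skip samples near the kink
            data.append((x, kinked(x)))
    return data

data = sample_without_kink()
```

Both models then see two linear ramps heading toward each other, with no evidence of what happens in the gap.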

The ReLU-MLP exhibits the expected weak behavior. In the absence of any data from the kink, it places one there anyway, and does so in a way that is suspiciously linear. The EBM, on the other hand, is unfazed by this magic trick. In the absence of training samples in such a valley, the EBM assumes the underlying distribution really has no support in those regions.

https://i.imgur.com/l7HFrb6.png

In general we find that the EBM really is a different kind of learning technique. EBM models make different predictions even when all other hyperparameters are held fixed. The differences from other learning methods are most intense in regions very near the training sample points, and for distributions with (genuine) discontinuities.


submitted by /u/moschles
