What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation

Vitaly Feldman*, Chiyuan Zhang* (*equal contribution)

Abstract: Deep learning algorithms are well known to fit the training data very well, often including outliers and mislabeled data points. Such fitting requires memorization of training data labels, a phenomenon that has attracted significant research interest but has not been given a compelling explanation so far. A recent work of Feldman [Fel19] proposes a theoretical explanation for this phenomenon based on a combination of two insights. First, natural image and data distributions are (informally) known to be long-tailed, that is, to have a significant fraction of rare and atypical examples. Second, in a simple theoretical model, such memorization is necessary for achieving close-to-optimal generalization error when the data distribution is long-tailed. However, no direct empirical evidence for this explanation, nor even an approach for obtaining such evidence, was given.

In this work we design experiments to test the key ideas in this theory. The experiments require estimating the influence of each training example on the accuracy at each test example, as well as the memorization values of training examples. Estimating these quantities directly is computationally prohibitive, but we show that closely related subsampled influence and memorization values can be estimated much more efficiently. Our experiments demonstrate the significant benefits of memorization for generalization on several standard benchmarks. They also provide quantitative and visually compelling evidence for the theory put forth in [Fel19].
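For context, the quantities being estimated can be written as follows (notation as in [Fel19]: \mathcal{A} is the training algorithm, S the training set, and S^{\setminus i} the training set with the i-th example removed; memorization is simply the influence of a training example on its own label). The subsampled variants replace the leave-one-out comparison with models trained on random subsets that do or do not contain example i:

    \mathrm{mem}(\mathcal{A}, S, i) \;=\; \Pr_{h \leftarrow \mathcal{A}(S)}\bigl[h(x_i) = y_i\bigr] \;-\; \Pr_{h \leftarrow \mathcal{A}(S^{\setminus i})}\bigl[h(x_i) = y_i\bigr]

    \mathrm{infl}(\mathcal{A}, S, i, (x', y')) \;=\; \Pr_{h \leftarrow \mathcal{A}(S)}\bigl[h(x') = y'\bigr] \;-\; \Pr_{h \leftarrow \mathcal{A}(S^{\setminus i})}\bigl[h(x') = y'\bigr]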

ImageNet Pre-computed Memorization and Influence Value Estimates

We provide pre-computed memorization and influence value estimates on ImageNet for download here. The estimates are computed by training 2,000 ResNet-50 models, each on a random 70% subset of the full ImageNet training set.
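As a sketch of how such subsampled estimates can be computed from these runs (a minimal numpy version; the boolean arrays masks, tr_correct, and tt_correct are illustrative names for per-model subset membership and prediction correctness, not the released file format):

    import numpy as np

    def subsampled_estimates(masks, tr_correct, tt_correct):
        """Estimate memorization and influence values from subsampled training runs.

        masks:      (n_models, n_train) bool; masks[m, i] is True iff training
                    example i was in the random 70% subset used to train model m.
        tr_correct: (n_models, n_train) bool; model m is correct on training example i.
        tt_correct: (n_models, n_test)  bool; model m is correct on test example j.
        """
        n_in = masks.sum(axis=0).astype(float)       # models that saw each training example
        n_out = (~masks).sum(axis=0).astype(float)   # models that did not
        # mem(i) ~ P[correct on x_i | i in subset] - P[correct on x_i | i left out]
        mem = (masks & tr_correct).sum(axis=0) / n_in \
            - (~masks & tr_correct).sum(axis=0) / n_out
        # infl(i, j) ~ P[correct on test x_j | i in subset] - P[correct on test x_j | i left out]
        tt = tt_correct.astype(float)
        infl = masks.T.astype(float) @ tt / n_in[:, None] \
             - (~masks).T.astype(float) @ tt / n_out[:, None]
        return mem, infl

Note that the full influence matrix produced this way is n_train-by-n_test, which is why only its per-class slices are released for ImageNet (see below).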

  • High-influence pairs contains four arrays of equal length. tr_idx and tt_idx contain the indices of the training and test examples, respectively, in each of the selected high-influence pairs. infl contains the influence value estimate of each pair, and mem contains the memorization value estimate of the training example in each pair.
  • ImageNet index contains indexing information. Since there is no pre-defined order of the ImageNet images, we chose an arbitrary data order for our experiments. In this file, we provide the image filenames and labels listed in that order, to help identify the images associated with each influence and memorization value estimate. In particular, tr_filenames and tr_labels contain the filenames and labels of the training set; tt_filenames and tt_labels contain the filenames and labels of the test set. We also provide tr_mem, which contains the memorization value estimates for all the training examples. See here for an example of using this information to build an ImageNet tfrecord dataset with index information from the raw ImageNet images.
  • Class-wise influence matrices contains the n_train-by-n_test influence matrices for each class. Because the influence matrix over the entire training and test sets would be too big (250 GB+), we only provide the per-class influence matrices. For each class K, the arrays tr_classidx_{K} and tt_classidx_{K} provide the indices of the examples that belong to class K in the training set and test set, respectively. The value infl_matrix_class{K}[i, j] is the influence value estimate of the i-th training example in class K on the j-th test example in class K (a verification and loading sketch follows this list).

    Due to the single-file-size limit of 100 MB, we split this file into part-1, part-2, and part-3. The full .npz file can be reconstructed by concatenating the parts in order:

    cat imagenet_infl_matrix_split_*.bin > imagenet_infl_matrix.npz
    The md5sum for the concatenated file is 20290f49a0468de7973892dc47f85e54.
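Once reconstructed, the archive can be verified against this checksum and queried per class; a minimal sketch using the array names listed above:

    import hashlib
    import numpy as np

    # Verify the reconstructed archive against the published checksum.
    md5 = hashlib.md5()
    with open("imagenet_infl_matrix.npz", "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)
    assert md5.hexdigest() == "20290f49a0468de7973892dc47f85e54"

    data = np.load("imagenet_infl_matrix.npz")
    k = 100                               # any class index
    tr_idx = data[f"tr_classidx_{k}"]     # training-set indices of class-k examples
    tt_idx = data[f"tt_classidx_{k}"]     # test-set indices of class-k examples
    infl = data[f"infl_matrix_class{k}"]  # shape (len(tr_idx), len(tt_idx))
    # infl[i, j]: influence of training example tr_idx[i] on test example tt_idx[j].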

CIFAR-100 Pre-computed Memorization and Influence Value Estimates

We provide pre-computed memorization and influence value estimates on CIFAR-100 for download here. The estimates are computed by training 4,000 ResNet-50 models, each on a random 70% subset of the full CIFAR-100 training set. The estimates are provided in the original data order from the official CIFAR-100 website. We also provide tr_labels and tt_labels to help sanity check the data ordering.

  • High-influence pairs contains four arrays of equal length. tr_idx and tt_idx contain the indices of the training and test examples, respectively, in each of the selected high-influence pairs. infl contains the influence value estimate of each pair, and mem contains the memorization value estimate of the training example in each pair (a reading sketch follows this list).
  • Class-wise influence matrices contains the n_train-by-n_test influence matrix for each class K in the array named infl_matrix_class{K}. The arrays tr_classidx_{K} and tt_classidx_{K} provide the indices of the examples that belong to class K in the training set and test set, respectively. tr_labels and tt_labels provide the labels of the training set and test set, respectively, to help sanity check the data ordering. Finally, tr_mem contains the memorization value estimates for all the training examples.
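As referenced above, a minimal sketch of reading the CIFAR-100 high-influence pairs (the array names are the ones listed above; the filename cifar100_high_infl_pairs.npz is hypothetical):

    import numpy as np

    pairs = np.load("cifar100_high_infl_pairs.npz")  # hypothetical filename
    tr_idx, tt_idx = pairs["tr_idx"], pairs["tt_idx"]
    infl, mem = pairs["infl"], pairs["mem"]

    # Print the ten strongest pairs: memorized training example -> test example it helps.
    for i in np.argsort(infl)[::-1][:10]:
        print(f"train #{tr_idx[i]} (mem={mem[i]:.2f}) "
              f"-> test #{tt_idx[i]} (infl={infl[i]:.2f})")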

Pre-trained Model Checkpoints

We have also released the checkpoints of the models trained on the different random subsets. The download links and details on how to load those checkpoints can be found here.

ImageNet Memorization Value Examples

We show a histogram of the memorization value estimates of all the training examples. Use the slider below to select example visualizations around different memorization values for a random subset of classes. The caption on top of each image shows the memorization value estimate of that image.
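The histogram can be reproduced directly from the released tr_mem array; a minimal matplotlib sketch (the filename imagenet_index.npz is hypothetical; substitute the index file downloaded above):

    import matplotlib.pyplot as plt
    import numpy as np

    # Load the released memorization estimates; the filename is hypothetical.
    tr_mem = np.load("imagenet_index.npz")["tr_mem"]

    plt.hist(tr_mem, bins=50)
    plt.xlabel("memorization value estimate")
    plt.ylabel("number of training examples")
    plt.yscale("log")  # most examples have near-zero memorization estimates
    plt.show()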

ImageNet High-Influence Value Examples

We show a histogram of the influence value estimates of the selected high-influence pairs (mem ≥ 0.25, infl ≥ 0.15; see the paper for more details). Use the slider below to select example visualizations around different influence value estimates. In each row of the visualization, the first column shows an image from the training set (the caption above the image shows its memorization value estimate). The remaining columns show images from the test set, ranked by influence value estimate (also shown in the captions). Thus the training image and the first test image in each row form a selected high-influence pair.
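The selection criterion above can be applied directly to the released class-wise matrices; a sketch using the array names described earlier (the index filename is again hypothetical):

    import numpy as np

    data = np.load("imagenet_infl_matrix.npz")
    tr_mem = np.load("imagenet_index.npz")["tr_mem"]  # hypothetical filename

    selected = []
    for k in range(1000):  # ImageNet has 1,000 classes
        tr_idx = data[f"tr_classidx_{k}"]
        tt_idx = data[f"tt_classidx_{k}"]
        infl = data[f"infl_matrix_class{k}"]
        # Keep pairs whose training example is memorized (mem >= 0.25)
        # and whose influence estimate on the test example is large (infl >= 0.15).
        rows, cols = np.nonzero((tr_mem[tr_idx][:, None] >= 0.25) & (infl >= 0.15))
        selected += list(zip(tr_idx[rows], tt_idx[cols], infl[rows, cols]))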

CIFAR-100 Memorization Value Examples

We show a histogram of the memorization value estimates of all the training examples. Use the slider below to select example visualizations around different memorization values for a random subset of classes. The caption on top of each image shows the memorization value estimate of that image.

CIFAR-100 High-Influence Value Examples

We show a histogram of the influence value estimates of the selected high-influence pairs (mem ≥ 0.25, infl ≥ 0.15; see the paper for more details). Use the slider below to select example visualizations around different influence value estimates. In each row of the visualization, the first column shows an image from the training set (the caption above the image shows its memorization value estimate). The remaining columns show images from the test set, ranked by influence value estimate (also shown in the captions). Thus the training image and the first test image in each row form a selected high-influence pair.