Investigating the relationship between diversity and generalization in deep neural networks
Proceedings of the 7th Northern Lights Deep Learning Conference (NLDL), PMLR 307:375-387, 2026.
Abstract
The improved generalization of ensembles is frequently attributed to \emph{diversity} among their members. By viewing a single neural network as an \emph{implicit ensemble}, we perform an exploratory investigation that applies well-known ensemble diversity measures to a neural network in order to study the relationship between diversity and generalization in the over-parameterized regime. Our results show that i) deeper layers of the network generally exhibit higher diversity, particularly for MLPs, and ii) layer-wise accuracy is positively correlated with diversity. Additionally, we study the effects of well-known regularizers, such as Dropout, DropConnect, and batch size, on diversity and generalization. We generally find that increasing the strength of the regularizer increases the diversity in the network, and that this increase in diversity is positively correlated with model accuracy. We show that these results hold for several benchmark datasets (such as Fashion-MNIST and CIFAR-10) and architectures (MLPs and CNNs). Our findings suggest new avenues of research into the generalization ability of deep neural networks.
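As one concrete illustration of the kind of measure the abstract refers to, the sketch below computes pairwise disagreement, a standard ensemble diversity measure, over a set of member predictions. The abstract does not specify which diversity measures are used or how implicit-ensemble members are formed from a layer, so the member construction suggested in the comments (read-outs from subsets of a layer's units) is purely an illustrative assumption, not the paper's method.

```python
import numpy as np

def pairwise_disagreement(member_preds: np.ndarray) -> float:
    """Average pairwise disagreement over ensemble members.

    member_preds: array of shape (n_members, n_samples) holding each
    member's predicted class label for every sample.
    """
    n_members = member_preds.shape[0]
    total, pairs = 0.0, 0
    for i in range(n_members):
        for j in range(i + 1, n_members):
            # Fraction of samples on which members i and j predict different labels.
            total += np.mean(member_preds[i] != member_preds[j])
            pairs += 1
    return total / pairs if pairs else 0.0

# Hypothetical usage: the "members" here are random label predictions; in an
# implicit-ensemble setting they could instead be predictions read out from
# subsets of a layer's hidden units (an illustrative construction only).
rng = np.random.default_rng(0)
member_preds = rng.integers(0, 10, size=(8, 1000))  # 8 members, 1000 samples, 10 classes
print(f"disagreement diversity: {pairwise_disagreement(member_preds):.3f}")
```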