Deep Layer-wise Networks Have Closed-Form Weights
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:188-225, 2022.
Abstract
There is currently a debate within the neuroscience community over the plausibility of the brain performing backpropagation (BP). To better mimic the brain, training a network one layer at a time with only a "single forward pass" has been proposed as an alternative to bypass BP; we refer to these networks as "layer-wise" networks. We continue the work on layer-wise networks by answering two outstanding questions. First, do they have a closed-form solution? Second, how do we know when to stop adding more layers? This work proves that the "Kernel Mean Embedding" is the closed-form solution that achieves the network's global optimum while driving these networks to converge towards a highly desirable kernel for classification; we call it the Neural Indicator Kernel.
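As a rough illustration of the closed-form idea, the sketch below computes the empirical kernel mean embedding of each class and scores a test point by its average kernel similarity to each class. The Gaussian kernel, the bandwidth, and the nearest-mean-embedding classification rule are illustrative assumptions for this sketch, not the paper's exact layer-wise construction.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Pairwise Gaussian (RBF) kernel between rows of X and rows of Y."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-sq_dists / (2.0 * sigma**2))

def class_mean_embedding_scores(X_train, y_train, X_test, sigma=1.0):
    """Score each test point against the empirical kernel mean embedding of
    every class: score(x, c) = (1/n_c) * sum_{i in class c} k(x, x_i)."""
    classes = np.unique(y_train)
    K = gaussian_kernel(X_test, X_train, sigma)  # shape: (n_test, n_train)
    scores = np.stack(
        [K[:, y_train == c].mean(axis=1) for c in classes], axis=1
    )
    return classes, scores

# Toy usage: two Gaussian blobs; predict the class whose mean embedding
# the test point is most similar to.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(-2.0, 1.0, (50, 2)),
                     rng.normal(+2.0, 1.0, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)
X_test = rng.normal(+2.0, 1.0, (5, 2))

classes, scores = class_mean_embedding_scores(X_train, y_train, X_test)
print(classes[np.argmax(scores, axis=1)])  # expected: mostly class 1
```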