Each hidden layer in a neural network can be formulated as one HOPE model [1], which combines a linear orthogonal projection with a finite mixture model.
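To make the factorization concrete, below is a minimal NumPy sketch (not the released MATLAB code) of the HOPE view of a sigmoid hidden layer: the weight matrix is factored into an orthogonal projection U followed by a model layer V, and an orthogonality penalty on the rows of U is added to the training loss, following the formulation in [1,3]. All names, shapes, and the specific penalty form are illustrative assumptions, not the authors' implementation.

import numpy as np

def hope_layer(x, U, V, b):
    """One HOPE-style hidden layer: the usual weight matrix W is factored as
    W = V @ U, i.e. an orthogonal projection U followed by a 'model' layer V.
    Shapes (illustrative): x is (D,), U is (M, D) with M <= D, V is (K, M), b is (K,)."""
    z = U @ x                                      # linear projection to an M-dim latent space
    return 1.0 / (1.0 + np.exp(-(V @ z + b)))      # sigmoid hidden activations

def orthogonality_penalty(U):
    """Penalty encouraging the rows of U to be mutually orthogonal during training:
    sum of absolute pairwise cosine similarities between rows (added to the loss)."""
    Un = U / np.linalg.norm(U, axis=1, keepdims=True)  # unit-normalize each row
    G = np.abs(Un @ Un.T)                              # |cosine| between every pair of rows
    return np.sum(np.triu(G, k=1))                     # off-diagonal terms only

# Toy usage: a 784-d input (e.g. one MNIST image) projected to 256 dims, then 512 hidden units.
rng = np.random.default_rng(0)
x = rng.standard_normal(784)
U = rng.standard_normal((256, 784)) * 0.01
V = rng.standard_normal((512, 256)) * 0.01
b = np.zeros(512)
h = hope_layer(x, U, V, b)
print(h.shape, orthogonality_penalty(U))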
References:
[1] Shiliang Zhang and Hui Jiang, "Hybrid Orthogonal Projection and Estimation (HOPE): A New Framework to Probe and Learn Neural Networks," arXiv:1502.00702.
[2] Shiliang Zhang, Hui Jiang, and Lirong Dai, "The New HOPE Way to Learn Neural Networks," Proc. of the Deep Learning Workshop at ICML 2015, July 2015.
[3] Shiliang Zhang, Hui Jiang, and Lirong Dai, "Hybrid Orthogonal Projection and Estimation (HOPE): A New Framework to Learn Neural Networks," Journal of Machine Learning Research (JMLR), 17(37):1-33, 2016.
Software:
The MATLAB code to reproduce the MNIST results in [1,2] can be downloaded here.