====== HOPE ======

===== Hybrid Orthogonal Projection & Estimation (HOPE) =====

{{:projects:current:one-nn-layer.png?0x220 |NN hidden layer}}{{ :projects:current:one-hope-layer.png?0x220| HOPE}}
\\
Each hidden layer in a neural network can be formulated as one HOPE model [1]:
  * The HOPE model combines a linear orthogonal projection and a mixture model under a unified generative modelling framework (an illustrative sketch is given at the end of this page);
  * The HOPE model can be used as a novel tool to probe why and how neural networks work;
  * The HOPE model provides several new learning algorithms to learn neural networks in either supervised or unsupervised ways.

**References:** \\
[1] //Shiliang Zhang and Hui Jiang//, "Hybrid Orthogonal Projection and Estimation (HOPE): A New Framework to Probe and Learn Neural Networks," [[http://arxiv.org/abs/1502.00702|arXiv:1502.00702]], 2015.

[2] //Shiliang Zhang, Hui Jiang and Lirong Dai//, "The New HOPE Way to Learn Neural Networks," Proc. of Deep Learning Workshop at ICML 2015, July 2015. ([[https://8109f4a4-a-62cb3a1a-s-sites.googlegroups.com/site/deeplearning2015/9.pdf?attachauth=ANoY7coK1o0yBn1dsWEI0HgzM8TRfaBZAVw5BlivHyIiOJt3Hz1Q-fSv3qKwTo4eeS2bdcIrseorD-GCFaggMP9n6EKExhoYnm8smj5LogA6kmlM5SPmMWJxf3uhMRKk5T0M9_cEdtCVwCBSQh4Ya3PYW5OogwFzibs-vpFIZfN6OQG50CmwNBK4x3-tyi0nrihKfNFeMMbUyMWZeeSj9ToElpkc2qB17A%3D%3D&attredirects=0|paper]])

[3] //Shiliang Zhang, Hui Jiang and Lirong Dai//, "[[http://jmlr.org/papers/v17/15-335.html|Hybrid Orthogonal Projection and Estimation (HOPE): A New Framework to Learn Neural Networks]]," //Journal of Machine Learning Research (JMLR)//, 17(37):1-33, 2016.

**Software:** \\
The MATLAB code to reproduce the MNIST results in [1,2] can be downloaded {{:projects:hope:hope4mnist_matlab.rar|here}}.
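**Illustrative sketch:** \\
The sketch below is a minimal NumPy illustration of the orthogonal-projection part of a HOPE layer, assuming the common strategy of encouraging orthogonality through a penalty on the pairwise cosine similarities of the projection rows. All names (''project'', ''hope_penalty'', ''beta'') are illustrative and do not come from the released MATLAB code; see [1,3] for the actual model and training algorithms.

<code python>
# Minimal sketch of the HOPE idea: a hidden layer viewed as a linear projection
# followed by a nonlinear (mixture-like) model, where the projection is pushed
# towards orthogonality by an extra penalty term during training.
# Names and the penalty weight below are illustrative assumptions, not the paper's code.
import numpy as np

def project(U, x):
    """Linear projection of an input vector x (D,) via the matrix U (M, D)."""
    return U @ x

def hope_penalty(U, eps=1e-8):
    """Penalty that is zero when the rows of U are mutually orthogonal.

    Sums the absolute cosine similarity between every pair of distinct rows,
    so minimising it drives the projection towards an orthogonal one.
    """
    norms = np.linalg.norm(U, axis=1, keepdims=True) + eps
    Un = U / norms                      # row-normalised projection matrix
    G = np.abs(Un @ Un.T)               # |cosine| between all pairs of rows
    return G.sum() - np.trace(G)        # exclude the diagonal (self-similarity)

# Usage sketch: add beta * hope_penalty(U) to the network's training loss so the
# projection layer stays (approximately) orthogonal while the nonlinear layer on
# top of the projection is learned as usual.
rng = np.random.default_rng(0)
U = rng.standard_normal((64, 256))      # M=64 projected dims, D=256 input dims
beta = 0.1                              # assumed weight for the orthogonality penalty
extra_loss = beta * hope_penalty(U)
</code>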