====== Current Projects ======

===== Hybrid Orthogonal Projection & Estimation (HOPE) =====

{{:projects:current:one-nn-layer.png?0x220 |NN hidden layer}}{{ :projects:current:one-hope-layer.png?0x220| HOPE}}

Each hidden layer in a neural network can be formulated as one HOPE model [1].

  * The HOPE model combines a linear orthogonal projection and a mixture model under a unified generative modelling framework;
  * The HOPE model can be used as a novel tool to probe why and how neural networks work;
  * The HOPE model provides several new learning algorithms to train neural networks in either a supervised or an unsupervised manner (see the sketch below).

**Reference:** \\
[1] //Shiliang Zhang and Hui Jiang//, "Hybrid Orthogonal Projection and Estimation (HOPE): A New Framework to Probe and Learn Neural Networks," [[http://arxiv.org/abs/1502.00702|arXiv:1502.00702]].

**Software:** \\
The MATLAB code to reproduce the MNIST results in [1] can be downloaded here.
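
To give a quick feel for the idea without the MATLAB package, the sketch below shows a HOPE-style layer in NumPy: an (approximately) orthogonal linear projection followed by a simple mixture-style scoring stage, together with the kind of penalty one can add to encourage the projection rows to stay orthogonal during learning. This is a minimal illustrative sketch; the function names (''orthogonal_penalty'', ''hope_layer''), the softmax-style mixture stage, and all parameter choices are assumptions made for illustration and do not correspond to the released code or the exact formulation in [1].

<code python>
# Minimal sketch of a HOPE-style layer: orthogonal projection + mixture scoring.
# Illustrative only; names and details are assumptions, not the released software.
import numpy as np

def orthogonal_penalty(U):
    """Sum of absolute cosine similarities between distinct rows of U.

    Driving this towards zero pushes the rows of U to be mutually orthogonal,
    i.e. U behaves like an orthogonal projection.
    """
    norms = np.linalg.norm(U, axis=1, keepdims=True)   # (M, 1) row norms
    cos = (U @ U.T) / (norms @ norms.T)                # (M, M) cosine matrix
    off_diag = np.abs(cos) - np.eye(U.shape[0])        # zero the diagonal (self-similarity)
    return off_diag.sum() / 2.0                        # count each row pair once

def hope_layer(x, U, centers, beta=1.0):
    """One HOPE-style layer: project x with U, then score the projection
    against a set of mixture 'centers' and return soft assignments."""
    z = U @ x                                          # linear projection, shape (M,)
    scores = beta * (centers @ z)                      # similarity to each mixture component
    scores -= scores.max()                             # numerical stability for the softmax
    post = np.exp(scores)
    return post / post.sum()                           # posterior-like responsibilities

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D, M, K = 20, 8, 5                                 # input dim, projection dim, #components
    U = rng.standard_normal((M, D)) * 0.1
    centers = rng.standard_normal((K, M))
    x = rng.standard_normal(D)
    print("layer output:", hope_layer(x, U, centers))
    print("orthogonality penalty:", orthogonal_penalty(U))
</code>

In a training loop, the penalty term would be weighted and added to the usual objective so that the projection matrix is gradually driven towards orthogonality while the mixture stage is learned; this is one simple way to realise the constraint and is stated here only as an assumption about how such a layer could be trained.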