Current Projects
- Hybrid Orthogonal Projection & Estimation (HOPE)
Each hidden layer in a neural network can be formulated as a HOPE model.
The HOPE model combines a linear orthogonal projection and a mixture model under a unified generative modelling framework. Each hidden layer of a neural network (NN) can be reformulated as a HOPE model. As a result, the HOPE framework can be used as a novel tool to probe why and how NNs work; more importantly, it also provides several new learning algorithms to train NNs in either a supervised or an unsupervised manner. In this work, we have investigated the HOPE framework for learning NNs on several standard tasks, including image recognition on MNIST and speech recognition on TIMIT. Experimental results show that the HOPE framework yields significant performance gains over the current state-of-the-art methods in various types of NN learning problems, including unsupervised feature learning, supervised learning, and semi-supervised learning.
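As a rough illustration of the idea, the minimal sketch below views one hidden layer through the HOPE lens: the input is first passed through a linear projection whose rows are encouraged to be orthonormal, and the projected vector is then scored against a set of mixture components to produce the hidden activations. The function names (hope_layer, orthogonal_penalty), the softmax-style normalization, and the soft orthogonality penalty are illustrative assumptions, not the exact formulation or training objective used in the HOPE work.

```python
import numpy as np

def orthogonal_penalty(U):
    """Soft penalty that is zero when the rows of U are orthonormal
    (an assumed surrogate for the orthogonality constraint)."""
    gram = U @ U.T
    return np.sum((gram - np.eye(U.shape[0])) ** 2)

def hope_layer(x, U, C):
    """One HOPE-style hidden layer (illustrative sketch).

    x : (d,)   input vector
    U : (k, d) projection matrix (k < d), rows ideally orthonormal
    C : (m, k) mixture-component parameters in the projected space
    Returns (m,) hidden activations as normalized component responses.
    """
    z = U @ x                      # linear (orthogonal) projection
    scores = C @ z                 # affinity of z to each mixture component
    scores -= scores.max()         # numerical stability for the softmax
    resp = np.exp(scores)
    return resp / resp.sum()       # posterior-like responsibilities

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, k, m = 20, 8, 16
    # Initialize U with orthonormal rows via a QR factorization.
    U = np.linalg.qr(rng.standard_normal((d, k)))[0].T
    C = rng.standard_normal((m, k))
    x = rng.standard_normal(d)
    h = hope_layer(x, U, C)
    print("hidden activations sum to", h.sum())
    print("orthogonality penalty:", orthogonal_penalty(U))
```

In an actual NN, the penalty term would be added to the training loss so that the projection stays close to orthogonal while U and C are learned jointly; the sketch only shows the forward computation of a single layer.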