===== Hybrid Orthogonal Projection & Estimation (HOPE) =====

{{:
Each hidden layer in neural networks can be formulated as one HOPE model [1].
\\
  * The HOPE model combines a linear orthogonal projection and a finite mixture model under a unified generative modelling framework;
  * The HOPE model can be used as a novel tool to probe why and how NNs work;
  * The HOPE model provides several new algorithms to learn NNs in either a supervised or an unsupervised manner.
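The combination described above can be illustrated with a minimal sketch (this is not the authors' released code): an orthogonal projection obtained via QR decomposition maps high-dimensional data into a low-dimensional latent space, where a finite mixture model scores each sample. The Gaussian mixture with identity covariances and the specific dimensions below are illustrative stand-ins, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (illustrative placeholders): D-dim data, M-dim latent space,
# K mixture components, N samples.
D, M, K, N = 8, 3, 2, 5

# Orthogonal projection: QR of a random D x M matrix gives orthonormal
# columns; U (M x D) then satisfies U @ U.T ~= I_M.
Q, _ = np.linalg.qr(rng.standard_normal((D, M)))
U = Q.T

x = rng.standard_normal((N, D))
z = x @ U.T  # project each sample into the M-dim latent space

# Finite mixture over the latent space (Gaussian stand-in with identity
# covariance; the HOPE framework pairs the projection with a mixture model
# in this projected space).
means = rng.standard_normal((K, M))
weights = np.full(K, 1.0 / K)

def mixture_loglik(z, means, weights):
    # log p(z) = log sum_k w_k * N(z; mu_k, I)
    diff = z[:, None, :] - means[None, :, :]              # (N, K, M)
    logp = -0.5 * (diff ** 2).sum(-1) - 0.5 * M * np.log(2 * np.pi)
    return np.logaddexp.reduce(np.log(weights) + logp, axis=1)

ll = mixture_loglik(z, means, weights)
print(ll.shape)  # prints (5,) -- one log-likelihood per sample
assert np.allclose(U @ U.T, np.eye(M), atol=1e-8)
```

Learning such a model would alternate between updating the mixture parameters and re-orthogonalizing the projection, which is what lets the same structure be trained with or without labels.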
\\
**Reference:**
\\
[1] //Shiliang Zhang and Hui Jiang//, "Hybrid Orthogonal Projection and Estimation (HOPE): A New Framework to Probe and Learn Neural Networks," [[http://
\\
**Software:**
\\
The Matlab code to reproduce
projects/current/start.1423618808.txt.gz · Last modified: by hj