===== Hybrid Orthogonal Projection & Estimation (HOPE) =====

{{:
Each hidden layer in neural networks can be formulated as one HOPE model [1].
\\
  * The HOPE model combines a linear orthogonal projection and a mixture model under a unified generative modelling framework; each hidden layer of a neural network can be reformulated as a HOPE model (a minimal sketch follows this list).
  * The HOPE model can be used as a novel tool to probe why and how NNs work.
  * The HOPE model provides several new learning algorithms to learn NNs in either a supervised or an unsupervised way.
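Below is a minimal, illustrative sketch of the two ingredients named above: a linear projection followed by a mixture model over the projected features, together with the orthogonality penalty that the HOPE learning algorithms can optimise alongside the data likelihood. This is not the released Matlab implementation; the function names, the spherical-Gaussian mixture choice, and the dimensions are all assumptions made for illustration.
<code python>
# Illustrative sketch of a single HOPE-style layer: an (approximately)
# orthogonal projection U followed by a mixture model in the projected
# space. Names and the Gaussian mixture are assumptions, not the
# authors' released code.
import numpy as np

rng = np.random.default_rng(0)

D, M, K = 64, 16, 8            # input dim, projected dim (M < D), mixture size
U = rng.standard_normal((M, D)) / np.sqrt(D)   # projection, learned in practice
mu = rng.standard_normal((K, M))               # mixture component centres
log_pi = np.full(K, -np.log(K))                # uniform mixture weights

def orth_penalty(U):
    """Penalty ||U U^T - I||_F^2 pushing U toward orthonormal rows,
    the constraint HOPE places on the projection."""
    G = U @ U.T
    return np.sum((G - np.eye(U.shape[0])) ** 2)

def hope_forward(x):
    """Project x, then score it under a spherical Gaussian mixture
    (a stand-in for the mixture model in the HOPE framework)."""
    z = U @ x                              # linear projection step
    d2 = np.sum((mu - z) ** 2, axis=1)     # squared distance to each centre
    log_p = log_pi - 0.5 * d2              # unnormalised log joint per component
    resp = np.exp(log_p - log_p.max())     # responsibilities, numerically stable
    return resp / resp.sum()

x = rng.standard_normal(D)
print(hope_forward(x).round(3), orth_penalty(U).round(3))
</code>
Roughly speaking, the mixture responsibilities play the role of the layer's hidden activations; maximising the data likelihood while driving the orthogonality penalty to zero is what allows the same layer to be trained with or without labels.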
\\
**Reference:**
\\
[1] //Shiliang Zhang and Hui Jiang//, "Hybrid Orthogonal Projection and Estimation (HOPE): A New Framework to Probe and Learn Neural Networks", arXiv:1502.00702, 2015.
\\
**Software:**
\\
The Matlab code to reproduce the results reported in [1] is available for download.