===== Hybrid Orthogonal Projection & Estimation (HOPE) =====
{{:
Each hidden layer in neural networks can be formulated as one HOPE model [1].
\\
  * The HOPE model combines a linear orthogonal projection and a mixture model under a unified generative modelling framework;
  * The HOPE model can be used as a novel tool to probe why and how NNs work;
  * The HOPE model provides several new learning algorithms to train NNs in either a supervised or an unsupervised fashion.
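The structure described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' released implementation: it assumes the projection is an orthonormal matrix ''U'' (obtained here via QR instead of being learned with an orthogonality penalty), and it stands in a von Mises-Fisher-style mixture with unit-norm component means for the mixture-model part.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, K = 20, 8, 4   # input dim, latent (projected) dim, mixture components

# Linear orthogonal projection U (M x D) with U @ U.T = I.
# Here we simply take the Q factor of a random matrix; in the HOPE
# framework U would be learned jointly with the mixture.
Q, _ = np.linalg.qr(rng.standard_normal((D, M)))
U = Q.T                                   # rows are orthonormal

# Mixture part: unit-norm component means in the latent space
# (a vMF-style mixture is assumed here for illustration).
mu = rng.standard_normal((K, M))
mu /= np.linalg.norm(mu, axis=1, keepdims=True)

def hope_layer(x):
    """Project the input onto the latent subspace, then softly assign
    it to mixture components by cosine similarity to their means."""
    z = U @ x                             # orthogonal projection
    z = z / np.linalg.norm(z)             # keep only the direction
    logits = mu @ z                       # per-component similarity
    post = np.exp(logits) / np.exp(logits).sum()   # soft assignment
    return z, post

z, post = hope_layer(rng.standard_normal(D))
print(post)   # posterior weights over the K components
```

The soft-assignment vector plays the role a hidden layer's activations play in a standard NN, which is how each hidden layer can be read as one HOPE model.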
\\
**Reference:**
\\
[1] //Shiliang Zhang and Hui Jiang//, "..."
\\
**Software:**
\\
The Matlab code to reproduce ...
projects/current/start.1423618910.txt.gz · Last modified: by hj