course_outline
course_outline [2016/09/12 21:11] (current) – hj
===== Weeks 1 and 2 =====
**Machine Learning** (basic concepts)

**Math Background**:
===== Week 3 =====
**Data, Feature and Model**:

===== Week 4 =====
**Machine Learning Basics**:
===== Week 5 =====
**Generative Models (I)**: model estimation: maximum likelihood, Bayesian learning; multivariate Gaussian, Multinomial
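As a concrete preview of the maximum-likelihood estimation covered this week: for a multivariate Gaussian the ML estimates have a closed form, the sample mean and the 1/N-normalized sample covariance. A minimal sketch (the data and variable names are illustrative, not course material):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw samples from a known 2-D Gaussian so the estimates can be checked.
true_mean = np.array([1.0, -2.0])
true_cov = np.array([[2.0, 0.5],
                     [0.5, 1.0]])
X = rng.multivariate_normal(true_mean, true_cov, size=5000)

# Maximum-likelihood estimates for a multivariate Gaussian:
# mean = sample average; covariance = average outer product of residuals
# (note the 1/N normalizer, not the unbiased 1/(N-1)).
mu_hat = X.mean(axis=0)
centered = X - mu_hat
sigma_hat = centered.T @ centered / X.shape[0]

print(mu_hat)      # close to true_mean
print(sigma_hat)   # close to true_cov
```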
===== Week 6 =====
===== Week 7 =====
**Discriminative Models (II)**: Neural Networks and Deep Learning
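The neural-network material can be previewed with a one-hidden-layer network trained by hand-written backpropagation; XOR is the standard toy problem a linear classifier cannot solve. A minimal sketch (the architecture and hyperparameters are illustrative, not from the course):

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR: the classic problem a linear model cannot fit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of tanh units, sigmoid output, cross-entropy loss.
W1 = rng.normal(scale=1.0, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1))
b2 = np.zeros(1)

lr = 0.3
for _ in range(3000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    # Backward pass: sigmoid + cross-entropy gives (out - y) at the output
    d_logits = out - y
    d_h = (d_logits @ W2.T) * (1.0 - h ** 2)
    W2 -= lr * h.T @ d_logits
    b2 -= lr * d_logits.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

pred = (out > 0.5).astype(int).ravel()
print(pred)
```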
===== Week 8 =====
**Generative Models (II)**: EM algorithm; Finite Mixture models, e.g. Gaussian mixture model
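The EM algorithm for a Gaussian mixture fits in a few lines of NumPy; a sketch for a two-component 1-D mixture (the synthetic data and initial values are illustrative, not course material):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data from two 1-D Gaussians; EM should recover the components.
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])

# Initial guesses (illustrative values)
pi = np.array([0.5, 0.5])    # mixing weights
mu = np.array([-1.0, 1.0])   # component means
var = np.array([1.0, 1.0])   # component variances

def normal_pdf(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

for _ in range(100):
    # E-step: responsibility of each component for each point
    dens = pi * normal_pdf(x[:, None], mu, var)     # shape (N, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from responsibility-weighted data
    nk = resp.sum(axis=0)
    pi = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print(mu)   # near [-2, 3] (component order may vary)
```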
===== Week 9 =====
**Generative Models**
===== Week 10 =====
**Selected Advanced Topics**: Hidden Markov Model (HMM), HMM concepts; three algorithms: forward-backward; Viterbi
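The Viterbi algorithm listed here is a short dynamic program over states; a sketch on a toy two-state weather HMM (the transition/emission numbers are illustrative, not from the course):

```python
import numpy as np

# Toy HMM. States: 0 = Rainy, 1 = Sunny;
# observations: 0 = walk, 1 = shop, 2 = clean.
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3],
                  [0.4, 0.6]])
emit = np.array([[0.1, 0.4, 0.5],
                 [0.6, 0.3, 0.1]])

def viterbi(obs):
    """Most likely state sequence, via dynamic programming in log space."""
    T, n_states = len(obs), len(start)
    delta = np.zeros((T, n_states))          # best log-prob ending in each state
    back = np.zeros((T, n_states), dtype=int)
    delta[0] = np.log(start) + np.log(emit[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(trans)   # (from, to)
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(emit[:, obs[t]])
    # Trace the best path backwards from the best final state
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

print(viterbi([0, 1, 2]))   # -> [1, 0, 0] for these toy parameters
```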
===== Weeks 11-12 =====
student presentation