====== Course Outline ======

===== Week 2 =====
  
**Math Background**: probabilities; Bayes theorem; statistics; estimation; regression; hypothesis testing; entropy; mutual information; decision trees; optimization theory and convex optimization
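
A short numerical sketch of Bayes' theorem and entropy; the prior, likelihood, and distribution values below are made up purely for illustration.

<code python>
import math

# Bayes theorem: P(H|E) = P(E|H) * P(H) / P(E), with invented numbers.
prior_h = 0.01                 # P(H): prior probability of the hypothesis
likelihood = 0.9               # P(E|H): probability of the evidence given H
evidence = likelihood * prior_h + 0.05 * (1 - prior_h)  # P(E) by total probability
posterior = likelihood * prior_h / evidence
print(f"P(H|E) = {posterior:.3f}")

# Entropy of a discrete distribution: H(X) = -sum_i p_i * log2(p_i).
p = [0.5, 0.25, 0.25]
entropy = -sum(pi * math.log2(pi) for pi in p)
print(f"H(X) = {entropy:.3f} bits")
</code>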
  
===== Week 3 =====
===== Week 4 =====
  
**Generative Models**: model estimation (maximum likelihood, Bayesian learning, EM algorithm); multivariate Gaussian, Gaussian mixture model, multinomial, Markov chain model, etc.
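
As an illustration of the estimation topics, here is a minimal EM sketch for a one-dimensional, two-component Gaussian mixture on synthetic data; the initialization and iteration count are arbitrary choices, not a prescribed recipe.

<code python>
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """EM for a 1-D, two-component Gaussian mixture (illustrative sketch)."""
    # Crude initialization from the data range.
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return w, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
print(em_gmm_1d(x))
</code>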
  
===== Week 5 =====
===== Week 8 =====
  
**Automatic Speech Recognition (ASR) (I)**: Acoustic and Language Modeling
HMM for ASR; ASR as an example of pattern classification;
Acoustic modeling: HMM learning (ML, MAP, DT); parameter tying (decision-tree-based state tying);
Language modeling: n-gram models, smoothing, learning, perplexity, class-based models.
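
To make the language-modeling terms concrete, here is a small sketch of a bigram model with add-one (Laplace) smoothing and perplexity; the toy corpus and the choice of add-one smoothing are for illustration only.

<code python>
import math
from collections import Counter

def train_bigram(sentences):
    """Count unigrams and bigrams over whitespace-tokenized sentences."""
    uni, bi = Counter(), Counter()
    for sent in sentences:
        toks = ["<s>"] + sent.split() + ["</s>"]
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return uni, bi

def perplexity(sentences, uni, bi):
    """Perplexity under a bigram model with add-one (Laplace) smoothing."""
    vocab = len(uni)
    log_prob, n_tokens = 0.0, 0
    for sent in sentences:
        toks = ["<s>"] + sent.split() + ["</s>"]
        for prev, cur in zip(toks, toks[1:]):
            p = (bi[(prev, cur)] + 1) / (uni[prev] + vocab)
            log_prob += math.log(p)
            n_tokens += 1
    return math.exp(-log_prob / n_tokens)

train = ["the cat sat", "the dog sat", "a cat ran"]
uni, bi = train_bigram(train)
print(perplexity(["the cat ran"], uni, bi))
</code>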
  
===== Week 9 =====
  
**Automatic Speech Recognition (ASR) (II)**: Search. Why search; Viterbi decoding in a large HMM; beam search; tree-based lexicon; dynamic decoding; static decoding; weighted finite-state transducer (WFST)
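
As a pointer to these topics, here is a minimal sketch of Viterbi decoding in the log domain with a crude beam-pruning threshold; the toy weather HMM, its probabilities, and the beam parameter are invented for illustration and are much simpler than a real ASR decoder.

<code python>
import math

def viterbi(obs, states, log_start, log_trans, log_emit, beam=10.0):
    """Viterbi decoding in the log domain with simple beam pruning (toy sketch)."""
    # scores[s]: best log score of any path ending in state s at the current frame.
    scores = {s: log_start[s] + log_emit[s][obs[0]] for s in states}
    backptrs = []
    for o in obs[1:]:
        best = max(scores.values())
        # Beam pruning: keep only predecessor states close to the current best score.
        active = [s for s, v in scores.items() if v >= best - beam]
        new_scores, ptr = {}, {}
        for s in states:
            score, prev = max((scores[p] + log_trans[p][s], p) for p in active)
            new_scores[s] = score + log_emit[s][o]
            ptr[s] = prev
        scores = new_scores
        backptrs.append(ptr)
    # Backtrace from the best final state.
    state = max(scores, key=scores.get)
    path = [state]
    for ptr in reversed(backptrs):
        state = ptr[state]
        path.append(state)
    return path[::-1]

log = math.log
states = ["Rainy", "Sunny"]
start = {"Rainy": log(0.6), "Sunny": log(0.4)}
trans = {"Rainy": {"Rainy": log(0.7), "Sunny": log(0.3)},
         "Sunny": {"Rainy": log(0.4), "Sunny": log(0.6)}}
emit = {"Rainy": {"walk": log(0.1), "shop": log(0.4), "clean": log(0.5)},
        "Sunny": {"walk": log(0.6), "shop": log(0.3), "clean": log(0.1)}}
print(viterbi(["walk", "shop", "clean"], states, start, trans, emit))
</code>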
  
===== Week 10 =====