lecture_notes

  * {{:cse6328-f12-w8.pdf|Week 8}}: Automatic Speech Recognition (ASR) (I): ASR introduction; ASR as an example of pattern classification; Acoustic modeling: parameter tying (decision tree based state tying); (Weekly Reading: {{:htkbook31_part1.pdf|W8}})
  
  * {{:cse6328_f12_w9.pdf|Week 9}}: Automatic Speech Recognition (ASR) (II): Language Modelling (LM); N-gram models: smoothing, learning, perplexity, class-based.
  
  * {{:cse6328_f12_w10.pdf|Week 10}}: Automatic Speech Recognition (ASR) (III): Search - why search; Search space in n-gram LM; Viterbi decoding in a large HMM; beam search; tree-based lexicon; dynamic decoding; static decoding; weighted finite state transducer (WFST) (Additional slides for WFST)
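The Week 9 topics (n-gram smoothing and perplexity) can be illustrated with a minimal bigram model. This is a sketch only: the toy corpus, the add-one (Laplace) smoothing choice, and all counts below are illustrative assumptions, not course materials.

```python
# Minimal sketch: bigram language model with add-one (Laplace) smoothing
# and sentence perplexity. The toy corpus is an illustrative assumption.
from collections import Counter
import math

corpus = ["the cat sat", "the dog sat", "the cat ran"]
sentences = [["<s>"] + s.split() + ["</s>"] for s in corpus]

unigrams = Counter(w for s in sentences for w in s)
bigrams = Counter(p for s in sentences for p in zip(s, s[1:]))
V = len(unigrams)  # vocabulary size, including <s> and </s>

def p_bigram(w_prev, w):
    # Add-one smoothing: (c(w_prev, w) + 1) / (c(w_prev) + V)
    return (bigrams[(w_prev, w)] + 1) / (unigrams[w_prev] + V)

def perplexity(sentence):
    # Perplexity = exp(-1/N * sum of log bigram probabilities)
    words = ["<s>"] + sentence.split() + ["</s>"]
    log_p = sum(math.log(p_bigram(a, b)) for a, b in zip(words, words[1:]))
    return math.exp(-log_p / (len(words) - 1))
```

A sentence resembling the training data ("the cat sat") gets lower perplexity than a scrambled one ("sat the cat"), which is the sense in which perplexity measures model fit.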
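The Week 10 topic of Viterbi decoding in an HMM can likewise be sketched in a few lines. The two-state model and its probabilities below are made-up toy values, not the large ASR decoding networks discussed in lecture; the point is only the dynamic-programming recursion and backtrace.

```python
# Minimal sketch: Viterbi decoding of the most likely state path in a
# tiny discrete HMM. All model parameters are illustrative toy values.
import math

states = ["S1", "S2"]
start = {"S1": 0.6, "S2": 0.4}
trans = {"S1": {"S1": 0.7, "S2": 0.3}, "S2": {"S1": 0.4, "S2": 0.6}}
emit = {"S1": {"a": 0.5, "b": 0.5}, "S2": {"a": 0.1, "b": 0.9}}

def viterbi(obs):
    # delta[s]: best log-probability of any partial path ending in state s.
    delta = {s: math.log(start[s] * emit[s][obs[0]]) for s in states}
    back = []  # back[t][s]: best predecessor of s at time t
    for o in obs[1:]:
        prev = delta
        delta, ptr = {}, {}
        for s in states:
            best = max(states, key=lambda r: prev[r] + math.log(trans[r][s]))
            delta[s] = prev[best] + math.log(trans[best][s] * emit[s][o])
            ptr[s] = best
        back.append(ptr)
    # Trace back from the best final state to recover the full path.
    last = max(states, key=lambda s: delta[s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

Real ASR decoders add the refinements listed in the notes (beam pruning, tree-structured lexica, WFST composition) on top of this same recursion to keep the search tractable.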
lecture_notes.txt · Last modified: 2012/11/19 16:00 by hj