====== Course Outline ======

===== Week 1 =====

**Introduction**: application background; the big picture; speech sounds; spoken language (reading assignment)
  
===== Week 2 =====

**Math Background**: probabilities; Bayes' theorem; statistics; estimation; regression; hypothesis testing; entropy; mutual information; decision trees; optimization theory and convex optimization
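
As a small taste of these tools, here is a minimal sketch with made-up numbers (a 1% prior and an imperfect test, not figures from the course) that applies Bayes' theorem and then computes the entropy of the resulting posterior:

<code python>
# Bayes' theorem with illustrative (made-up) numbers:
# P(disease) = 0.01, P(+|disease) = 0.95, P(+|healthy) = 0.05.
import math

prior = 0.01          # P(disease)
like_pos = 0.95       # P(+ | disease)
like_neg = 0.05       # P(+ | healthy), i.e. false-positive rate

# Total probability of a positive test (law of total probability).
evidence = like_pos * prior + like_neg * (1 - prior)

# Posterior P(disease | +) = P(+|disease) P(disease) / P(+).
posterior = like_pos * prior / evidence
print(f"P(disease | +) = {posterior:.3f}")   # ~0.161

# Entropy (in bits) of the two-outcome posterior distribution.
p = [posterior, 1 - posterior]
entropy = -sum(x * math.log2(x) for x in p)
print(f"H = {entropy:.3f} bits")
</code>

The posterior is far below the test's 95% sensitivity because the prior is small, which is exactly the kind of effect Bayes' theorem quantifies.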
  
===== Week 3 =====

**Pattern Classification**: pattern classification & pattern verification; Bayesian decision theory

===== Week 4 =====

**Generative Models**: model estimation: maximum likelihood, Bayesian learning, the EM algorithm; multivariate Gaussian, Gaussian mixture model, multinomial, Markov chain model, etc.

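To make the EM algorithm concrete, here is a minimal sketch fitting a two-component one-dimensional Gaussian mixture to synthetic data (all numbers are illustrative, not course material):

<code python>
# Minimal EM for a two-component 1-D Gaussian mixture (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data from two Gaussians (the "true" model is unknown to EM).
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])

# Initial guesses for weights, means, variances.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

def gauss(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    r = w * gauss(x[:, None], mu, var)        # shape (N, 2)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibility-weighted data.
    n = r.sum(axis=0)
    w = n / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n

print("weights:", w.round(2), "means:", mu.round(2), "vars:", var.round(2))
</code>

Each iteration is guaranteed not to decrease the data likelihood, which is the key property of EM.
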
===== Week 5 =====

**Discriminative Models**: linear discriminant functions; support vector machines (SVM); large margin classifiers; sparse kernel machines; neural networks

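For a concrete example of a linear discriminant, here is a short sketch that fits a linear SVM to made-up 2-D data; it assumes scikit-learn is installed and is not tied to any course assignment:

<code python>
# Linear discriminant / SVM sketch (assumes scikit-learn is installed).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Two linearly separable (made-up) 2-D classes.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear", C=1.0)   # large-margin linear classifier
clf.fit(X, y)

# The learned discriminant function is f(x) = w.x + b.
print("w =", clf.coef_[0], "b =", clf.intercept_[0])
print("accuracy on training data:", clf.score(X, y))
</code>
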
===== Week 6 =====

**Hidden Markov Model (HMM)**: HMM vs. Markov chains; HMM concepts; three algorithms: forward-backward, Viterbi decoding, and Baum-Welch learning.

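Here is a toy illustration of Viterbi decoding on a small made-up discrete HMM (two hidden states, three output symbols); log probabilities are used to avoid numerical underflow:

<code python>
# Viterbi decoding for a small discrete HMM (illustrative toy model).
import numpy as np

states = ["Rain", "Sun"]                      # hidden states (made up)
pi = np.array([0.6, 0.4])                     # initial probabilities
A = np.array([[0.7, 0.3],                     # transition matrix
              [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5],                # emission matrix
              [0.6, 0.3, 0.1]])
obs = [0, 1, 2]                               # observed symbol indices

# delta[t, i]: best log-prob of any path ending in state i at time t.
T, N = len(obs), len(states)
delta = np.zeros((T, N))
psi = np.zeros((T, N), dtype=int)             # backpointers
delta[0] = np.log(pi) + np.log(B[:, obs[0]])
for t in range(1, T):
    scores = delta[t - 1][:, None] + np.log(A)    # (from, to)
    psi[t] = scores.argmax(axis=0)
    delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])

# Backtrace the best state sequence.
path = [int(delta[-1].argmax())]
for t in range(T - 1, 0, -1):
    path.append(int(psi[t][path[-1]]))
path.reverse()
print([states[i] for i in path])
</code>
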
===== Week 7 =====

Midterm presentation

===== Week 8 =====

**Automatic Speech Recognition (ASR) (I)**: Acoustic and Language Modeling: HMM for ASR; ASR as an example of pattern classification; acoustic modeling: HMM learning (ML, MAP, DT); parameter tying (decision-tree-based state tying); n-gram models: smoothing, learning, perplexity, class-based models.

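To make smoothing and perplexity concrete, here is a sketch of a bigram model with add-one (Laplace) smoothing evaluated on a tiny made-up corpus:

<code python>
# Bigram language model with add-one (Laplace) smoothing and perplexity
# on a tiny made-up corpus.
import math
from collections import Counter

train = "the cat sat on the mat . the dog sat on the log .".split()
test = "the cat sat on the log .".split()

vocab = set(train)
V = len(vocab)
unigrams = Counter(train)
bigrams = Counter(zip(train, train[1:]))

def p(w2, w1):
    # Add-one smoothed conditional probability P(w2 | w1).
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + V)

# Perplexity = exp(-average log-likelihood per predicted word).
log_prob = sum(math.log(p(w2, w1)) for w1, w2 in zip(test, test[1:]))
ppl = math.exp(-log_prob / (len(test) - 1))
print(f"perplexity = {ppl:.2f}")
</code>

Lower perplexity means the model finds the test text less surprising; add-one smoothing keeps unseen bigrams from receiving zero probability.
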
===== Week 9 =====

**Automatic Speech Recognition (ASR) (II)**: search: why search is needed; Viterbi decoding in a large HMM; beam search; tree-based lexicons; dynamic decoding; static decoding; weighted finite-state transducers (WFST)

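Here is a sketch of beam search on a toy HMM: unlike exact Viterbi decoding, only the few best partial hypotheses survive each time step (the beam width and all probabilities are made up):

<code python>
# Beam search sketch: keep only the top-k partial hypotheses at each step,
# instead of all of them as exact Viterbi would (toy probabilities).
import math
import numpy as np

A = np.array([[0.7, 0.3], [0.4, 0.6]])            # transitions
B = np.array([[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]])  # emissions
pi = np.array([0.6, 0.4])
obs = [0, 1, 2, 1]
beam_width = 2

# Each hypothesis: (log-prob, state path so far).
beam = [(math.log(pi[s]) + math.log(B[s, obs[0]]), [s]) for s in range(2)]
for o in obs[1:]:
    expanded = [
        (lp + math.log(A[path[-1], s]) + math.log(B[s, o]), path + [s])
        for lp, path in beam
        for s in range(2)
    ]
    # Prune: keep only the beam_width best hypotheses.
    beam = sorted(expanded, key=lambda h: h[0], reverse=True)[:beam_width]

best_lp, best_path = beam[0]
print(best_path, best_lp)
</code>

With a beam width as large as the full state space this reduces to exhaustive search; shrinking the beam trades optimality for speed, which is what makes large-vocabulary decoding tractable.
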
===== Week 10 =====

**Spoken Language Processing (I)**: (1) text categorization: classifying text documents (call/email routing, topic detection, etc.); vector-based approaches, the Naïve Bayes classifier, Bayesian networks, etc. (2) HMM applications: statistical part-of-speech (POS) tagging; language understanding with hidden concept models.

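Here is a minimal Naïve Bayes text-categorization sketch in the call/email-routing spirit, on made-up training examples with add-one smoothing:

<code python>
# Naive Bayes text categorization sketch (made-up routing examples).
import math
from collections import Counter, defaultdict

train = [
    ("billing", "my invoice is wrong"),
    ("billing", "question about my bill payment"),
    ("support", "my phone will not start"),
    ("support", "the app crashes on start"),
]

# Count words per class and class frequencies.
word_counts = defaultdict(Counter)
class_counts = Counter()
vocab = set()
for label, text in train:
    words = text.split()
    word_counts[label].update(words)
    class_counts[label] += 1
    vocab.update(words)

def classify(text):
    scores = {}
    for label in class_counts:
        # log P(class) + sum of log P(word | class), add-one smoothed.
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / len(train))
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("problem with my bill"))      # expected: billing
</code>
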
===== Week 11 =====

**Spoken Language Processing (II)**: statistical machine translation: IBM's models for machine translation (lexicon model, alignment model, language model); the training process; generation & search

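As a sketch of how the lexicon model can be trained, here is the EM loop of IBM Model 1 on a tiny made-up parallel corpus (the alignment and language models are omitted):

<code python>
# IBM Model 1 lexical-translation EM sketch on a toy parallel corpus
# (made-up sentence pairs; no alignment or language model here).
from collections import defaultdict

corpus = [
    ("das haus".split(), "the house".split()),
    ("das buch".split(), "the book".split()),
    ("ein buch".split(), "a book".split()),
]

f_vocab = {f for fs, _ in corpus for f in fs}
e_vocab = {e for _, es in corpus for e in es}

# Uniform initialization of t(e | f).
t = {(e, f): 1.0 / len(e_vocab) for f in f_vocab for e in e_vocab}

for _ in range(10):
    count = defaultdict(float)   # expected counts c(e, f)
    total = defaultdict(float)   # expected counts c(f)
    # E-step: collect expected alignment counts.
    for fs, es in corpus:
        for e in es:
            norm = sum(t[(e, f)] for f in fs)
            for f in fs:
                frac = t[(e, f)] / norm
                count[(e, f)] += frac
                total[f] += frac
    # M-step: re-normalize the translation probabilities.
    for (e, f) in t:
        if total[f] > 0:
            t[(e, f)] = count[(e, f)] / total[f]

print(round(t[("house", "haus")], 2))   # grows large: haus pairs with house
</code>

Even from co-occurrence statistics alone, EM pushes probability mass toward consistent word pairs such as haus/house.
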
===== Week 12 =====

Student presentations
  