course_outline [2016/09/12 21:11] (current) hj
  
===== Weeks 1 and 2 =====

**Machine Learning** (basic concepts)

**Math Background**: probabilities; Bayes theorem; statistics; estimation; regression; hypothesis testing; entropy; mutual information; decision tree; optimization theory and convex optimization; matrix decomposition
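Bayes theorem, listed above, can be illustrated with a short numeric sketch; the prevalence and test-accuracy numbers below are invented for illustration:

```python
# Bayes theorem: P(A|B) = P(B|A) * P(A) / P(B)
# Hypothetical example: a test for a condition with 1% prevalence,
# 95% sensitivity, and a 5% false-positive rate.
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

p = posterior(prior=0.01, sensitivity=0.95, false_positive_rate=0.05)
print(round(p, 3))  # about 0.161
```

Despite the accurate test, the posterior is only about 16% because the condition is rare, which is the usual base-rate point this example is used to make.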
===== Week 3 =====

**Data, Feature and Model**: Feature Engineering, Feature Extraction (PCA, LDA, etc.), Data Visualization
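As a minimal sketch of one of the feature-extraction methods above, PCA in two dimensions reduces to an eigendecomposition of the 2x2 sample covariance matrix, which has a closed form; the toy data points are invented, and a real pipeline would use a linear-algebra library:

```python
import math

def pca_2d(points):
    """Return the first principal component (unit vector) of 2-D points."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # Sample covariance matrix entries
    sxx = sum((x - mx) ** 2 for x, _ in points) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in points) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in points) / (n - 1)
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(tr * tr / 4 - det)
    # Corresponding eigenvector (axis-aligned case handled separately)
    if sxy:
        vx, vy = lam - syy, sxy
    else:
        vx, vy = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm

# Roughly diagonal toy data, so the first component points up-and-right
pc1 = pca_2d([(0, 0), (1, 1.1), (2, 1.9), (3, 3.2)])
```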
  
===== Week 4 =====

**Machine Learning (ML) Basics**: Learnability; some basic ML concepts; Bayesian decision theory
  
===== Week 5 =====

**Generative Models (I)**: model estimation: maximum likelihood, Bayesian learning, multivariate Gaussian, Multinomial, Markov chain model, etc.
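For the maximum-likelihood topic above, the univariate Gaussian is the standard first example because the ML estimates have a closed form: the sample mean and the biased sample variance. A sketch with a made-up sample:

```python
# Maximum-likelihood estimation for a univariate Gaussian N(mu, var).
def gaussian_mle(xs):
    n = len(xs)
    mu = sum(xs) / n
    # ML estimate divides by n (not n - 1), so it is biased for small n
    var = sum((x - mu) ** 2 for x in xs) / n
    return mu, var

# Hypothetical sample
mu, var = gaussian_mle([2.1, 1.9, 2.4, 1.6, 2.0])
```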
  
===== Week 6 =====

===== Week 7 =====
  
**Discriminative Models (II)**: Neural Networks and Deep Learning
  
===== Week 8 =====
  
**Generative Models (II)**: EM algorithm; finite mixture models, e.g. Gaussian mixture model
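The EM-plus-Gaussian-mixture pairing above can be sketched for a two-component 1-D mixture; the data and initial parameters are invented, and a real implementation would add convergence checks and log-space arithmetic:

```python
import math

def em_step(xs, weights, means, variances):
    """One E-step + M-step for a two-component 1-D Gaussian mixture."""
    def gauss(x, m, v):
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    # E-step: responsibilities r[i][k] = P(component k | x_i)
    resp = []
    for x in xs:
        p = [w * gauss(x, m, v) for w, m, v in zip(weights, means, variances)]
        s = sum(p)
        resp.append([pk / s for pk in p])

    # M-step: re-estimate parameters from the responsibilities
    n = len(xs)
    nk = [sum(r[k] for r in resp) for k in range(2)]
    weights = [nk[k] / n for k in range(2)]
    means = [sum(r[k] * x for r, x in zip(resp, xs)) / nk[k] for k in range(2)]
    variances = [sum(r[k] * (x - means[k]) ** 2 for r, x in zip(resp, xs)) / nk[k]
                 for k in range(2)]
    return weights, means, variances

# Hypothetical data: two clusters near -1 and +1
xs = [-1.2, -0.9, -1.1, 0.9, 1.0, 1.3]
w, m, v = [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0]
for _ in range(20):
    w, m, v = em_step(xs, w, m, v)
```

After a few iterations the component means settle near the two cluster centers, which is the behavior the EM topic in this week is meant to explain.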
  
===== Week 9 =====
  
**Generative Models (III)**: graphical models; directed vs. undirected graphical models; exact inference; approximate inference (loopy belief propagation, variational inference, Monte Carlo methods)
  
===== Week 10 =====
  
**Selected Advanced Topics**: Hidden Markov Model (HMM): HMM concepts; three algorithms: forward-backward, Viterbi decoding, Baum-Welch learning
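Viterbi decoding, one of the three HMM algorithms listed above, can be sketched on a small discrete-output HMM; the weather/activity probabilities are the usual textbook-style toy numbers, invented for illustration:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence."""
    # best[t][s] = (probability, previous state) of the best path ending in s
    best = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            prob, prev = max(
                (best[-1][p][0] * trans_p[p][s] * emit_p[s][o], p)
                for p in states
            )
            row[s] = (prob, prev)
        best.append(row)
    # Backtrack from the best final state
    path = [max(states, key=lambda s: best[-1][s][0])]
    for row in reversed(best[1:]):
        path.append(row[path[-1]][1])
    return path[::-1]

states = ("Rainy", "Sunny")
path = viterbi(
    obs=("walk", "shop", "clean"),
    states=states,
    start_p={"Rainy": 0.6, "Sunny": 0.4},
    trans_p={"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
             "Sunny": {"Rainy": 0.4, "Sunny": 0.6}},
    emit_p={"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
            "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}},
)
# path decodes to ["Sunny", "Rainy", "Rainy"]
```

The forward-backward and Baum-Welch algorithms share this trellis structure, replacing the max with a sum (forward-backward) and wrapping it in an EM loop (Baum-Welch).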
===== Weeks 11-12 =====
  
student presentation
course_outline.1472497941.txt.gz · Last modified: 2016/08/29 19:12 by hj