Lecture Notes

  * Weeks 1-2: A. {{:ml1-intro.pdf|Machine Learning (basic concepts)}}; B. {{:ml2-math.pdf|Math foundation}}: probabilities and statistics, Bayes' theorem, entropy, mutual information, decision trees, optimization, matrix factorization (Weekly Reading: [[http://www.cse.yorku.ca/course_archive/2011-12/W/6328/Duda_AppMath.pdf|W2]]) (A useful online manual on [[http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/calculus.html|Matrix Calculus]])
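
A quick illustration of the probability tools above (a minimal sketch with made-up numbers, not taken from the slides): Bayes' theorem, entropy, and mutual information for a small discrete joint distribution.

<code python>
import numpy as np

# Hypothetical joint distribution P(X, Y) over two binary variables.
p_xy = np.array([[0.30, 0.10],
                 [0.20, 0.40]])

p_x = p_xy.sum(axis=1)  # marginal P(X)
p_y = p_xy.sum(axis=0)  # marginal P(Y)

# Bayes' theorem: P(X=0 | Y=1) = P(Y=1 | X=0) P(X=0) / P(Y=1)
p_y1_given_x0 = p_xy[0, 1] / p_x[0]
print("P(X=0 | Y=1) =", p_y1_given_x0 * p_x[0] / p_y[1])

def entropy(p):
    """Shannon entropy in bits: H(p) = -sum_i p_i log2 p_i."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Mutual information via I(X;Y) = H(X) + H(Y) - H(X,Y)
print("I(X;Y) =", entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel()), "bits")
</code>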
  
  * Week 3: {{:ml3-feature.pdf|Feature Extraction in Machine Learning}}: PCA, LDA, manifold learning (MDS, SNE, LLE, Isomap), data visualization. (Weekly Reading: {{:tutorial-pca.pdf|a tutorial paper on PCA}}, {{:ml-blackart.pdf|a paper on ML}})
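
As a sketch of the Week 3 material, PCA can be implemented directly from the eigendecomposition of the sample covariance matrix (the synthetic data below is arbitrary, purely for illustration):

<code python>
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))             # 200 samples, 5 features (synthetic)
X[:, 1] = 2.0 * X[:, 0] + 0.1 * X[:, 1]   # add correlation so PCA finds structure

Xc = X - X.mean(axis=0)                   # 1. center the data
cov = Xc.T @ Xc / (len(Xc) - 1)           # 2. sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)    # 3. eigendecomposition (symmetric matrix)

order = np.argsort(eigvals)[::-1]         # 4. sort components by variance
W = eigvecs[:, order[:2]]                 # top-2 principal directions
Z = Xc @ W                                # 5. project to 2-D, e.g. for visualization
print("explained variance ratio:", eigvals[order[:2]] / eigvals.sum())
</code>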
  
  * Weeks 4-5: {{:ml5-discriminative.pdf|Discriminative Models}}: statistical learning theory; perceptron; linear regression; minimum classification error; support vector machines (SVM); ridge regression; LASSO; compressed sensing
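
For instance, the perceptron covered in these weeks fits in a few lines; this is a minimal sketch on a made-up linearly separable toy set:

<code python>
import numpy as np

def perceptron(X, y, epochs=100):
    """On each mistake, update w += y_i x_i and b += y_i (labels in {-1, +1})."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:    # misclassified (or on the boundary)
                w, b = w + yi * xi, b + yi
                mistakes += 1
        if mistakes == 0:                 # converged: training set separated
            break
    return w, b

X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
print(perceptron(X, y))
</code>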
  
  * Week 6: {{:ml6-ann.pdf|Artificial Neural Networks and Deep Learning}}: artificial neural networks (model structure and learning criteria); error back-propagation algorithm; fine-tuning tricks; convolutional neural networks; recurrent neural networks (Extra reading assignment: [[http://sebastianruder.com/optimizing-gradient-descent/|other gradient descent algorithms for deep learning]]).
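
A minimal sketch of error back-propagation, for a tiny two-layer sigmoid network trained on XOR (architecture, loss, and hyperparameters are arbitrary illustrative choices, not the course's):

<code python>
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)                # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)                # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: squared-error gradients, propagated layer by layer
    dz2 = (p - y) * p * (1 - p)
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * h * (1 - h)
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0)
    # Gradient-descent update
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(np.round(p.ravel(), 2))   # should approach [0, 1, 1, 0]
</code>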
  
  * Week 7: {{:ml4-decision-rule.pdf|Bayesian Decision Rule}}: MAP decision rule; the Bayes error; statistical data modelling; generative vs. discriminative models
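
To make the MAP rule concrete: given priors and class-conditional densities, classify x to the class with the largest posterior. A toy univariate-Gaussian example with made-up parameters:

<code python>
import numpy as np
from scipy.stats import norm

priors = np.array([0.6, 0.4])                            # hypothetical P(C_0), P(C_1)
means, stds = np.array([0.0, 2.0]), np.array([1.0, 1.0])

def map_decide(x):
    """MAP decision: argmax_k P(C_k) p(x | C_k); the evidence p(x) cancels."""
    return int(np.argmax(priors * norm.pdf(x, means, stds)))

for x in [-1.0, 0.5, 1.5, 3.0]:
    print(f"x = {x:4.1f} -> class {map_decide(x)}")
</code>

The decision boundary sits where the two weighted densities cross; the probability mass each class contributes beyond that boundary is the Bayes error.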
  
  * Week 8: {{:ml7-generative.pdf|Generative Models and Parameter Estimation}}: generative models in general; maximum likelihood estimation; Bayesian learning; Gaussian, logistic regression, multinomial, Markov chains, GMM.
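
As a simple instance of maximum likelihood estimation: for i.i.d. Gaussian data, the ML estimates are the sample mean and the biased sample variance (dividing by N, not N-1). A quick check on synthetic data:

<code python>
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=10_000)   # synthetic: true mu=3, sigma=2

mu_ml = x.mean()                     # MLE of the mean
var_ml = ((x - mu_ml) ** 2).mean()   # MLE of the variance (divides by N)
print(mu_ml, np.sqrt(var_ml))        # should be close to 3.0 and 2.0
</code>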
  
  * Week 9: Graphical models: Bayesian networks vs. Markov random fields; conditional independence; inference in graphical models; belief propagation; variational inference
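
For chain-structured graphical models, exact inference reduces to passing sum-product messages along the chain. A toy sketch for a three-node binary Markov chain with made-up parameters:

<code python>
import numpy as np

p_x1 = np.array([0.7, 0.3])     # P(X1)
T = np.array([[0.9, 0.1],       # T[i, j] = P(X_{t+1} = j | X_t = i)
              [0.2, 0.8]])

# Forward message passing: m_{t+1}(j) = sum_i m_t(i) T[i, j]
m = p_x1
for _ in range(2):              # two edges: X1 -> X2 -> X3
    m = m @ T                   # marginalize out the previous variable
print("P(X3) via messages:", m)

# Sanity check: brute-force enumeration of the full joint P(X1, X2, X3)
joint = np.einsum('i,ij,jk->ijk', p_x1, T, T)
print("P(X3) by enumeration:", joint.sum(axis=(0, 1)))
</code>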