  * Weeks 1-2: A. {{:ml1-intro.pdf|Machine Learning (basic concepts)}}; B. {{:ml2-math.pdf|Math foundation}}: probabilities and statistics, Bayes theorem, entropy, mutual information, decision tree, optimization, matrix factorization (Weekly Reading: [[http://www.cse.yorku.ca/course_archive/2011-12/W/6328/Duda_AppMath.pdf|W2]]) (A useful online manual on [[http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/calculus.html|Matrix Calculus]])

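To make the Bayes theorem and information-theory topics above concrete, here is a minimal NumPy sketch (all numbers are made-up toy values, not from the slides or the readings) that applies Bayes' rule to a toy diagnostic test and computes entropy and mutual information for a small discrete joint distribution:

<code python>
import numpy as np

# Bayes theorem on a toy diagnostic test (all numbers hypothetical):
# P(D) = 0.01, P(+|D) = 0.95, P(+|not D) = 0.05.
prior, tpr, fpr = 0.01, 0.95, 0.05
evidence = tpr * prior + fpr * (1 - prior)        # P(+)
posterior = tpr * prior / evidence                # P(D|+) by Bayes' rule
print(f"P(D | +) = {posterior:.3f}")              # about 0.161

# Entropy and mutual information of a discrete joint distribution p(x, y).
p_xy = np.array([[0.25, 0.25],
                 [0.40, 0.10]])                   # rows: x, cols: y

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))                # in bits

p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)
mi = entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())   # I(X;Y)
print(f"H(X) = {entropy(p_x):.3f} bits, I(X;Y) = {mi:.3f} bits")
</code>
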
  * Week 3: {{:ml3-feature.pdf|Feature Extraction in Machine Learning}}: PCA, LDA, manifold learning (MDS, SNE, LLE, Isomap), data visualization (Weekly Reading: {{:tutorial-pca.pdf|a tutorial paper on PCA}}, {{:ml-blackart.pdf|a paper on ML}})

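As a companion to the PCA topic, a minimal sketch of PCA as an eigendecomposition of the sample covariance matrix, assuming NumPy and synthetic data:

<code python>
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 3-D data with most variance along the first axis.
X = rng.normal(size=(200, 3)) @ np.diag([3.0, 1.0, 0.3])

Xc = X - X.mean(axis=0)                   # center the data
C = Xc.T @ Xc / (len(Xc) - 1)             # sample covariance (3x3)
eigvals, eigvecs = np.linalg.eigh(C)      # eigh since C is symmetric
order = np.argsort(eigvals)[::-1]         # sort by decreasing variance
W = eigvecs[:, order[:2]]                 # top-2 principal directions
Z = Xc @ W                                # 2-D scores (the new features)
print("explained variance ratio:", (eigvals[order[:2]] / eigvals.sum()).round(3))
</code>

The same directions can also be read off the right singular vectors of the centered data matrix, which is numerically preferable for wide or ill-conditioned data.
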
  * Weeks 4-5: {{:ml5-discriminative.pdf|Discriminative Models}}: Statistical Learning Theory; Perceptron; Linear Regression; Minimum Classification Error; Support Vector Machines (SVM); Ridge Regression; LASSO; Compressed Sensing

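Two of the discriminative methods above admit very short implementations. This sketch (synthetic blob data, hypothetical hyperparameters) runs the perceptron learning rule and the closed-form ridge regression solution:

<code python>
import numpy as np

rng = np.random.default_rng(1)
# Two separable Gaussian blobs, labels in {-1, +1}.
y = rng.choice([-1, 1], size=100)
X = rng.normal(size=(100, 2)) + 2.5 * y[:, None]

# Perceptron learning rule: update on each misclassified point.
w, b = np.zeros(2), 0.0
for _ in range(100):                       # at most 100 epochs
    mistakes = 0
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:         # wrong side of the hyperplane
            w, b = w + yi * xi, b + yi
            mistakes += 1
    if mistakes == 0:                      # converged (data are separable)
        break
print("perceptron training accuracy:", np.mean(np.sign(X @ w + b) == y))

# Ridge regression in closed form: w = (X'X + lambda*I)^{-1} X'y.
lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
print("ridge weights:", w_ridge.round(3))
</code>
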
  * Week 6: {{:ml6-ann.pdf|Artificial Neural Networks and Deep Learning}}: artificial neural networks (model structure and learning criteria); error back-propagation algorithm; fine-tuning tricks; Convolutional neural networks; Recurrent neural networks (Extra reading assignment: [[http://sebastianruder.com/optimizing-gradient-descent/|other gradient descent algorithms for deep learning]]).

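To illustrate the error back-propagation algorithm, a self-contained sketch that trains a one-hidden-layer network on XOR with manually derived gradients; the hidden width, learning rate, and step count are arbitrary choices, not values from the lecture:

<code python>
import numpy as np

rng = np.random.default_rng(2)
# XOR is not linearly separable, so a hidden layer is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)                  # hidden layer, shape (4, 8)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))      # sigmoid output, shape (4, 1)
    # Backward pass: cross-entropy + sigmoid gives delta = p - y.
    d_out = (p - y) / len(X)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)       # tanh'(a) = 1 - tanh(a)^2
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)
    # Gradient-descent parameter update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p.ravel(), 2))                 # should approach [0, 1, 1, 0]
</code>
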
  * Week 7: {{:ml4-decision-rule.pdf|Bayesian Decision Rule}}: MAP decision rule; The Bayes Error; Statistical Data Modelling; Generative vs. Discriminative models

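A small simulation of the MAP decision rule for two 1-D Gaussian class conditionals (priors and means are made up); since the MAP rule minimizes the probability of error, its empirical error here estimates the Bayes error of the problem:

<code python>
import numpy as np

# Two classes with known 1-D Gaussian class conditionals (made-up numbers):
# p(x|w0) = N(0, 1) with prior 0.6;  p(x|w1) = N(2, 1) with prior 0.4.
mu = np.array([0.0, 2.0])
prior = np.array([0.6, 0.4])

rng = np.random.default_rng(3)
labels = (rng.random(100_000) < prior[1]).astype(int)   # sample a class
x = rng.normal(mu[labels], 1.0)                         # then a feature

# MAP rule: pick the class maximizing log prior + log likelihood
# (the shared Gaussian normalizing constant cancels).
log_post = np.log(prior) - 0.5 * (x[:, None] - mu) ** 2
decision = log_post.argmax(axis=1)
print("empirical error of the MAP rule:", np.mean(decision != labels))
</code>
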
  * Week 8: {{:ml7-generative.pdf|Generative Models and Parameter Estimation}}: generative models in general; maximum likelihood estimation; Bayesian Learning; Gaussian, logistic regression, Multinomial, Markov chains, GMM

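To contrast closed-form maximum likelihood estimation with iterative estimation, a sketch on synthetic 1-D data (all parameters made up): the single-Gaussian MLE, then EM updates for a two-component GMM:

<code python>
import numpy as np

rng = np.random.default_rng(4)
# Data drawn from a true two-component 1-D mixture (parameters hypothetical).
x = np.concatenate([rng.normal(-2, 0.8, 300), rng.normal(3, 1.2, 700)])

# MLE for a single Gaussian has a closed form: sample mean and variance.
print("single-Gaussian MLE:", x.mean().round(3), x.var().round(3))

# EM for a 2-component GMM: no closed form, so iterate E and M steps.
pi = np.array([0.5, 0.5]); mu = np.array([-1.0, 1.0]); var = np.array([1.0, 1.0])
for _ in range(100):
    # E-step: responsibilities r[n, k] = posterior of component k for x[n].
    log_r = (np.log(pi) - 0.5 * np.log(2 * np.pi * var)
             - 0.5 * (x[:, None] - mu) ** 2 / var)
    r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted MLE updates.
    Nk = r.sum(axis=0)
    pi = Nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
print("mixture weights:", pi.round(2), "means:", mu.round(2))
</code>
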
  * Week 9: {{:ml8-graphicalmodel.pdf|Graphical models}}: Bayesian Networks vs. Markov random fields; Conditional independence; Inference in graphical models; belief propagation; variational inference

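A minimal sum-product belief propagation example on a three-variable binary chain MRF (node and edge potentials are made-up numbers), with a brute-force enumeration check of the marginals:

<code python>
import numpy as np

# Chain MRF x1 - x2 - x3 over binary variables: node potentials phi and
# a shared pairwise potential psi (all numbers hypothetical).
phi = np.array([[0.7, 0.3],
                [0.5, 0.5],
                [0.2, 0.8]])
psi = np.array([[2.0, 1.0],
                [1.0, 2.0]])          # favours neighbouring agreement

# Forward and backward sum-product messages along the chain.
n = len(phi)
fwd = [np.ones(2) for _ in range(n)]
bwd = [np.ones(2) for _ in range(n)]
for i in range(1, n):
    fwd[i] = psi.T @ (phi[i - 1] * fwd[i - 1])      # message i-1 -> i
for i in range(n - 2, -1, -1):
    bwd[i] = psi @ (phi[i + 1] * bwd[i + 1])        # message i+1 -> i

marginals = phi * np.array(fwd) * np.array(bwd)
marginals /= marginals.sum(axis=1, keepdims=True)
print("BP marginals:\n", marginals.round(3))

# Brute-force check: enumerate all 2^3 joint states.
brute = np.zeros((n, 2))
for s in np.ndindex(2, 2, 2):
    p = (phi[0, s[0]] * phi[1, s[1]] * phi[2, s[2]]
         * psi[s[0], s[1]] * psi[s[1], s[2]])
    for i in range(n):
        brute[i, s[i]] += p
print("exact marginals:\n", (brute / brute.sum(axis=1, keepdims=True)).round(3))
</code>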