[PDF]&[PPT] MACHINE LEARNING NOTES | ML STUDENT NOTES | JNTU

 JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY (JNTU)

(ML) MACHINE LEARNING - STUDENT NOTES

PDF/DOC FREE DOWNLOAD

  

Unit I: What is Machine Learning?:
Download: Google Drive Link (Written Notes): Download
ppt Part1.1 - Download
ppt Part1.1.1 - Download
ppt Part1.2 - Download
ppt Part1.3 - Download
ppt Part1.4 - Download
Unit 2: Evaluating Hypotheses:
Download: Google Drive Link (Written Notes): Download
ppt Part2.1 - Download
ppt Part2.2 - Download
ppt Part2.3 - Download

Unit 3: Dimensionality Reduction:
Download: Google Drive Link (Written Notes): Download
ppt Part3.1 - Download
ppt Part3.2 - Download
ppt Part3.3 - Download


Unit 4: Linear Discrimination:
Download: Google Drive Link (Written Notes): Download
ppt Part4.1 - Download
ppt Part4.2 - Download

Unit 5: Kernel Machines:
Download: Google Drive Link (Written Notes): Download
ppt Part5.1 - Download
ppt Part5.2 - Download

 
MACHINE LEARNING (Single PDF) All Topics:
Download: Google Drive Link: Download


MACHINE LEARNING SYLLABUS


Unit I: What is Machine Learning?, Examples of machine learning applications, Supervised Learning: learning a class from examples, Vapnik-Chervonenkis dimension, probably approximately correct learning, noise, learning multiple classes, regression, model selection and generalization, dimensions of a supervised machine learning algorithm. Decision Tree Learning: Introduction, Decision Tree representation, Appropriate problems for decision tree learning, the basic decision tree learning algorithm, Hypothesis space search in decision tree learning, Inductive bias in decision tree learning, issues in decision tree learning. Artificial Neural Networks: Introduction, Neural Network Representation, Problems, Perceptrons, Multilayer Networks and the Backpropagation Algorithm, Remarks on the Backpropagation Algorithm, An Illustrative Example: Face Recognition, Advanced Topics in Artificial Neural Networks.
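
As a quick illustration of the perceptron training rule listed in this unit, the sketch below fits the Boolean AND function from a toy truth table; the learning rate and epoch count are illustrative choices, not values taken from the notes.

import numpy as np

# toy truth table for the Boolean AND function (hypothetical example data)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(X.shape[1])  # weights
b = 0.0                   # bias
lr = 0.1                  # learning rate (assumed)

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if np.dot(w, xi) + b > 0 else 0
        # perceptron update: w <- w + lr * (t - o) * x
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

print("learned weights:", w, "bias:", b)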

Unit 2: Evaluating Hypotheses: Motivation, Estimating hypothesis accuracy, basics of sampling theory, a general approach for deriving confidence intervals, difference in error of two hypotheses, comparing learning algorithms. Bayesian Learning: Introduction, Bayes Theorem, Bayes Theorem and Concept Learning, Maximum Likelihood and least squared error hypothesis, Maximum Likelihood hypothesis for predicting probabilities, Minimum Description Length Principle, Bayes Optimal Classifier, Gibbs Algorithm, Naïve Bayes Classifier, Bayesian Belief Network, EM Algorithm.
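
For the confidence-interval material in this unit, here is a minimal worked sketch of the two-sided interval for sample error, errorS(h) +/- z_N * sqrt(errorS(h)(1 - errorS(h))/n); the observed error and sample size are hypothetical numbers chosen only for illustration.

import math

error_s = 0.30  # observed sample error of hypothesis h on the test set (assumed)
n = 40          # number of independent test examples (assumed)
z_95 = 1.96     # z value for an approximate two-sided 95% interval

margin = z_95 * math.sqrt(error_s * (1 - error_s) / n)
print(f"approximate 95% interval for true error: "
      f"{error_s - margin:.3f} to {error_s + margin:.3f}")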

Unit 3: Dimensionality Reduction: Introduction, Subset selection, principal component analysis, feature embedding, factor analysis, singular value decomposition and matrix factorization, multidimensional scaling, linear discriminant analysis, canonical correlation analysis, Isomap, Locally linear embedding, Laplacian eigenmaps. Clustering: Introduction, Mixture densities, K-Means clustering, Expectation-Maximization algorithm, Mixtures of latent variable models, supervised learning after clustering, spectral clustering, Hierarchical clustering, Choosing the number of clusters. Nonparametric Methods: Introduction, Nonparametric density estimation, generalization to multivariate data, nonparametric classification, condensed nearest neighbor, Distance-based classification, outlier detection, Nonparametric regression: smoothing models, how to choose the smoothing parameter.
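
As a sketch of principal component analysis from this unit: center the data, take the eigenvectors of the covariance matrix, and project onto the top-k directions. The random data and the choice k = 2 are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))           # hypothetical data: 100 samples, 5 features

X_centered = X - X.mean(axis=0)         # center each feature
cov = np.cov(X_centered, rowvar=False)  # 5 x 5 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenpairs in ascending order

k = 2
components = eigvecs[:, ::-1][:, :k]    # top-k principal directions
Z = X_centered @ components             # projected data, shape (100, 2)
print(Z.shape)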

Unit 4: Linear Discrimination: Introduction, Generalizing the linear model, geometry of the linear discriminant, pairwise separation, parametric discrimination revisited, gradient descent, logistic discrimination, discrimination by regression, learning to rank. Multilayer Perceptrons: Introduction, the perceptron, training a perceptron, learning Boolean functions, multilayer perceptrons, MLP as a universal approximator, Backpropagation algorithm, Training procedures, Tuning the network size, Bayesian view of learning, dimensionality reduction, learning time, deep learning.
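
A minimal sketch of logistic discrimination trained by gradient descent, as listed in this unit; the toy data, learning rate, and iteration count are assumed purely for illustration.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # hypothetical linearly separable labels

w = np.zeros(2)
b = 0.0
lr = 0.1  # learning rate (assumed)

for step in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid outputs
    grad_w = X.T @ (p - y) / len(y)         # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print("training accuracy:", np.mean((p > 0.5) == y))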

Unit 5: Kernel Machines: Introduction, Optimal separating hyperplane, the nonseparable case: Soft Margin Hyperplane, ν-SVM, kernel trick, Vectorial kernels, defining kernels, multiple kernel learning, multiclass kernel machines, kernel machines for regression, kernel machines for ranking, one-class kernel machines, large margin nearest neighbor classifier, kernel dimensionality reduction. Graphical Models: Introduction, Canonical cases for conditional independence, generative models, d-separation, belief propagation, Undirected Graphs: Markov Random Fields, Learning the structure of a graphical model, influence diagrams.
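
To illustrate the soft margin hyperplane and the kernel trick from this unit, the sketch below fits an RBF-kernel SVM with scikit-learn (an assumed dependency, not named in the notes); the toy data and the C and gamma settings are illustrative.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)  # labels that are not linearly separable

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # soft margin (C) plus the kernel trick
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
print("support vectors per class:", clf.n_support_)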

Text Books:
1) Machine Learning, Tom M. Mitchell, McGraw Hill Education, Indian Edition, 2016.
2) Introduction to Machine Learning, Ethem Alpaydin, PHI, 3rd Edition, 2014.

Reference Books:

1) Machine Learning: An Algorithmic Perspective, Stephen Marsland, Taylor & Francis, CRC Press Book.



@Credits: Syllabus Taken From JNTU