Trees, SVM and Unsupervised Learning (Coursera)


MOOC List is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

"Trees, SVM and Unsupervised Learning" is designed to provide working professionals with a solid foundation in support vector machines, neural networks, decision trees, and XG boost. Through in-depth instruction and practical hands-on experience, you will learn how to build powerful predictive models using these techniques and understand the advantages and disadvantages of each. The course will also cover how and when to apply them to different scenarios, including binary classification and K > 2 classes.


Additionally, you will gain hands-on experience in generating data representations through PCA and clustering. With a focus on practical, real-world applications, this course is a valuable asset for anyone looking to upskill or move into the field of data science.
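As a minimal illustration of that idea, the sketch below projects a dataset onto its top two principal components and then clusters the result. It is not course material: the use of Python with scikit-learn, the dataset, and the choices of two components and three clusters are all assumptions made for the example.

```python
# Illustrative sketch (assumed toolchain: scikit-learn): build a
# low-dimensional representation with PCA, then cluster it.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

X, _ = load_iris(return_X_y=True)

# Standardize so each feature contributes on the same scale, then
# project the 4-D measurements onto the top 2 principal components.
X_scaled = StandardScaler().fit_transform(X)
X_2d = PCA(n_components=2).fit_transform(X_scaled)

# Cluster the 2-D representation; n_clusters=3 is an assumption here.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_2d)
print(labels[:10])
```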

Course 3 of 3 in the Statistical Learning for Data Science Specialization.


What You Will Learn

- Describe the advantages and disadvantages of trees, and how and when to use them.

- Apply SVMs for binary classification or K > 2 classes (see the sketch after this list).

- Analyze the strengths and weaknesses of neural networks compared to other machine learning algorithms, such as SVMs.
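As a hedged aside on the K > 2 point (again assuming scikit-learn rather than the course's own materials): SVC extends the binary SVM to multiclass problems automatically via a one-vs-one scheme, so the same API covers both cases.

```python
# Illustrative only: the binary SVM generalizes to K > 2 classes;
# scikit-learn's SVC handles this internally with one-vs-one voting.
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)   # three classes, so K = 3
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(X[:5]), clf.classes_)
```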


Syllabus


WEEK 1

Welcome!

This module provides an overview of the course and introduces the course instructor.


WEEK 2

Support Vector Machines (SVMs)

To begin the course, we will learn about support vector machines (SVMs). SVMs have become a popular method in statistical learning due to their ability to handle non-linear and high-dimensional data. SVMs seek to maximize the margin, the distance between the decision boundary and the closest data points, in order to improve generalization performance. Throughout the week, you will learn how to apply SVMs to classify or predict outcomes in a given dataset, select appropriate kernel functions and parameters, and evaluate model performance.
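The sketch below illustrates that workflow under stated assumptions: it is not course code, and the dataset, parameter grid, and scikit-learn toolchain are illustrative choices. It fits an RBF-kernel SVM, tunes the margin penalty C and kernel width gamma by cross-validation, and evaluates on held-out data.

```python
# Illustrative SVM workflow: fit, tune the kernel parameters, evaluate.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# SVMs are scale-sensitive, so standardize inside a pipeline.
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# Search over the margin penalty C and the RBF kernel width gamma;
# this grid is an arbitrary example, not a recommendation.
grid = GridSearchCV(
    pipe,
    param_grid={"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01, 0.1]},
    cv=5,
)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.score(X_test, y_test))
```

Larger C narrows the margin to fit the training data more closely, while smaller C widens it and trades training accuracy for generalization, which is the margin story told above.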


WEEK 3

Introduction to Neural Networks

Neural Networks have become increasingly popular in the field of statistical learning due to their ability to model complex relationships in data. In this module, we will cover introductory concepts of neural networks, such as activation functions and backpropagation. You will have the opportunity to apply Neural Networks to classify or predict outcomes in a given dataset and evaluate model performance in the labs for this module.
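A small sketch of those ideas, assuming scikit-learn rather than the course's own tooling: a one-hidden-layer network whose weights are fit by backpropagation. The dataset and architecture are illustrative assumptions.

```python
# Illustrative neural network: one hidden layer of ReLU units, trained by
# backpropagation (gradient descent on the log-loss via the Adam optimizer).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# hidden_layer_sizes and activation are example choices, not course settings.
net = MLPClassifier(hidden_layer_sizes=(64,), activation="relu",
                    max_iter=500, random_state=0)
net.fit(X_train, y_train)
print(f"test accuracy: {net.score(X_test, y_test):.3f}")
```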


WEEK 4

Decision Trees, Bagging, and Random Forests

Welcome to the final module of the course. This module focuses on decision trees and the ensemble methods built on them, bagging and random forests, which combine multiple models to improve prediction accuracy and reduce overfitting. Decision trees are a popular machine learning method that partitions the feature space into smaller regions and models the response variable in each region using simple rules. However, decision trees can suffer from high variance and instability, which bagging and random forests address. Bagging grows multiple trees on bootstrapped samples of the data and averages their predictions, while random forests further decorrelate the trees by randomly selecting a subset of features at each split.
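That progression can be sketched as follows; this is an illustrative comparison assuming scikit-learn, not course material, and the dataset and estimator counts are arbitrary choices.

```python
# Illustrative comparison: a single tree, bagged trees, and a random forest.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

models = {
    # A single deep tree: flexible but high-variance.
    "tree": DecisionTreeClassifier(random_state=0),
    # Bagging: average many trees grown on bootstrap samples
    # (BaggingClassifier's default base learner is a decision tree).
    "bagging": BaggingClassifier(n_estimators=100, random_state=0),
    # Random forest: bagging plus a random feature subset at each split,
    # which decorrelates the trees.
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

In a typical run the two ensembles score higher and more stably than the single tree, which is the variance-reduction story told above.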




Course Auditing
44.00 EUR/month
Prerequisite: completion of Regression and Classification and Resampling, Selection & Splines, the first two courses in the Statistical Learning for Data Science specialization.
