Machine Learning - Course Syllabus
Machine Learning

This course takes students from machine learning fundamentals and data preparation through supervised learning (regression, classification, and ensemble methods), unsupervised learning and dimensionality reduction, and an introduction to deep learning. It concludes with advanced topics such as sequence models and recommender systems, and with the ethical and societal implications of AI systems, including bias detection and mitigation, accountability, transparency, privacy, and explainable AI (XAI) methods like LIME and SHAP.

13 Chapters · 70 hours

Course Chapters

Chapter 1

ML Fundamentals & Data Preparation

Machine learning is a branch of artificial intelligence that enables systems to learn from data and improve over time, categorized into supervised, unsupervised, semi-supervised, and reinforcement learning. The module outlines the machine learning workflow, emphasizing the importance of data preparation, including data loading, preprocessing, feature engineering, and exploratory data analysis. Key Python libraries essential for machine learning, such as NumPy, Pandas, and Scikit-learn, are introduced to facilitate these processes.
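The data-preparation steps above can be sketched in a few lines. This is a minimal illustration on a hypothetical toy dataset (the column names and imputation choice are assumptions, not course material):

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical raw data with a missing value
df = pd.DataFrame({"age": [25.0, 32.0, np.nan, 41.0],
                   "income": [30000.0, 52000.0, 47000.0, 61000.0]})

# Preprocessing: impute the missing value with the column median
df["age"] = df["age"].fillna(df["age"].median())

# Feature engineering: a simple derived feature
df["income_per_year_of_age"] = df["income"] / df["age"]

# Standardize all features to zero mean and unit variance
scaled = StandardScaler().fit_transform(df)
print(scaled.mean(axis=0))  # each column now has mean ~0
```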

Chapter 2

Supervised Learning - Regression & Regularization (Week 3)

Supervised learning, particularly regression, is explored through linear and polynomial relationships. Key concepts include the mathematical frameworks of simple and multiple linear regression, gradient descent for optimization, and the importance of evaluation metrics like MSE and R². A significant focus is placed on understanding the Bias-Variance Trade-off, which is critical for model generalization.
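As a small sketch of these ideas, the following fits simple linear regression by batch gradient descent on the MSE loss, using noise-free toy data so convergence to the true parameters is easy to see (the learning rate and iteration count are illustrative choices):

```python
import numpy as np

# Toy data following y = 2x + 1 exactly
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * X + 1.0

w, b = 0.0, 0.0          # initial parameters
lr = 0.05                # learning rate
for _ in range(2000):    # batch gradient descent on MSE
    err = (w * X + b) - y
    w -= lr * 2 * np.mean(err * X)   # dMSE/dw
    b -= lr * 2 * np.mean(err)       # dMSE/db

mse = np.mean(((w * X + b) - y) ** 2)
print(round(w, 3), round(b, 3), mse)  # w ~ 2, b ~ 1, mse ~ 0
```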

Chapter 3

Supervised Learning - Regression & Regularization (Week 4)

This module explores the critical concepts of supervised learning, focusing on regression techniques and their robustness. It emphasizes the importance of regularization methods such as L1 (Lasso) and L2 (Ridge) to prevent overfitting and improve model generalization. Additionally, the chapter introduces cross-validation methods, including K-Fold and Stratified K-Fold, to assess model performance effectively on unseen data.
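A compact sketch of combining regularization with K-Fold cross-validation, on synthetic data (the alpha values and dataset parameters are illustrative assumptions):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import KFold, cross_val_score

# Synthetic regression problem
X, y = make_regression(n_samples=100, n_features=10, noise=5.0, random_state=0)

# 5-fold cross-validation gives an estimate of performance on unseen data
cv = KFold(n_splits=5, shuffle=True, random_state=0)
results = {}
for model in (Ridge(alpha=1.0), Lasso(alpha=0.1)):   # L2 and L1 regularization
    scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
    results[type(model).__name__] = scores.mean()
print(results)
```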

Chapter 4

Supervised Learning - Classification Fundamentals (Week 5)

Supervised learning shifts focus from regression to classification, wherein the goal is to predict discrete categories based on labeled data. The chapter covers binary classification and multi-class classification concepts, introduces Logistic Regression as a key algorithm for classification, explores performance evaluation metrics like Precision, Recall, and F1-Score, and discusses K-Nearest Neighbors (KNN) as a unique 'lazy learning' method. Core challenges like the curse of dimensionality and practical implementation through hands-on labs are also emphasized.
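The following sketch trains both algorithms named above on a synthetic binary problem and reports the three evaluation metrics (dataset size and k=5 are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

f1s = {}
for clf in (LogisticRegression(max_iter=1000),
            KNeighborsClassifier(n_neighbors=5)):   # KNN: "lazy" instance-based learner
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    name = type(clf).__name__
    f1s[name] = f1_score(y_te, y_pred)
    print(name,
          round(precision_score(y_te, y_pred), 3),
          round(recall_score(y_te, y_pred), 3),
          round(f1s[name], 3))
```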

Chapter 5

Supervised Learning - Classification Fundamentals (Week 6)

The chapter focuses on two powerful classification techniques: Support Vector Machines (SVMs) and Decision Trees, exploring their principles, advantages, and detailed implementations. It emphasizes the significance of concepts such as hyperplanes, margins, kernel tricks, and the construction of decision trees along with challenges like overfitting. Finally, practical lab exercises provide hands-on experience in implementing and comparing these algorithms, enhancing understanding of their strengths and weaknesses.
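A minimal comparison in the spirit of the lab exercises, using the Iris dataset (the kernel, C value, and depth cap are illustrative hyperparameter choices):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# RBF-kernel SVM (kernel trick) vs. a depth-limited tree (depth cap curbs overfitting)
svm = SVC(kernel="rbf", C=1.0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0)

accs = {}
for model in (svm, tree):
    accs[type(model).__name__] = cross_val_score(model, X, y, cv=5).mean()
print(accs)
```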

Chapter 6

Advanced Supervised Learning & Evaluation (Week 7)

Ensemble methods in supervised learning combine multiple models to enhance prediction accuracy, mitigate overfitting, and improve resilience against noisy data. They primarily consist of two approaches: Bagging, focusing on averaging models to reduce variance, and Boosting, which sequentially trains models to correct errors from previous ones. The chapter explores various algorithms under these methods, such as Random Forest for Bagging and AdaBoost alongside Gradient Boosting Machines for Boosting, highlighting their functionalities and advantages in practical applications.
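The bagging/boosting contrast can be demonstrated with scikit-learn's implementations (Gradient Boosting here stands in for the boosting family alongside AdaBoost; the dataset and estimator counts are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

bagging = RandomForestClassifier(n_estimators=100, random_state=0)       # averages trees to cut variance
boosting = GradientBoostingClassifier(n_estimators=100, random_state=0)  # each tree corrects prior errors

accs = {}
for model in (bagging, boosting):
    accs[type(model).__name__] = cross_val_score(model, X, y, cv=5).mean()
print(accs)
```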

Chapter 7

Advanced Supervised Learning & Evaluation (Week 8)

The module advances students' understanding of supervised learning, focusing on model evaluation and hyperparameter optimization. Key techniques covered include the Receiver Operating Characteristic (ROC) Curve, Area Under the Curve (AUC), and the Precision-Recall Curve, particularly in scenarios involving imbalanced datasets. Furthermore, the chapter addresses hyperparameter tuning strategies via Grid Search and Random Search, along with diagnostic tools like Learning Curves and Validation Curves to enhance model performance evaluation.
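A short sketch combining Grid Search with AUC scoring on a deliberately imbalanced synthetic dataset (the class weights and C grid are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Imbalanced problem: roughly 80/20 class split
X, y = make_classification(n_samples=300, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Grid Search over the regularization strength C, selected by ROC AUC
grid = GridSearchCV(LogisticRegression(max_iter=1000),
                    {"C": [0.01, 0.1, 1.0, 10.0]}, cv=5, scoring="roc_auc")
grid.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, grid.predict_proba(X_te)[:, 1])
print(grid.best_params_, round(auc, 3))
```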

Chapter 8

Unsupervised Learning & Dimensionality Reduction (Week 9)

The chapter delves into unsupervised learning techniques, particularly focusing on clustering methods, including K-Means, Hierarchical Clustering, and DBSCAN. It introduces key concepts such as the iterative nature of K-Means, the advantages of not requiring pre-specified clusters in Hierarchical methods, and the distinctive capabilities of DBSCAN in discovering complex shapes and outliers. The chapter emphasizes the importance of proper data preprocessing and evaluation of clustering performance through methods such as the Elbow and Silhouette methods.
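A minimal K-Means example that scans several values of k and scores each with the Silhouette method, after scaling the data (the candidate k values and blob parameters are illustrative):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
X = StandardScaler().fit_transform(X)   # scaling matters for distance-based clustering

# Scan candidate cluster counts and compare silhouette scores
sil = {}
for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    sil[k] = silhouette_score(X, labels)
print({k: round(s, 3) for k, s in sil.items()})
```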

Chapter 9

Unsupervised Learning & Dimensionality Reduction (Week 10)

The focus shifts to unsupervised learning techniques involving clustering and dimensionality reduction. Key concepts include Gaussian Mixture Models (GMMs) for clustering, various anomaly detection algorithms, and mastering Principal Component Analysis (PCA) for reducing dimensionality. Understanding the differences between feature selection and feature extraction further enhances practical application in data analysis.
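The PCA-then-GMM pipeline can be sketched on the Iris measurements; note the GMM produces soft (probabilistic) cluster memberships, unlike K-Means (the component counts are illustrative choices):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

X, _ = load_iris(return_X_y=True)

# Feature extraction: project the 4-D measurements onto 2 principal components
pca = PCA(n_components=2)
X2 = pca.fit_transform(X)
var_kept = pca.explained_variance_ratio_.sum()
print(round(var_kept, 3))            # fraction of total variance retained

# Soft clustering with a 3-component Gaussian mixture
gmm = GaussianMixture(n_components=3, random_state=0).fit(X2)
probs = gmm.predict_proba(X2)        # per-sample membership probabilities
print(probs.shape)
```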

Chapter 10

Introduction to Deep Learning (Week 11)

Deep Learning represents a significant advancement in machine learning, particularly through Neural Networks, which are capable of handling complex, high-dimensional, or unstructured data more effectively than traditional methods. This chapter covers the evolution of Neural Networks from Perceptrons to Multi-Layer Perceptrons (MLPs), emphasizing key concepts such as Activation Functions, Forward Propagation, and Backpropagation. It also discusses Optimizers and provides a practical introduction to building and training MLPs using TensorFlow and Keras.
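As a compact stand-in for the TensorFlow/Keras workflow the chapter teaches, the sketch below trains a two-hidden-layer MLP with scikit-learn's `MLPClassifier`; forward propagation, backpropagation, and the Adam optimizer all run under the hood (the layer sizes and dataset are illustrative):

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A non-linearly separable toy problem
X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two hidden layers with ReLU activations, trained by backpropagation with Adam
mlp = MLPClassifier(hidden_layer_sizes=(16, 16), activation="relu",
                    solver="adam", max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)

acc = mlp.score(X_te, y_te)
print(round(acc, 3))
```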

Chapter 11

Introduction to Deep Learning (Week 12)

Deep Learning represents a significant evolution in machine learning, particularly through the utilization of Convolutional Neural Networks (CNNs) which address the limitations of traditional Artificial Neural Networks (ANNs) when dealing with high-dimensional image data. CNNs employ specialized layers such as convolutional and pooling layers to extract features hierarchically, enhancing computational efficiency and robustness to spatial variations. The module also emphasizes essential techniques like Dropout and Batch Normalization for regularization, and introduces Transfer Learning as an effective approach for leveraging pre-trained models in new tasks.
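The two core CNN layer types can be demystified with plain NumPy. The sketch below slides an edge-detecting kernel over a tiny hypothetical "image" (the operation is technically cross-correlation, as in CNN libraries) and then max-pools the feature map, which downsamples it and adds robustness to small spatial shifts:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: keeps the strongest activation per window."""
    h, w = fmap.shape
    trimmed = fmap[:h - h % size, :w - w % size]
    return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A vertical edge in a 6x6 "image": left half dark, right half bright
image = np.zeros((6, 6))
image[:, 3:] = 1.0
edge_kernel = np.array([[-1.0, 1.0]])   # responds to left-to-right intensity increase

fmap = conv2d(image, edge_kernel)       # strong response at the edge column
pooled = max_pool(fmap)                 # downsampled, spatially robust map
print(fmap.shape, pooled.shape)
```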

Chapter 12

Advanced ML Topics & Ethical Considerations (Week 13)

Advanced machine learning techniques address complex data types, primarily the sequential data found in text, speech, time series, and video. The chapter explores Recurrent Neural Networks (RNNs), including Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs), and their applications in natural language processing and time series forecasting. It then turns to association rule mining through the Apriori algorithm, and examines recommender systems, comparing content-based and collaborative filtering approaches.
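The Apriori idea fits in a few lines of plain Python: count frequent single items first, then build candidate pairs only from items that survived (the pruning step), on a hypothetical market-basket dataset:

```python
from collections import Counter
from itertools import combinations

# Hypothetical market-basket transactions
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
]
min_support = 3  # minimum number of transactions containing the itemset

# Pass 1: frequent single items
item_counts = Counter(item for t in transactions for item in t)
frequent_items = {i for i, c in item_counts.items() if c >= min_support}

# Pass 2 (Apriori pruning): candidate pairs use only frequent items
pair_counts = Counter()
for t in transactions:
    for pair in combinations(sorted(t & frequent_items), 2):
        pair_counts[pair] += 1
frequent_pairs = {p: c for p, c in pair_counts.items() if c >= min_support}
print(frequent_pairs)
```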

Chapter 13

Advanced ML Topics & Ethical Considerations (Week 14)

The module explores advanced topics in machine learning, focusing on the ethical and societal implications related to AI systems. It emphasizes the importance of bias detection and mitigation, accountability, transparency, and privacy within AI development. The introduction of explainable AI (XAI) methods like LIME and SHAP underpins the need for interpretability in complex models to ensure they are ethical and trustworthy in real-world applications.
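LIME and SHAP are dedicated libraries, but the underlying interpretability question — which features actually drive a model's predictions? — can be sketched with scikit-learn's built-in permutation importance, used here as a simple stand-in (the synthetic dataset, where only 3 of 6 features are informative by construction, is an illustrative assumption):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Only 3 of the 6 features carry signal, by construction
X, y = make_classification(n_samples=400, n_features=6, n_informative=3,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffling an important feature should visibly hurt held-out accuracy
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature {idx}: {result.importances_mean[idx]:.3f}")
```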