Top 8 Recommended Deep Learning Self-Study Materials! [March 2024]
This page introduces the best educational materials for beginners who want to learn Deep Learning on their own.
1. Description of this page
We introduce 8 recommended video courses on various platforms for those who want to learn Deep Learning on their own.
What is Deep Learning?
Deep learning is a machine learning technique that uses neural networks with multiple hidden layers. Neural networks have a structure loosely modeled on the neural circuits of the human brain and can automatically learn features from data. By stacking many hidden layers, deep learning can extract high-level features and achieve higher accuracy than conventional machine learning methods. It is widely used in fields such as image recognition, speech recognition, and natural language processing, and is expected to enable more advanced artificial intelligence by mimicking human cognitive abilities.
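To make the idea of hidden layers concrete, here is a minimal sketch in plain Python of a tiny network with one hidden layer. The weights are made up for illustration, not learned; a real deep learning framework would learn them from data:

```python
def dense(inputs, weights, biases, relu=True):
    """One fully connected layer; ReLU keeps only positive activations."""
    out = []
    for ws, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(ws, inputs)) + b
        out.append(max(0.0, z) if relu else z)
    return out

# Made-up weights for a tiny network: 2 inputs -> 3 hidden units -> 1 output.
hidden_w = [[0.5, -0.2], [0.1, 0.9], [-0.3, 0.4]]
hidden_b = [0.0, 0.1, 0.2]
out_w = [[1.0, -1.0, 0.5]]
out_b = [0.0]

x = [1.0, 2.0]
h = dense(x, hidden_w, hidden_b)        # hidden layer extracts intermediate features
y = dense(h, out_w, out_b, relu=False)  # output layer combines those features
```

Stacking more such layers is what puts the "deep" in deep learning: each layer builds higher-level features out of the previous layer's outputs.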
Our site, "Outlecture," evaluates courses using our proprietary algorithm that balances course rating, freshness of information, number of purchasers and viewers, and recent rate of increase, in order to extract only the most suitable courses for users.
In addition, we will explain the features of each video platform and provide use cases such as "this is better for people in this situation."
We hope this will be a reference for everyone who is going to learn Deep Learning.
2. Top 5 Recommended Udemy Courses
Here are Outlecture's top 5 recommended Udemy courses, carefully selected for you.
Title | Ratings | Subscribers | Subscribers last month (February 2024) | Level | Video Duration | Created | Last updated | Price |
---|---|---|---|---|---|---|---|---|
Machine Learning, Data Science and Generative AI with Python | 4.6 | 193,033 | 2,683 | beginner | 18 hours 49 minutes | Nov 16th, 2015 | Feb 7th, 2024 | $99.99 |
PyTorch for Deep Learning Bootcamp | 4.68 | 20,670 | 914 | beginner | 52 hours 9 minutes | Jun 14th, 2022 | Jun 1st, 2023 | $99.99 |
Tensorflow 2.0: Deep Learning and Artificial Intelligence | 4.76 | 52,210 | 676 | all | 23 hours 52 minutes | Jun 26th, 2019 | Feb 2nd, 2024 | $129.99 |
A deep understanding of deep learning (with Python intro) | 4.69 | 29,462 | 1,207 | beginner | 57 hours 18 minutes | Aug 4th, 2021 | Jan 31st, 2024 | $109.99 |
Deep Learning A-Z 2024: Neural Networks, AI & ChatGPT Prize | 4.55 | 375,059 | 1,754 | all | 22 hours 27 minutes | Mar 20th, 2017 | Feb 5th, 2024 | $99.99 |
Udemy, Inc. is an education technology company that provides the world's largest online learning and teaching platform.
The features of Udemy include:
- Over 155,000 courses
- Instructors who are leading experts in their fields
- Affordable prices, typically tens to hundreds of dollars per course, with discounts of up to 70-90% during sales
- Courses can be viewed without expiration after purchase, and come with a 30-day money-back guarantee
- Courses can be taken at the student's own pace, with playback speeds of 0.5 to 2 times normal speed, and can be viewed offline on a smartphone with a dedicated app
- Students can ask questions directly to the instructor on the course discussion board, allowing them to resolve any doubts and receive support for self-study
These are some of the benefits of using Udemy.
The management team at Outlecture consists of active software engineers, creators, and web designers. We often catch up on learning new programming languages and products by taking courses on Udemy.
As for our experience, we find that Udemy offers courses of very high quality. The instructors are all leading figures in their fields, and they teach cutting-edge knowledge and practical know-how in a clear and detailed manner. You can acquire the knowledge and skills that are actually used in the field and in practical projects, rather than just knowledge for exams.
We highly recommend Udemy courses, especially for those who want to apply what they learn in practical situations or for those who want to start self-studying. Once you purchase a course, you can take it without a time limit, and there is a 30-day money-back guarantee, so you can start learning with peace of mind.
Recommended for
- Planning to use Deep Learning in actual projects
- Wanting to learn the know-how of professionals who are active in the world's cutting-edge fields
- Hesitant to use a subscription service
- Having basic IT knowledge
The details of each course are as follows:
Complete hands-on machine learning and AI tutorial with data science, Tensorflow, GPT, OpenAI, and neural networks
- Ratings
- 4.6
- Subscribers
- 193,033
- Subscribers last month (February 2024)
- 2,683
- Level
- beginner
- Video Duration
- 18 hours 49 minutes
- Created
- Nov 16th, 2015
- Last updated
- Feb 7th, 2024
- Price
- $99.99
New! Updated with extra content and activities on generative AI, transformers, GPT, ChatGPT, the OpenAI API, and self attention based neural networks!
Machine Learning and artificial intelligence (AI) are everywhere; if you want to know how companies like Google, Amazon, and even Udemy extract meaning and insights from massive data sets, this data science course will give you the fundamentals you need. Data Scientists enjoy one of the top-paying jobs, with an average salary of $120,000 according to Glassdoor and Indeed. That's just the average! And it's not just about money - it's interesting work too!
If you've got some programming or scripting experience, this course will teach you the techniques used by real data scientists and machine learning practitioners in the tech industry - and prepare you for a move into this hot career path. This comprehensive machine learning tutorial includes over 130 lectures spanning over 18 hours of video, and most topics include hands-on Python code examples you can use for reference and for practice. I’ll draw on my 9 years of experience at Amazon and IMDb to guide you through what matters, and what doesn’t.
Each concept is introduced in plain English, avoiding confusing mathematical notation and jargon. It’s then demonstrated using Python code you can experiment with and build upon, along with notes you can keep for future reference. You won't find academic, deeply mathematical coverage of these algorithms in this course - the focus is on practical understanding and application of them. At the end, you'll be given a final project to apply what you've learned!
The topics in this course come from an analysis of real requirements in data scientist job listings from the biggest tech employers. We'll cover the A-Z of machine learning, AI, and data mining techniques real employers are looking for, including:
Deep Learning / Neural Networks (MLP's, CNN's, RNN's) with TensorFlow and Keras
How modern generative AI works with transformers (GPT), self-attention, and large language models
Using the OpenAI API for GPT and ChatGPT
Fine-tuning GPT with your own training data (complete with an example of creating your own Commander Data from TV!)
Creating synthetic images with Variational Auto-Encoders (VAE's) and Generative Adversarial Networks (GAN's)
Data Visualization in Python with MatPlotLib and Seaborn
Transfer Learning
Sentiment analysis
Image recognition and classification
Regression analysis
K-Means Clustering
Principal Component Analysis
Train/Test and cross validation
Bayesian Methods
Decision Trees and Random Forests
Multiple Regression
Multi-Level Models
Support Vector Machines
Reinforcement Learning
Collaborative Filtering
K-Nearest Neighbor
Bias/Variance Tradeoff
Ensemble Learning
Term Frequency / Inverse Document Frequency
Experimental Design and A/B Tests
Feature Engineering
Hyperparameter Tuning
...and much more! There's also an entire section on machine learning with Apache Spark, which lets you scale up these techniques to "big data" analyzed on a computing cluster.
If you're new to Python, don't worry - the course starts with a crash course. If you've done some programming before, you should pick it up quickly. This course shows you how to get set up on Microsoft Windows-based PC's, Linux desktops, and Macs.
If you’re a programmer looking to switch into an exciting new career track, or a data analyst looking to make the transition into the tech industry – this course will teach you the basic techniques used by real-world industry data scientists. These are topics any successful technologist absolutely needs to know about, so what are you waiting for? Enroll now!
"I started doing your course... Eventually I got interested and never thought that I will be working for corporate before a friend offered me this job. I am learning a lot which was impossible to learn in academia and enjoying it thoroughly. To me, your course is the one that helped me understand how to work with corporate problems. How to think to be a success in corporate AI research. I find you the most impressive instructor in ML, simple yet convincing." - Kanad Basu, PhD
- Getting Started
- Introduction
- Udemy 101: Getting the Most From This Course
- Important note
- Installation: Getting Started
- [Activity] WINDOWS: Installing and Using Anaconda & Course Materials
- [Activity] MAC: Installing and Using Anaconda & Course Materials
- [Activity] LINUX: Installing and Using Anaconda & Course Materials
- Python Basics, Part 1 [Optional]
- [Activity] Python Basics, Part 2 [Optional]
- [Activity] Python Basics, Part 3 [Optional]
- [Activity] Python Basics, Part 4 [Optional]
- Introducing the Pandas Library [Optional]
- Statistics and Probability Refresher, and Python Practice
- Types of Data (Numerical, Categorical, Ordinal)
- Mean, Median, Mode
- [Activity] Using mean, median, and mode in Python
- [Activity] Variation and Standard Deviation
- Probability Density Function; Probability Mass Function
- Common Data Distributions (Normal, Binomial, Poisson, etc)
- [Activity] Percentiles and Moments
- [Activity] A Crash Course in matplotlib
- [Activity] Advanced Visualization with Seaborn
- [Activity] Covariance and Correlation
- [Exercise] Conditional Probability
- Exercise Solution: Conditional Probability of Purchase by Age
- Bayes' Theorem
- Predictive Models
- [Activity] Linear Regression
- [Activity] Polynomial Regression
- [Activity] Multiple Regression, and Predicting Car Prices
- Multi-Level Models
- Machine Learning with Python
- Supervised vs. Unsupervised Learning, and Train/Test
- [Activity] Using Train/Test to Prevent Overfitting a Polynomial Regression
- Bayesian Methods: Concepts
- [Activity] Implementing a Spam Classifier with Naive Bayes
- K-Means Clustering
- [Activity] Clustering people based on income and age
- Measuring Entropy
- [Activity] WINDOWS: Installing Graphviz
- [Activity] MAC: Installing Graphviz
- [Activity] LINUX: Installing Graphviz
- Decision Trees: Concepts
- [Activity] Decision Trees: Predicting Hiring Decisions
- Ensemble Learning
- [Activity] XGBoost
- Support Vector Machines (SVM) Overview
- [Activity] Using SVM to cluster people using scikit-learn
- Recommender Systems
- User-Based Collaborative Filtering
- Item-Based Collaborative Filtering
- [Activity] Finding Movie Similarities using Cosine Similarity
- [Activity] Improving the Results of Movie Similarities
- [Activity] Making Movie Recommendations with Item-Based Collaborative Filtering
- [Exercise] Improve the recommender's results
- More Data Mining and Machine Learning Techniques
- K-Nearest-Neighbors: Concepts
- [Activity] Using KNN to predict a rating for a movie
- Dimensionality Reduction; Principal Component Analysis (PCA)
- [Activity] PCA Example with the Iris data set
- Data Warehousing Overview: ETL and ELT
- Reinforcement Learning
- [Activity] Reinforcement Learning & Q-Learning with Gym
- Understanding a Confusion Matrix
- Measuring Classifiers (Precision, Recall, F1, ROC, AUC)
- Dealing with Real-World Data
- Bias/Variance Tradeoff
- [Activity] K-Fold Cross-Validation to avoid overfitting
- Data Cleaning and Normalization
- [Activity] Cleaning web log data
- Normalizing numerical data
- [Activity] Detecting outliers
- Feature Engineering and the Curse of Dimensionality
- Imputation Techniques for Missing Data
- Handling Unbalanced Data: Oversampling, Undersampling, and SMOTE
- Binning, Transforming, Encoding, Scaling, and Shuffling
- Apache Spark: Machine Learning on Big Data
- Warning about Java 11 and Spark 3!
- Spark installation notes for MacOS and Linux users
- [Activity] Installing Spark - Part 1
- [Activity] Installing Spark - Part 2
- Spark Introduction
- Spark and the Resilient Distributed Dataset (RDD)
- Introducing MLLib
- Introduction to Decision Trees in Spark
- [Activity] K-Means Clustering in Spark
- TF / IDF
- [Activity] Searching Wikipedia with Spark
- [Activity] Using the Spark DataFrame API for MLLib
- Experimental Design / ML in the Real World
- Deploying Models to Real-Time Systems
- A/B Testing Concepts
- T-Tests and P-Values
- [Activity] Hands-on With T-Tests
- Determining How Long to Run an Experiment
- A/B Test Gotchas
- Deep Learning and Neural Networks
- Deep Learning Pre-Requisites
- The History of Artificial Neural Networks
- [Activity] Deep Learning in the Tensorflow Playground
- Deep Learning Details
- Introducing Tensorflow
- [Activity] Using Tensorflow, Part 1
- [Activity] Using Tensorflow, Part 2
- [Activity] Introducing Keras
- [Activity] Using Keras to Predict Political Affiliations
- Convolutional Neural Networks (CNN's)
- [Activity] Using CNN's for handwriting recognition
- Recurrent Neural Networks (RNN's)
- [Activity] Using a RNN for sentiment analysis
- [Activity] Transfer Learning
- Tuning Neural Networks: Learning Rate and Batch Size Hyperparameters
- Deep Learning Regularization with Dropout and Early Stopping
- The Ethics of Deep Learning
- Generative Models
- Variational Auto-Encoders (VAE's) - how they work
- Variational Auto-Encoders (VAE) - Hands-on with Fashion MNIST
- Generative Adversarial Networks (GAN's) - How they work
- Generative Adversarial Networks (GAN's) - Playing with some demos
- Generative Adversarial Networks (GAN's) - Hands-on with Fashion MNIST
- Learning More about Deep Learning
- Generative AI: GPT, ChatGPT, Transformers, Self Attention Based Neural Networks
- The Transformer Architecture (encoders, decoders, and self-attention)
- Self-Attention, Masked Self-Attention, and Multi-Headed Self Attention in depth
- Applications of Transformers (GPT)
- How GPT Works, Part 1: The GPT Transformer Architecture
- How GPT Works, Part 2: Tokenization, Positional Encoding, Embedding
- Fine Tuning / Transfer Learning with Transformers
- [Activity] Tokenization with Google CoLab and HuggingFace
- [Activity] Positional Encoding
- [Activity] Masked, Multi-Headed Self Attention with BERT, BERTViz, and exBERT
- [Activity] Using small and large GPT models within Google CoLab and HuggingFace
- [Activity] Fine Tuning GPT with the IMDb dataset
- From GPT to ChatGPT: Deep Reinforcement Learning, Proximal Policy Gradients
- From GPT to ChatGPT: Reinforcement Learning from Human Feedback and Moderation
- The OpenAI API (Developing with GPT and ChatGPT)
- [Activity] The OpenAI Chat Completions API
- [Activity] Using Tools and Functions in the OpenAI Chat Completion API
- [Activity] The Images (DALL-E) API in OpenAI
- [Activity] The Embeddings API in OpenAI: Finding similarities between words
- The Legacy Fine-Tuning API for GPT Models in OpenAI
- [Demo] Fine-Tuning OpenAI's Davinci Model to simulate Data from Star Trek
- The New OpenAI Fine-Tuning API; Fine-Tuning GPT-3.5 to simulate Commander Data!
- [Activity] The OpenAI Moderation API
- [Activity] The OpenAI Audio API (speech to text)
- Retrieval Augmented Generation (RAG)
- Retrieval Augmented Generation (RAG): How it works, with some examples.
- Demo: Using Retrieval Augmented Generation (RAG) to simulate Data from Star Trek
- Final Project
- Your final project assignment: Mammogram Classification
- Final project review
- You made it!
- More to Explore
- Don't Forget to Leave a Rating!
- Bonus Lecture
Learn PyTorch. Become a Deep Learning Engineer. Get Hired.
- Ratings
- 4.68
- Subscribers
- 20,670
- Subscribers last month (February 2024)
- 914
- Level
- beginner
- Video Duration
- 52 hours 9 minutes
- Created
- Jun 14th, 2022
- Last updated
- Jun 1st, 2023
- Price
- $99.99
What is PyTorch and why should I learn it?
PyTorch is a machine learning and deep learning framework written in Python.
PyTorch enables you to craft new and use existing state-of-the-art deep learning algorithms like neural networks powering much of today’s Artificial Intelligence (AI) applications.
Plus it's so hot right now, so there are lots of jobs available!
PyTorch is used by companies like:
Tesla to build the computer vision systems for their self-driving cars
Meta to power the curation and understanding systems for their content timelines
Apple to create computationally enhanced photography.
Want to know what's even cooler?
Much of the latest machine learning research is done and published using PyTorch code so knowing how it works means you’ll be at the cutting edge of this highly in-demand field.
And you'll be learning PyTorch in good company.
Graduates of Zero To Mastery are now working at Google, Tesla, Amazon, Apple, IBM, Uber, Meta, Shopify + other top tech companies at the forefront of machine learning and deep learning.
This can be you.
By enrolling today, you’ll also get to join our exclusive live online community classroom to learn alongside thousands of students, alumni, mentors, TAs and Instructors.
Most importantly, you will be learning PyTorch from a professional machine learning engineer, with real-world experience, and who is one of the best teachers around!
What will this PyTorch course be like?
This PyTorch course is very hands-on and project based. You won't just be staring at your screen. We'll leave that for other PyTorch tutorials and courses.
In this course you'll actually be:
Running experiments
Completing exercises to test your skills
Building real-world deep learning models and projects to mimic real life scenarios
By the end of it all, you'll have the skillset needed to identify and develop modern deep learning solutions that Big Tech companies encounter.
Fair warning: this course is very comprehensive. But don't be intimidated, Daniel will teach you everything from scratch and step-by-step!
Here's what you'll learn in this PyTorch course:
1. PyTorch Fundamentals — We start with the barebone fundamentals, so even if you're a beginner you'll get up to speed.
In machine learning, data gets represented as a tensor (a collection of numbers). Learning how to craft tensors with PyTorch is paramount to building machine learning algorithms. In PyTorch Fundamentals we cover the PyTorch tensor datatype in-depth.
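The idea of a tensor as a collection of numbers with a shape can be sketched without any framework. In PyTorch itself you would create tensors with `torch.tensor(...)` and read `.shape`; this plain-Python stand-in just illustrates the concept:

```python
def shape(t):
    """Infer the shape of a nested-list 'tensor' (assumes rectangular nesting)."""
    s = []
    while isinstance(t, list):
        s.append(len(t))
        t = t[0]
    return s

scalar = 7                    # 0-dimensional: shape []
vector = [1, 2, 3]            # 1-dimensional: shape [3]
matrix = [[1, 2], [3, 4]]     # 2-dimensional: shape [2, 2]

print(shape(vector), shape(matrix))
```

Images, text, and audio all end up as tensors like these, just with more dimensions (e.g. height x width x color channels for an image).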
2. PyTorch Workflow — Okay, you’ve got the fundamentals down, and you've made some tensors to represent data, but what now?
With PyTorch Workflow you’ll learn the steps to go from data -> tensors -> trained neural network model. You’ll see and use these steps wherever you encounter PyTorch code as well as for the rest of the course.
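Those steps (data, model, training loop, trained parameters) can be sketched in plain Python with a hand-written gradient descent loop. PyTorch would compute the gradients for you with autograd; the dataset here is made up:

```python
import random

random.seed(0)
# 1. Data: samples from y = 2x + 1 with a little noise
xs = [i / 10 for i in range(50)]
ys = [2 * x + 1 + random.uniform(-0.05, 0.05) for x in xs]

# 2. Model parameters (what PyTorch would store inside an nn.Module)
w, b = 0.0, 0.0

# 3. Training loop: forward pass, mean-squared-error gradients, update
lr = 0.05
for _ in range(2000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

# 4. The trained model: w and b should now be close to 2 and 1
print(w, b)
```

Every PyTorch training loop in the course follows this same shape; the framework just adds automatic differentiation, GPU support, and ready-made layers, losses, and optimizers.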
3. PyTorch Neural Network Classification — Classification is one of the most common machine learning problems.
Is something one thing or another?
Is an email spam or not spam?
Is a credit card transaction fraud or not fraud?
With PyTorch Neural Network Classification you’ll learn how to code a neural network classification model using PyTorch so that you can classify things and answer these questions.
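As a rough sketch of how a classifier answers "one thing or another", here is a tiny logistic-regression-style model in plain Python on a made-up one-feature dataset. The course builds full neural network classifiers with PyTorch, but the logits-to-probabilities-to-labels idea is the same:

```python
import math

# Toy binary data: label 1 when the feature is positive
data = [(-2.0, 0), (-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1), (2.0, 1)]

w, b = 0.0, 0.0
lr = 0.5
for _ in range(500):
    for x, label in data:
        p = 1 / (1 + math.exp(-(w * x + b)))  # logit -> probability (sigmoid)
        # gradient of the binary cross-entropy loss
        w -= lr * (p - label) * x
        b -= lr * (p - label)

def predict(x):
    """Probability above 0.5 -> label 1, otherwise label 0."""
    return int(1 / (1 + math.exp(-(w * x + b))) > 0.5)
```

A neural network classifier replaces the single `w * x + b` with stacked layers, but still ends with the same logit-to-probability-to-label step.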
4. PyTorch Computer Vision — Neural networks have changed the game of computer vision forever. And now PyTorch drives many of the latest advancements in computer vision algorithms.
For example, Tesla uses PyTorch to build the computer vision algorithms for their self-driving software.
With PyTorch Computer Vision you’ll build a PyTorch neural network capable of seeing patterns in images and classifying them into different categories.
5. PyTorch Custom Datasets — The magic of machine learning is building algorithms to find patterns in your own custom data. There are plenty of existing datasets out there, but how do you load your own custom dataset into PyTorch?
This is exactly what you'll learn with the PyTorch Custom Datasets section of this course.
You’ll learn how to load an image dataset for FoodVision Mini: a PyTorch computer vision model capable of classifying images of pizza, steak and sushi (am I making you hungry to learn yet?!).
We’ll be building upon FoodVision Mini for the rest of the course.
6. PyTorch Going Modular — The whole point of PyTorch is to be able to write Pythonic machine learning code.
There are two main tools for writing machine learning code with Python:
A Jupyter/Google Colab notebook (great for experimenting)
Python scripts (great for reproducibility and modularity)
In the PyTorch Going Modular section of this course, you’ll learn how to take your most useful Jupyter/Google Colab Notebook code and turn it into reusable Python scripts. This is often how you’ll find PyTorch code shared in the wild.
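The notebook-to-script idea can be sketched like this: each function below stands in for what would become its own script. The file names in the comments are illustrative, not necessarily the course's actual layout:

```python
# data_setup.py-style function (illustrative name)
def make_dataset(n):
    """Create a toy dataset; a real project would load and transform files."""
    return [(x, 2 * x) for x in range(n)]

# model_builder.py-style function (illustrative name)
def build_model():
    """Return a 'model': here just a dict holding one trainable parameter."""
    return {"w": 1.0}

# engine.py-style function (illustrative name)
def train(model, data, lr=0.01, epochs=100):
    """Fit the toy model with per-sample gradient descent."""
    for _ in range(epochs):
        for x, y in data:
            pred = model["w"] * x
            model["w"] -= lr * 2 * (pred - y) * x
    return model

# train.py-style entry point: the whole pipeline runs in one line
model = train(build_model(), make_dataset(10))
```

Once code is split into functions and files like this, the same pipeline can be re-run, imported, and version-controlled far more easily than notebook cells.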
7. PyTorch Transfer Learning — What if you could take what one model has learned and leverage it for your own problems? That’s what PyTorch Transfer Learning covers.
You’ll learn about the power of transfer learning and how it enables you to take a machine learning model trained on millions of images, modify it slightly, and enhance the performance of FoodVision Mini, saving you time and resources.
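The core idea, reusing frozen learned features and training only a new final layer, can be sketched in plain Python. The "pretrained" feature extractor here is made up for illustration; in the course it would be a real network trained on millions of images:

```python
# A 'pretrained' feature extractor we keep frozen (made up for illustration)
def features(x):
    return [x, x * x]  # imagine these features came from a large pretrained model

# Only the new head's weights get trained on our small dataset
head_w = [0.0, 0.0]
data = [(k / 10, 3 * (k / 10) + (k / 10) ** 2) for k in range(20)]  # y = 3x + x^2

lr = 0.01
for _ in range(5000):
    for x, y in data:
        f = features(x)                                # frozen: never updated
        pred = sum(w_i * f_i for w_i, f_i in zip(head_w, f))
        err = pred - y
        for i in range(len(head_w)):                   # only the head learns
            head_w[i] -= lr * 2 * err * f[i]
```

Because the expensive feature extractor is reused as-is, only a handful of head parameters need training, which is why transfer learning works with small datasets and modest compute.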
8. PyTorch Experiment Tracking — Now we're going to start cooking with heat by starting Part 1 of our Milestone Project of the course!
At this point you’ll have built plenty of PyTorch models. But how do you keep track of which model performs the best?
That’s where PyTorch Experiment Tracking comes in.
Following the machine learning practitioner’s motto of experiment, experiment, experiment!, you’ll set up a system to keep track of various FoodVision Mini experiment results and then compare them to find the best.
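At its simplest, experiment tracking just means recording each run's settings and results and comparing them afterwards. The model names and numbers below are illustrative, not real FoodVision results:

```python
# Each experiment's settings and results go into one record
results = []

def track(name, lr, accuracy):
    results.append({"model": name, "lr": lr, "accuracy": accuracy})

# Illustrative runs (hypothetical names and accuracies)
track("model_0_baseline", 0.01, 0.71)
track("model_1_augmented", 0.01, 0.78)
track("model_2_transfer", 0.001, 0.91)

# Comparing runs to find the best one
best = max(results, key=lambda r: r["accuracy"])
print(best["model"])
```

Dedicated tools (TensorBoard, Weights & Biases, and similar) add plots, versioning, and sharing on top of this same record-and-compare idea.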
9. PyTorch Paper Replicating — The field of machine learning advances quickly. New research papers get published every day. Being able to read and understand these papers takes time and practice.
So that’s what PyTorch Paper Replicating covers. You’ll learn how to go through a machine learning research paper and replicate it with PyTorch code.
At this point you'll also undertake Part 2 of our Milestone Project, where you’ll replicate the groundbreaking Vision Transformer architecture!
10. PyTorch Model Deployment — By this stage your FoodVision model will be performing quite well. But up until now, you’ve been the only one with access to it.
How do you get your PyTorch models in the hands of others?
That’s what PyTorch Model Deployment covers. In Part 3 of your Milestone Project, you’ll learn how to take the best performing FoodVision Mini model and deploy it to the web so other people can access it and try it out with their own food images.
What's the bottom line?
Machine learning's growth and adoption are exploding, and deep learning is how you take your machine learning knowledge to the next level. More and more job openings are looking for this specialized knowledge.
Companies like Tesla, Microsoft, OpenAI, Meta (Facebook + Instagram), Airbnb and many others are currently powered by PyTorch.
And this is the most comprehensive online bootcamp to learn PyTorch and kickstart your career as a Deep Learning Engineer.
So why wait? Advance your career and earn a higher salary by mastering PyTorch and adding deep learning to your toolkit.
- Introduction
- PyTorch for Deep Learning
- Course Welcome and What Is Deep Learning
- Join Our Online Classroom!
- Exercise: Meet Your Classmates + Instructor
- Free Course Book + Code Resources + Asking Questions + Getting Help
- ZTM Resources
- Machine Learning + Python Monthly Newsletters
- PyTorch Fundamentals
- Why Use Machine Learning or Deep Learning
- The Number 1 Rule of Machine Learning and What Is Deep Learning Good For
- Machine Learning vs. Deep Learning
- Anatomy of Neural Networks
- Different Types of Learning Paradigms
- What Can Deep Learning Be Used For
- What Is and Why PyTorch
- What Are Tensors
- What We Are Going To Cover With PyTorch
- How To and How Not To Approach This Course
- Important Resources For This Course
- Getting Setup to Write PyTorch Code
- Introduction to PyTorch Tensors
- Creating Random Tensors in PyTorch
- Creating Tensors With Zeros and Ones in PyTorch
- Creating a Tensor Range and Tensors Like Other Tensors
- Dealing With Tensor Data Types
- Getting Tensor Attributes
- Manipulating Tensors (Tensor Operations)
- Matrix Multiplication (Part 1)
- Matrix Multiplication (Part 2): The Two Main Rules of Matrix Multiplication
- Matrix Multiplication (Part 3): Dealing With Tensor Shape Errors
- Finding the Min Max Mean and Sum of Tensors (Tensor Aggregation)
- Finding The Positional Min and Max of Tensors
- Reshaping, Viewing and Stacking Tensors
- Squeezing, Unsqueezing and Permuting Tensors
- Selecting Data From Tensors (Indexing)
- PyTorch Tensors and NumPy
- PyTorch Reproducibility (Taking the Random Out of Random)
- Different Ways of Accessing a GPU in PyTorch
- Setting up Device-Agnostic Code and Putting Tensors On and Off the GPU
- PyTorch Fundamentals: Exercises and Extra-Curriculum
- PyTorch Workflow
- Introduction and Where You Can Get Help
- Getting Setup and What We Are Covering
- Creating a Simple Dataset Using the Linear Regression Formula
- Splitting Our Data Into Training and Test Sets
- Building a function to Visualize Our Data
- Creating Our First PyTorch Model for Linear Regression
- Breaking Down What's Happening in Our PyTorch Linear regression Model
- Discussing Some of the Most Important PyTorch Model Building Classes
- Checking Out the Internals of Our PyTorch Model
- Making Predictions With Our Random Model Using Inference Mode
- Training a Model Intuition (The Things We Need)
- Setting Up an Optimizer and a Loss Function
- PyTorch Training Loop Steps and Intuition
- Writing Code for a PyTorch Training Loop
- Reviewing the Steps in a Training Loop Step by Step
- Running Our Training Loop Epoch by Epoch and Seeing What Happens
- Writing Testing Loop Code and Discussing What's Happening Step by Step
- Reviewing What Happens in a Testing Loop Step by Step
- Writing Code to Save a PyTorch Model
- Writing Code to Load a PyTorch Model
- Setting Up to Practice Everything We Have Done Using Device Agnostic code
- Putting Everything Together (Part 1): Data
- Putting Everything Together (Part 2): Building a Model
- Putting Everything Together (Part 3): Training a Model
- Putting Everything Together (Part 4): Making Predictions With a Trained Model
- Putting Everything Together (Part 5): Saving and Loading a Trained Model
- Exercise: Imposter Syndrome
- PyTorch Workflow: Exercises and Extra-Curriculum
- PyTorch Neural Network Classification
- Introduction to Machine Learning Classification With PyTorch
- Classification Problem Example: Input and Output Shapes
- Typical Architecture of a Classification Neural Network (Overview)
- Making a Toy Classification Dataset
- Turning Our Data into Tensors and Making a Training and Test Split
- Laying Out Steps for Modelling and Setting Up Device-Agnostic Code
- Coding a Small Neural Network to Handle Our Classification Data
- Making Our Neural Network Visual
- Recreating and Exploring the Insides of Our Model Using nn.Sequential
- Loss Function, Optimizer and Evaluation Function for Our Classification Network
- Going from Model Logits to Prediction Probabilities to Prediction Labels
- Coding a Training and Testing Optimization Loop for Our Classification Model
- Writing Code to Download a Helper Function to Visualize Our Models Predictions
- Discussing Options to Improve a Model
- Creating a New Model with More Layers and Hidden Units
- Writing Training and Testing Code to See if Our Upgraded Model Performs Better
- Creating a Straight Line Dataset to See if Our Model is Learning Anything
- Building and Training a Model to Fit on Straight Line Data
- Evaluating Our Models Predictions on Straight Line Data
- Introducing the Missing Piece for Our Classification Model: Non-Linearity
- Building Our First Neural Network with Non-Linearity
- Writing Training and Testing Code for Our First Non-Linear Model
- Making Predictions with and Evaluating Our First Non-Linear Model
- Replicating Non-Linear Activation Functions with Pure PyTorch
- Putting It All Together (Part 1): Building a Multiclass Dataset
- Creating a Multi-Class Classification Model with PyTorch
- Setting Up a Loss Function and Optimizer for Our Multi-Class Model
- Logits to Prediction Probabilities to Prediction Labels with a Multi-Class Model
- Training a Multi-Class Classification Model and Troubleshooting Code on the Fly
- Making Predictions with and Evaluating Our Multi-Class Classification Model
- Discussing a Few More Classification Metrics
- PyTorch Classification: Exercises and Extra-Curriculum
- PyTorch Computer Vision
- What Is a Computer Vision Problem and What We Are Going to Cover
- Computer Vision Input and Output Shapes
- What Is a Convolutional Neural Network (CNN)
- Discussing and Importing the Base Computer Vision Libraries in PyTorch
- Getting a Computer Vision Dataset and Checking Out Its Input and Output Shapes
- Visualizing Random Samples of Data
- DataLoader Overview: Understanding Mini-Batches
- Turning Our Datasets Into DataLoaders
- Model 0: Creating a Baseline Model with Two Linear Layers
- Creating a Loss Function: an Optimizer for Model 0
- Creating a Function to Time Our Modelling Code
- Writing Training and Testing Loops for Our Batched Data
- Writing an Evaluation Function to Get Our Models Results
- Setup Device-Agnostic Code for Running Experiments on the GPU
- Model 1: Creating a Model with Non-Linear Functions
- Model 1: Creating a Loss Function and Optimizer
- Turning Our Training Loop into a Function
- Turning Our Testing Loop into a Function
- Training and Testing Model 1 with Our Training and Testing Functions
- Getting a Results Dictionary for Model 1
- Model 2: Convolutional Neural Networks High Level Overview
- Model 2: Coding Our First Convolutional Neural Network with PyTorch
- Model 2: Breaking Down Conv2D Step by Step
- Model 2: Breaking Down MaxPool2D Step by Step
- Model 2: Using a Trick to Find the Input and Output Shapes of Each of Our Layers
- Model 2: Setting Up a Loss Function and Optimizer
- Model 2: Training Our First CNN and Evaluating Its Results
- Comparing the Results of Our Modelling Experiments
- Making Predictions on Random Test Samples with the Best Trained Model
- Plotting Our Best Model Predictions on Random Test Samples and Evaluating Them
- Making Predictions and Importing Libraries to Plot a Confusion Matrix
- Evaluating Our Best Models Predictions with a Confusion Matrix
- Saving and Loading Our Best Performing Model
- Recapping What We Have Covered Plus Exercises and Extra-Curriculum
- PyTorch Custom Datasets
- What Is a Custom Dataset and What We Are Going to Cover
- Importing PyTorch and Setting Up Device Agnostic Code
- Downloading a Custom Dataset of Pizza, Steak and Sushi Images
- Becoming One With the Data (Part 1): Exploring the Data Format
- Becoming One With the Data (Part 2): Visualizing a Random Image
- Becoming One With the Data (Part 3): Visualizing a Random Image with Matplotlib
- Transforming Data (Part 1): Turning Images Into Tensors
- Transforming Data (Part 2): Visualizing Transformed Images
- Loading All of Our Images and Turning Them Into Tensors With ImageFolder
- Visualizing a Loaded Image From the Train Dataset
- Turning Our Image Datasets into PyTorch Dataloaders
- Creating a Custom Dataset Class in PyTorch High Level Overview
- Creating a Helper Function to Get Class Names From a Directory
- Writing a PyTorch Custom Dataset Class from Scratch to Load Our Images
- Comparing Our Custom Dataset Class to the Original ImageFolder Class
- Writing a Helper Function to Visualize Random Images from Our Custom Dataset
- Turning Our Custom Datasets Into DataLoaders
- Exploring State of the Art Data Augmentation With Torchvision Transforms
- Building a Baseline Model (Part 1): Loading and Transforming Data
- Building a Baseline Model (Part 2): Replicating Tiny VGG from Scratch
- Building a Baseline Model (Part 3): Doing a Forward Pass to Test Our Model Shapes
- Using the Torchinfo Package to Get a Summary of Our Model
- Creating Training and Testing Loop Functions
- Creating a Train Function to Train and Evaluate Our Models
- Training and Evaluating Model 0 With Our Training Functions
- Plotting the Loss Curves of Model 0
- The Balance Between Overfitting and Underfitting and How to Deal With Each
- Creating Augmented Training Datasets and DataLoaders for Model 1
- Constructing and Training Model 1
- Plotting the Loss Curves of Model 1
- Plotting the Loss Curves of All of Our Models Against Each Other
- Predicting on Custom Data (Part 1): Downloading an Image
- Predicting on Custom Data (Part 2): Loading In a Custom Image With PyTorch
- Predicting on Custom Data (Part 3): Getting Our Custom Image Into the Right Format
- Predicting on Custom Data (Part 4): Turning Our Model's Raw Outputs Into Predictions
- Predicting on Custom Data (Part 5): Putting It All Together
- Summary of What We Have Covered Plus Exercises and Extra-Curriculum
- PyTorch Going Modular
- What Is Going Modular and What We Are Going to Cover
- Going Modular Notebook (Part 1): Running It End to End
- Downloading a Dataset
- Writing the Outline for Our First Python Script to Setup the Data
- Creating a Python Script to Create Our PyTorch DataLoaders
- Turning Our Model Building Code into a Python Script
- Turning Our Model Training Code into a Python Script
- Turning Our Utility Function to Save a Model into a Python Script
- Creating a Training Script to Train Our Model in One Line of Code
- Going Modular: Summary, Exercises and Extra-Curriculum
- PyTorch Transfer Learning
- Introduction: What is Transfer Learning and Why Use It
- Where Can You Find Pretrained Models and What We Are Going to Cover
- Installing the Latest Versions of Torch and Torchvision
- Downloading Our Previously Written Code from Going Modular
- Downloading Pizza, Steak, Sushi Image Data from Github
- Turning Our Data into DataLoaders with Manually Created Transforms
- Turning Our Data into DataLoaders with Automatically Created Transforms
- Which Pretrained Model Should You Use
- Setting Up a Pretrained Model with Torchvision
- Different Kinds of Transfer Learning
- Getting a Summary of the Different Layers of Our Model
- Freezing the Base Layers of Our Model and Updating the Classifier Head
Machine Learning & Neural Networks for Computer Vision, Time Series Analysis, NLP, GANs, Reinforcement Learning, +More!
Ratings | Subscribers | Subscribers last month (February 2024) | Level | Video Duration | Created | Last updated | Price |
---|---|---|---|---|---|---|---|
4.76 | 52,210 | 676 | all | 23 hours 52 minutes | Jun 26th, 2019 | Feb 2nd, 2024 | $129.99 |
Ever wondered how AI technologies like OpenAI ChatGPT, GPT-4, DALL-E, Midjourney, and Stable Diffusion really work? In this course, you will learn the foundations of these groundbreaking applications.
Welcome to Tensorflow 2.0!
What an exciting time. It's been nearly 4 years since Tensorflow was released, and the library has evolved to its official second version.
Tensorflow is Google's library for deep learning and artificial intelligence.
Deep Learning has been responsible for some amazing achievements recently, such as:
Generating beautiful, photo-realistic images of people and things that never existed (GANs)
Beating world champions in the strategy game Go, and complex video games like CS:GO and Dota 2 (Deep Reinforcement Learning)
Self-driving cars (Computer Vision)
Speech recognition (e.g. Siri) and machine translation (Natural Language Processing)
Even creating videos of people doing and saying things they never did (DeepFakes - a potentially nefarious application of deep learning)
Tensorflow is the world's most popular library for deep learning, and it's built by Google, whose parent Alphabet recently became the most cash-rich company in the world (just a few days before I wrote this). It is the library of choice for many companies doing AI and machine learning.
In other words, if you want to do deep learning, you gotta know Tensorflow.
This course is for beginner-level students all the way up to expert-level students. How can this be?
If you've just taken my free Numpy prerequisite, then you know everything you need to jump right in. We will start with some very basic machine learning models and advance to state of the art concepts.
Along the way, you will learn about all of the major deep learning architectures, such as Deep Neural Networks, Convolutional Neural Networks (image processing), and Recurrent Neural Networks (sequence data).
Current projects include:
Natural Language Processing (NLP)
Recommender Systems
Transfer Learning for Computer Vision
Generative Adversarial Networks (GANs)
Deep Reinforcement Learning Stock Trading Bot
Even if you've taken all of my previous courses already, you will still learn how to convert your previous code to use Tensorflow 2.0, and there are all-new, never-before-seen projects in this course such as time series forecasting and stock prediction.
This course is designed for students who want to learn fast, but there are also "in-depth" sections in case you want to dig a little deeper into the theory (like what is a loss function, and what are the different types of gradient descent approaches).
Advanced Tensorflow topics include:
Deploying a model with Tensorflow Serving (Tensorflow in the cloud)
Deploying a model with Tensorflow Lite (mobile and embedded applications)
Distributed Tensorflow training with Distribution Strategies
Writing your own custom Tensorflow model
Converting Tensorflow 1.x code to Tensorflow 2.0
Constants, Variables, and Tensors
Eager execution
Gradient tape
Instructor's Note: This course focuses on breadth rather than depth, with less theory in favor of building more cool stuff. If you are looking for a more theory-dense course, this is not it. Generally, for each of these topics (recommender systems, natural language processing, reinforcement learning, computer vision, GANs, etc.) I already have courses singularly focused on those topics.
Thanks for reading, and I’ll see you in class!
WHAT ORDER SHOULD I TAKE YOUR COURSES IN?:
Check out the lecture "Machine Learning and AI Prerequisite Roadmap" (available in the FAQ of any of my courses, including the free Numpy course)
UNIQUE FEATURES
Every line of code explained in detail - email me any time if you disagree
No wasted time "typing" on the keyboard like other courses - let's be honest, nobody can really write code worth learning about in just 20 minutes from scratch
Not afraid of university-level math - get important details about algorithms that other courses leave out
- Welcome
- Introduction
- Outline
- Get Your Hands Dirty, Practical Coding Experience, Data Links
- Where to get the code, notebooks, and data
- Google Colab
- Intro to Google Colab, how to use a GPU or TPU for free
- Tensorflow 2.0 in Google Colab
- Uploading your own data to Google Colab
- Where can I learn about Numpy, Scipy, Matplotlib, Pandas, and Scikit-Learn?
- How to Succeed in This Course
- Temporary 403 Errors
- Machine Learning and Neurons
- What is Machine Learning?
- Code Preparation (Classification Theory)
- Classification Notebook
- Code Preparation (Regression Theory)
- Regression Notebook
- The Neuron
- How does a model "learn"?
- Making Predictions
- Saving and Loading a Model
- Why Keras?
- Suggestion Box
- Feedforward Artificial Neural Networks
- Artificial Neural Networks Section Introduction
- Beginners Rejoice: The Math in This Course is Optional
- Forward Propagation
- The Geometrical Picture
- Activation Functions
- Multiclass Classification
- How to Represent Images
- Color Mixing Clarification
- Code Preparation (ANN)
- ANN for Image Classification
- ANN for Regression
- Convolutional Neural Networks
- What is Convolution? (part 1)
- What is Convolution? (part 2)
- What is Convolution? (part 3)
- Convolution on Color Images
- CNN Architecture
- CNN Code Preparation
- CNN for Fashion MNIST
- CNN for CIFAR-10
- Data Augmentation
- Batch Normalization
- Improving CIFAR-10 Results
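To give a flavor of what the convolution lectures in this section cover, here is a minimal plain-Python sketch of a "valid" 2D convolution (technically cross-correlation, as in most deep learning libraries). The image and edge-detecting kernel below are our own illustrative examples, not from the course:

```python
def conv2d(image, kernel):
    """'Valid' 2D convolution: slide the kernel over the image, no padding."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Elementwise multiply the kernel with the patch under it, then sum.
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge kernel responds strongly where the left half meets the right.
img = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [1, 1, 0, 0]]
edge = [[1, -1],
        [1, -1]]
fmap = conv2d(img, edge)  # the feature map peaks at the vertical edge
```

The course builds the same operation with PyTorch/Tensorflow layers, which add stride, padding, and learned kernels on top of this core idea.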
- Recurrent Neural Networks, Time Series, and Sequence Data
- Sequence Data
- Forecasting
- Autoregressive Linear Model for Time Series Prediction
- Proof that the Linear Model Works
- Recurrent Neural Networks
- RNN Code Preparation
- RNN for Time Series Prediction
- Paying Attention to Shapes
- GRU and LSTM (pt 1)
- GRU and LSTM (pt 2)
- A More Challenging Sequence
- Demo of the Long Distance Problem
- RNN for Image Classification (Theory)
- RNN for Image Classification (Code)
- Stock Return Predictions using LSTMs (pt 1)
- Stock Return Predictions using LSTMs (pt 2)
- Stock Return Predictions using LSTMs (pt 3)
- Other Ways to Forecast
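The "Autoregressive Linear Model" lecture above refers to a classic baseline for forecasting: predict the next value as a linear function of the previous one. Here is a small stdlib-Python sketch of fitting an AR(1) model by least squares (the toy series is ours, not the course's):

```python
def fit_ar1(series):
    """Fit x[t] = a * x[t-1] + b by ordinary least squares."""
    xs = series[:-1]  # predictors: previous values
    ys = series[1:]   # targets: next values
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

# A series generated exactly by x[t] = 0.5 * x[t-1] + 1 is recovered perfectly.
series = [0.0]
for _ in range(20):
    series.append(0.5 * series[-1] + 1.0)
a, b = fit_ar1(series)
```

The point of the corresponding lectures is that this simple linear baseline is surprisingly hard to beat, and RNNs must be compared against it.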
- Natural Language Processing (NLP)
- Embeddings
- Code Preparation (NLP)
- Text Preprocessing
- Text Classification with LSTMs
- CNNs for Text
- Text Classification with CNNs
- Recommender Systems
- Recommender Systems with Deep Learning Theory
- Recommender Systems with Deep Learning Code
- Transfer Learning for Computer Vision
- Transfer Learning Theory
- Some Pre-trained Models (VGG, ResNet, Inception, MobileNet)
- Large Datasets and Data Generators
- 2 Approaches to Transfer Learning
- Transfer Learning Code (pt 1)
- Transfer Learning Code (pt 2)
- GANs (Generative Adversarial Networks)
- GAN Theory
- GAN Code
- Deep Reinforcement Learning (Theory)
- Deep Reinforcement Learning Section Introduction
- Elements of a Reinforcement Learning Problem
- States, Actions, Rewards, Policies
- Markov Decision Processes (MDPs)
- The Return
- Value Functions and the Bellman Equation
- What does it mean to “learn”?
- Solving the Bellman Equation with Reinforcement Learning (pt 1)
- Solving the Bellman Equation with Reinforcement Learning (pt 2)
- Epsilon-Greedy
- Q-Learning
- Deep Q-Learning / DQN (pt 1)
- Deep Q-Learning / DQN (pt 2)
- How to Learn Reinforcement Learning
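The "Epsilon-Greedy" lecture above covers the standard exploration strategy in reinforcement learning: with probability epsilon pick a random action, otherwise pick the action with the highest estimated value. A minimal sketch (Q-values below are made up for illustration):

```python
import random

def epsilon_greedy(q_values, epsilon, rng):
    """With probability epsilon explore (random action); otherwise exploit (argmax Q)."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

rng = random.Random(0)
q = [0.1, 0.9, 0.3]
# With epsilon = 0 the greedy action (index 1) is always chosen.
greedy_picks = [epsilon_greedy(q, 0.0, rng) for _ in range(10)]
# With epsilon = 1 every action gets sampled eventually.
explore_picks = {epsilon_greedy(q, 1.0, rng) for _ in range(100)}
```

In Q-learning, epsilon is typically decayed over training so the agent explores early and exploits late.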
- Stock Trading Project with Deep Reinforcement Learning
- Reinforcement Learning Stock Trader Introduction
- Data and Environment
- Replay Buffer
- Program Design and Layout
- Code pt 1
- Code pt 2
- Code pt 3
- Code pt 4
- Reinforcement Learning Stock Trader Discussion
- Help! Why is the code slower on my machine?
- Advanced Tensorflow Usage
- What is a Web Service? (Tensorflow Serving pt 1)
- Tensorflow Serving pt 2
- Tensorflow Lite (TFLite)
- Why is Google the King of Distributed Computing?
- Training with Distributed Strategies
- Using the TPU
- Low-Level Tensorflow
- Differences Between Tensorflow 1.x and Tensorflow 2.x
- Constants and Basic Computation
- Variables and Gradient Tape
- Build Your Own Custom Model
- In-Depth: Loss Functions
- Mean Squared Error
- Binary Cross Entropy
- Categorical Cross Entropy
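As a taste of the "In-Depth: Loss Functions" lectures, here is a stdlib-Python sketch of two of the losses listed above, mean squared error and binary cross-entropy (the example targets and predictions are illustrative values of our own):

```python
import math

def mse(y_true, y_pred):
    """Mean squared error, the standard regression loss."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """BCE for 0/1 targets; eps clips predictions to avoid log(0)."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

loss_mse = mse([1.0, 2.0], [1.0, 3.0])            # (0 + 1) / 2 = 0.5
loss_bce = binary_cross_entropy([1, 0], [0.9, 0.1])
```

The course derives where these formulas come from and when to prefer each one.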
- In-Depth: Gradient Descent
- Gradient Descent
- Stochastic Gradient Descent
- Momentum
- Variable and Adaptive Learning Rates
- Adam (pt 1)
- Adam (pt 2)
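The "SGD with momentum" idea from this section can be sketched in a few lines: keep a velocity that accumulates past gradients, so steps build up speed along consistent directions. This toy quadratic objective and the hyperparameter values are our own illustration, not the course's code:

```python
def sgd_momentum(grad, x0, lr=0.1, beta=0.9, steps=200):
    """Gradient descent with a velocity term that accumulates past gradients."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v + grad(x)  # exponential moving sum of gradients
        x = x - lr * v
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = sgd_momentum(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

Adam, covered in the two lectures above, extends this by also tracking a moving average of squared gradients to adapt the step size per parameter.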
- Course Conclusion
- How to get the Tensorflow Developer Certificate
- What to Learn Next
- Extras
- How to Choose Hyperparameters
- Get the Exercise Pack for This Course
- Setting up your Environment (FAQ by Student Request)
- Pre-Installation Check
- How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow
- Anaconda Environment Setup
- Installing NVIDIA GPU-Accelerated Deep Learning Libraries on your Home Computer
- Extra Help With Python Coding for Beginners (FAQ by Student Request)
- How to use Github & Extra Coding Tips (Optional)
- Beginner's Coding Tips
- How to Code Yourself (part 1)
- How to Code Yourself (part 2)
- Proof that using Jupyter Notebook is the same as not using it
- Is Theano Dead?
- Effective Learning Strategies for Machine Learning (FAQ by Student Request)
- How to Succeed in this Course (Long Version)
- Is this for Beginners or Experts? Academic or Practical? Fast or slow-paced?
- Machine Learning and AI Prerequisite Roadmap (pt 1)
- Machine Learning and AI Prerequisite Roadmap (pt 2)
- Common Beginner Questions: What if I'm "advanced"?
- Appendix / FAQ Finale
- What is the Appendix?
- BONUS
Master deep learning in PyTorch using an experimental scientific approach, with lots of examples and practice problems.
Ratings | Subscribers | Subscribers last month (February 2024) | Level | Video Duration | Created | Last updated | Price |
---|---|---|---|---|---|---|---|
4.69 | 29,462 | 1,207 | beginner | 57 hours 18 minutes | Aug 4th, 2021 | Jan 31st, 2024 | $109.99 |
Deep learning is increasingly dominating technology and has major implications for society.
From self-driving cars to medical diagnoses, from face recognition to deep fakes, and from language translation to music generation, deep learning is spreading like wildfire throughout all areas of modern technology.
But deep learning is not only about super-fancy, cutting-edge, highly sophisticated applications. Deep learning is increasingly becoming a standard tool in machine-learning, data science, and statistics. Deep learning is used by small startups for data mining and dimension reduction, by governments for detecting tax evasion, and by scientists for detecting patterns in their research data.
Deep learning is now used in most areas of technology, business, and entertainment. And it's becoming more important every year.
How does deep learning work?
Deep learning is built on a really simple principle: Take a super-simple algorithm (weighted sum and nonlinearity), and repeat it many many times until the result is an incredibly complex and sophisticated learned representation of the data.
Is it really that simple? mmm OK, it's actually a tiny bit more complicated than that ;) but that's the core idea, and everything else -- literally everything else in deep learning -- is just clever ways of putting together these fundamental building blocks. That doesn't mean deep neural networks are trivial to understand: there are important architectural differences between feedforward networks, convolutional networks, and recurrent networks.
Given the diversity of deep learning model designs, parameters, and applications, you can only learn deep learning -- I mean, really learn deep learning, not just have superficial knowledge from a youtube video -- by having an experienced teacher guide you through the math, implementations, and reasoning. And of course, you need to have lots of hands-on examples and practice problems to work through. Deep learning is basically just applied math, and, as everyone knows, math is not a spectator sport!
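The "weighted sum and nonlinearity" principle the instructor describes can be made concrete in a few lines of plain Python (this sketch and its weight values are our own illustration, not the course's code):

```python
import math

def neuron(inputs, weights, bias):
    """One unit: a weighted sum of inputs followed by a nonlinearity (sigmoid)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weight_rows, biases):
    """A layer simply applies many neurons to the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Stacking layers repeats the same simple operation over and over.
x = [0.5, -1.0]
h = layer(x, [[1.0, 2.0], [-1.0, 0.5]], [0.0, 0.1])  # hidden layer, 2 units
y = neuron(h, [1.0, -1.0], 0.0)                      # output unit
```

Deep learning libraries like PyTorch do exactly this, but vectorized, on GPUs, and with the weights learned from data rather than chosen by hand.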
What is this course all about?
Simply put: The purpose of this course is to provide a deep-dive into deep learning. You will gain flexible, fundamental, and lasting expertise on deep learning. You will have a deep understanding of the fundamental concepts in deep learning, so that you will be able to learn new topics and trends that emerge in the future.
Please note: This is not a course for someone who wants a quick overview of deep learning with a few solved examples. Instead, this course is designed for people who really want to understand how and why deep learning works; when and how to select metaparameters like optimizers, normalizations, and learning rates; how to evaluate the performance of deep neural network models; and how to modify and adapt existing models to solve new problems.
You can learn everything about deep learning in this course.
In this course, you will learn
Theory: Why are deep learning models built the way they are?
Math: What are the formulas and mechanisms of deep learning?
Implementation: How are deep learning models actually constructed in Python (using the PyTorch library)?
Intuition: Why is this or that metaparameter the right choice? How to interpret the effects of regularization? etc.
Python: If you're completely new to Python, go through the 8+ hour coding tutorial appendix. If you're already a knowledgeable coder, then you'll still learn some new tricks and code optimizations.
Google-colab: Colab is an amazing online tool for running Python code, simulations, and heavy computations using Google's cloud services. No need to install anything on your computer.
Unique aspects of this course
Clear and comprehensible explanations of concepts in deep learning, including transfer learning, generative modeling, convolutional neural networks, feedforward networks, generative adversarial networks (GAN), and more.
Several distinct explanations of the same ideas, which is a proven technique for learning.
Visualizations using graphs, numbers, and spaces that provide intuition of artificial neural networks.
LOTS of exercises, projects, code-challenges, suggestions for exploring the code. You learn best by doing it yourself!
Active Q&A forum where you can ask questions, get feedback, and contribute to the community.
8+ hour Python tutorial. That means you don't need to master Python before enrolling in this course.
So what are you waiting for??
Watch the course introductory video and free sample videos to learn more about the contents of this course and about my teaching style. If you are unsure whether this course is right for you, feel free to contact me with questions before you sign up.
I hope to see you soon in the course!
Mike
- Introduction
- How to learn from this course
- Using Udemy like a pro
- Download all course materials
- Downloading and using the code
- My policy on code-sharing
- Concepts in deep learning
- What is an artificial neural network?
- How models "learn"
- The role of DL in science and knowledge
- Running experiments to understand DL
- Are artificial "neurons" like biological neurons?
- About the Python tutorial
- Should you watch the Python tutorial?
- Math, numpy, PyTorch
- PyTorch or TensorFlow?
- Introduction to this section
- Spectral theories in mathematics
- Terms and datatypes in math and computers
- Converting reality to numbers
- Vector and matrix transpose
- OMG it's the dot product!
- Matrix multiplication
- Softmax
- Logarithms
- Entropy and cross-entropy
- Min/max and argmin/argmax
- Mean and variance
- Random sampling and sampling variability
- Reproducible randomness via seeding
- The t-test
- Derivatives: intuition and polynomials
- Derivatives find minima
- Derivatives: product and chain rules
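Several of the math lectures above (softmax, logarithms, entropy and cross-entropy) come together in one small computation; here is a stdlib-Python sketch of it, with made-up logit values for illustration:

```python
import math

def softmax(logits):
    """Turn raw scores into probabilities; subtracting the max avoids exp overflow."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(probs, target_index):
    """Negative log probability assigned to the correct class."""
    return -math.log(probs[target_index])

p = softmax([2.0, 1.0, 0.1])  # three-class scores -> probabilities
loss = cross_entropy(p, 0)    # small loss when the correct class gets high probability
```

This softmax-plus-cross-entropy pair is the standard output layer and loss for classification throughout the rest of the course.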
- Gradient descent
- Overview of gradient descent
- What about local minima?
- Gradient descent in 1D
- CodeChallenge: unfortunate starting value
- Gradient descent in 2D
- CodeChallenge: 2D gradient ascent
- Parametric experiments on g.d.
- CodeChallenge: fixed vs. dynamic learning rate
- Vanishing and exploding gradients
- Tangent: Notebook revision history
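The gradient descent section above, including the "fixed vs. dynamic learning rate" CodeChallenge, boils down to one loop. This toy example on f(x) = x² (our own illustration) shows why the learning rate matters so much:

```python
def gradient_descent(grad, x0, lr, steps):
    """Repeatedly step downhill along the negative gradient."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# f(x) = x^2 has gradient 2x and a minimum at 0.
# A small learning rate converges toward the minimum...
good = gradient_descent(lambda x: 2 * x, x0=5.0, lr=0.1, steps=50)
# ...while too large a rate (here lr > 1 for this curvature) overshoots and diverges.
bad = gradient_descent(lambda x: 2 * x, x0=5.0, lr=1.1, steps=50)
```

The course's parametric experiments run exactly this kind of sweep over learning rates and starting values, just with PyTorch tensors.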
- ANNs (Artificial Neural Networks)
- The perceptron and ANN architecture
- A geometric view of ANNs
- ANN math part 1 (forward prop)
- ANN math part 2 (errors, loss, cost)
- ANN math part 3 (backprop)
- ANN for regression
- CodeChallenge: manipulate regression slopes
- ANN for classifying qwerties
- Learning rates comparison
- Multilayer ANN
- Linear solutions to linear problems
- Why multilayer linear models don't exist
- Multi-output ANN (iris dataset)
- CodeChallenge: more qwerties!
- Comparing the number of hidden units
- Depth vs. breadth: number of parameters
- Defining models using sequential vs. class
- Model depth vs. breadth
- CodeChallenge: convert sequential to class
- Diversity of ANN visual representations
- Reflection: Are DL models understandable yet?
- Overfitting and cross-validation
- What is overfitting and is it as bad as they say?
- Cross-validation
- Generalization
- Cross-validation -- manual separation
- Cross-validation -- scikitlearn
- Cross-validation -- DataLoader
- Splitting data into train, devset, test
- Cross-validation on regression
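The "manual separation" and "train, devset, test" lectures above cover a simple but essential recipe: shuffle once, then carve the indices into disjoint partitions. A stdlib-Python sketch (the fractions and seed are illustrative defaults of our own):

```python
import random

def split_indices(n, fracs=(0.8, 0.1, 0.1), seed=42):
    """Shuffle indices once, then carve out train / devset / test partitions."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = int(fracs[0] * n)
    n_dev = int(fracs[1] * n)
    return idx[:n_train], idx[n_train:n_train + n_dev], idx[n_train + n_dev:]

train, dev, test = split_indices(100)
```

The course then shows the same split done with scikit-learn and with PyTorch's DataLoader, which is what you would use in practice.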
- Regularization
- Regularization: Concept and methods
- train() and eval() modes
- Dropout regularization
- Dropout regularization in practice
- Dropout example 2
- Weight regularization (L1/L2): math
- L2 regularization in practice
- L1 regularization in practice
- Training in mini-batches
- Batch training in action
- The importance of equal batch sizes
- CodeChallenge: Effects of mini-batch size
- Metaparameters (activations, optimizers)
- What are "metaparameters"?
- The "wine quality" dataset
- CodeChallenge: Minibatch size in the wine dataset
- Data normalization
- The importance of data normalization
- Batch normalization
- Batch normalization in practice
- CodeChallenge: Batch-normalize the qwerties
- Activation functions
- Activation functions in PyTorch
- Activation functions comparison
- CodeChallenge: Compare relu variants
- CodeChallenge: Predict sugar
- Loss functions
- Loss functions in PyTorch
- More practice with multioutput ANNs
- Optimizers (minibatch, momentum)
- SGD with momentum
- Optimizers (RMSprop, Adam)
- Optimizers comparison
- CodeChallenge: Optimizers and... something
- CodeChallenge: Adam with L2 regularization
- Learning rate decay
- How to pick the right metaparameters
- FFNs (Feed-Forward Networks)
- What are fully-connected and feedforward networks?
- The MNIST dataset
- FFN to classify digits
- CodeChallenge: Binarized MNIST images
- CodeChallenge: Data normalization
- Distributions of weights pre- and post-learning
- CodeChallenge: MNIST and breadth vs. depth
- CodeChallenge: Optimizers and MNIST
- Scrambled MNIST
- Shifted MNIST
- CodeChallenge: The mystery of the missing 7
- Universal approximation theorem
- More on data
- Anatomy of a torch dataset and dataloader
- Data size and network size
- CodeChallenge: unbalanced data
- What to do about unbalanced designs?
- Data oversampling in MNIST
- Data noise augmentation (with devset+test)
- Data feature augmentation
- Getting data into colab
- Save and load trained models
- Save the best-performing model
- Where to find online datasets
- Measuring model performance
- Two perspectives of the world
- Accuracy, precision, recall, F1
- APRF in code
- APRF example 1: wine quality
- APRF example 2: MNIST
- CodeChallenge: MNIST with unequal groups
- Computation time
- Better performance in test than train?
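The "Accuracy, precision, recall, F1" (APRF) lectures above reduce to counting the four outcomes of a binary prediction. A stdlib-Python sketch, with made-up labels for illustration:

```python
def aprf(y_true, y_pred):
    """Accuracy, precision, recall, F1 for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = aprf([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

The unequal-groups CodeChallenge in this section shows why accuracy alone is misleading on imbalanced data, which is exactly when precision, recall, and F1 earn their keep.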
- FFN milestone projects
- Project 1: A gratuitously complex adding machine
- Project 1: My solution
- Project 2: Predicting heart disease
- Project 2: My solution
- Project 3: FFN for missing data interpolation
- Project 3: My solution
- Weight inits and investigations
- Explanation of weight matrix sizes
- A surprising demo of weight initializations
- Theory: Why and how to initialize weights
- CodeChallenge: Weight variance inits
- Xavier and Kaiming initializations
- CodeChallenge: Xavier vs. Kaiming
- CodeChallenge: Identically random weights
- Freezing weights during learning
- Learning-related changes in weights
- Use default inits or apply your own?
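The Xavier (Glorot) initialization covered above draws weights from a range scaled by layer size, so activations neither explode nor vanish at the start of training. A stdlib-Python sketch of the uniform variant (the layer sizes below are illustrative):

```python
import math
import random

def xavier_uniform(n_in, n_out, rng):
    """Xavier/Glorot uniform init: bound = sqrt(6 / (n_in + n_out))."""
    bound = math.sqrt(6.0 / (n_in + n_out))
    return [[rng.uniform(-bound, bound) for _ in range(n_out)]
            for _ in range(n_in)]

rng = random.Random(0)
W = xavier_uniform(256, 128, rng)
bound = math.sqrt(6.0 / (256 + 128))
```

PyTorch provides this as `torch.nn.init.xavier_uniform_`, and the Kaiming variant (for ReLU networks) as `kaiming_uniform_`; the course compares both experimentally.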
- Autoencoders
- What are autoencoders and what do they do?
- Denoising MNIST
- CodeChallenge: How many units?
- AEs for occlusion
- The latent code of MNIST
- Autoencoder with tied weights
- Running models on a GPU
- What is a GPU and why use it?
- Implementation
- CodeChallenge: Run an experiment on the GPU
- Convolution and transformations
- Convolution: concepts
- Feature maps and convolution kernels
- Convolution in code
- Convolution parameters (stride, padding)
- The Conv2 class in PyTorch
- CodeChallenge: Choose the parameters
- Transpose convolution
- Max/mean pooling
- Pooling in PyTorch
- To pool or to stride?
- Image transforms
- Creating and using custom DataLoaders
- Understand and design CNNs
- The canonical CNN architecture
- CNN to classify MNIST digits
- CNN on shifted MNIST
- Classify Gaussian blurs
- Examine feature map activations
- CodeChallenge: Softcode internal parameters
- CodeChallenge: How wide the FC?
- Do autoencoders clean Gaussians?
- CodeChallenge: AEs and occluded Gaussians
Learn to create Deep Learning models in Python from two Machine Learning, Data Science experts. Code templates included.
Ratings | Subscribers | Subscribers last month (February 2024) | Level | Video Duration | Created | Last updated | Price |
---|---|---|---|---|---|---|---|
4.55 | 375,059 | 1,754 | all | 22 hours 27 minutes | Mar 20th, 2017 | Feb 5th, 2024 | $99.99 |
*** As seen on Kickstarter ***
Artificial intelligence is growing exponentially. There is no doubt about that. Self-driving cars are clocking up millions of miles, IBM Watson is diagnosing patients better than armies of doctors and Google Deepmind's AlphaGo beat the World champion at Go - a game where intuition plays a key role.
But the further AI advances, the more complex the problems it needs to solve become. Only Deep Learning can solve such complex problems, which is why it's at the heart of Artificial Intelligence.
--- Why Deep Learning A-Z? ---
Here are five reasons we think Deep Learning A-Z really is different, and stands out from the crowd of other training programs out there:
1. ROBUST STRUCTURE
The first and most important thing we focused on is giving the course a robust structure. Deep Learning is very broad and complex and to navigate this maze you need a clear and global vision of it.
That's why we grouped the tutorials into two volumes, representing the two fundamental branches of Deep Learning: Supervised Deep Learning and Unsupervised Deep Learning. With each volume focusing on three distinct algorithms, we found that this is the best structure for mastering Deep Learning.
2. INTUITION TUTORIALS
So many courses and books just bombard you with the theory, and math, and coding... But they forget to explain, perhaps, the most important part: why you are doing what you are doing. And that's how this course is so different. We focus on developing an intuitive *feel* for the concepts behind Deep Learning algorithms.
With our intuition tutorials you will be confident that you understand all the techniques on an instinctive level. And once you proceed to the hands-on coding exercises you will see for yourself how much more meaningful your experience will be. This is a game-changer.
3. EXCITING PROJECTS
Are you tired of courses based on over-used, outdated data sets?
Yes? Well then you're in for a treat.
Inside this class we will work on Real-World datasets, to solve Real-World business problems. (Definitely not the boring iris or digit classification datasets that we see in every course). In this course we will solve six real-world challenges:
Artificial Neural Networks to solve a Customer Churn problem
Convolutional Neural Networks for Image Recognition
Recurrent Neural Networks to predict Stock Prices
Self-Organizing Maps to investigate Fraud
Boltzmann Machines to create a Recommender System
Stacked Autoencoders* to take on the challenge of the Netflix $1 Million prize
*Stacked Autoencoders is a brand new technique in Deep Learning which didn't even exist a couple of years ago. We haven't seen this method explained anywhere else in sufficient depth.
4. HANDS-ON CODING
In Deep Learning A-Z we code together with you. Every practical tutorial starts with a blank page and we write up the code from scratch. This way you can follow along and understand exactly how the code comes together and what each line means.
In addition, we will purposefully structure the code in such a way so that you can download it and apply it in your own projects. Moreover, we explain step-by-step where and how to modify the code to insert YOUR dataset, to tailor the algorithm to your needs, to get the output that you are after.
This is a course which naturally extends into your career.
5. IN-COURSE SUPPORT
Have you ever taken a course or read a book where you have questions but cannot reach the author?
Well, this course is different. We are fully committed to making this the most disruptive and powerful Deep Learning course on the planet. With that comes a responsibility to constantly be there when you need our help.
In fact, since we physically also need to eat and sleep we have put together a team of professional Data Scientists to help us out. Whenever you ask a question you will get a response from us within 48 hours maximum.
No matter how complex your query, we will be there. The bottom line is we want you to succeed.
--- The Tools ---
Tensorflow and Pytorch are the two most popular open-source libraries for Deep Learning. In this course you will learn both!
TensorFlow was developed by Google and is used in their speech recognition system, in the new google photos product, gmail, google search and much more. Companies using Tensorflow include AirBnb, Airbus, Ebay, Intel, Uber and dozens more.
PyTorch is just as powerful and is being developed by researchers at Nvidia and leading universities: Stanford, Oxford, ParisTech. Companies using PyTorch include Twitter, Salesforce and Facebook.
So which is better and for what?
Well, in this course you will have an opportunity to work with both and understand when Tensorflow is better and when PyTorch is the way to go. Throughout the tutorials we compare the two and give you tips and ideas on which could work best in certain circumstances.
The interesting thing is that both these libraries are barely over 1 year old. That's what we mean when we say that in this course we teach you the most cutting edge Deep Learning models and techniques.
--- More Tools ---
Theano is another open source deep learning library. It's very similar to Tensorflow in its functionality, but nevertheless we will still cover it.
Keras is an incredible library to implement Deep Learning models. It acts as a wrapper for Theano and Tensorflow. Thanks to Keras we can create powerful and complex Deep Learning models with only a few lines of code. This is what will allow you to have a global vision of what you are creating. Everything you make will look so clear and structured thanks to this library, that you will really get the intuition and understanding of what you are doing.
--- Even More Tools ---
Scikit-learn is the most practical Machine Learning library. We will mainly use it:
to evaluate the performance of our models with the most relevant technique, k-Fold Cross Validation
to improve our models with effective Parameter Tuning
to preprocess our data, so that our models can learn in the best conditions
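The k-Fold Cross Validation mentioned above (scikit-learn provides it as `sklearn.model_selection.KFold`) is easy to sketch by hand: split the indices into k folds and let each fold serve once as the validation set. A minimal stdlib-Python illustration (the strided fold assignment is one simple choice, not scikit-learn's exact scheme):

```python
def k_fold_indices(n, k):
    """Split range(n) into k folds; each fold serves once as the validation set."""
    folds = [list(range(i, n, k)) for i in range(k)]  # fold i gets every k-th index
    splits = []
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        splits.append((train, val))
    return splits

splits = k_fold_indices(10, 5)  # 5 (train, validation) pairs over 10 samples
```

Averaging the model's score over all k validation folds gives a far more reliable performance estimate than a single train/test split.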
And of course, we have to mention the usual suspects. This whole course is based on Python and in every single section you will be getting hours and hours of invaluable hands-on practical coding experience.
Plus, throughout the course we will be using Numpy for heavy computations and manipulating high-dimensional arrays, Matplotlib to plot insightful charts, and Pandas to import and manipulate datasets most efficiently.
--- Who Is This Course For? ---
As you can see, there are lots of different tools in the space of Deep Learning and in this course we make sure to show you the most important and most progressive ones so that when you're done with Deep Learning A-Z your skills are on the cutting edge of today's technology.
If you are just starting out into Deep Learning, then you will find this course extremely useful. Deep Learning A-Z is structured around special coding blueprint approaches meaning that you won't get bogged down in unnecessary programming or mathematical complexities and instead you will be applying Deep Learning techniques from very early on in the course. You will build your knowledge from the ground up and you will see how with every tutorial you are getting more and more confident.
If you already have experience with Deep Learning, you will find this course refreshing, inspiring and very practical. Inside Deep Learning A-Z you will master some of the most cutting-edge Deep Learning algorithms and techniques (some of which didn't even exist a year ago) and through this course you will gain an immense amount of valuable hands-on experience with real-world business challenges. Plus, inside you will find inspiration to explore new Deep Learning skills and applications.
--- Real-World Case Studies ---
Mastering Deep Learning is not just about knowing the intuition and tools, it's also about being able to apply these models to real-world scenarios and derive actual measurable results for the business or project. That's why in this course we are introducing six exciting challenges:
#1 Churn Modelling Problem
In this part you will be solving a data analytics challenge for a bank. You will be given a dataset with a large sample of the bank's customers. To make this dataset, the bank gathered information such as customer id, credit score, gender, age, tenure, balance, whether the customer is active, has a credit card, etc. During a period of 6 months, the bank observed whether these customers left or stayed with the bank.
Your goal is to make an Artificial Neural Network that can predict, based on the geo-demographical and transactional information given above, whether any individual customer will leave the bank or stay (customer churn). In addition, you are asked to rank all the customers of the bank based on their probability of leaving. To do that, you will need to use the right Deep Learning model, one that is based on a probabilistic approach.
If you succeed in this project, you will create significant added value to the bank. By applying your Deep Learning model the bank may significantly reduce customer churn.
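To see why a probabilistic output makes ranking possible, here is a minimal, hypothetical pure-Python sketch (not the course's Keras implementation) of a single logistic output neuron: the sigmoid squashes the weighted sum into a churn probability between 0 and 1, and customers can then be sorted by that probability. All feature names, weights, and values below are illustrative, not trained:

```python
import math

def churn_probability(features, weights, bias):
    """Sigmoid output of one neuron: a probability between 0 and 1."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical tiny example: [credit_score_scaled, balance_scaled, is_active]
customers = {
    "A": [0.2, 0.9, 0],
    "B": [0.8, 0.1, 1],
    "C": [0.5, 0.5, 1],
}
weights, bias = [-1.5, 2.0, -1.0], 0.1  # illustrative, untrained values

probs = {cid: churn_probability(x, weights, bias) for cid, x in customers.items()}
ranking = sorted(probs, key=probs.get, reverse=True)  # most likely to churn first
print(ranking)  # → ['A', 'C', 'B']
```

In the course the weights are learned by an ANN from the real dataset; the ranking step at the end is exactly how the bank would prioritize retention efforts.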
#2 Image Recognition
In this part, you will create a Convolutional Neural Network that is able to detect various objects in images. We will implement this Deep Learning model to recognize a cat or a dog in a set of pictures. However, this model can be reused to detect anything else and we will show you how to do it - by simply changing the pictures in the input folder.
For example, you will be able to train the same model on a set of brain images, to detect if they contain a tumor or not. But if you want to keep it fitted to cats and dogs, then you will literally be able to take a picture of your cat or your dog, and your model will predict which pet you have. We even tested it out on Hadelin’s dog!
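The core building block behind such a model is the convolution operation (covered step by step in Part 2 of the course). As a rough, hedged illustration, independent of the course's actual code, a single "valid" 2D convolution can be sketched in pure Python:

```python
def convolve2d(image, kernel):
    """Slide the kernel over the image, summing elementwise products ('valid' mode)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(
                image[i + a][j + b] * kernel[a][b]
                for a in range(kh) for b in range(kw)
            ))
        out.append(row)
    return out

# A vertical-edge detector applied to a tiny image with a bright right half
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1], [-1, 1]]  # responds where brightness increases left-to-right
feature_map = convolve2d(image, kernel)
print(feature_map)  # → [[0, 2, 0], [0, 2, 0]]
```

The feature map lights up exactly at the edge between the dark and bright halves; a CNN learns many such kernels automatically instead of hand-crafting them.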
#3 Stock Price Prediction
In this part, you will create one of the most powerful Deep Learning models. We will even go as far as saying that you will create the Deep Learning model closest to “Artificial Intelligence”. Why is that? Because this model will have long-term memory, just like us, humans.
The branch of Deep Learning which facilitates this is Recurrent Neural Networks. Classic RNNs have short memory, and were neither popular nor powerful for this exact reason. But a major improvement in Recurrent Neural Networks gave rise to the popularity of LSTMs (Long Short Term Memory RNNs), which have completely changed the playing field. We are extremely excited to include these cutting-edge deep learning methods in our course!
In this part you will learn how to implement this ultra-powerful model, and we will take on the challenge of using it to predict the real Google stock price. A similar challenge has already been faced by researchers at Stanford University, and we will aim to do at least as well as they did.
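What gives an LSTM its long-term memory is the gated cell state. As a hedged, scalar-sized sketch (the course builds the real model with a deep learning framework; all weights here are illustrative and untrained), one LSTM cell step looks like this:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, p):
    """One LSTM cell step for scalar inputs; p maps each gate to (wx, wh, b)."""
    f = sigmoid(p["f"][0] * x + p["f"][1] * h_prev + p["f"][2])    # forget gate
    i = sigmoid(p["i"][0] * x + p["i"][1] * h_prev + p["i"][2])    # input gate
    g = math.tanh(p["g"][0] * x + p["g"][1] * h_prev + p["g"][2])  # candidate value
    o = sigmoid(p["o"][0] * x + p["o"][1] * h_prev + p["o"][2])    # output gate
    c = f * c_prev + i * g   # cell state: long-term memory carried forward
    h = o * math.tanh(c)     # hidden state: the cell's output at this step
    return h, c

# Illustrative shared weights; run a short sequence through the cell
params = {k: (0.5, 0.5, 0.0) for k in ("f", "i", "g", "o")}
h, c = 0.0, 0.0
for x in [1.0, -0.5, 0.25]:
    h, c = lstm_step(x, h, c, params)
print(h, c)
```

Because the cell state `c` is updated additively (scaled by the forget gate rather than squashed at every step), gradients survive over long sequences, which is precisely what fixes the short-memory problem of classic RNNs.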
#4 Fraud Detection
According to a report published by Markets & Markets, the Fraud Detection and Prevention Market is going to be worth $33.19 Billion USD by 2021. This is a huge industry and the demand for advanced Deep Learning skills is only going to grow. That’s why we have included this case study in the course.
This is the first part of Volume 2 - Unsupervised Deep Learning Models. The business challenge here is about detecting fraud in credit card applications. You will be creating a Deep Learning model for a bank and you are given a dataset that contains information on customers applying for an advanced credit card.
This is the data that customers provided when filling in the application form. Your task is to detect potential fraud within these applications. That means that by the end of the challenge, you will literally come up with an explicit list of customers who potentially cheated on their applications.
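In the course, this unsupervised challenge is tackled with a Self-Organizing Map (Part 4 of the curriculum). The central step of a SOM is finding the best-matching unit (BMU) for each input: the map neuron whose weight vector is closest to the input. A minimal pure-Python sketch, with illustrative weights (not the course's implementation):

```python
import math

def best_matching_unit(sample, som_weights):
    """Return the grid coordinates of the neuron closest to the sample."""
    best, best_dist = None, float("inf")
    for i, row in enumerate(som_weights):
        for j, w in enumerate(row):
            dist = math.dist(sample, w)  # Euclidean distance (Python 3.8+)
            if dist < best_dist:
                best, best_dist = (i, j), dist
    return best

# Tiny 2x2 map over 2-dimensional inputs (illustrative weights)
som = [
    [[0.0, 0.0], [0.0, 1.0]],
    [[1.0, 0.0], [1.0, 1.0]],
]
print(best_matching_unit([0.9, 0.1], som))  # → (1, 0)
```

The fraud-detection intuition: after training, most applications map tightly onto some neuron, while applications that sit unusually far from every neuron (high quantization error) are the outliers worth flagging as potential fraud.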
#5 & 6 Recommender Systems
From Amazon product suggestions to Netflix movie recommendations - good recommender systems are very valuable in today's world. And specialists who can create them are some of the top-paid Data Scientists on the planet.
We will work on a dataset that has exactly the same features as the Netflix dataset: plenty of movies, thousands of users, who have rated the movies they watched. The ratings go from 1 to 5, exactly like in the Netflix dataset, which makes the Recommender System more complex to build than if the ratings were simply “Liked” or “Not Liked”.
Your final Recommender System will be able to predict the ratings of the movies the customers didn’t watch. Accordingly, by ranking the predictions from 5 down to 1, your Deep Learning model will be able to recommend which movies each user should watch. Creating such a powerful Recommender System is quite a challenge so we will give ourselves two shots. Meaning we will build it with two different Deep Learning models.
Our first model will be Deep Belief Networks, complex Boltzmann Machines that will be covered in Part 5. Then our second model will be with the powerful AutoEncoders, my personal favorites. You will appreciate the contrast between their simplicity, and what they are capable of.
And you will even be able to apply it to yourself or your friends. The list of movies will be explicit, so you will simply need to rate the movies you already watched, input your ratings in the dataset, execute your model and voila! The Recommender System will tell you exactly which movies you would love if one night you are out of ideas about what to watch on Netflix!
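The AutoEncoder approach can be sketched in a few lines: compress a user's rating vector into a small hidden representation, then reconstruct it, so that the reconstructed values for unseen movies serve as rating predictions. This is a hedged pure-Python illustration with hand-picked, untrained weights; the course itself builds the real model with PyTorch:

```python
import math

def encode_decode(ratings, W_enc, W_dec):
    """Forward pass of a tiny autoencoder: ratings -> hidden -> reconstruction."""
    hidden = [
        math.tanh(sum(w * r for w, r in zip(col, ratings)))
        for col in W_enc
    ]
    return [sum(w * h for w, h in zip(row, hidden)) for row in W_dec]

# 4 movies compressed into 2 hidden units (illustrative, untrained weights)
W_enc = [[0.5, 0.5, 0.0, 0.0],   # hidden unit 1 attends to movies 1-2
         [0.0, 0.0, 0.5, 0.5]]   # hidden unit 2 attends to movies 3-4
W_dec = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]

ratings = [5.0, 4.0, 0.0, 0.0]   # 0 = not yet rated
reconstruction = encode_decode(ratings, W_enc, W_dec)
print([round(r, 2) for r in reconstruction])
```

In the trained model, the learned weights capture correlations between movies, so the reconstruction fills in plausible scores for the movies a user has not rated; ranking those scores yields the recommendations.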
--- Summary ---
In conclusion, this is an exciting training program filled with intuition tutorials, practical exercises and real-world case studies.
We are super enthusiastic about Deep Learning and hope to see you inside the class!
Kirill & Hadelin
- Welcome to the course!
- Welcome Challenge!
- What is Deep Learning?
- Get the Datasets here
- EXTRA: Use ChatGPT to Boost your Deep Learning Skills
- --------------------- Part 1 - Artificial Neural Networks ---------------------
- Welcome to Part 1 - Artificial Neural Networks
- ANN Intuition
- What You'll Need for ANN
- Plan of Attack
- The Neuron
- The Activation Function
- How do Neural Networks work?
- How do Neural Networks learn?
- Gradient Descent
- Stochastic Gradient Descent
- Backpropagation
- Building an ANN
- Business Problem Description
- IMPORTANT NOTE
- Building an ANN - Step 1
- Check out our free course on ANN for Regression
- Building an ANN - Step 2
- Building an ANN - Step 3
- Building an ANN - Step 4
- Building an ANN - Step 5
- -------------------- Part 2 - Convolutional Neural Networks --------------------
- Welcome to Part 2 - Convolutional Neural Networks
- CNN Intuition
- What You'll Need for CNN
- Plan of attack
- What are convolutional neural networks?
- Step 1 - Convolution Operation
- Step 1(b) - ReLU Layer
- Step 2 - Pooling
- Step 3 - Flattening
- Step 4 - Full Connection
- Summary
- Softmax & Cross-Entropy
- Building a CNN
- IMPORTANT NOTE
- Building a CNN - Step 1
- Building a CNN - Step 2
- Building a CNN - Step 3
- Building a CNN - Step 4
- Building a CNN - Step 5
- Quick Note
- Building a CNN - FINAL DEMO!
- ---------------------- Part 3 - Recurrent Neural Networks ----------------------
- Welcome to Part 3 - Recurrent Neural Networks
- RNN Intuition
- What You'll Need for RNN
- Plan of attack
- The idea behind Recurrent Neural Networks
- The Vanishing Gradient Problem
- LSTMs
- Practical intuition
- EXTRA: LSTM Variations
- Building a RNN
- IMPORTANT NOTE
- Building a RNN - Step 1
- Building a RNN - Step 2
- Building a RNN - Step 3
- Building a RNN - Step 4
- Building a RNN - Step 5
- Building a RNN - Step 6
- Building a RNN - Step 7
- Building a RNN - Step 8
- Building a RNN - Step 9
- Building a RNN - Step 10
- Building a RNN - Step 11
- Building a RNN - Step 12
- Building a RNN - Step 13
- Building a RNN - Step 14
- Building a RNN - Step 15
- Evaluating and Improving the RNN
- Evaluating the RNN
- Improving the RNN
- ------------------------ Part 4 - Self Organizing Maps ------------------------
- Welcome to Part 4 - Self Organizing Maps
- SOMs Intuition
- Plan of attack
- How do Self-Organizing Maps Work?
- Why revisit K-Means?
- K-Means Clustering (Refresher)
- How do Self-Organizing Maps Learn? (Part 1)
- How do Self-Organizing Maps Learn? (Part 2)
- Live SOM example
- Reading an Advanced SOM
- EXTRA: K-means Clustering (part 2)
- EXTRA: K-means Clustering (part 3)
- Building a SOM
- How to get the dataset
- Building a SOM - Step 1
- Building a SOM - Step 2
- Building a SOM - Step 3
- Building a SOM - Step 4
- Mega Case Study
- Mega Case Study - Step 1
- Mega Case Study - Step 2
- Mega Case Study - Step 3
- Mega Case Study - Step 4
- ------------------------- Part 5 - Boltzmann Machines -------------------------
- Welcome to Part 5 - Boltzmann Machines
- Boltzmann Machine Intuition
- Plan of attack
- Boltzmann Machine
- Energy-Based Models (EBM)
- Editing Wikipedia - Our Contribution to the World
- Restricted Boltzmann Machine
- Contrastive Divergence
- Deep Belief Networks
- Deep Boltzmann Machines
- Building a Boltzmann Machine
- How to get the dataset
- Installing PyTorch
- Building a Boltzmann Machine - Introduction
- Same Data Preprocessing in Parts 5 and 6
- Building a Boltzmann Machine - Step 1
- Building a Boltzmann Machine - Step 2
- Building a Boltzmann Machine - Step 3
- Building a Boltzmann Machine - Step 4
- Building a Boltzmann Machine - Step 5
- Building a Boltzmann Machine - Step 6
- Building a Boltzmann Machine - Step 7
- Building a Boltzmann Machine - Step 8
- Building a Boltzmann Machine - Step 9
- Building a Boltzmann Machine - Step 10
- Building a Boltzmann Machine - Step 11
- Building a Boltzmann Machine - Step 12
- Building a Boltzmann Machine - Step 13
- Building a Boltzmann Machine - Step 14
- Evaluating the Boltzmann Machine
- ---------------------------- Part 6 - AutoEncoders ----------------------------
- Welcome to Part 6 - AutoEncoders
- AutoEncoders Intuition
- Plan of attack
- Auto Encoders
- A Note on Biases
- Training an Auto Encoder
- Overcomplete hidden layers
- Sparse Autoencoders
- Denoising Autoencoders
- Contractive Autoencoders
- Stacked Autoencoders
- Deep Autoencoders
- Building an AutoEncoder
- How to get the dataset
- Installing PyTorch
- Same Data Preprocessing in Parts 5 and 6
- Building an AutoEncoder - Step 1
- Building an AutoEncoder - Step 2
- Building an AutoEncoder - Step 3
- Homework Challenge - Coding Exercise
- Building an AutoEncoder - Step 4
- Building an AutoEncoder - Step 5
- Building an AutoEncoder - Step 6
- Building an AutoEncoder - Step 7
- Building an AutoEncoder - Step 8
- Building an AutoEncoder - Step 9
- Building an AutoEncoder - Step 10
- Building an AutoEncoder - Step 11
- THANK YOU Video
- ------------------- Annex - Get the Machine Learning Basics -------------------
- Annex - Get the Machine Learning Basics
- Regression & Classification Intuition
- What You Need for Regression & Classification
- Simple Linear Regression Intuition - Step 1
- Simple Linear Regression Intuition - Step 2
- Multiple Linear Regression Intuition
- Logistic Regression Intuition
- Data Preprocessing
- Data Preprocessing
- The Machine Learning process
- Splitting the data into a Training and Test set
- Feature Scaling
- Data Preprocessing in Python
- Getting Started - Step 1
- Getting Started - Step 2
- Importing the Libraries
- Importing the Dataset - Step 1
- Importing the Dataset - Step 2
- Importing the Dataset - Step 3
- For Python learners, summary of Object-oriented programming: classes & objects
- Taking care of Missing Data - Step 1
- Taking care of Missing Data - Step 2
- Encoding Categorical Data - Step 1
- Encoding Categorical Data - Step 2
- Encoding Categorical Data - Step 3
- Splitting the dataset into the Training set and Test set - Step 1
- Splitting the dataset into the Training set and Test set - Step 2
- Splitting the dataset into the Training set and Test set - Step 3
- Feature Scaling - Step 1
- Feature Scaling - Step 2
- Feature Scaling - Step 3
- Feature Scaling - Step 4
- Logistic Regression
- Logistic Regression Intuition
- Maximum Likelihood
- Logistic Regression in Python - Step 1a
3. Top 3 Recommended YouTube Videos
Here are Outlecture's top 3 recommended YouTube videos, carefully selected for you.
Title | View count | View count last month (February 2024) | Like count | Publish date |
---|---|---|---|---|
AI vs Machine Learning Channel: IBM Technology | 760,859 | 59,444 | 24,109 | Apr 10th, 2023 |
PyTorch in 100 Seconds Channel: Fireship | 718,663 | 38,654 | 30,294 | Mar 20th, 2023 |
How I would learn Machine Learning (if I could start over) Channel: AssemblyAI | 691,256 | 14,051 | 29,920 | Sep 3rd, 2022 |
YouTube has become a familiar platform for everyday use, where viewers can watch videos for free, although they may contain advertisements. Recently, there has been an increase in the availability of high-quality educational materials on this platform.
It is an excellent option for those who want to learn without paying or simply obtaining a quick understanding of a topic.
We highly recommend utilizing YouTube as a valuable learning resource.
Recommended for
- Wanting to learn without spending money
- Wanting to quickly understand the overview of Deep Learning
The details of each course are as follows:
IBM Technology
- View count: 760,859
- View count last month (February 2024): 59,444
- Like count: 24,109
- Publish date: Apr 10th, 2023
What is really the difference between Artificial intelligence (AI) and machine learning (ML)? Are they actually the same thing? In this video, Jeff Crume explains the differences and relationship between AI and ML, as well as how related topics like Deep Learning (DL) fit in, along with the types and properties of each.
#ai #ml #dl #artificialintelligence #machinelearning #deeplearning #watsonx
Fireship
- View count: 718,663
- View count last month (February 2024): 38,654
- Like count: 30,294
- Publish date: Mar 20th, 2023
#ai #python #100SecondsOfCode
💬 Chat with Me on Discord
https://discord.gg/fireship
🔗 Resources
PyTorch Docs https://pytorch.org
Tensorflow in 100 Seconds
Python in 100 Seconds https://youtu.be/x7X9w_GIm1s
🔥 Get More Content - Upgrade to PRO
Upgrade at https://fireship.io/pro
Use code YT25 for 25% off PRO access
🎨 My Editor Settings
- Atom One Dark
- vscode-icons
- Fira Code Font
🔖 Topics Covered
- What is PyTorch?
- PyTorch vs Tensorflow
- Build a basic neural network with PyTorch
- PyTorch 2 basics tutorial
- What is a tensor?
- Which AI products use PyTorch?
AssemblyAI
- View count: 691,256
- View count last month (February 2024): 14,051
- Like count: 29,920
- Publish date: Sep 3rd, 2022
All courses: https://github.com/AssemblyAI-Examples/ML-Study-Guide
Get your Free Token for AssemblyAI Speech-To-Text API 👇
https://www.assemblyai.com/?utm_source=youtube&utm_medium=referral&utm_campaign=yt_pat_60
▬▬▬▬▬▬▬▬▬▬▬▬ CONNECT ▬▬▬▬▬▬▬▬▬▬▬▬
🖥️ Website: https://www.assemblyai.com
🐦 Twitter: https://twitter.com/AssemblyAI
🦾 Discord: https://discord.gg/Cd8MyVJAXd
▶️ Subscribe: https://www.youtube.com/c/AssemblyAI?sub_confirmation=1
🔥 We're hiring! Check our open roles: https://www.assemblyai.com/careers
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
#MachineLearning #DeepLearning
0:00 Introduction
1:01 MATH
1:58 PYTHON
2:37 ML TECH STACK
3:35 ML COURSES
4:44 HANDS-ON & DATA PREPARATION
5:17 PRACTICE & BUILD PORTFOLIO
6:16 SPECIALIZE & CREATE BLOG
5. Wrap-up
We introduced recommended courses for Deep Learning. If you are interested in other related courses, please refer to the following.
If you want to further explore and learn after taking one of the courses we introduced today, we recommend visiting the official website or community site.
If you want to stay up-to-date on the latest information, we suggest following the official Twitter account.
Furthermore, we highly recommend utilizing generative AI tools such as ChatGPT as a study aid. They can enable more effective learning, so please give them a try.
We hope you found our website and article helpful. Thank you for visiting.