DEEPLIZARD Resources

Download access to resources is available for Hivemind members

Resources by Course

All deeplizard resources are tested, updated to support newer dependency versions, and improved with bug fixes. Resources are made available in versioned releases, so you can stay up to date as changes are applied.

  • Tested
  • Maintained
  • Current

Select your course below to see the resources contained in the latest release.

Machine Learning & Deep Learning Fundamentals


Deep Learning Fundamentals v1.0.0

Files

Files included in this release:

Download zipped files here.
#  Name                                         Size
1  Backpropagation-notebook-deeplizard.pdf      302.9 KB
2  Backpropagation-notebook-deeplizard.tex      18.3 KB
3  deep-learning-fundamentals-deeplizard.ipynb  145.6 KB
4  versions.ipynb                               1.9 KB
5  MNIST 7 Convolution.xlsx                     35.7 KB
6  NN.PNG                                       30.8 KB
Dependencies

Dependencies used in this release:

Code files were tested using these versions.
Name Version
python 3.6.10
tensorflow 2.1.0
keras 2.3.1
numpy 1.18.1
sklearn 0.22.2.post1
Changes

Changes included in this release:

d29b099
Create course
Course Name: Machine Learning & Deep Learning Fundamentals

Description: This series explains concepts that are fundamental to deep learning and artificial neural networks for beginners. In addition to covering these concepts, we also show how to implement some of the concepts in code using Keras, a neural network API written in Python. We will learn about layers in an artificial neural network, activation functions, backpropagation, convolutional neural networks (CNNs), data augmentation, transfer learning and much more!

Committed by Mandy on November 22, 2017

Keras - Python Deep Learning Neural Network API


Deep Learning with Keras v1.0.2

Files

Files included in this release:

Download zipped files here.
#   Name                              Size
1   Part-1-tf.keras-deeplizard.ipynb  4.6 MB
2   versions.ipynb                    1.8 KB
3   hello_app.py                      330 B
4   predict_app.py                    1.3 KB
5   README.txt                        149 B
6   sample_app.py                     121 B
7   hello.html                        783 B
8   predict-with-visuals.html         2.8 KB
9   predict.html                      1.6 KB
10  medical_trial_model.h5            31.8 KB
11  my_model_weights.h5               17.5 KB
12  Observable-notebook.txt           139 B
13  package-lock.json                 13.5 KB
14  package.json                      107 B
15  server.js                         304 B
16  imagenet_classes.js               32.8 KB
17  predict-with-tfjs.html            2.1 KB
18  predict.js                        2.1 KB
Dependencies

Dependencies used in this release:

Code files were tested using these versions.
Name Version
python 3.6.10
tensorflow 2.2.0
numpy 1.18.1
sklearn 0.22.2.post1
Changes

Changes included in this release:

d65e982
Upgrade and verify code for TensorFlow version 2.2.0
Now tested against TensorFlow 2.2.0

Committed by Mandy on July 15, 2020

48e2699
Update Flask code to use tf.keras

Committed by Mandy on July 15, 2020

d8c793e
General formatting and refactoring updates

Committed by Mandy on July 10, 2020

75a61c0
Update dogs-vs-cats data set
Use larger data set organized by new script in notebook

Committed by Mandy on July 4, 2020

49acc4c
Consolidate notebooks
Move MobileNet code into Part-1-tf.keras-deeplizard.ipynb notebook

Committed by Mandy on July 1, 2020

9937c62
Update Keras notebooks to use tf.keras code

Committed by Mandy on May 27, 2020

f9f1534
Round predictions before passing them to scikit-learn's confusion_matrix
When plotting a confusion matrix using scikit-learn's plot_confusion_matrix, the following error would occur:

Classification metrics can't handle a mix of binary and continuous targets

Rounding the predictions before plotting resolves this issue. https://deeplizard.com/learn/video/HDom7mAxCdc

Committed by Mandy on April 1, 2020
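The fix above can be sketched as follows. The arrays are illustrative, and the sketch assumes a binary classifier whose raw predictions are probabilities:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 1, 1, 0, 1])
y_pred_probs = np.array([0.1, 0.9, 0.6, 0.4, 0.8])  # continuous model outputs

# Passing y_pred_probs directly raises:
#   ValueError: Classification metrics can't handle a mix of binary and continuous targets
y_pred = np.round(y_pred_probs)  # round to hard 0/1 labels first

cm = confusion_matrix(y_true, y_pred)
```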

099fcf7
Change the way in which the fine-tuned VGG16 model is built
Previously, we iterated over the original VGG16 model and added all of its layers to the new model, then popped off the output layer and added our own output layer. Using pop in this way causes subsequent issues with saving and loading the fine-tuned model, as well as an incorrect number of trainable parameters being reported by model.summary(). To avoid using pop, we now add all layers except the output to the new model while iterating over the original VGG16 model, and then add the new output layer. Full explanation can be found here: https://deeplizard.com/learn/video/oDHpqu52soI

Committed by Mandy on June 13, 2019
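The pattern described above can be sketched with a small stand-in model. The layer sizes here are hypothetical; the course notebook uses tf.keras.applications.VGG16(), which is too large to instantiate in a quick sketch:

```python
import tensorflow as tf

# Tiny stand-in (hypothetical) for the original VGG16 model
vgg16_like = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(1000, activation='softmax'),  # original output layer
])

# Instead of copying every layer and then pop()-ing off the output,
# copy all layers *except* the output, then append a new output layer.
model = tf.keras.Sequential()
for layer in vgg16_like.layers[:-1]:
    model.add(layer)
for layer in model.layers:
    layer.trainable = False          # freeze the transferred layers
model.add(tf.keras.layers.Dense(units=2, activation='softmax'))
```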

60e3974
Change order in which training data is generated for simple Keras Sequential model
Two for loops previously generated the data in such a way that, when using the validation_split option to create the validation set, the split would completely capture all of the data generated by the second for loop. None of that data would end up in the training set, since it would all be split out into the validation set. Changing the order of the two for loops resolves this issue. Full explanation can be found here: https://deeplizard.com/learn/video/dzoh8cfnvnI

Committed by Mandy on November 22, 2017
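A minimal sketch of why the loop order mattered: Keras' validation_split holds out the last fraction of the samples in the order they were appended. The data and the 0.2 split fraction below are illustrative:

```python
import numpy as np

samples, labels = [], []
for _ in range(80):              # first loop: all class-0 examples
    samples.append(np.random.rand())
    labels.append(0)
for _ in range(20):              # second loop: all class-1 examples
    samples.append(np.random.rand())
    labels.append(1)

# With validation_split=0.2, model.fit() would hold out the *last* 20
# samples, i.e. every class-1 example, leaving none for training.
val_labels = labels[-20:]
```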

d29b099
Create course
Course Name: Keras - Python Deep Learning Neural Network API

Description: This series will teach you how to use Keras, a neural network API written in Python. Each video focuses on a specific concept and shows how the full implementation is done in code using Keras and Python. We will learn how to preprocess data, organize data for training, build and train an artificial neural network from scratch, build and fine-tune convolutional neural networks (CNNs), implement fine-tuning and transfer learning, deploy our models using both front-end and back-end deployment techniques, and much more!

Committed by Mandy on November 18, 2017

Neural Network Programming - Deep Learning with PyTorch


Deep Learning with PyTorch v1.0.2

Files

Files included in this release:

Download zipped files here.
#  Name                                                   Size
1  debug-data-normalization.py                            535 B
2  debug-how-to-example.py                                315 B
3  deeplizard-condensed-code-fashion-mnist-project.ipynb  73.1 KB
4  Part-1_Neural-Network-Programming_deeplizard.ipynb     78.9 KB
5  Part-2_Neural-Network-Programming_deeplizard.ipynb     121.4 KB
6  versions.ipynb                                         2.0 KB
7  plotcm.py                                              1.2 KB
Dependencies

Dependencies used in this release:

Code files were tested using these versions.
Name Version
python 3.6.5
torch 1.5.0
torchvision 0.6.0
tensorflow 2.0.0-rc1
tensorboard 1.15.0a20190806
pandas 1.0.1
numpy 1.18.1
Changes

Changes included in this release:

e246083
Add a new definition for the RunManager class to keep original code backwards compatible
Update the code to remove the comment argument passed to the SummaryWriter constructor. Adding more parameters to the comment caused the log file name to grow too long.

Removed: self.tb = SummaryWriter(comment=f'-{run}')
Added: self.tb = SummaryWriter()

Committed by Chris on June 11, 2020

c24b4c8
Add py files for the debugger videos
Adds the following files:

debug-data-normalization.py
debug-how-to-example.py

Committed by Chris on June 11, 2020

461dbc7
Add code for the Sequential class video

Committed by Chris on June 11, 2020

00e2528
Apply refactor updates recommended by David from the hivemind

Committed by Chris on May 25, 2020

871f0e2
General formatting and refactoring updates

Committed by Chris on May 24, 2020

0547f9e
Remove hard-coded variable of class names and replace it with train_set.classes

Committed by Chris on May 23, 2020

7e232fd
Upgrade and verify code for PyTorch (1.5) and Torchvision (0.6.0)
Now tested against:

PyTorch 1.5
Torchvision 0.6.0

Committed by Chris on May 19, 2020

b7eee0d
Add code for cuda demonstration

Committed by Chris on May 5, 2020

06e73ee
Create PyTorch build 1.0.0

Committed by Chris on April 19, 2020

acacec9
Enhance calculation of total_loss when training set size is not divisible by batch_size
Currently, we have the following:

total_loss += loss.item() * batch_size

Using the updated code below, we get a more accurate total_loss value:

total_loss += loss.item() * images.shape[0]

Note that these two lines give the same total_loss value when the training set size is divisible by the batch_size. Thank you to Alireza Abedin Varamin for pointing this out in a comment on YouTube. Further discussion can be found here: https://deeplizard.com/learn/video/ycxulUVoNbk

Committed by Chris on December 9, 2019
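The difference can be sketched with a toy loader whose last batch is smaller than batch_size. The mean-based "loss" here is a stand-in for a real criterion:

```python
import torch

batch_size = 4
dataset = torch.arange(10, dtype=torch.float32)   # 10 samples -> batches of 4, 4, 2
loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size)

total_loss_wrong, total_loss_right = 0.0, 0.0
for images in loader:
    loss = images.mean()                           # stand-in per-batch mean loss
    total_loss_wrong += loss.item() * batch_size       # assumes every batch is full
    total_loss_right += loss.item() * images.shape[0]  # uses the actual batch length
```

For this toy data the correct total (the sum of all samples) is 45.0, which only the second accumulator recovers; the two agree whenever the dataset size is divisible by batch_size.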

faafcd4
Remove call to np.transpose in favor of torch.permute

Committed by Chris on June 8, 2019
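A minimal sketch of the replacement, assuming a (C, H, W) image tensor:

```python
import torch

img = torch.rand(3, 28, 28)   # (channels, height, width)

# Before (removed): hwc = np.transpose(img.numpy(), (1, 2, 0))
hwc = img.permute(1, 2, 0)    # (height, width, channels), still a torch tensor
```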

989f104
Comparison operations' returned dtype changed in PyTorch 1.2.0
Behavior change in PyTorch 1.2.0: the dtype returned by comparison operations changed from torch.uint8 to torch.bool (#21113).

Version 1.1:
> torch.tensor([1, 2, 3]) < torch.tensor([3, 1, 2])
tensor([1, 0, 0], dtype=torch.uint8)

Version 1.2:
> torch.tensor([1, 2, 3]) < torch.tensor([3, 1, 2])
tensor([True, False, False])

Release Notes: https://github.com/pytorch/pytorch/releases/tag/v1.2.0
Pull Request: https://github.com/pytorch/pytorch/pull/21113

Committed by Chris on May 3, 2019

609b6d2
Rename train_labels to targets for MNIST dataset in torchvision v0.2.2
Change in torchvision 0.2.2: a deprecation warning was added for the MNIST train[test]_labels[data] attributes. Link: https://github.com/pytorch/vision/pull/742

Before torchvision 0.2.2, we would write:
> train_set.train_labels
tensor([9, 0, 0, ..., 3, 0, 5])

Starting with torchvision 0.2.2, we write:
> train_set.targets
tensor([9, 0, 0, ..., 3, 0, 5])

Committed by Chris on February 13, 2019

b7f5ee2
Create course
Course Name: Neural Network Programming - Deep Learning with PyTorch

Description: This series is all about neural network programming and PyTorch! We'll start out with the basics of PyTorch and CUDA and understand why neural networks use GPUs. We then move on to cover the tensor fundamentals needed for understanding deep learning before we dive into neural network architecture. From there, we'll go through the details of training a network, analyzing results, tuning hyperparameters, and using TensorBoard with PyTorch for visual analytics!

Committed by Chris on November 18, 2018

Reinforcement Learning - Goal Oriented Intelligence


Reinforcement Learning v1.0.0

Files

Files included in this release:

Download zipped files here.
#  Name                                            Size
1  Part-1-Q-learning-Frozen-Lake-deeplizard.ipynb  8.8 KB
2  Part-2-Cart-and-Pole-DQN-deeplizard.ipynb       142.2 KB
3  versions.ipynb                                  1.5 KB
Dependencies

Dependencies used in this release:

Code files were tested using these versions.
Name Version
python 3.6.10
numpy 1.18.1
gym 0.17.1
torch 1.4.0
torchvision 0.5.0
Changes

Changes included in this release:

1711e79
Fix for calling plt.imshow() on a GPU
When using a GPU, calling

plt.imshow(screen.squeeze(0).permute(1, 2, 0), interpolation='none')

to plot an image generates the following error:

TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

Calling cpu() on the result of permute() resolves the issue:

plt.imshow(screen.squeeze(0).permute(1, 2, 0).cpu(), interpolation='none')

https://deeplizard.com/learn/video/jkdXDinWfo8

Committed by Mandy on August 10, 2019

d29b099
Create course
Course Name: Reinforcement Learning - Goal Oriented Intelligence

Description: This series is all about reinforcement learning (RL)! Here, we’ll gain an understanding of the intuition, the math, and the coding involved with RL. We’ll first start out with an introduction to RL where we’ll learn about Markov Decision Processes (MDPs) and Q-learning. We’ll then move on to deep RL where we’ll learn about deep Q-networks (DQNs) and policy gradients. We’ll also build some cool RL projects in code using Python, PyTorch, and OpenAI Gym.

Committed by Mandy on September 15, 2018