
Initializing and Accessing Bias with Keras


In this episode, we'll see how we can initialize and access the biases in a neural network in code with Keras. Let's get to it!

Keras model

Let's have a look at this arbitrary, small neural network with one hidden Dense layer containing 4 nodes and an output layer with 2 nodes.

from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential([
    # Hidden layer: 4 nodes, with bias terms enabled and initialized to zeros
    Dense(4, input_shape=(1,), activation='relu', use_bias=True, bias_initializer='zeros'),
    # Output layer: 2 nodes (bias parameters left at their defaults)
    Dense(2, activation='softmax')
])

Everything here is pretty standard, and you should be familiar with almost all of it based on previous episodes in this series. The only new items are in the hidden layer, where we have two parameters we haven't seen before: use_bias and bias_initializer.

In another episode, we've discussed what exactly bias is in a neural network. Now, we're going to specifically focus on how bias is worked with within a Keras model.

Parameter: use_bias

In Keras, we specify whether or not we want a given layer to include biases for all of its neurons with the use_bias parameter. If we do want to include biases, we set the parameter value to True. Otherwise, we set it to False.

The default value is True, so if we don't specify this parameter here at all, the layer will include bias terms by default.
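For example, here is a minimal sketch, not from the lesson itself, of the same model with use_bias=False on the hidden layer (the model_no_bias name is just for illustration). Calling get_weights() then shows that the hidden layer has no bias vector, only a weight matrix.

from keras.models import Sequential
from keras.layers import Dense

# Hypothetical variation: disable bias terms on the hidden layer
model_no_bias = Sequential([
    Dense(4, input_shape=(1,), activation='relu', use_bias=False),
    Dense(2, activation='softmax')
])

# The hidden layer now contributes only its weight matrix; the output
# layer still includes its default bias vector.
model_no_bias.get_weights()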

Parameter: bias_initializer

Next, we have the bias_initializer parameter, which determines how the biases are initialized. This initialization process is really similar to the weight initialization process that we talked about in another episode.

This parameter determines how the biases are first set before we start training the model.

We are setting this parameter's value to the string 'zeros'. This means that all 4 biases in this layer will be set to a value of 0 before the model starts training.

'zeros' is actually the default value for the bias_initializer parameter. If we instead wanted the biases to start at some other type of values, like all ones or random numbers, we can change this.

Keras has a list of initializers that it supports, and it's actually the same list we talked about when we discussed weight initialization. So we could even initialize the biases with Xavier initialization if we wanted.
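As a rough sketch, not from the lesson itself, here are two ways we might swap out the default 'zeros' initializer (the model names are just for illustration): one model starts all biases at 1, and the other uses Xavier (Glorot uniform) initialization via the 'glorot_uniform' string.

from keras.models import Sequential
from keras.layers import Dense

# Hypothetical variation: initialize all biases to 1
model_ones = Sequential([
    Dense(4, input_shape=(1,), activation='relu', bias_initializer='ones'),
    Dense(2, activation='softmax')
])

# Hypothetical variation: initialize biases with Xavier (Glorot uniform) initialization
model_xavier = Sequential([
    Dense(4, input_shape=(1,), activation='relu', bias_initializer='glorot_uniform'),
    Dense(2, activation='softmax')
])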

Observing initialized bias terms

After we initialize these biases, we can inspect their values by calling model.get_weights().

model.get_weights()

[array([[-0.733,  0.017,  0.561,  0.331]], dtype=float32),
 array([0., 0., 0., 0.], dtype=float32),
 array([[-0.281,  0.502],
        [ 0.081,  0.713],
        [ 0.004, -0.696],
        [ 0.013, -0.488]], dtype=float32),
 array([0., 0.], dtype=float32)]

This gives us all the weights and all the biases for each layer in the model. We can see the randomly initialized weights in the weight matrix for the hidden layer, followed by the bias vector containing 4 zeros, one for each node in the layer, just as we specified with the 'zeros' bias_initializer.

Similarly, we have the weight matrix corresponding to the output layer, which is again followed by a bias vector, this one containing 2 zeros, one for each node in this layer.

Remember, we didn't set any bias parameters for the output layer, but because Keras uses bias and initializes bias terms with zeros by default, we get this for free.

After being initialized, during training, these biases (and weights) will be updated as the model learns the optimized values for them. If we were to train this model and then call the get_weights() function again, then the values for the weights and biases would likely be very different.
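To see this for ourselves, here is a minimal sketch, not from the lesson itself, that trains the model on made-up dummy data and then inspects the parameters again. After fitting, the bias vectors will generally no longer be all zeros.

import numpy as np

# Hypothetical dummy data just to demonstrate that training updates the parameters
samples = np.random.rand(100, 1)
labels = np.random.randint(2, size=(100,))

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(samples, labels, epochs=5, verbose=0)

# The weights and biases have now been updated by training
model.get_weights()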

Wrapping up

As we now know, our Keras models have been using bias this whole time without any effort from our side since, by default, Keras is initializing the biases with zeros. We showed this for Dense layers, but the same is true for other layer types as well, like convolutional layers, for example.
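For instance, here is a hedged example, not from the lesson itself, of a convolutional layer that accepts the same two parameters with the same defaults.

from keras.layers import Conv2D

# Hypothetical example: Conv2D also accepts use_bias and bias_initializer,
# with the same defaults (use_bias=True, bias_initializer='zeros')
conv_layer = Conv2D(16, kernel_size=(3, 3), activation='relu',
                    use_bias=True, bias_initializer='zeros')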

After first learning about bias on a fundamental level and now seeing it applied in code, what are your thoughts? See ya in the next one!
