Neural Network Programming - Deep Learning with PyTorch

with deeplizard.

PyTorch Tensors Explained - Neural Network Programming

September 21, 2018


Introducing PyTorch Tensors

Welcome back to this series on neural network programming with PyTorch. In this post, we’ll start to dig in deeper with PyTorch itself by exploring PyTorch tensors. Without further ado, let’s get started.


PyTorch tensors are the data structures we'll be using when programming neural networks in PyTorch.

The first lines of code that must be written are usually data preprocessing routines, and the ultimate goal of this data preprocessing is to transform whatever data we are working with into tensors that can fuel our neural networks.

Instances of the torch.Tensor class

PyTorch tensors are instances of the torch.Tensor Python class. We can create a torch.Tensor object using the class constructor like so:

> import torch
> t = torch.Tensor()
> type(t)
torch.Tensor

This creates an empty tensor (tensor with no data), but we'll get to adding data in just a moment.
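We can verify that the tensor holds no data by checking its shape and element count (a quick check we've added, using standard tensor attributes):

> t.shape
torch.Size([0])
> t.numel()
0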

Tensor attributes

First, let’s look at a few tensor attributes. Every torch.Tensor has these attributes:

  • torch.dtype
  • torch.device
  • torch.layout

Looking at our Tensor t, we can see the following default attribute values:

> print(t.dtype)
> print(t.device)
> print(t.layout)
torch.float32
cpu
torch.strided

Tensors have a torch.dtype

The dtype, which is torch.float32 in our case, specifies the type of the data that is contained within the tensor. Tensors contain uniform (of the same type) numerical data with one of these types:

| Data type                 | dtype         | CPU tensor         | GPU tensor              |
| ------------------------- | ------------- | ------------------ | ----------------------- |
| 32-bit floating point     | torch.float32 | torch.FloatTensor  | torch.cuda.FloatTensor  |
| 64-bit floating point     | torch.float64 | torch.DoubleTensor | torch.cuda.DoubleTensor |
| 16-bit floating point     | torch.float16 | torch.HalfTensor   | torch.cuda.HalfTensor   |
| 8-bit integer (unsigned)  | torch.uint8   | torch.ByteTensor   | torch.cuda.ByteTensor   |
| 8-bit integer (signed)    | torch.int8    | torch.CharTensor   | torch.cuda.CharTensor   |
| 16-bit integer (signed)   | torch.int16   | torch.ShortTensor  | torch.cuda.ShortTensor  |
| 32-bit integer (signed)   | torch.int32   | torch.IntTensor    | torch.cuda.IntTensor    |
| 64-bit integer (signed)   | torch.int64   | torch.LongTensor   | torch.cuda.LongTensor   |

Notice how each type has both a CPU and a GPU version. One thing to keep in mind about tensor data types is that operations between tensors must happen between tensors with the same dtype.
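To make this concrete, here is a minimal sketch of our own (not from the original post): we set the dtype explicitly at creation time and use to() to convert when two tensors need to match. Note that newer versions of PyTorch automatically promote mixed dtypes in many operations, but converting explicitly keeps intentions clear:

> t1 = torch.tensor([1, 2, 3], dtype=torch.float32)
> t2 = torch.tensor([1, 2, 3], dtype=torch.int64)
> t1 + t2.to(torch.float32)
tensor([2., 4., 6.])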

Tensors have a torch.device

The device, cpu in our case, specifies the device (CPU or GPU) where the tensor's data is allocated. This determines where tensor computations for the given tensor will be performed.

PyTorch supports the use of multiple devices, and they are specified using an index like so:

> device = torch.device('cuda:0')
> device
device(type='cuda', index=0)

If we have a device like the one above, we can create a tensor on that device by passing the device to a tensor factory function such as torch.tensor(). One thing to keep in mind about using multiple devices is that tensor operations between tensors must happen between tensors that exist on the same device.

Using multiple devices is typically something we will do as we become more advanced users, so there’s no need to worry about that now.
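Still, as a small sketch of the pattern (the CPU fallback logic here is our addition), we can pick a device that degrades gracefully when no GPU is present:

> device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
> t = torch.tensor([1., 2., 3.], device=device)
> t.device
device(type='cuda', index=0)

On a machine without a GPU, the last line shows device(type='cpu') instead.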

Tensors have a torch.layout

The layout, strided in our case, specifies how the tensor is stored in memory. To learn more about strided memory layout, see the torch.layout section of the PyTorch documentation.

For now, this is all we need to know.
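Even so, a quick peek at the stride can make the idea concrete (a small illustration we've added). For a strided tensor, stride() reports how many elements we must skip in the underlying memory to move one step along each axis:

> t = torch.zeros(2, 3)
> t.layout
torch.strided
> t.stride()
(3, 1)

Here, moving down one row skips 3 elements in memory, while moving across one column skips just 1.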

Takeaways from the tensor attributes

As neural network programmers, we need to be aware of the following:

  1. Tensors contain data of a uniform type (dtype).
  2. Tensor computations between tensors depend on the dtype and the device.

Let’s look now at the common ways of creating tensors using data in PyTorch.

Creating tensors using data

These are the primary ways of creating tensor objects (instances of the torch.Tensor class) with data (array-like) in PyTorch:

  1. torch.Tensor(data)
  2. torch.tensor(data)
  3. torch.as_tensor(data)
  4. torch.from_numpy(data)

Let’s look at each of these. They all accept some form of data and give us an instance of the torch.Tensor class. Sometimes when there are multiple ways to achieve the same result, things can get confusing, so let’s break this down.

We’ll begin by creating a tensor with each of the four options and seeing what we get. First, we need some data.

We can use a Python list, or sequence, but numpy.ndarrays are going to be the more common option, so we’ll go with a numpy.ndarray like so:

> import numpy as np
> data = np.array([1,2,3])
> type(data)
numpy.ndarray

This gives us a simple bit of data with a type of numpy.ndarray.

Now, let’s create our tensors with each of these options 1-4, and have a look at what we get:

> o1 = torch.Tensor(data)
> o2 = torch.tensor(data)
> o3 = torch.as_tensor(data)
> o4 = torch.from_numpy(data)

> print(o1)
> print(o2)
> print(o3)
> print(o4)
tensor([1., 2., 3.])
tensor([1, 2, 3], dtype=torch.int32)
tensor([1, 2, 3], dtype=torch.int32)
tensor([1, 2, 3], dtype=torch.int32)

All of the options (o1, o2, o3, o4) appear to have produced the same tensor except for the first one. The first option (o1) prints dots after the numbers, indicating that the values are floats, while the next three options have a dtype of int32.

# Python example of what we mean

> type(2.)
float

> type(2)
int
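We can confirm the difference by inspecting the dtype attribute of the tensors directly (a quick check we've added; the integer default follows NumPy, so it may be int64 rather than int32 on other platforms):

> o1.dtype
torch.float32
> o2.dtype
torch.int32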

In the next post, we will look more deeply at this difference as well as a few other important differences that are lurking behind the scenes.

The discussion in the next post will allow us to see which of these options is best for creating tensors. For now, let's see some of the creation options available for creating tensors from scratch without having any data beforehand.

Creation options without data

Here are some other creation options that are available.

We have the torch.eye() function which returns a 2-D tensor with ones on the diagonal and zeros elsewhere. The name eye() is connected to the idea of an identity matrix, which is a square matrix with ones on the main diagonal and zeros everywhere else.

> print(torch.eye(2))
tensor([
    [1., 0.],
    [0., 1.]
])
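torch.eye() also accepts an optional second argument when we want a non-square result (a small addition on our part, using the function's standard signature):

> print(torch.eye(2, 3))
tensor([
    [1., 0., 0.],
    [0., 1., 0.]
])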

We have the torch.zeros() function that creates a tensor of zeros with the specified shape.

> print(torch.zeros([2,2]))
tensor([
    [0., 0.],
    [0., 0.]
])

Similarly, we have the torch.ones() function that creates a tensor of ones.

> print(torch.ones([2,2]))
tensor([
    [1., 1.],
    [1., 1.]
])

We also have the torch.rand() function that creates a tensor of the specified shape whose values are drawn uniformly at random from the interval [0, 1).

> print(torch.rand([2,2]))
tensor([
    [0.0465, 0.4557],
    [0.6596, 0.0941]
])
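Since the values are random, they will differ on every run. To make them reproducible, we can seed PyTorch's random number generator first (a usage note we've added; torch.manual_seed() is the standard call, and we omit the output because the exact values depend on the PyTorch version):

> torch.manual_seed(0)
> print(torch.rand([2,2]))

With the seed fixed, the same values are produced on every run.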

This is a small subset of the available creation functions that don’t require data. Check with the PyTorch documentation for the full list.
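For example, torch.full(), torch.arange(), and torch.linspace() follow the same pattern (a small sample we've added; these are standard factory functions, but check the documentation for their full signatures):

> print(torch.full([2,2], 7.))
tensor([
    [7., 7.],
    [7., 7.]
])

> print(torch.arange(5))
tensor([0, 1, 2, 3, 4])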

I hope you now have a good understanding of how we can use PyTorch to create tensors using data, as well as the built-in functions that don't require data. This task is a breeze if we are using numpy.ndarrays, so congratulations if you are already familiar with NumPy.

In the next post, we will look a little more deeply at the creation options that require data, and we’ll discover the differences between these options as well as see which options work best. See you in the next one!
