PyTorch - Python Deep Learning Neural Network API

Deep Learning Course - Level: Intermediate

PyTorch Tensors Explained - Neural Network Programming

Introducing PyTorch Tensors

Welcome back to this series on neural network programming with PyTorch. In this post, we'll start to dig in deeper with PyTorch itself by exploring PyTorch tensors. Without further ado, let's get started.

PyTorch tensors are the data structures we'll be using when programming neural networks in PyTorch.

When programming neural networks, data preprocessing is often one of the first steps in the overall process, and one goal of data preprocessing is to transform the raw input data into tensor form.
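As a quick preview of what that looks like, here is a minimal sketch (the raw pixel values here are made up for illustration):

> import torch
> raw_data = [[0, 128], [255, 64]]   # hypothetical raw pixel values
> t = torch.tensor(raw_data, dtype=torch.float32) / 255   # rescale to [0, 1]
> t
tensor([
    [0.0000, 0.5020],
    [1.0000, 0.2510]
])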

Instances of the torch.Tensor class

PyTorch tensors are instances of the torch.Tensor Python class. We can create a torch.Tensor object using the class constructor like so:

> t = torch.Tensor()
> type(t)
torch.Tensor

This creates an empty tensor (tensor with no data), but we'll get to adding data in just a moment.
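We can verify that the tensor is empty by checking its shape and element count:

> t.shape
torch.Size([0])
> t.numel()   # number of elements in the tensor
0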

Tensor attributes

First, let's look at a few tensor attributes. Every torch.Tensor has these attributes:

  • torch.dtype
  • torch.device
  • torch.layout

Looking at our Tensor t, we can see the following default attribute values:

> print(t.dtype)
> print(t.device)
> print(t.layout)
torch.float32
cpu
torch.strided

Tensors have a torch.dtype

The dtype, which is torch.float32 in our case, specifies the type of the data that is contained within the tensor. Tensors contain uniform (of the same type) numerical data with one of these types:

Data type                 | dtype         | CPU tensor         | GPU tensor
--------------------------|---------------|--------------------|------------------------
32-bit floating point     | torch.float32 | torch.FloatTensor  | torch.cuda.FloatTensor
64-bit floating point     | torch.float64 | torch.DoubleTensor | torch.cuda.DoubleTensor
16-bit floating point     | torch.float16 | torch.HalfTensor   | torch.cuda.HalfTensor
8-bit integer (unsigned)  | torch.uint8   | torch.ByteTensor   | torch.cuda.ByteTensor
8-bit integer (signed)    | torch.int8    | torch.CharTensor   | torch.cuda.CharTensor
16-bit integer (signed)   | torch.int16   | torch.ShortTensor  | torch.cuda.ShortTensor
32-bit integer (signed)   | torch.int32   | torch.IntTensor    | torch.cuda.IntTensor
64-bit integer (signed)   | torch.int64   | torch.LongTensor   | torch.cuda.LongTensor

Notice how each type has a CPU and a GPU version. One thing to keep in mind about tensor data types is that operations between tensors must happen between tensors with the same dtype. However, this restriction only applies to PyTorch versions lower than 1.3. See the section below on PyTorch Tensor Type Promotion for details.
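If we need a tensor with a particular dtype, we can request it explicitly at creation time instead of relying on the defaults. A minimal sketch:

> t1 = torch.tensor([1, 2, 3], dtype=torch.float64)
> t1.dtype
torch.float64
> t2 = torch.tensor([1, 2, 3], dtype=torch.int16)
> t2.dtype
torch.int16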

PyTorch Tensor Type Promotion

As of PyTorch version 1.3, arithmetic and comparison operations can mix dtypes, promoting the operands to a common dtype.

The example below was not allowed in version 1.2. However, in version 1.3 and above, the same code returns a tensor with dtype=torch.float32.

> torch.tensor([1], dtype=torch.int) + torch.tensor([1], dtype=torch.float32)
tensor([2.])

See the full documentation for more details.

  • torch.result_type: Provide function to determine result of mixed-type ops (26012).
  • torch.can_cast: Expose casting rules for type promotion (26805).
  • torch.promote_types: Expose promotion logic (26655).
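These functions let us query the promotion rules directly, without performing any arithmetic. A quick sketch of each:

> torch.promote_types(torch.int32, torch.float32)
torch.float32
> torch.result_type(torch.tensor([1], dtype=torch.int), torch.tensor([1.]))
torch.float32
> torch.can_cast(torch.float64, torch.int32)
False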

Tensors have a torch.device

The device, cpu in our case, specifies the device (CPU or GPU) where the tensor's data is allocated. This determines where tensor computations for the given tensor will be performed.

PyTorch supports the use of multiple devices, and they are specified using an index like so:

> device = torch.device('cuda:0')
> device
device(type='cuda', index=0)

If we have a device like the one above, we can create a tensor on that device by passing the device to the tensor creation call. One thing to keep in mind about using multiple devices is that tensor operations between tensors must happen between tensors that exist on the same device.
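For example, assuming we are on a machine with a CUDA-enabled build of PyTorch, a sketch of creating a tensor on the GPU might look like this:

> t = torch.tensor([1, 2, 3], device=device)   # requires a CUDA-capable GPU
> t.device
device(type='cuda', index=0)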

Using multiple devices is typically something we will do as we become more advanced users, so there's no need to worry about that now.

Tensors have a torch.layout

The layout, strided in our case, specifies how the tensor is laid out in memory. To learn more, look up the stride of an array concept from array programming.
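As a quick illustration, the stride tells us, for each dimension, how many elements we must skip in memory to move one step along that dimension:

> t = torch.ones(2, 3)
> t.stride()   # 3 elements to the next row, 1 element to the next column
(3, 1)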

For now, this is all we need to know.

Takeaways from the tensor attributes

As neural network programmers, we need to be aware of the following:

  1. Tensors contain data of a uniform type (dtype).
  2. Tensor computations between tensors depend on the dtype and the device (a quick sketch of this follows below).
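As a sketch of the second point, again assuming a CUDA-enabled machine, an operation between tensors on different devices fails:

> t1 = torch.tensor([1., 2.])
> t2 = t1.to(torch.device('cuda:0'))   # requires a CUDA-capable GPU
> t1 + t2   # raises a RuntimeError because the tensors live on different devices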

Let's look now at the common ways of creating tensors using data in PyTorch.

Creating tensors using data

These are the primary ways of creating tensor objects (instances of the torch.Tensor class), with data (array-like) in PyTorch:

  1. torch.Tensor(data)
  2. torch.tensor(data)
  3. torch.as_tensor(data)
  4. torch.from_numpy(data)

Let's look at each of these. They all accept some form of data and give us an instance of the torch.Tensor class. Sometimes when there are multiple ways to achieve the same result, things can get confusing, so let's break this down.

We'll begin by just creating a tensor with each of the options and see what we get. We'll start by creating some data.

We can use a Python list, or sequence, but numpy.ndarrays are going to be the more common option, so we'll go with a numpy.ndarray like so:

> import numpy as np
> data = np.array([1,2,3])
> type(data)
numpy.ndarray

This gives us a simple bit of data with a type of numpy.ndarray.

Now, let's create our tensors with each of these options 1-4, and have a look at what we get:

> o1 = torch.Tensor(data)
> o2 = torch.tensor(data)
> o3 = torch.as_tensor(data)
> o4 = torch.from_numpy(data)

> print(o1)
> print(o2)
> print(o3)
> print(o4)
tensor([1., 2., 3.])
tensor([1, 2, 3], dtype=torch.int32)
tensor([1, 2, 3], dtype=torch.int32)
tensor([1, 2, 3], dtype=torch.int32)

All of the options (o1, o2, o3, o4) appear to have produced the same tensor except for the first one. The first option (o1) has dots after the numbers, indicating that the numbers are floats, while the next three options have a dtype of int32.

# Python code example of what we mean

> type(2.)
float

> type(2)
int
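We can confirm this by checking the dtype attribute directly (note that the int32 default here comes from the numpy.ndarray we created above; on some platforms NumPy's default integer type is int64 instead):

> print(o1.dtype)
> print(o2.dtype)
torch.float32
torch.int32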

In the next post, we will look more deeply at this difference as well as a few other important differences that are lurking behind the scenes.

The discussion in the next post will allow us to see which of these options is best for creating tensors. For now, let's see some of the creation options available for creating tensors from scratch without having any data beforehand.

Creation options without data

Here are some other creation options that are available.

We have the torch.eye() function, which returns a 2-D tensor with ones on the diagonal and zeros elsewhere. The name eye() is connected to the idea of an identity matrix, which is a square matrix with ones on the main diagonal and zeros everywhere else.

> print(torch.eye(2))
tensor([
    [1., 0.],
    [0., 1.]
])

We have the torch.zeros() function that creates a tensor of zeros with the specified shape.

> print(torch.zeros([2,2]))
tensor([
    [0., 0.],
    [0., 0.]
])

Similarly, we have the torch.ones() function that creates a tensor of ones.

> print(torch.ones([2,2]))
tensor([
    [1., 1.],
    [1., 1.]
])

We also have the torch.rand() function that creates a tensor with the specified shape whose values are drawn at random from the uniform distribution on the interval [0, 1).

> print(torch.rand([2,2]))
tensor([
    [0.0465, 0.4557],
    [0.6596, 0.0941]
])
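Note that these values will be different every time the code runs. If we need reproducible random values, we can seed PyTorch's random number generator before calling torch.rand():

> torch.manual_seed(0)   # seed the RNG so the same values are produced on every run
> print(torch.rand([2,2]))

Running this snippet repeatedly now prints the same tensor each time.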

This is a small subset of the available creation functions that don't require data. Check the PyTorch documentation for the full list.
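For reference, here is a quick sketch of a few more of them:

> print(torch.full([2,2], 7.))
tensor([
    [7., 7.],
    [7., 7.]
])

> print(torch.arange(4))
tensor([0, 1, 2, 3])

> print(torch.linspace(0, 1, 5))
tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])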

I hope you now have a good understanding of how we can use PyTorch to create tensors from data, as well as with the built-in functions that don't require data. This task is a breeze if we are using numpy.ndarrays, so congratulations if you are already familiar with NumPy.

In the next post, we will look a little more deeply at the creation options that require data, and we'll discover the differences between these options as well as see which options work best. See you in the next one!
