PyTorch Tensors

Akhil Soni

A cell is the fundamental unit of life. In a similar way, a tensor is the fundamental building block of PyTorch. Tensors are a specialized data structure, very similar to arrays and matrices. In PyTorch, we use tensors to encode the inputs and outputs of a model, as well as the model’s parameters.

One of the important features offered by tensors is that they can keep track of all the operations performed on them, which is what allows PyTorch to compute gradients and optimize a model; this is done through the autograd functionality of a tensor.
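
As a quick illustration (a minimal sketch; autograd is not covered further in this article), a tensor created with requires_grad=True records the operations applied to it, and calling backward() computes gradients with respect to it:

import torch

# requires_grad=True tells autograd to record operations on this tensor
w = torch.tensor([2.0, 3.0], requires_grad=True)
loss = (w ** 2).sum()   # a simple scalar function of w
loss.backward()         # compute d(loss)/dw
print(w.grad)
# tensor([4., 6.]), i.e. 2 * w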

In this article, we will look at the different operations that can be performed on tensors and the different functions that can be applied to them.

Let us start by checking whether a given object is a tensor. The torch.is_tensor() function checks whether an object is a PyTorch tensor, while torch.is_storage() checks whether the object is a PyTorch storage object.

import torch

x = [1, 2, 3, 4, 5]
print(torch.is_tensor(x))
#False
print(torch.is_storage(x))
#False
y = torch.randn(1, 2, 3, 4, 5)
print(torch.is_tensor(y))
#True
print(torch.is_storage(y))
#False
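
Note that is_storage() returns False even for a tensor, because it only returns True for a storage object. As a quick sketch (assuming a recent PyTorch version; older releases expose y.storage() instead of y.untyped_storage()), passing the tensor's underlying storage returns True:

s = y.untyped_storage()   # the raw storage backing the tensor
print(torch.is_storage(s))
#True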

torch.numel() — It is used to count the number of elements in a tensor.

torch.zeros() — It is used to create a tensor of zeros with the given size.

torch.eye() — Like the NumPy function of the same name, this creates a 2-D tensor with ones on the diagonal and zeros everywhere else.

torch.linspace() — It creates a 1-D tensor of evenly spaced values between a given start and end point, where the number of values is controlled by the steps argument.

torch.logspace() — Like linear spacing, logarithmic spacing can also be performed; the values are evenly spaced on a log scale between the given start and end exponents.

torch.rand() — Random number generation is a common operation when working with data; random numbers can be drawn from a statistical distribution, from between any two values, or from a predefined distribution. This function generates uniform random values between 0 and 1 for a tensor of the given size or shape.

torch.randn() — Random numbers can also be generated from a standard normal distribution with mean 0 and standard deviation 1.

torch.randperm() — This returns a random permutation of the integers from 0 to n - 1; the range has to be defined first by passing n.

torch.arange() — It creates a 1-D tensor starting from a given start value up to (but not including) the end value, with the values placed at equal distances given by the step size.

import torch
torch.zeros(4, 4)
'''
tensor([[0., 0., 0., 0.],
        [0., 0., 0., 0.],
        [0., 0., 0., 0.],
        [0., 0., 0., 0.]])
'''
torch.numel(torch.zeros(4, 4))
# 16
torch.eye(3, 4)
'''
tensor([[1., 0., 0., 0.],
        [0., 1., 0., 0.],
        [0., 0., 1., 0.]])
'''
torch.linspace(2, 10, steps=25)
'''
tensor([ 2.0000,  2.3333,  2.6667,  3.0000,  3.3333,  3.6667,  4.0000,  4.3333,
         4.6667,  5.0000,  5.3333,  5.6667,  6.0000,  6.3333,  6.6667,  7.0000,
         7.3333,  7.6667,  8.0000,  8.3333,  8.6667,  9.0000,  9.3333,  9.6667,
        10.0000])
'''
torch.logspace(-10, 10, steps=15)
'''
tensor([1.0000e-10, 2.6827e-09, 7.1969e-08, 1.9307e-06, 5.1795e-05, 1.3895e-03,
        3.7276e-02, 1.0000e+00, 2.6827e+01, 7.1969e+02, 1.9307e+04, 5.1795e+05,
        1.3895e+07, 3.7276e+08, 1.0000e+10])
'''
print(torch.rand(10))
print(torch.rand(4, 5))
print(torch.randn(10))
print(torch.randn(4, 5))
'''
tensor([0.8927, 0.6671, 0.5811, 0.1508, 0.3388, 0.1991, 0.3674, 0.7992, 0.1951,
        0.1456])
tensor([[0.6308, 0.7874, 0.0135, 0.7786, 0.0379],
        [0.2510, 0.6107, 0.6645, 0.6441, 0.6244],
        [0.1283, 0.3969, 0.6396, 0.3165, 0.1998],
        [0.0510, 0.6915, 0.3689, 0.7104, 0.5749]])
tensor([-0.0961,  1.3530, -0.6086,  1.8604, -0.0209,  0.6059,  0.5776, -1.5042,
        -0.7254,  1.7684])
tensor([[ 1.1595,  2.5918, -0.6773,  0.4769, -0.2748],
        [ 1.0266, -2.6797,  0.0799, -0.1842, -0.3189],
        [-0.1976, -0.5190,  0.5714, -0.0902,  0.3909],
        [-0.3451,  0.1769,  0.2894, -1.2471,  0.1037]])
'''
print(torch.randperm(10))
print(torch.arange(10, 40, 2))
print(torch.arange(10, 40))
'''
tensor([5, 1, 8, 4, 6, 7, 0, 9, 3, 2])
tensor([10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38])
tensor([10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27,
        28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39])
'''

torch.argmin() — It returns the index of the minimum-valued element in a tensor. A dimension can also be provided, along which the index of the minimum element is found.

torch.argmax() — It returns the index of the maximum-valued element in a tensor. A dimension can also be provided, along which the index of the maximum element is found.

torch.cat() — This function is used to concatenate tensors along the given dimension.

torch.chunk() — A tensor can be split into a given number of smaller tensors. The chunks can be created along any dimension.

d = torch.randn(4, 5)
print(torch.argmin(d, dim=1))
print(torch.argmax(d, dim=1))
'''
tensor([2, 2, 1, 1])
tensor([3, 0, 0, 3])
'''
x = torch.randn(4, 5)
print(torch.cat((x, x)))
print(torch.cat((x, x, x), 1))
print(torch.cat((x, x), 0))
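# With x of shape (4, 5): concatenating along dim 0 stacks rows, giving shape (8, 5),
# while concatenating along dim 1 stacks columns, giving shape (4, 15).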
a = torch.randn(4, 4)
print(a)
torch.chunk(a, 2, 0)
'''
tensor([[-0.6380, -0.2203,  0.1790,  0.3127],
        [-0.3274, -0.3027,  0.5537,  0.2763],
        [ 1.2585,  0.1529, -0.2521,  0.1731],
        [ 2.2597, -0.1669, -0.4358, -0.7637]])
(tensor([[-0.6380, -0.2203,  0.1790,  0.3127],
         [-0.3274, -0.3027,  0.5537,  0.2763]]),
 tensor([[ 1.2585,  0.1529, -0.2521,  0.1731],
         [ 2.2597, -0.1669, -0.4358, -0.7637]]))
'''

torch.index_select() — Relevant values can be fetched from a tensor along a given dimension by passing a LongTensor of indices to the index_select function.

torch.nonzero() — It returns the indices of the non-zero elements, which is a common way to check for non-missing or non-zero values in a tensor.

torch.split() — The split function splits a long tensor into smaller tensors of a given chunk size.

import torch
a = torch.randn(4, 4)
indices = torch.LongTensor([0, 2])
torch.index_select(a, 0, indices)
'''
tensor([[-0.6380, -0.2203,  0.1790,  0.3127],
        [ 1.2585,  0.1529, -0.2521,  0.1731]])
'''

torch.nonzero(torch.tensor([10, 00, 23, 0, 0.0]))
'''tensor([[0],
        [2]])
'''
torch.split(torch.tensor([12, 24, 34, 25, 42, 56, 100]), 2)
# (tensor([12, 24]), tensor([34, 25]), tensor([42, 56]), tensor([100]))
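
torch.split() can also take a list of chunk sizes instead of a single size; the sizes must add up to the length along the chosen dimension. A quick sketch of an uneven split:

torch.split(torch.tensor([12, 24, 34, 25, 42, 56, 100]), [3, 4])
# (tensor([12, 24, 34]), tensor([ 25,  42,  56, 100]))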

.t() — It is a tensor operation that transposes a 2-D tensor.

torch.unbind() — The unbind function removes a dimension from a tensor and returns a tuple of all the slices along that dimension.

torch.add() — This is used to add a scalar value to all the elements of a tensor.

torch.mul() — This is used to multiply all the elements of a tensor with a scalar value.

torch.ceil() — It is used to find the ceil of all values in a tensor.

torch.floor() — It is used to find the floor of all values in a tensor.

torch.clamp() — The values of a tensor can be limited to a certain range by passing minimum and maximum arguments to the clamp function.

torch.exp() — It finds the exponential of all the values in a tensor.

torch.log() — It finds the natural logarithm of all the values in a tensor; negative values produce nan, as seen in the output below.

torch.pow() — It is used to compute the power of each element in a tensor.

print(x)
x.t()
'''
tensor([[ 1.4751e+00,  1.3738e-01,  2.7290e+00, -6.9853e-01,  1.6076e-02],
        [ 7.6424e-01,  1.9375e-01,  1.5251e-01, -3.5000e-01,  6.5908e-01],
        [-9.9262e-01, -2.5521e-01,  5.0992e-01,  2.5330e-03, -2.0036e-01],
        [-8.6793e-01, -9.2227e-01, -1.4782e-01,  5.1307e-01, -6.1206e-01]])
tensor([[ 1.4751e+00,  7.6424e-01, -9.9262e-01, -8.6793e-01],
        [ 1.3738e-01,  1.9375e-01, -2.5521e-01, -9.2227e-01],
        [ 2.7290e+00,  1.5251e-01,  5.0992e-01, -1.4782e-01],
        [-6.9853e-01, -3.5000e-01,  2.5330e-03,  5.1307e-01],
        [ 1.6076e-02,  6.5908e-01, -2.0036e-01, -6.1206e-01]])
'''
p = torch.rand(4, 4)
torch.unbind(p, 1)
'''
(tensor([0.0295, 0.7131, 0.9537, 0.9237]),
 tensor([0.2399, 0.2305, 0.8583, 0.1983]),
 tensor([0.5908, 0.2627, 0.9901, 0.8207]),
 tensor([0.0721, 0.1146, 0.0458, 0.5727]))
'''
print(torch.add(p, 20))
print(torch.mul(p, 2))
'''
tensor([[20.0295, 20.2399, 20.5908, 20.0721],
        [20.7131, 20.2305, 20.2627, 20.1146],
        [20.9537, 20.8583, 20.9901, 20.0458],
        [20.9237, 20.1983, 20.8207, 20.5727]])
tensor([[0.0590, 0.4799, 1.1817, 0.1441],
        [1.4261, 0.4610, 0.5255, 0.2293],
        [1.9075, 1.7167, 1.9802, 0.0917],
        [1.8475, 0.3965, 1.6415, 1.1453]])
'''
print(torch.ceil(torch.randn(5, 5)))
print(torch.floor(torch.randn(5, 5)))
'''
tensor([[ 1.,  1.,  2., -0.,  1.],
        [-1.,  1.,  1.,  1., -0.],
        [-0., -0.,  2., -0.,  1.],
        [ 2.,  1.,  1.,  1.,  3.],
        [ 1., -0.,  2.,  2.,  1.]])
tensor([[-2.,  0., -1.,  1.,  0.],
        [ 2.,  0., -1.,  2., -1.],
        [-1., -1.,  1.,  0.,  0.],
        [ 0.,  1., -1., -1., -1.],
        [-1.,  0., -1., -2.,  1.]])
'''
torch.clamp(torch.floor(torch.randn(5, 5)), min=-0.3, max=0.5)
'''
tensor([[ 0.0000, -0.3000, -0.3000,  0.0000,  0.0000],
        [ 0.0000,  0.0000, -0.3000, -0.3000, -0.3000],
        [ 0.5000,  0.0000,  0.0000,  0.0000,  0.0000],
        [ 0.5000,  0.5000, -0.3000,  0.0000,  0.5000],
        [ 0.0000, -0.3000,  0.5000, -0.3000, -0.3000]])
'''
print(torch.exp(x))
print(torch.log(x))
print(torch.pow(x, 2))
'''
tensor([[ 4.3716,  1.1473, 15.3173,  0.4973,  1.0162],
        [ 2.1474,  1.2138,  1.1647,  0.7047,  1.9330],
        [ 0.3706,  0.7748,  1.6651,  1.0025,  0.8184],
        [ 0.4198,  0.3976,  0.8626,  1.6704,  0.5422]])
tensor([[ 0.3887, -1.9850,  1.0039,     nan, -4.1304],
        [-0.2689, -1.6412, -1.8806,     nan, -0.4169],
        [    nan,     nan, -0.6735, -5.9784,     nan],
        [    nan,     nan,     nan, -0.6673,     nan]])
tensor([[2.1760e+00, 1.8874e-02, 7.4474e+00, 4.8795e-01, 2.5843e-04],
        [5.8406e-01, 3.7538e-02, 2.3258e-02, 1.2250e-01, 4.3439e-01],
        [9.8530e-01, 6.5134e-02, 2.6001e-01, 6.4160e-06, 4.0143e-02],
        [7.5330e-01, 8.5058e-01, 2.1851e-02, 2.6324e-01, 3.7462e-01]])
'''

torch.from_numpy() — This is used to create a tensor from a given NumPy array. The resulting tensor shares its memory with the array.

import numpy as np
x = [10, 20, 30, 40, 50]
x1 = np.array(x)
y1 = torch.from_numpy(x1)
'''
array([10, 20, 30, 40, 50])
tensor([10, 20, 30, 40, 50])
'''
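
Because the tensor returned by torch.from_numpy() shares memory with the NumPy array, modifying the array in place also changes the tensor:

x1[0] = 99   # modify the NumPy array in place
print(y1)
# tensor([99, 20, 30, 40, 50])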

There are many more functions and operations on tensors; these are some of the important and interesting ones that I found. Tensors have a lot of applications that one can explore.

That’t it in this tutorial.

Thank you!

— Akhil Soni


Written by

Akhil Soni

I am an ML enthusiast, passionate about development, and interested in programming and problem solving.