Day 8 - Tensors
A tensor is a multi-dimensional array of data. Just as a scalar is a single number, a vector is a one-dimensional array of numbers, and a matrix is a two-dimensional array, a tensor extends this idea to any number of dimensions. Tensors are used to represent both data and parameters in machine learning models.
Types of Tensors
Scalar (0-D Tensor)
Description: A single number.
Example:
7, 3.14
Usage: Represents constants or single data points in computations.
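A minimal sketch, assuming PyTorch is installed (any tensor library would look similar): a 0-D tensor wraps a single value and reports zero dimensions.

```python
import torch

# A 0-D tensor holds a single number; ndim reports its number of dimensions.
scalar = torch.tensor(7)
print(scalar.ndim)   # 0
print(scalar.shape)  # torch.Size([])
```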
Vector (1-D Tensor)
Description: A one-dimensional array of numbers.
Example:
[1, 2, 3]
Usage: Used to represent data features or model parameters in a list.
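Continuing the PyTorch sketch, a 1-D tensor is just an ordered list of numbers, such as a feature vector.

```python
import torch

# A 1-D tensor: one axis, three elements.
vector = torch.tensor([1, 2, 3])
print(vector.ndim)   # 1
print(vector.shape)  # torch.Size([3])
```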
Matrix (2-D Tensor)
Description: A two-dimensional array of numbers.
Example:
[[1, 2], [3, 4]]
Usage: Commonly used to represent datasets, transformation matrices, and more.
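In the same PyTorch sketch, a 2-D tensor has rows and columns, like a small dataset table or a layer's weight matrix.

```python
import torch

# A 2-D tensor: two rows, two columns.
matrix = torch.tensor([[1, 2], [3, 4]])
print(matrix.ndim)   # 2
print(matrix.shape)  # torch.Size([2, 2])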
3-D Tensor
Description: A three-dimensional array of numbers.
Example:
[[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
Usage: Often used in image processing where each image has multiple color channels (RGB).
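A 3-D tensor stacks matrices along a third axis; for a single image this could be (channels, height, width). Sketch, again assuming PyTorch:

```python
import torch

# A 3-D tensor built from the example above: shape (2, 2, 2).
tensor_3d = torch.tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
print(tensor_3d.ndim)   # 3
print(tensor_3d.shape)  # torch.Size([2, 2, 2])
```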
4-D Tensor and Higher
Description: An array with four or more dimensions.
Example: In deep learning, a 4-D tensor might represent a batch of images, where dimensions correspond to batch size, image height, image width, and number of channels.
Usage: Utilized in complex models, such as those handling batches of multi-dimensional data.
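As a hedged illustration of the batch-of-images case (the specific shape and channel ordering below are an assumption, following PyTorch's common (batch, channels, height, width) layout):

```python
import torch

# A 4-D tensor representing a batch of 32 RGB images of size 64x64,
# filled with random values purely for illustration.
batch = torch.rand(32, 3, 64, 64)
print(batch.ndim)   # 4
print(batch.shape)  # torch.Size([32, 3, 64, 64])
```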
Why Are Tensors Essential in Machine Learning?
Data Representation: Provides a uniform structure for inputs, labels, and model outputs, which simplifies data processing and learning.
Efficient Computations: Enables fast, vectorized operations that can be accelerated on GPUs (see the sketch after this list).
Model Building: Stores model parameters such as weights and biases, and defines the shapes that flow between layers.
Versatility: Handles diverse data types and complex models.
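A minimal sketch of the "efficient computations" point, assuming PyTorch: the same operation runs on CPU or GPU, and moving the data with a device argument is all it takes to get acceleration when a GPU is available.

```python
import torch

# Pick a GPU if one is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.rand(1000, 1000, device=device)
b = torch.rand(1000, 1000, device=device)

# Matrix multiplication is vectorized and GPU-accelerated when available.
c = a @ b
print(c.device)
```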