Introduction to Functions and Linear Transformations

Fatima Jannet
5 min read

Intro

A function is a mathematical relationship that associates each element of one set (called the domain) with exactly one element of another set (called the codomain). In simpler terms, a function maps every input to a single, specific output.

Notation: A function f mapping elements from a set X (the domain) to a set Y (the codomain) is denoted by f: X → Y. If x is an element of X, then f(x) is the corresponding element in Y.

Example: f(x) = 2x + 3 maps each real number x to the real number 2x + 3.
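In Python terms, a quick throwaway sketch of the same idea:

```python
# The function f(x) = 2x + 3: each input maps to exactly one output.
def f(x):
    return 2 * x + 3

print(f(0), f(1), f(-2))  # 3 5 -1
```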

Vector Transformation

Defn: A vector transformation is an operation that maps vectors from one space to another, often changing their magnitude, direction, or both. These transformations are typically described using matrices and are fundamental in various fields, including physics, engineering, computer graphics, and data science.

f(x, y, z) = (x+y, 2z)

f([1, 2, 3]) = [1 + 2, 2 × 3] = [3, 6]; have you noticed how it transformed from 3D to 2D? This is essentially how such a function works: (3, 6) represents (1, 2, 3) on the x and y axes. It's difficult to explain without visuals, which I will add soon, but for now, please bear with the description.

(1, 2, 3) ∈ R³ ↦ (3, 6) ∈ R². This mapping of vectors from one space to another is called a vector transformation: T: R³ → R².
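Here's a minimal NumPy sketch of this exact map; the matrix A below is one way to represent T(x, y, z) = (x + y, 2z):

```python
import numpy as np

# T: R^3 -> R^2, T(x, y, z) = (x + y, 2z), written as a matrix A
# so that T(v) = A @ v.
A = np.array([[1, 1, 0],
              [0, 0, 2]])

v = np.array([1, 2, 3])
print(A @ v)  # [3 6] -- the 3D vector (1, 2, 3) lands at (3, 6) in 2D
```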

Vector transformations are heavily used in the gaming industry.

Use cases:

If you want to change the magnitude of a vector while keeping its direction the same, you probably need scaling, because scaling is a transformation that changes a vector's magnitude while preserving its direction.

v′ = 2v = 2[2, 3] = [4, 6]. If you plot this, you'll see the vector has been scaled while keeping the same direction (see the sketch after this list). In data science, scaling is used in two ways:

  1. Data normalization

  2. Computer graphics, to resize objects (think Paint → image → resize)
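Here's the scaling example above as a minimal NumPy sketch:

```python
import numpy as np

v = np.array([2, 3])
v_scaled = 2 * v  # scaling: same direction, twice the magnitude
print(v_scaled)   # [4 6]

# The direction is unchanged: the normalized vectors match.
print(v / np.linalg.norm(v), v_scaled / np.linalg.norm(v_scaled))
```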

Then we have rotation: a transformation that turns a vector around the origin. It is used in deep learning for image processing.

Reflection: a transformation that flips vectors over a specified axis or plane.

Then we have shearing and mirror images; try googling them, and I hope you'll be able to understand.
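A quick NumPy sketch of these transformations in 2D (the matrices are the standard textbook forms; the vector v is made up):

```python
import numpy as np

theta = np.pi / 2  # rotate 90 degrees counterclockwise

rotation   = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
reflection = np.array([[-1, 0],  # flip across the y-axis
                       [ 0, 1]])
shear      = np.array([[1, 1],   # horizontal shear
                       [0, 1]])

v = np.array([2, 3])
print(rotation @ v)    # ~[-3, 2]
print(reflection @ v)  # [-2, 3]
print(shear @ v)       # [5, 3]
```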

Linear Transformation

A linear transformation is a function between two vector spaces that preserves the operations of vector addition and scalar multiplication. This means that if T is a linear transformation from a vector space V to a vector space W, then for all vectors u, v ∈ V and any scalar c:

  1. Additivity: T(u + v) = T(u) + T(v)

  2. Homogeneity: T(cu) = cT(u)

Example: Reflection

The reflection transformation T across the y-axis maps a vector x = [x, y] to T(x) = [-x, y] (this is the transformation we'll check).

  1. Checking additivity:

    Requirement: T(u + v) = T(u) + T(v). For u = [u₁, u₂] and v = [v₁, v₂], T(u + v) = [-(u₁ + v₁), u₂ + v₂] = [-u₁, u₂] + [-v₁, v₂] = T(u) + T(v). Pass (will add the visualization later).

  2. Checking homogeneity:

    Requirement: T(cu) = cT(u). T(cu) = [-cu₁, cu₂] = c[-u₁, u₂] = cT(u). Pass (will add the visualization later).
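Both checks are easy to reproduce numerically; a minimal NumPy sketch (u, v, and c are arbitrary):

```python
import numpy as np

def T(v):
    """Reflection across the y-axis: [x, y] -> [-x, y]."""
    return np.array([-v[0], v[1]])

u = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])
c = 4.0

print(np.allclose(T(u + v), T(u) + T(v)))  # True: additivity holds
print(np.allclose(T(c * u), c * T(u)))     # True: homogeneity holds
```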

Why linear transformation?

Linear transformations are important in data science for several reasons. They offer a way to handle and analyze data, which is key for data processing, building models, and understanding results.

  1. Dimensionality Reduction

    Principal Component Analysis (PCA) is a common method to reduce the number of dimensions in datasets while keeping as much variation as possible. It does this by finding perpendicular axes (principal components) and projecting the data onto these axes. Changing data points from the original space to the new space defined by the principal components is a linear transformation (sketched in code after this list). This helps with:

    • Lowering computational costs.

    • Reducing problems related to high dimensions.

    • Visualizing data with many dimensions.
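    To make the "linear transformation" part concrete, here's a bare-bones PCA sketch in NumPy (scikit-learn's PCA wraps the same idea); the dataset X is made up for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))    # made-up dataset: 100 samples, 3 features
    X_centered = X - X.mean(axis=0)

    # The principal components are the right singular vectors of the centered data.
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

    k = 2
    X_reduced = X_centered @ Vt[:k].T  # the linear transformation: project onto the top-k axes
    print(X_reduced.shape)             # (100, 2)
    ```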

  2. Feature Engineering

    Linear transformations can create new features from existing ones. For example, you can capture interactions between features using linear combinations, which can then be used in machine learning models to improve predictions. Techniques like linear regression, ridge regression, and linear discriminant analysis (LDA) use linear transformations to find useful feature representations.

  3. Data Preprocessing

    Normalization and Standardization: Linear transformations are used to adjust data for machine learning models. Standardization changes data to have a mean of zero and a standard deviation of one, while normalization scales data to a specific range (e.g., 0 to 1). These transformations are important to ensure all features have equal impact on the model, especially in algorithms like gradient descent.
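    A quick sketch of both in plain NumPy (the feature column x is made up):

    ```python
    import numpy as np

    x = np.array([10.0, 20.0, 30.0, 40.0])  # a made-up feature column

    standardized = (x - x.mean()) / x.std()           # mean 0, std 1
    normalized = (x - x.min()) / (x.max() - x.min())  # rescaled to [0, 1]

    print(standardized.mean().round(10), standardized.std().round(10))  # 0.0 1.0
    print(normalized)  # [0. 0.333... 0.666... 1.]
    ```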

  4. Neural Networks

    In neural networks, especially deep learning models, the layers consist of linear transformations followed by non-linear activation functions. The weights in a neural network can be seen as a series of linear transformations that map input data to intermediate layers and, eventually, to the output layer. This linear aspect is crucial for the network's ability to learn complex patterns in data.
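    A single layer, sketched in NumPy (the sizes here are arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    x = rng.normal(size=3)       # input vector
    W = rng.normal(size=(4, 3))  # weights: a linear transformation R^3 -> R^4
    b = np.zeros(4)              # bias

    z = W @ x + b                # the linear (affine) part of the layer
    a = np.maximum(z, 0)         # ReLU: the non-linear activation that follows
    print(a.shape)               # (4,)
    ```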

  5. Image and Signal Processing

    In image and signal processing, linear transformations are used extensively. For example, convolutional filters in image processing can be seen as linear transformations applied to local regions of an image, and Fourier transforms, which decompose signals into sinusoidal components, are linear transformations that convert time-domain signals into frequency-domain representations.
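    The linearity of the Fourier transform is easy to check numerically with NumPy's FFT (the signals f and g are made up):

    ```python
    import numpy as np

    t = np.linspace(0, 1, 8, endpoint=False)
    f = np.sin(2 * np.pi * t)      # a time-domain signal
    g = np.cos(2 * np.pi * 2 * t)  # another signal

    # Linearity: FFT(2f + 3g) == 2*FFT(f) + 3*FFT(g).
    lhs = np.fft.fft(2 * f + 3 * g)
    rhs = 2 * np.fft.fft(f) + 3 * np.fft.fft(g)
    print(np.allclose(lhs, rhs))   # True
    ```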

  6. Understanding and Interpretation

    Linear transformations simplify complex relationships between variables into linear relationships, which are easier to understand and interpret. For example, linear regression provides a clear model of how each feature affects the target variable through linear coefficients, making it easier to explain to stakeholders.

  7. Optimization and Solving Systems of Equations

    Linear transformations are used to solve systems of linear equations, which is a common problem in data analysis and optimization. Techniques like matrix inversion and the use of pseudo-inverses are essential for finding solutions in linear regression and other linear models.
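    A minimal sketch with NumPy (the system and data points are made up):

    ```python
    import numpy as np

    # A small system: 2x + y = 5, x - y = 1.
    A = np.array([[2.0, 1.0],
                  [1.0, -1.0]])
    b = np.array([5.0, 1.0])
    print(np.linalg.solve(A, b))  # [2. 1.]

    # Over-determined systems (as in linear regression) call for least
    # squares, which relies on the pseudo-inverse under the hood.
    X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
    y = np.array([1.0, 3.0, 5.0])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(coef)                   # [1. 2.] -> intercept 1, slope 2
    ```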

  8. Theoretical Foundations

    Many advanced machine learning algorithms and statistical techniques have linear algebra and linear transformations at their core. Understanding these fundamentals is crucial for grasping more complex topics like support vector machines.

  9. Robustness and Stability

    Linear transformations help in understanding the stability and robustness of data and models. For instance, transformations can be used to detect collinearity among features, which can destabilize models like linear regression.

  10. Geometric Interpretation

    Linear transformations give a geometric view of data. They can represent rotations, reflections, and scalings (translations need the slightly more general affine transformations), which are easy to understand and see. This geometric view is helpful in many areas, like data visualization and understanding how datasets are structured.

Linear Transformation Visualization

pass
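While the proper visuals are pending, here's a rough matplotlib sketch you can run; the matrix A is an arbitrary shear-plus-scale map picked just for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

A = np.array([[1.0, 0.5],  # an arbitrary shear-plus-scale matrix
              [0.0, 1.5]])

# A grid of points, each transformed by A.
xs, ys = np.meshgrid(np.linspace(-2, 2, 11), np.linspace(-2, 2, 11))
points = np.vstack([xs.ravel(), ys.ravel()])
transformed = A @ points

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
ax1.scatter(points[0], points[1], s=10)
ax1.set_title("Before")
ax2.scatter(transformed[0], transformed[1], s=10, color="tomato")
ax2.set_title("After applying A")
plt.show()
```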

Vector length and unit vectors

The magnitude of a vector is the same thing as its length: ‖v‖ = √(v₁² + v₂² + … + vₙ²).

A unit vector is a vector with a length of 1. Any nonzero vector can be turned into one by dividing it by its length: v̂ = v / ‖v‖.

pass
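In the meantime, a tiny NumPy sketch of both ideas:

```python
import numpy as np

v = np.array([3.0, 4.0])
length = np.linalg.norm(v)  # magnitude (length) of v: 5.0

v_hat = v / length          # unit vector in the same direction
print(length, v_hat, np.linalg.norm(v_hat))  # 5.0 [0.6 0.8] 1.0
```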

Projection

pass
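In the meantime, the standard formula for projecting a vector a onto a vector b is proj_b(a) = ((a·b) / (b·b)) b; here's a minimal NumPy sketch of it:

```python
import numpy as np

def project(a, b):
    """Project vector a onto vector b: proj_b(a) = ((a.b) / (b.b)) * b."""
    return (np.dot(a, b) / np.dot(b, b)) * b

a = np.array([3.0, 4.0])
b = np.array([1.0, 0.0])
print(project(a, b))  # [3. 0.] -- the shadow of a on the x-axis
```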
