Chapter 3 Linear Algebra

Antahkarana: Transformation concerns in a most peculiar manner the three aspects of mind upon the mental plane:

  • The lower mind
  • The son of mind, the soul
  • The higher mind.

“Let Transfiguration follow Transformation and may Transmutation disappear. Let the O.M. be heard right at the centre of the group, proclaiming God is All.” -Rules for Group Initiation


3.1 Conceptual Definition of “Transformation”

  • Transformation: In mathematics, a transformation refers to a function or mapping that takes elements from one set and assigns them to elements in another set, often within the same space. Such a transformation \(T\) can be represented as: \[ T: A \to B \] where \(T(x) = y\) for \(x \in A\) and \(y \in B\). The term “transformation” comes from the idea of “changing form” or “transfiguring” something. In the context of vector spaces, a transformation changes vectors from one form to another, often altering their direction, length, or position while adhering to specific rules.

  • “Trans” of the “Form”: The prefix “trans-” means “across,” “beyond,” or “through.” When combined with “form,” it suggests changing or moving across forms. In linear algebra, this means altering the form of vectors through operations like rotation, scaling, or translation, while maintaining certain properties.
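As a minimal illustration of a transformation as a mapping, here is a hypothetical example: a function \(T\) that rotates every point of the plane 90 degrees counterclockwise about the origin.

```python
# A transformation T: R^2 -> R^2 that rotates a point 90 degrees
# counterclockwise about the origin: T(x, y) = (-y, x).
def T(point):
    x, y = point
    return (-y, x)

print(T((1, 0)))  # the x-axis unit vector maps to (0, 1)
```

Every input point is assigned exactly one output point, which is all the definition of a transformation requires.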

3.2 Transformation of Space

A transformation in space refers to a mathematical operation that changes the position, orientation, or size of objects within a given space. These transformations can be applied to geometric figures, vectors, or entire coordinate systems, and they are fundamental in fields like geometry, computer graphics, and physics. Here’s a closer look at different types of transformations:

3.2.1 Types of Transformations

  • Translation: shifts every point of an object or space by the same distance in a given direction. It involves adding a constant vector to every point, effectively moving the object without altering its shape or orientation.
  • Rotation: involves turning an object around a fixed point (in 2D) or a fixed axis (in 3D). The object maintains its shape and size but changes its orientation. Rotations are defined by an angle and a center or axis of rotation.
  • Scaling: changes the size of an object. Uniform scaling increases or decreases the size of an object by the same factor in all directions, while non-uniform scaling changes the size by different factors along different axes.
  • Reflection: creates a mirror image of an object across a specified line (in 2D) or plane (in 3D). This transformation changes the orientation of the object but preserves its shape and size.
  • Shearing: distorts the shape of an object by shifting its points parallel to a given line or plane. This transformation changes the shape but not the area or volume of the object.
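Most of these transformations can be written as 2×2 matrices acting on column vectors; the sketch below uses NumPy with an illustrative angle and scale factors. Translation is the exception: it adds a constant vector rather than multiplying by a matrix, so it is affine rather than linear in these coordinates.

```python
import numpy as np

theta = np.pi / 2                       # 90-degree rotation
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
scaling    = np.array([[2.0, 0.0],      # stretch x by 2, shrink y by 0.5
                       [0.0, 0.5]])
reflection = np.array([[1.0,  0.0],     # mirror across the x-axis
                       [0.0, -1.0]])
shear      = np.array([[1.0, 1.0],      # shift x by the y-coordinate
                       [0.0, 1.0]])

p = np.array([1.0, 1.0])
print(rotation @ p)    # approximately [-1.  1.]
print(scaling @ p)     # [2.  0.5]
print(reflection @ p)  # [ 1. -1.]
print(shear @ p)       # [2. 1.]
```

Each matrix encodes one of the transformation types above, and applying it is a single matrix-vector multiplication.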

3.2.2 Applications

  • Computer Graphics: Transformations are used to manipulate images and models, allowing for animations, simulations, and rendering of 3D scenes.
  • Robotics: Transformations help in understanding and controlling the movement and orientation of robotic arms and other components.
  • Physics: Transformations describe changes in physical systems, such as rotations and translations of bodies in space.

In essence, transformations in space are operations that modify the position, orientation, or size of objects, and they are crucial for analyzing and manipulating spatial relationships in various scientific and engineering disciplines.

3.3 Linear Transformations and Their Properties

A transformation \(T\) is linear when it preserves the two operations that define a vector space:

  • Vector Addition: for any vectors \(\mathbf{u}\) and \(\mathbf{v}\), \(T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v})\).
  • Scalar Multiplication: for any vector \(\mathbf{v}\) and scalar \(c\), \(T(c\mathbf{v}) = cT(\mathbf{v})\).

In essence, linear transformations are defined by their ability to maintain the linear structure of vector spaces, specifically through vector addition and scalar multiplication.
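Both properties can be checked numerically for any matrix-defined map; a minimal NumPy sketch, with the matrix and vectors chosen at random:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # any matrix defines a linear map T(v) = A v
u = rng.standard_normal(3)
v = rng.standard_normal(3)
c = 2.5

# Vector addition is preserved: T(u + v) == T(u) + T(v)
assert np.allclose(A @ (u + v), A @ u + A @ v)
# Scalar multiplication is preserved: T(c v) == c T(v)
assert np.allclose(A @ (c * v), c * (A @ v))
print("both linearity properties hold")
```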

3.4 Linear Transformations and Matrices

A matrix represents a linear transformation because it inherently satisfies the two key properties that define linearity: additivity and scalar multiplication. Here’s a closer look at why matrices are tied to linear transformations:

3.4.1 Properties of Linear Transformations

  1. Additivity: A transformation \(T\) is linear if for any two vectors \(\mathbf{u}\) and \(\mathbf{v}\), the transformation satisfies:

    \[ T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v}) \]

Matrices naturally satisfy this property because matrix multiplication distributes over vector addition.

  2. Scalar Multiplication: A transformation \(T\) is linear if for any vector \(\mathbf{v}\) and scalar \(c\), it satisfies:

    \[ T(c \mathbf{v}) = c T(\mathbf{v}) \]

Matrices also satisfy this property because multiplying a vector by a scalar before applying the matrix is equivalent to scaling the result of the matrix-vector multiplication.

3.4.2 How Matrices Implement Linear Transformations

  • Matrix-Vector Multiplication: When a matrix \(A\) multiplies a vector \(\mathbf{v}\), the result is another vector. This operation can be seen as transforming the vector \(\mathbf{v}\) into a new vector \(A\mathbf{v}\).

  • Basis Transformation: Matrices can be thought of as transforming the basis of a vector space. For example, a 2x2 matrix can rotate, scale, or shear vectors in a 2D space, all of which are linear operations.

  • Preservation of Linear Structure: Because matrices adhere to the rules of linearity, they preserve the linear structure of vector spaces. This means that lines remain lines, planes remain planes, and so on, under the transformation.
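The “basis transformation” view can be made concrete: column \(j\) of a matrix is exactly the image of the \(j\)-th standard basis vector, so the matrix records where the basis goes. A small NumPy illustration, with arbitrary matrix values:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
e1 = np.array([1.0, 0.0])   # standard basis vectors of R^2
e2 = np.array([0.0, 1.0])

print(A @ e1)   # first column of A:  [2. 0.]
print(A @ e2)   # second column of A: [1. 3.]
```

Because every vector is a linear combination of basis vectors, knowing where the basis goes determines the whole transformation.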

Therefore, the structure of matrices inherently encodes these linear properties, making them perfect tools for representing linear transformations. This is why matrices are fundamental in linear algebra and are used extensively in fields that require linear modeling and transformations, such as computer graphics, physics, and data science.

3.5 Stretching and Shearing

Stretching and shearing are both types of linear transformations that can be applied to objects in a geometric space.

  • Stretching: This transformation involves scaling an object along one or more axes. For example, if you stretch an object along the x-axis, you multiply the x-coordinates of all points in the object by a certain factor, while the y-coordinates remain unchanged. This results in the object becoming longer or shorter along that axis, but it retains its overall shape.

  • Shearing: Shearing involves shifting one part of an object in a direction parallel to a coordinate axis, effectively slanting the shape. For instance, in a shear transformation along the x-axis, the x-coordinates of points are adjusted based on their y-coordinates, creating a slanted effect. This transformation changes the angles within the shape but preserves parallelism.

Both transformations are linear because they can be represented by matrices and involve operations that preserve the origin and straight lines. They are fundamental in computer graphics, physics, and various fields of engineering for manipulating shapes and objects.

Key Differences

  • Effect on Shape: Uniform stretching changes the size but not the shape; non-uniform stretching changes the proportions along the axes; shearing slants the shape, changing its angles while preserving its area.
  • Mathematical Representation: In linear algebra, stretching is represented by diagonal matrices with scaling factors, whereas shearing is represented by matrices with off-diagonal elements that introduce the slant.
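These matrix differences are easy to see directly; the NumPy sketch below (with illustrative factors) applies a diagonal stretching matrix and an off-diagonal shear matrix to the corners of the unit square.

```python
import numpy as np

stretch = np.array([[3.0, 0.0],     # diagonal entries: scale x by 3
                    [0.0, 1.0]])
shear   = np.array([[1.0, 0.5],     # off-diagonal entry: slant x by 0.5*y
                    [0.0, 1.0]])

square = np.array([[0, 1, 1, 0],    # corners of the unit square (as columns)
                   [0, 0, 1, 1]], dtype=float)
print(stretch @ square)  # a 3-by-1 rectangle: right angles preserved
print(shear @ square)    # a parallelogram: angles change, area does not
```

The shear matrix has determinant 1, which is why the slanted parallelogram still encloses the same area as the original square.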

Applications: Stretching is often used in scaling operations, such as resizing images, while shearing is used in graphics and animations to create effects like slanting or skewing.

In summary, stretching and shearing are distinct transformations with different effects on objects, with stretching focusing on size changes and shearing on shape distortion.

3.6 Nonlinear Transformations

  • Nonlinear transformations involve changes that do not preserve straight lines or proportional relationships. They can bend, curve, or warp the space in ways that linear transformations like shearing cannot.
  • Nonlinear transformations are described by nonlinear equations, which can include terms like squares, cubes, or other nonlinear functions. These transformations can map straight lines to curves and warp the geometry of the space in complex ways.

Why Shearing Can’t Represent Nonlinear Transformations

  • Lack of Curvature: Shearing cannot introduce curvature or bending into the transformation. It only shifts parts of the object linearly along an axis, which is insufficient for representing the complex, curved nature of nonlinear transformations.
  • Linear Matrix Representation: The matrix used to represent a shear transformation is linear, with constant coefficients that do not change with the input. Nonlinear transformations require variable coefficients or additional terms that depend on the input values.
  • Scope of Application: Shearing is limited to operations that maintain linearity, such as slanting or skewing, whereas nonlinear transformations are needed for tasks like warping images, modeling complex physical phenomena, or transforming data in neural networks.

In summary, shearing is limited to linear transformations due to its mathematical structure and properties, making it unsuitable for representing the more complex and varied changes involved in nonlinear transformations.

3.7 Tensors

Tensors are a generalization of matrices to higher dimensions and are particularly useful for representing and modeling complex, nonlinear relationships in various fields such as physics, computer vision, and machine learning.

3.7.1 Definition

Tensor: a multi-dimensional array of numerical values that can represent data or transformations in a space with more than two dimensions. The order (or rank) of a tensor indicates the number of dimensions it has:

  • Scalar: A single number, a tensor of rank 0.
  • Vector: A one-dimensional array, a tensor of rank 1.
  • Matrix: A two-dimensional array, a tensor of rank 2.
  • Higher-order Tensors: Arrays with three or more dimensions.

3.7.2 Properties

  • Tensors can be used to represent linear transformations in multi-dimensional spaces.
  • Their components change in a prescribed way under coordinate transformations, so the relationships they express are independent of the chosen coordinate system.

3.7.3 Applications

  • In physics, tensors are used to describe physical properties like stress, strain, and moment of inertia.
  • In machine learning, particularly in deep learning, tensors are fundamental in representing data and parameters in neural networks.

Other Mathematical Tools for Nonlinear Transformations

  • Nonlinear Functions: functions that are not linear, such as polynomials, exponentials, and trigonometric functions, are basic tools for representing nonlinear transformations.
  • Differential Equations: Nonlinear differential equations are used to model complex systems where the change in a system is not proportional to the current state.
  • Manifolds: A manifold is a mathematical space that locally resembles Euclidean space but can have a more complex global structure. Manifolds are used to study nonlinear spaces and transformations.
  • Lie Groups and Lie Algebras: These are algebraic structures that describe continuous symmetry and are used in the study of nonlinear transformations, particularly in physics.
  • Neural Networks: In machine learning, neural networks are powerful tools for modeling nonlinear relationships. They consist of layers of interconnected nodes (neurons) that can approximate complex functions.
  • Fourier and Wavelet Transforms: These transforms are used to analyze functions or signals in terms of frequency components, providing a way to handle nonlinearities in signal processing.
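As one concrete example from this list, a Fourier transform decomposes a signal into its frequency components; the NumPy sketch below uses a signal with illustrative frequencies of 5 Hz and 12 Hz.

```python
import numpy as np

# Sample a signal containing 5 Hz and 12 Hz components for one second.
t = np.arange(0, 1, 1 / 128)                       # 128 samples at 128 Hz
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

spectrum = np.abs(np.fft.rfft(signal))             # magnitude per frequency bin
freqs = np.fft.rfftfreq(len(signal), d=1 / 128)    # bin -> frequency in Hz

# The two strongest frequency components recover 5 Hz and 12 Hz.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks))   # [5.0, 12.0]
```

The transform itself is linear, but expressing a signal in the frequency domain is what makes many nonlinear analysis and filtering techniques tractable.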

In summary, tensors are versatile tools for representing multi-dimensional data and transformations, while a variety of other mathematical tools exist to handle nonlinear transformations, each suited to different types of problems and applications.

3.8 Semantics

The word “tensor” is derived from the Latin word “tendere,” which means “to stretch.” Semantically, this origin is quite fitting for the concept of a tensor in mathematics and physics, as it relates to the idea of stretching or transforming spaces.

3.8.1 Semantic Connections

  • Transformation and Stretching: Tensors are used to describe how objects transform under various conditions, such as changes in coordinate systems or physical forces. This transformation can be thought of as “stretching” the space in which the objects exist, aligning with the original meaning of “tendere.”
  • Generalization of Vectors: Just as vectors can be thought of as arrows that stretch from one point to another in space, tensors generalize this concept to higher dimensions, describing more complex forms of stretching and transformation.
  • Physical Interpretation: In physics, tensors often describe how materials deform or how forces are distributed, which involves stretching and compressing. For example, the stress tensor describes how internal forces are distributed within a material, effectively “stretching” it in various directions.
  • Mathematical Flexibility: Tensors provide a flexible framework for representing and manipulating data across multiple dimensions, akin to stretching the capabilities of simpler mathematical objects like scalars and vectors to handle more complex scenarios.

In essence, the term “tensor” captures the essence of these mathematical objects as tools for describing and managing the stretching and transformation of spaces and data, both in abstract mathematical terms and in practical physical applications.