
2 Determinants
In the previous chapters, we studied Systems of Linear Equations (SLE) and solved them using row reduction methods such as Gaussian elimination and Gauss–Jordan elimination. These methods relied on transforming the augmented matrix into Row Echelon Form (REF) or Reduced Row Echelon Form (RREF) [1].
However, row reduction is not the only method to solve linear systems. Another powerful tool is the determinant of a matrix, which provides:
- A criterion for whether a matrix is invertible.
- A method to solve linear systems directly using Cramer’s Rule.
- A way to describe geometric properties such as area and volume [2].
Thus, determinants build a natural bridge between row operations and more advanced concepts such as the matrix inverse, eigenvalues, and geometric transformations. The following Mind Map summarizes how these topics connect.

2.1 Definition of a Determinant
The determinant is a scalar value associated with a square matrix that provides important information about the matrix, such as invertibility, scaling of geometric objects, and solutions of linear systems [1].
2.1.1 Determinant \(M_{2 \times 2}\)
For a \(2 \times 2\) matrix:
\[ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \quad \det(A) = ad - bc \]
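The formula \(ad - bc\) can be sketched directly; `det2` is a hypothetical helper name, not part of any library:

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]: ad - bc."""
    return a * d - b * c

print(det2(1, 2, 3, 4))  # 1*4 - 2*3 = -2
```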
2.1.2 Determinant \(M_{3 \times 3}\)
When dealing with a \(3 \times 3\) matrix, one convenient method to calculate its determinant is Sarrus’ Rule. This method provides a simple visual approach that avoids the longer Laplace expansion process.
\[ A = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}, \quad \det(A) = aei + bfg + cdh - ceg - bdi - afh \]
Sarrus' Rule is a shortcut for computing this determinant, carried out in four steps.
Step 1. Rewrite the first two columns of \(A\) to the right of the matrix:
\[ \begin{array}{ccc|cc} a & b & c & a & b \\ d & e & f & d & e \\ g & h & i & g & h \end{array} \]
Step 2. Compute the sum of the products of the three downward diagonals:
- \(a \cdot e \cdot i\)
- \(b \cdot f \cdot g\)
- \(c \cdot d \cdot h\)
So the downward sum is:
\[ (aei) + (bfg) + (cdh) \]
Step 3. Compute the sum of the products of the three upward diagonals:
- \(c \cdot e \cdot g\)
- \(a \cdot f \cdot h\)
- \(b \cdot d \cdot i\)
So the upward sum is:
\[ (ceg) + (afh) + (bdi) \]
Step 4. Subtract the two results:
\[ \det(A) = (aei + bfg + cdh) - (ceg + afh + bdi) \]
For example, let
\[ A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} \]
Apply Sarrus’ Rule:
Downward diagonals:
\(1 \cdot 5 \cdot 9 + 2 \cdot 6 \cdot 7 + 3 \cdot 4 \cdot 8 = 45 + 84 + 96 = 225\)
Upward diagonals:
\(3 \cdot 5 \cdot 7 + 1 \cdot 6 \cdot 8 + 2 \cdot 4 \cdot 9 = 105 + 48 + 72 = 225\)
Thus:
\[ \det(A) = 225 - 225 = 0 \]
So the matrix \(A\) is singular (non-invertible).
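The four steps above can be sketched as a small function (`det3_sarrus` is a hypothetical name chosen for illustration):

```python
def det3_sarrus(m):
    """Sarrus' Rule for a 3x3 matrix given as a list of rows."""
    (a, b, c), (d, e, f), (g, h, i) = m
    down = a * e * i + b * f * g + c * d * h  # downward diagonals
    up = c * e * g + a * f * h + b * d * i    # upward diagonals
    return down - up

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(det3_sarrus(A))  # 225 - 225 = 0, so A is singular
```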
2.1.3 Determinant \(M_{n \times n}\)
So far, we have seen how to compute determinants of \(2 \times 2\) and \(3 \times 3\) matrices. For larger matrices, however, the computation becomes more complex and tedious. To handle this, two general approaches are commonly used:
- Laplace Expansion (Cofactor Expansion) – expands the determinant along a row or column using minors and cofactors.
- Row Reduction / Triangular Form – simplifies the matrix to an upper or lower triangular form, where the determinant is the product of the diagonal entries [1].
Laplace Expansion
The Laplace expansion (Cofactor Expansion) allows us to compute the determinant of any \(n \times n\) matrix by expanding along a row or column.
For a matrix \(A = [a_{ij}]\) of order \(n\):
\[ \det(A) = \sum_{j=1}^n (-1)^{1+j} \, a_{1j} \, M_{1j} \]
where \(M_{1j}\) is the determinant of the \((n-1) \times (n-1)\) submatrix obtained by deleting the first row and the \(j\)-th column of \(A\).
In general, expanding along the \(i\)-th row:
\[ \det(A) = \sum_{j=1}^n (-1)^{i+j} \, a_{ij} \, M_{ij} \]
Here, the factor \((-1)^{i+j}\) ensures the alternating signs (checkerboard pattern).
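The general expansion can be sketched recursively, always expanding along the first row. This is a minimal illustration of the definition, not an efficient implementation (its cost grows like \(n!\)):

```python
def det_laplace(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor M_{1j}: delete row 1 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_laplace(minor)
    return total

print(det_laplace([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # 0 (singular)
```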
Determinant of a \(4\times4\) matrix by Laplace Expansion
Consider the matrix \[ A = \begin{bmatrix} 1 & 2 & 0 & 3 \\ 4 & 1 & -1 & 2 \\ 0 & 5 & 2 & 1 \\ 2 & 0 & 3 & 4 \end{bmatrix}. \]
We will compute \(\det(A)\) using Laplace Expansion. Since row 3 contains a zero entry, expanding along it reduces the work, so we expand along the 3rd row:
\[ \det(A) = \sum_{j=1}^{4} (-1)^{3+j} a_{3j} M_{3j}, \]
where \(M_{3j}\) is the minor obtained by deleting row 3 and column \(j\).
Step 1: Identify elements of row 3:
\[ R_3 = [0, 5, 2, 1] \]
Notice \(a_{31}=0\), so the first term contributes 0.
Step 2: Compute minor \(M_{32}\) (delete row 3, column 2):
Submatrix:
\[ \begin{bmatrix} 1 & 0 & 3 \\ 4 & -1 & 2 \\ 2 & 3 & 4 \end{bmatrix}. \]
Compute the determinant by cofactor expansion along the first row:
\[ \begin{aligned} M_{32} &= 1((-1)\cdot4 - 2\cdot3) - 0(4\cdot4 - 2\cdot2) + 3(4\cdot3 - (-1)\cdot2) \\ &= 1(-4-6) - 0(\cdots) + 3(12+2) \\ &= -10 + 0 + 42 = 32 \end{aligned} \]
Term in the expansion (the cofactor is \(C_{32} = (-1)^{3+2} M_{32}\)):
\[ a_{32} C_{32} = 5 \cdot (-1)^{5} \cdot 32 = -160 \]
Step 3: Compute minor \(M_{33}\) (delete row 3, column 3):
Submatrix:
\[ \begin{bmatrix} 1 & 2 & 3 \\ 4 & 1 & 2 \\ 2 & 0 & 4 \end{bmatrix}. \]
Compute determinant:
\[ \begin{aligned} M_{33} &= 1(1\cdot4 - 2\cdot0) - 2(4\cdot4 - 2\cdot2) + 3(4\cdot0 - 1\cdot2) \\ &= 1(4 - 0) - 2(16-4) + 3(0 -2) \\ &= 4 - 24 -6 = -26 \end{aligned} \]
Term in the expansion:
\[ a_{33} C_{33} = a_{33} \cdot (-1)^{3+3} M_{33} = 2 \cdot (+1) \cdot (-26) = -52 \]
Step 4: Compute minor \(M_{34}\) (delete row 3, column 4):
Submatrix:
\[ \begin{bmatrix} 1 & 2 & 0 \\ 4 & 1 & -1 \\ 2 & 0 & 3 \end{bmatrix}. \]
Compute determinant:
\[ \begin{aligned} M_{34} &= 1(1\cdot3 - (-1)\cdot0) - 2(4\cdot3 - (-1)\cdot2) + 0(4\cdot0 -1\cdot2) \\ &= 1(3-0) - 2(12+2) + 0(-2) \\ &= 3 - 28 + 0 = -25 \end{aligned} \]
Term in the expansion:
\[ a_{34} C_{34} = a_{34} \cdot (-1)^{3+4} M_{34} = 1 \cdot (-1) \cdot (-25) = 25 \]
Step 5: Combine terms
\[ \det(A) = 0 + (-160) + (-52) + 25 = -187 \]
Final Result:
\[ \boxed{\det(A) = -187} \]
Row Reduction Method
While Laplace expansion is useful conceptually, it becomes very inefficient for large matrices because the number of operations grows rapidly. A more practical method is to transform \(A\) into an upper triangular matrix using elementary row operations (Gaussian elimination).
- The determinant of a triangular matrix is the product of its diagonal entries.
- However, we must track the effect of each row operation on the determinant:
  - Swapping two rows \(\;\;\Rightarrow\;\;\) determinant changes sign.
  - Multiplying a row by \(k\) \(\;\;\Rightarrow\;\;\) determinant is multiplied by \(k\).
  - Adding a multiple of one row to another \(\;\;\Rightarrow\;\;\) determinant unchanged.
Determinant of a \(4\times4\) matrix by row reduction, \[ A=\begin{bmatrix} 1 & 2 & 0 & 3 \\ 4 & 1 & -1 & 2 \\ 0 & 5 & 2 & 1 \\ 2 & 0 & 3 & 4 \end{bmatrix}. \]
We will perform Gaussian elimination to transform \(A\) into an upper triangular matrix \(U\). All row operations used are of the form \(R_i \leftarrow R_i + kR_j\) (adding a multiple of one row to another), which do not change the determinant.
Step 0: Initial matrix
\[ A^{(0)}= \begin{bmatrix} 1 & 2 & 0 & 3 \\ 4 & 1 & -1 & 2 \\ 0 & 5 & 2 & 1 \\ 2 & 0 & 3 & 4 \end{bmatrix}. \]
Step 1: Eliminate entries below pivot \(a_{11}=1\)
Use \(R_1\) to eliminate the entries in column 1 of \(R_2\) and \(R_4\):
- \(R_2 \leftarrow R_2 - 4R_1\)
- \(R_4 \leftarrow R_4 - 2R_1\)
Compute:
\[ \begin{aligned} R_2 &= [4,\,1,\,-1,\,2] - 4[1,\,2,\,0,\,3] = [0,\,-7,\,-1,\,-10],\\[4pt] R_4 &= [2,\,0,\,3,\,4] - 2[1,\,2,\,0,\,3] = [0,\,-4,\,3,\,-2]. \end{aligned} \]
Thus
\[ A^{(1)}= \begin{bmatrix} 1 & 2 & 0 & 3 \\ 0 & -7 & -1 & -10 \\ 0 & 5 & 2 & 1 \\ 0 & -4 & 3 & -2 \end{bmatrix}. \]
Step 2: Pivot at \(a_{22}=-7\). Eliminate entries below it (column 2).
We eliminate the (3,2) and (4,2) entries using row 2.
For row 3: factor \(= \dfrac{5}{-7} = -\dfrac{5}{7}\). Use \(R_3 \leftarrow R_3 - (-\tfrac{5}{7})R_2 = R_3 + \tfrac{5}{7}R_2\).
For row 4: factor \(= \dfrac{-4}{-7} = \tfrac{4}{7}\). Use \(R_4 \leftarrow R_4 - \tfrac{4}{7}R_2\).
Compute:
\[ \begin{aligned} \tfrac{5}{7}R_2 &= \tfrac{5}{7}[0,-7,-1,-10]=[0,-5,-\tfrac{5}{7},-\tfrac{50}{7}],\\[6pt] R_3 &= [0,5,2,1] + [0,-5,-\tfrac{5}{7},-\tfrac{50}{7}] = \left[0,\,0,\,2-\tfrac{5}{7},\,1-\tfrac{50}{7}\right]\\[4pt] &= \left[0,\,0,\,\tfrac{9}{7},\,-\tfrac{43}{7}\right]. \end{aligned} \]
and
\[ \begin{aligned} \tfrac{4}{7}R_2 &= [0,-4,-\tfrac{4}{7},-\tfrac{40}{7}],\\[6pt] R_4 &= [0,-4,3,-2] - [0,-4,-\tfrac{4}{7},-\tfrac{40}{7}]\\[4pt] &= \left[0,\,0,\,3+\tfrac{4}{7},\,-2+\tfrac{40}{7}\right] = \left[0,\,0,\,\tfrac{25}{7},\,\tfrac{26}{7}\right]. \end{aligned} \]
Thus
\[ A^{(2)}= \begin{bmatrix} 1 & 2 & 0 & 3 \\ 0 & -7 & -1 & -10 \\ 0 & 0 & \tfrac{9}{7} & -\tfrac{43}{7} \\ 0 & 0 & \tfrac{25}{7} & \tfrac{26}{7} \end{bmatrix}. \]
Step 3: Pivot at \(a_{33}=\tfrac{9}{7}\). Eliminate entry below it (column 3).
Eliminate the (4,3) entry. Factor:
\[ \text{factor} = \dfrac{\tfrac{25}{7}}{\tfrac{9}{7}} = \dfrac{25}{9}. \]
Perform \(R_4 \leftarrow R_4 - \dfrac{25}{9}R_3\).
Compute:
\[ \begin{aligned} \dfrac{25}{9}R_3 &= \dfrac{25}{9}\Big[0,0,\tfrac{9}{7},-\tfrac{43}{7}\Big] = \Big[0,0,\tfrac{25}{7},-\tfrac{1075}{63}\Big],\\[6pt] R_4 &= \Big[0,0,\tfrac{25}{7},\tfrac{26}{7}\Big] - \Big[0,0,\tfrac{25}{7},-\tfrac{1075}{63}\Big] = \Big[0,0,0,\tfrac{26}{7} + \tfrac{1075}{63}\Big]. \end{aligned} \]
Compute the final entry:
\[ \tfrac{26}{7} = \tfrac{234}{63},\qquad \tfrac{234}{63} + \tfrac{1075}{63} = \tfrac{1309}{63}. \]
So
\[ R_4 = \Big[0,0,0,\tfrac{1309}{63}\Big]. \]
Now the matrix is upper triangular:
\[ U= \begin{bmatrix} 1 & 2 & 0 & 3 \\ 0 & -7 & -1 & -10 \\ 0 & 0 & \tfrac{9}{7} & -\tfrac{43}{7} \\ 0 & 0 & 0 & \tfrac{1309}{63} \end{bmatrix}. \]
Step 4: Determinant from diagonal product
Because all operations were of the form \(R_i \leftarrow R_i + kR_j\) (determinant-preserving), the determinant of \(A\) equals the product of the diagonal entries of \(U\):
\[ \det(A) = 1 \cdot (-7) \cdot \frac{9}{7} \cdot \frac{1309}{63}. \]
Simplify:
\[ (-7)\cdot\frac{9}{7} = -9, \]
so
\[ \det(A) = -9 \cdot \frac{1309}{63} = -\frac{9\cdot1309}{63} = -\frac{1309}{7} = -187. \]
Therefore,
\[ \boxed{\det(A) = -187.} \]
- Laplace expansion works for any \(n\times n\) matrix.
- For larger matrices, row reduction is usually faster and less error-prone.
- This method also connects nicely with the earlier discussion of \(3\times3\) determinants using Sarrus’ Rule.
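The row-reduction procedure above can be sketched in code. Using exact rational arithmetic (`fractions.Fraction`) avoids floating-point error; the function name is a hypothetical choice for illustration:

```python
from fractions import Fraction

def det_by_elimination(A):
    """Reduce to upper triangular form, tracking sign changes from row
    swaps, then multiply the diagonal entries."""
    M = [[Fraction(x) for x in row] for row in A]
    n = len(M)
    sign = 1
    for col in range(n):
        # Find a nonzero pivot in this column; swap rows if needed
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)  # zero column below the diagonal: det = 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            sign = -sign        # a row swap flips the sign
        for r in range(col + 1, n):
            # R_r <- R_r - k R_col leaves the determinant unchanged
            k = M[r][col] / M[col][col]
            M[r] = [x - k * y for x, y in zip(M[r], M[col])]
    result = Fraction(sign)
    for i in range(n):
        result *= M[i][i]
    return result

A = [[1, 2, 0, 3], [4, 1, -1, 2], [0, 5, 2, 1], [2, 0, 3, 4]]
print(det_by_elimination(A))  # -187, matching the hand computation
```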
2.2 Properties of Determinants
Determinants have several important properties that make them useful in linear algebra. These properties help simplify computations, analyze matrix invertibility, and understand geometric interpretations [1]:
2.2.1 Triangular Matrices
For any \(n \times n\) upper or lower triangular matrix \(T\):
\[ T = \begin{bmatrix} t_{11} & t_{12} & \dots & t_{1n} \\ 0 & t_{22} & \dots & t_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & t_{nn} \end{bmatrix}, \quad \det(T) = \prod_{i=1}^{n} t_{ii} \]
\[ T = \begin{bmatrix} 2 & 3 & 1 \\ 0 & -1 & 4 \\ 0 & 0 & 5 \end{bmatrix}, \quad \det(T) = 2 \cdot (-1) \cdot 5 = -10 \]
2.2.2 Row Operations
Let \(A\) be an \(n \times n\) matrix:
\[ A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \quad R_1 \leftrightarrow R_2 \Rightarrow \det(\text{swapped}) = -\det(A) \]
Multiplying a row by a scalar \(k\) multiplies the determinant by \(k\).
\[ A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \quad 2 \cdot R_1 \Rightarrow B = \begin{bmatrix} 2 & 4 \\ 3 & 4 \end{bmatrix} \]
Compute determinants:
\[ \det(A) = 1\cdot4 - 2\cdot3 = -2 \]
\[ \det(B) = 2\cdot4 - 4\cdot3 = -4 = 2 \cdot \det(A) \]
\[ A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \quad R_2 \rightarrow R_2 + 3R_1 \Rightarrow \det = \det(A) \]
2.2.3 Invertibility
A square matrix \(A\) is invertible if and only if \(\det(A) \neq 0\).
\[ A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \quad \det(A) = 1\cdot4 - 2\cdot3 = -2 \neq 0 \]
2.2.4 Multiplicative Property
For \(A, B \in \mathbb{R}^{n \times n}\):
\[ \det(AB) = \det(A) \cdot \det(B) \]
\[ A = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}, \; B = \begin{bmatrix} 3 & 0 \\ 0 & 4 \end{bmatrix}, \; \det(AB) = (1\cdot2)(3\cdot4) = 24 \]
2.2.5 Determinant of Transpose
\[ \det(A^T) = \det(A) \]
\[ A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \quad A^T = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix}, \quad \det(A^T) = -2 = \det(A) \]
2.2.6 Scalar Multiplication
\[ \det(kA) = k^n \cdot \det(A) \]
\[ A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \; k = 2, \; \det(2A) = 2^2 \cdot (-2) = -8 \]
2.2.7 Block Diagonal Matrices
\[ A = \begin{bmatrix} B & 0 \\ 0 & C \end{bmatrix}, \quad \det(A) = \det(B) \cdot \det(C) \]
\[ B = [2], \; C = \begin{bmatrix} 1 & 3 \\ 0 & 4 \end{bmatrix}, \; A = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 1 & 3 \\ 0 & 0 & 4 \end{bmatrix}, \; \det(A) = 2 \cdot (1\cdot4 - 0\cdot3) = 8 \]
2.2.8 Zero Row or Column
If any row or column is all zeros, then \(\det(A) = 0\).
\[ A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \\ 4 & 5 & 6 \end{bmatrix}, \quad \det(A) = 0 \]
2.2.9 Linear Dependence
If the rows (or columns) are linearly dependent:
\[ \det(A) = 0 \]
\[ A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 0 & 1 & 1 \end{bmatrix}, \quad \text{row 2} = 2 \cdot \text{row 1} \Rightarrow \det(A) = 0 \]
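Several of these properties can be checked numerically for \(2 \times 2\) matrices; `det2` and `matmul2` are hypothetical helper names used only here:

```python
def det2(A):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[3, 0], [0, 4]]

# Multiplicative property: det(AB) = det(A) det(B)
assert det2(matmul2(A, B)) == det2(A) * det2(B)
# Transpose: det(A^T) = det(A)
assert det2([[A[j][i] for j in range(2)] for i in range(2)]) == det2(A)
# Scalar multiplication: det(kA) = k^n det(A), here n = 2, k = 2
assert det2([[2 * x for x in row] for row in A]) == 2 ** 2 * det2(A)
print("all properties verified")
```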
2.3 Cramer’s Rule
Determinants allow us to solve linear systems using Cramer’s Rule. For a system of \(n\) equations with \(n\) unknowns \(A \mathbf{x} = \mathbf{b}\) where \(A\) is an \(n \times n\) matrix with \(\det(A) \neq 0\), the solution is:
\[ x_i = \frac{\det(A_i)}{\det(A)}, \quad i = 1,2,\dots,n \]
Here, \(A_i\) is the matrix formed by replacing the \(i\)-th column of \(A\) with the vector \(\mathbf{b}\) [1].
\[ \begin{aligned} a_1x + b_1y &= c_1 \\ a_2x + b_2y &= c_2 \end{aligned} \]
The solution is:
\[
x = \frac{\begin{vmatrix} c_1 & b_1 \\ c_2 & b_2 \end{vmatrix}}{\begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix}}, \quad
y = \frac{\begin{vmatrix} a_1 & c_1 \\ a_2 & c_2 \end{vmatrix}}{\begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix}}
\]
This method is elegant but becomes computationally expensive for large \(n\), where row-reduction methods are more efficient.
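For the \(2 \times 2\) case, Cramer's Rule can be sketched directly from the formulas above; the function name and the sample system (\(2x + y = 5\), \(3x + 4y = 10\)) are chosen for illustration:

```python
def cramer_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1, a2*x + b2*y = c2 via Cramer's Rule."""
    D = a1 * b2 - b1 * a2          # determinant of the coefficient matrix
    if D == 0:
        raise ValueError("det = 0: Cramer's Rule does not apply")
    x = (c1 * b2 - b1 * c2) / D    # column 1 replaced by (c1, c2)
    y = (a1 * c2 - c1 * a2) / D    # column 2 replaced by (c1, c2)
    return x, y

print(cramer_2x2(2, 1, 5, 3, 4, 10))  # (2.0, 1.0)
```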
2.4 Geometric Interpretation
Determinants are not only an algebraic tool but also have a geometric meaning.
For example:
- In 2D, the absolute value of the determinant of a \(2 \times 2\) matrix formed by two vectors gives the area of the parallelogram spanned by those vectors.
- In 3D, the absolute value of the determinant of a \(3 \times 3\) matrix formed by three vectors gives the volume of the parallelepiped spanned by those vectors.
- The sign of the determinant indicates the orientation (whether the vectors preserve or reverse orientation) [1].
This geometric interpretation provides an intuitive understanding of why a determinant of zero implies linear dependence among vectors: the area or volume collapses to zero.
2.4.1 Area in 2D
For two vectors in 2D:
\[ \mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}, \quad \mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}, \]
the determinant of the \(2\times2\) matrix formed by these vectors gives the signed area of the parallelogram spanned by \(\mathbf{u}\) and \(\mathbf{v}\):
\[ \det([\mathbf{u} \ \mathbf{v}]) = \begin{vmatrix} u_1 & v_1 \\ u_2 & v_2 \end{vmatrix} = u_1 v_2 - u_2 v_1 \]
- The absolute value \(|\det([\mathbf{u} \ \mathbf{v}])|\) gives the area.
- The sign indicates the orientation (clockwise or counterclockwise).
A mining engineer is mapping the cross-section of a mineral deposit. Two vectors in the plane represent edges of a small parallelogram section of the deposit:
\[ \mathbf{u} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}, \quad \mathbf{v} = \begin{bmatrix} 1 \\ 4 \end{bmatrix}. \]
Determine the area of the parallelogram formed by these two vectors.
The area of the parallelogram is given by the absolute value of the determinant:
\[ \det([\mathbf{u} \ \mathbf{v}]) = \begin{vmatrix} 2 & 1 \\ 3 & 4 \end{vmatrix} = 2\cdot4 - 3\cdot1 = 5 \]
Thus, the area is:
\[ \text{Area} = |\det([\mathbf{u} \ \mathbf{v}])| = 5 \]
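The computation reduces to one line of code; `signed_area` is a hypothetical name:

```python
def signed_area(u, v):
    """Signed area of the parallelogram spanned by 2D vectors u and v."""
    return u[0] * v[1] - u[1] * v[0]

u, v = (2, 3), (1, 4)
print(abs(signed_area(u, v)))  # |2*4 - 3*1| = 5
```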
2.4.2 Volume in 3D
For three vectors in 3D:
\[ \mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}, \quad \mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}, \quad \mathbf{w} = \begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix}, \]
the determinant of the \(3\times3\) matrix formed by these vectors gives the signed volume of the parallelepiped spanned by \(\mathbf{u}, \mathbf{v}, \mathbf{w}\):
\[ \det([\mathbf{u} \ \mathbf{v} \ \mathbf{w}]) = \begin{vmatrix} u_1 & v_1 & w_1 \\ u_2 & v_2 & w_2 \\ u_3 & v_3 & w_3 \end{vmatrix} \]
- The absolute value \(|\det([\mathbf{u} \ \mathbf{v} \ \mathbf{w}])|\) gives the volume.
- The sign indicates the orientation in space (right-hand or left-hand system).
A mining company is designing a custom-shaped container to store rare ore. The container is a parallelepiped in 3D space, but the edges are not aligned with the standard axes. The vectors representing the edges originating from one corner are:
\[ \mathbf{u} = \begin{bmatrix} 3 \\ 1 \\ 2 \end{bmatrix}, \quad \mathbf{v} = \begin{bmatrix} 2 \\ 4 \\ 1 \end{bmatrix}, \quad \mathbf{w} = \begin{bmatrix} 1 \\ 2 \\ 5 \end{bmatrix}. \]
Tasks:
1. Find the volume of the container.
2. Calculate the area of the parallelogram formed by edges \(\mathbf{v}\) and \(\mathbf{w}\).
3. Determine the height of the parallelepiped relative to the base formed by \(\mathbf{v}\) and \(\mathbf{w}\).
Step 1 – Volume using determinant:
The volume of a parallelepiped formed by vectors \(\mathbf{u}, \mathbf{v}, \mathbf{w}\) is the absolute value of the determinant of the matrix formed by these vectors:
\[ V = \left| \det \begin{bmatrix} 3 & 2 & 1 \\ 1 & 4 & 2 \\ 2 & 1 & 5 \end{bmatrix} \right| \]
Compute the determinant:
\[ \det = 3(4\cdot5 - 2\cdot1) - 2(1\cdot5 - 2\cdot2) + 1(1\cdot1 - 4\cdot2) = 3(18) - 2(1) + 1(-7) = 54 - 2 - 7 = 45 \]
So the volume:
\[ V = |45| = 45 \ \text{m}^3 \]
Step 2 – Area of the base formed by \(\mathbf{v}\) and \(\mathbf{w}\):
The area of a parallelogram formed by two vectors is the magnitude of their cross product. Compute the cross product:
\[ \mathbf{v} \times \mathbf{w} = \begin{bmatrix} v_2 w_3 - v_3 w_2 \\ v_3 w_1 - v_1 w_3 \\ v_1 w_2 - v_2 w_1 \end{bmatrix} = \begin{bmatrix} 4\cdot5 - 1\cdot2 \\ 1\cdot1 - 2\cdot5 \\ 2\cdot2 - 4\cdot1 \end{bmatrix} = \begin{bmatrix} 18 \\ -9 \\ 0 \end{bmatrix} \]
Magnitude of the cross product:
\[ A_\text{base} = \|\mathbf{v} \times \mathbf{w}\| = \sqrt{18^2 + (-9)^2 + 0^2} = \sqrt{324 + 81 + 0} = \sqrt{405} \approx 20.12 \ \text{m}^2 \]
Step 3 – Height relative to the base:
The height of the parallelepiped is:
\[ h = \frac{\text{Volume}}{\text{Area of base}} = \frac{45}{20.12} \approx 2.24 \ \text{m} \]
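All three steps can be checked with the scalar triple product, which equals \(\det([\mathbf{u}\ \mathbf{v}\ \mathbf{w}])\); the helper names are chosen for illustration:

```python
import math

def cross(v, w):
    """Cross product of two 3D vectors."""
    return (v[1] * w[2] - v[2] * w[1],
            v[2] * w[0] - v[0] * w[2],
            v[0] * w[1] - v[1] * w[0])

def triple(u, v, w):
    """Scalar triple product u . (v x w) = det([u v w])."""
    c = cross(v, w)
    return u[0] * c[0] + u[1] * c[1] + u[2] * c[2]

u, v, w = (3, 1, 2), (2, 4, 1), (1, 2, 5)
V = abs(triple(u, v, w))            # volume of the parallelepiped
base = math.hypot(*cross(v, w))     # base area = ||v x w|| = sqrt(405)
print(V, round(base, 2), round(V / base, 2))  # 45 20.12 2.24
```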
Determinants also help determine whether vectors are linearly independent:
- If \(\det([\mathbf{u} \ \mathbf{v}]) = 0\) in 2D, vectors are collinear.
- If \(\det([\mathbf{u} \ \mathbf{v} \ \mathbf{w}]) = 0\) in 3D, vectors are coplanar.
2.5 Invertibility
Determinants provide a quick test for the invertibility of a square matrix [1]:
Non-singular matrix (\(\det(A) \neq 0\)):
The matrix \(A\) is invertible, meaning an inverse exists:
\[ A^{-1} \text{ exists.} \]
Singular matrix (\(\det(A) = 0\)):
The matrix \(A\) is not invertible, meaning no inverse exists:
\[ A^{-1} \text{ does not exist.} \]
This directly connects to solving linear systems \(A \mathbf{x} = \mathbf{b}\):
- Non-singular matrix (\(\det(A)\neq0\)): the system has a unique solution, which can be found using:
  - Inverse method: \(\mathbf{x} = A^{-1} \mathbf{b}\)
  - Gaussian elimination / RREF
- Singular matrix (\(\det(A)=0\)): the system may have:
  - No solution (inconsistent system)
  - Infinitely many solutions (dependent system)
\[ A = \begin{bmatrix} 2 & 1 \\ 3 & 4 \end{bmatrix}, \quad \det(A) = 2\cdot4 - 1\cdot3 = 5 \neq 0 \]
- \(A\) is invertible.
- The system \(A\mathbf{x} = \mathbf{b}\) has a unique solution for any vector \(\mathbf{b}\).
\[ B = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}, \quad \det(B) = 1\cdot4 - 2\cdot2 = 0 \]
- \(B\) is non-invertible.
- The system \(B\mathbf{x} = \mathbf{b}\) may have no solution (if \(\mathbf{b}\) is not in the column space of \(B\)) or infinitely many solutions (if \(\mathbf{b}\) is in the column space of \(B\)).
Determinants act as a shortcut to check invertibility before attempting more computationally expensive methods like RREF or computing the inverse.
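This shortcut is easy to sketch for the \(2 \times 2\) examples above; the tolerance parameter is an assumption needed when entries are floating-point numbers:

```python
def is_invertible_2x2(A, tol=1e-12):
    """Invertibility test for a 2x2 matrix via its determinant."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return abs(det) > tol  # nonzero determinant => invertible

print(is_invertible_2x2([[2, 1], [3, 4]]))  # det = 5  -> True
print(is_invertible_2x2([[1, 2], [2, 4]]))  # det = 0  -> False
```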