
Systems of Linear Equations

Key Points

  • A system of linear equations asks for numbers that make several linear relationships true at the same time, which we compactly write as Ax = b.
  • Gaussian elimination uses row operations to convert A into an upper-triangular matrix so we can solve by back substitution.
  • Row echelon form (REF) and reduced row echelon form (RREF) reveal pivots, rank, free variables, and whether solutions exist or are unique.
  • LU decomposition factors a matrix into L (lower-triangular) and U (upper-triangular), often with a permutation P for stability: PA = LU.
  • Partial pivoting (row swaps to choose the largest available pivot) avoids division by very small numbers and improves numerical stability.
  • Solving one system with LU costs about O(n^3), but reusing LU for many right-hand sides costs only O(n^2) per solve.
  • Inconsistent systems show a row of zeros in A but a nonzero in b after elimination; underdetermined systems have free variables.
  • Floating-point computations require tolerances; never compare to zero directly when checking for pivots or singularity.

Prerequisites

  • Matrices and vectors — Systems are expressed compactly as Ax = b; understanding matrix and vector operations is essential.
  • Basic algebra of linear equations — Row operations mirror solving equations by substitution and elimination.
  • Big-O time and space complexity — To compare Gaussian elimination, LU reuse, and triangular solves.
  • Floating-point arithmetic and numerical errors — Pivoting, tolerances, and conditioning directly affect solution accuracy.
  • C++ arrays/vectors and loops — Implementing elimination and LU requires careful indexing and memory handling.
  • Summations and series — To understand operation counts like (2/3) n^3 from summing inner-loop work.

Detailed Explanation


01 Overview

A system of linear equations is a collection of linear relationships among unknowns, such as x + 2y = 5 and 3x − y = 4. Instead of handling each equation separately, we package them into a matrix equation Ax = b, where A contains coefficients, x holds the unknowns, and b stores the right-hand sides. This compact form lets us apply powerful, systematic procedures to analyze and solve the system.

Gaussian elimination is the foundational algorithm: it transforms A into an upper-triangular form using legal row operations (swap rows, scale a row, add a multiple of one row to another). Once in upper-triangular form, we can solve easily by back substitution. Closely related are the canonical shapes called row echelon form (REF) and reduced row echelon form (RREF), which expose structural properties: the number of pivots, rank, whether solutions exist, and how many degrees of freedom there are.

LU decomposition goes a step further by factoring A into the product of a lower-triangular matrix L and an upper-triangular matrix U (often with a permutation matrix P for stability). The factorization captures the elimination steps so we can solve Ax = b quickly for many different b’s by performing forward and back substitution. These tools power applications across science and engineering—fitting models, simulating physical systems, analyzing networks, and more.
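For instance, the two equations mentioned above pack into matrix form as:

```latex
\begin{cases} x + 2y = 5 \\ 3x - y = 4 \end{cases}
\quad\Longleftrightarrow\quad
\underbrace{\begin{pmatrix} 1 & 2 \\ 3 & -1 \end{pmatrix}}_{A}
\underbrace{\begin{pmatrix} x \\ y \end{pmatrix}}_{x}
=
\underbrace{\begin{pmatrix} 5 \\ 4 \end{pmatrix}}_{b}
```

Eliminating x from the second equation gives 7x = 13, so x = 13/7 and y = 11/7, which satisfies both equations.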

02 Intuition & Analogies

Imagine juggling multiple constraints at once, like planning a party with a budget, a guest limit, and venue rules. Each constraint is linear (e.g., 2 chairs per guest, cost per chair), and you want numbers that satisfy all constraints together. That’s exactly what a system of linear equations does: find values that make everything line up.

Gaussian elimination is like tidying your to-do list so it’s easy to finish. You rearrange (swap rows), scale (normalize), and subtract multiples (eliminate) to turn a messy set of constraints into a neat staircase: first solve for one variable, then use that to solve the next, and so on. Picture draining water down steps—once the top step is clear, the rest follow naturally.

Row echelon form is the “staircase” shape itself: each leading nonzero (pivot) marches to the right as you go down. Reduced row echelon form polishes the staircase so every pivot is 1 and the only nonzero in its column; this makes reading solutions almost as simple as reading off values.

LU decomposition is like keeping a recipe for your elimination steps. Instead of redoing elimination every time you change the shopping list (the vector b), you save the plan (L and U) and quickly compute the total with minimal extra work. Partial pivoting is your safety helmet: it makes sure you always choose a stable step (a large enough pivot) so tiny round-off errors don’t avalanche. In practice, these methods turn complex, interdependent requirements into a sequence of small, manageable moves.

03 Formal Definition

Given m equations in n variables, a system can be written as Ax = b, where A ∈ R^(m×n), x ∈ R^n, and b ∈ R^m. A solution is any x satisfying all equations simultaneously. The augmented matrix is [A | b], formed by appending b as an extra column to A. Row echelon form (REF) is obtained by elementary row operations so that (1) any nonzero rows are above all-zero rows; (2) each row's leading nonzero entry (pivot) is to the right of the pivot in the row above; and (3) entries below each pivot are zero. Reduced row echelon form (RREF) additionally requires that each pivot equals 1 and is the only nonzero in its column. Gaussian elimination applies a finite sequence of elementary row operations to transform A into an upper-triangular matrix U (or to REF/RREF). Equivalently, there exist invertible elementary matrices E_1, ..., E_k such that E_k ⋯ E_1 A = U and E_k ⋯ E_1 b = c. If U is square and nonsingular, back substitution yields a unique solution. The rank of A, denoted rank(A), equals the number of pivots and satisfies rank(A) ≤ min(m, n). The system is consistent if and only if rank(A) = rank([A | b]). If A is square (n × n) and nonsingular, then rank(A) = n and the solution is unique. LU decomposition (with partial pivoting) expresses PA = LU, where P is a permutation matrix, L is unit lower-triangular (ones on the diagonal), and U is upper-triangular. Forward substitution solves Ly = Pb, and back substitution solves Ux = y, giving x.
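A short worked example (using the overview's system x + 2y = 5 and 3x − y = 4) shows REF and RREF concretely:

```latex
[A \mid b] =
\begin{pmatrix} 1 & 2 & 5 \\ 3 & -1 & 4 \end{pmatrix}
\xrightarrow{R_2 \leftarrow R_2 - 3R_1}
\begin{pmatrix} 1 & 2 & 5 \\ 0 & -7 & -11 \end{pmatrix} \ \text{(REF)}
\xrightarrow{R_2 \leftarrow R_2/(-7)}
\begin{pmatrix} 1 & 2 & 5 \\ 0 & 1 & 11/7 \end{pmatrix}
\xrightarrow{R_1 \leftarrow R_1 - 2R_2}
\begin{pmatrix} 1 & 0 & 13/7 \\ 0 & 1 & 11/7 \end{pmatrix} \ \text{(RREF)}
```

Both columns of A carry pivots, so rank(A) = rank([A | b]) = 2 = n, and the unique solution x = 13/7, y = 11/7 can be read directly off the RREF.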

04 When to Use

Use Gaussian elimination when you need a one-off solve of a small to medium dense system, or when you want to diagnose consistency, rank, or the presence of free variables. It is the standard, general-purpose method taught first because it connects cleanly to theory (rank, nullity) and practice (solving). Choose RREF when you need the full structure of the solution set, such as identifying all free variables or computing a basis for the solution space; note, however, that RREF is often slower and more numerically delicate than stopping at REF.

Use LU decomposition when you’ll solve Ax = b for many different b with the same A. The one-time O(n^3) factorization is amortized over multiple solves that each cost O(n^2). LU is also useful for computing determinants (product of U’s diagonal times the permutation sign) and for implementing block algorithms. Always pair LU with partial pivoting (PA = LU) for numerical stability unless you are working with special matrices that guarantee stability (e.g., symmetric positive definite, for which Cholesky is preferable).

For very large or sparse systems, specialized sparse direct solvers or iterative methods (Conjugate Gradient, GMRES) may be better. For overdetermined systems (m > n) in least-squares problems, QR decomposition is typically preferred over normal equations for numerical stability, though elimination ideas still underlie the approach.

⚠️Common Mistakes

• Dividing by a zero (or tiny) pivot: Always search for a good pivot via partial pivoting; if the largest available pivot is below a tolerance (e.g., 1e−12), treat the matrix as singular or nearly singular.
• Forgetting to pivot the right-hand side: When you swap rows in A, you must also swap the corresponding entries in b (or apply the permutation to b when using LU).
• Assuming LU exists without pivoting: Some matrices require row permutations for a stable or even valid factorization; implement PA = LU, not just A = LU, unless you have special guarantees.
• Expecting exact zero in floating point: Round-off makes exact comparisons unreliable. Use tolerances when checking for zeros or for detecting rank/consistency.
• Misreading REF/RREF: A row [0 0 … 0 | c] with c ≠ 0 means the system is inconsistent (no solution). A column without a pivot corresponds to a free variable (infinitely many solutions along that dimension).
• Using RREF for numeric solves unnecessarily: RREF is more work and can magnify rounding error. For numeric solutions, stop at triangular form and back-substitute, or use LU.
• Ignoring conditioning: If A is ill-conditioned (large condition number), small data errors cause large solution errors. Check the residual r = b − Ax̂ and consider scaling or better-suited methods.
• Indexing mistakes in code: C++ is 0-based; ensure loops and swaps are consistent, and avoid aliasing bugs when swapping rows in both A and L during LU with pivoting.

Key Formulas

Matrix form of a system

Ax = b,  A ∈ R^(m×n), x ∈ R^n, b ∈ R^m

Explanation: A compact way to write m linear equations in n unknowns. Solving the system means finding x that satisfies this vector equation.

Elimination via elementary matrices

E_k ⋯ E_1 A = U,  E_k ⋯ E_1 b = c

Explanation: Gaussian elimination is equivalent to multiplying by invertible elementary matrices that encode row operations, producing an upper-triangular matrix U and transformed right-hand side c.

LU with partial pivoting

PA=LU

Explanation: With row permutations captured by permutation matrix P, A factors into L (unit lower-triangular) and U (upper-triangular). This enables efficient solves for multiple right-hand sides.

Back substitution

Ux = y ⇒ x_i = (1/u_ii) (y_i − Σ_{j=i+1}^{n} u_ij x_j)

Explanation: Solves an upper-triangular system from the last variable to the first. Each step subtracts known contributions and divides by the diagonal.

Forward substitution

Ly = b ⇒ y_i = b_i − Σ_{j=1}^{i−1} ℓ_ij y_j

Explanation: Solves a lower-triangular system from top to bottom. Each step removes the influence of previously computed entries.

Rank condition for consistency

rank(A) = rank([A | b]) ⟺ the system is consistent

Explanation: The system has at least one solution if and only if appending b to A does not increase the rank. Otherwise, it is inconsistent.

Uniqueness condition

Unique solution if rank(A) = n (for A ∈ R^(n×n))

Explanation: A square system has exactly one solution when A has full rank, i.e., all columns are linearly independent.

Cost of Gaussian elimination

T(n) = (2/3) n^3 + O(n^2)

Explanation: The dominant operation count for dense n×n elimination grows as two-thirds n cubed. Lower-order terms become negligible as n increases.

Determinant from LU

det(A) = sgn(P) ∏_{i=1}^{n} u_ii

Explanation: Once PA = LU is computed, the determinant equals the product of U’s diagonal, adjusted by the parity (sign) of the permutation.

Condition number

κ(A) = ‖A‖ · ‖A^(−1)‖

Explanation: Measures sensitivity of the solution to input perturbations. A larger condition number implies potential amplification of errors.

Complexity Analysis

For a dense n×n system, classical Gaussian elimination with partial pivoting performs about (2/3) n^3 floating-point operations (flops), with an additional O(n^2) for forward/back substitution. The pivot search in each column costs O(n) and is absorbed into the same cubic count. Space usage is O(n^2) to store the matrix; the algorithm itself needs only O(1) or O(n) extra space beyond A and b if done in place.

LU decomposition with partial pivoting (PA = LU) has the same asymptotic cost: O(n^3) time and O(n^2) space. Its advantage is amortized: after one O(n^3) factorization, each new right-hand side b is solved in O(n^2) time via one forward substitution (Ly = Pb) and one back substitution (Ux = y). For k different right-hand sides, the total time is O(n^3 + k n^2), which is far better than k repetitions of elimination when k ≫ 1.

For rectangular m×n matrices (assuming m ≥ n), elimination to REF costs O(m n^2): each of the n pivot columns requires eliminating roughly m − i rows with O(n − i) work per row. RREF adds work by clearing above pivots, still O(m n^2), but with larger constant factors. Space remains O(m n).

Numerical stability: partial pivoting typically keeps growth of entries moderate, yielding accurate results for many practical problems. However, ill-conditioned matrices can still produce large forward errors even when residuals are small; detecting near-singularity via small pivots (relative to row/column norms) and monitoring the residual r = b − Ax̂ help diagnose issues.

Code Examples

Gaussian Elimination with Partial Pivoting (Solve Ax = b)
#include <bits/stdc++.h>
using namespace std;

// Solve A x = b using Gaussian elimination with partial pivoting.
// Returns pair<bool, vector<double>> where bool indicates success (nonsingular).
// A and b are taken by value, so the caller's copies are left untouched.
pair<bool, vector<double>> gaussian_elimination(vector<vector<double>> A, vector<double> b, double eps = 1e-12) {
    int n = (int)A.size();
    for (int i = 0; i < n; ++i) {
        // 1) Pivot: find row with largest absolute value in column i
        int pivot = i;
        double best = fabs(A[i][i]);
        for (int r = i + 1; r < n; ++r) {
            if (fabs(A[r][i]) > best) {
                best = fabs(A[r][i]);
                pivot = r;
            }
        }
        if (best < eps) {
            // Matrix is singular or nearly singular
            return {false, {}};
        }
        if (pivot != i) {
            swap(A[pivot], A[i]);
            swap(b[pivot], b[i]);
        }
        // 2) Eliminate entries below the pivot
        for (int r = i + 1; r < n; ++r) {
            double factor = A[r][i] / A[i][i];
            if (fabs(factor) < eps) continue;
            for (int c = i; c < n; ++c) {
                A[r][c] -= factor * A[i][c];
            }
            b[r] -= factor * b[i];
        }
    }
    // 3) Back substitution: solve U x = b (A is now upper-triangular)
    vector<double> x(n, 0.0);
    for (int i = n - 1; i >= 0; --i) {
        double sum = b[i];
        for (int c = i + 1; c < n; ++c) sum -= A[i][c] * x[c];
        double diag = A[i][i];
        if (fabs(diag) < eps) return {false, {}}; // Safety check
        x[i] = sum / diag;
    }
    return {true, x};
}

int main() {
    // Example: Solve
    //    2x +  y -  z =   8
    //   -3x -  y + 2z = -11
    //   -2x +  y + 2z =  -3
    vector<vector<double>> A = {
        { 2,  1, -1},
        {-3, -1,  2},
        {-2,  1,  2}
    };
    vector<double> b = {8, -11, -3};

    auto [ok, x] = gaussian_elimination(A, b);
    if (!ok) {
        cout << "Singular or ill-conditioned system.\n";
        return 0;
    }
    cout.setf(std::ios::fixed);
    cout << setprecision(6);
    cout << "Solution x = [";
    for (int i = 0; i < (int)x.size(); ++i) {
        cout << x[i] << (i + 1 == (int)x.size() ? "]\n" : ", ");
    }

    // Compute residual norm ||b - A x||_2 to assess quality.
    // The solver took A and b by value, so these are still the originals.
    vector<double> r(3, 0.0);
    for (int i = 0; i < 3; ++i) {
        double Ax_i = 0.0;
        for (int j = 0; j < 3; ++j) Ax_i += A[i][j] * x[j];
        r[i] = b[i] - Ax_i;
    }
    double norm2 = 0.0;
    for (double v : r) norm2 += v * v;
    norm2 = sqrt(norm2);
    cout << "Residual 2-norm: " << norm2 << "\n";
    return 0;
}

This program performs Gaussian elimination with partial pivoting to avoid dividing by tiny pivots. It transforms A to upper-triangular form while applying the same operations to b, then uses back substitution to obtain x. A small residual indicates the solution fits the equations well. Because the solver takes A and b by value, the caller's originals remain intact for the residual check.

Time: O(n^3); Space: O(n^2)
LU Decomposition with Partial Pivoting (PA = LU) and Multiple Right-Hand Sides
#include <bits/stdc++.h>
using namespace std;

// Compute PA = LU using Doolittle's method with partial pivoting.
// L has unit diagonal; U is upper-triangular. P is stored as a permutation
// vector perm, where (Pb)[i] = b[perm[i]].
bool lu_decompose(const vector<vector<double>>& Ain,
                  vector<vector<double>>& L,
                  vector<vector<double>>& U,
                  vector<int>& perm,
                  double eps = 1e-12) {
    int n = (int)Ain.size();
    U = Ain; // Work on a copy
    L.assign(n, vector<double>(n, 0.0));
    perm.resize(n);
    iota(perm.begin(), perm.end(), 0); // Identity permutation

    for (int k = 0; k < n; ++k) {
        // Pivot selection on column k
        int piv = k;
        double best = fabs(U[k][k]);
        for (int r = k + 1; r < n; ++r) {
            if (fabs(U[r][k]) > best) { best = fabs(U[r][k]); piv = r; }
        }
        if (best < eps) return false; // Singular or nearly singular
        if (piv != k) {
            swap(U[piv], U[k]);
            swap(perm[piv], perm[k]);
            // Swap corresponding rows in L for previous columns [0..k-1]
            for (int c = 0; c < k; ++c) swap(L[piv][c], L[k][c]);
        }
        // Set L's diagonal to 1
        L[k][k] = 1.0;
        // Eliminate below U[k][k]
        for (int i = k + 1; i < n; ++i) {
            double factor = U[i][k] / U[k][k];
            L[i][k] = factor;
            for (int j = k; j < n; ++j) U[i][j] -= factor * U[k][j];
        }
    }
    return true;
}

// Apply permutation perm to vector b: returns Pb
vector<double> apply_perm(const vector<int>& perm, const vector<double>& b) {
    int n = (int)perm.size();
    vector<double> Pb(n);
    for (int i = 0; i < n; ++i) Pb[i] = b[perm[i]];
    return Pb;
}

// Forward substitution: L y = b (L has unit diagonal)
vector<double> forward_subst(const vector<vector<double>>& L, const vector<double>& b) {
    int n = (int)L.size();
    vector<double> y(n, 0.0);
    for (int i = 0; i < n; ++i) {
        double sum = b[i];
        for (int j = 0; j < i; ++j) sum -= L[i][j] * y[j];
        // L[i][i] is 1.0 by construction, so no division is needed
        y[i] = sum;
    }
    return y;
}

// Back substitution: U x = y
vector<double> back_subst(const vector<vector<double>>& U, const vector<double>& y, double eps = 1e-12) {
    int n = (int)U.size();
    vector<double> x(n, 0.0);
    for (int i = n - 1; i >= 0; --i) {
        double sum = y[i];
        for (int j = i + 1; j < n; ++j) sum -= U[i][j] * x[j];
        if (fabs(U[i][i]) < eps) throw runtime_error("Singular U in back substitution");
        x[i] = sum / U[i][i];
    }
    return x;
}

// Solve Ax = b given the PA = LU factorization
vector<double> lu_solve(const vector<vector<double>>& L,
                        const vector<vector<double>>& U,
                        const vector<int>& perm,
                        const vector<double>& b) {
    vector<double> Pb = apply_perm(perm, b);
    vector<double> y = forward_subst(L, Pb);
    vector<double> x = back_subst(U, y);
    return x;
}

int main() {
    // Example matrix
    vector<vector<double>> A = {
        { 4,  3,  0},
        { 3,  4, -1},
        { 0, -1,  4}
    };

    vector<vector<double>> L, U;
    vector<int> P;
    bool ok = lu_decompose(A, L, U, P);
    if (!ok) { cout << "LU failed: singular matrix.\n"; return 0; }

    // Solve A x = b1 and A x = b2 efficiently using the same LU
    vector<double> b1 = {24, 30, -24};
    vector<double> b2 = {1, 0, 1};

    vector<double> x1 = lu_solve(L, U, P, b1);
    vector<double> x2 = lu_solve(L, U, P, b2);

    cout.setf(std::ios::fixed);
    cout << setprecision(6);
    cout << "x1 = [";
    for (int i = 0; i < (int)x1.size(); ++i) cout << x1[i] << (i + 1 == (int)x1.size() ? "]\n" : ", ");
    cout << "x2 = [";
    for (int i = 0; i < (int)x2.size(); ++i) cout << x2[i] << (i + 1 == (int)x2.size() ? "]\n" : ", ");

    // Determinant from U and permutation parity: count transpositions by
    // decomposing the permutation vector into cycles.
    int swaps = 0;
    vector<int> seen(P.size(), 0);
    for (int i = 0; i < (int)P.size(); ++i) if (!seen[i]) {
        int j = i, len = 0;
        while (!seen[j]) { seen[j] = 1; j = P[j]; ++len; }
        swaps += len - 1; // a cycle of length len needs len-1 transpositions
    }
    double det = (swaps % 2 ? -1.0 : 1.0);
    for (int i = 0; i < (int)U.size(); ++i) det *= U[i][i];
    cout << "det(A) = " << det << "\n";
    return 0;
}

This program factors A as PA = LU using partial pivoting, then solves for two different right-hand sides without refactoring. It demonstrates forward and back substitution and shows how to compute the determinant from U and the permutation parity.

Time: O(n^3) for LU; O(n^2) per right-hand side. Space: O(n^2)
Reduced Row Echelon Form (RREF) to Diagnose Consistency and Free Variables
#include <bits/stdc++.h>
using namespace std;

// Compute RREF of an m x n matrix (in-place). Returns the row rank and pivot column indices.
struct RREFResult { int rank; vector<int> pivot_col; };

RREFResult rref(vector<vector<double>>& A, double eps = 1e-12) {
    int m = (int)A.size();
    int n = (m ? (int)A[0].size() : 0);
    int row = 0;
    vector<int> pivot_col;
    for (int col = 0; col < n && row < m; ++col) {
        // Find pivot row with largest magnitude in this column at or below 'row'
        int piv = row;
        double best = fabs(A[row][col]);
        for (int r = row + 1; r < m; ++r)
            if (fabs(A[r][col]) > best) { best = fabs(A[r][col]); piv = r; }
        if (best < eps) continue; // No pivot in this column
        swap(A[piv], A[row]);
        // Normalize pivot row
        double pivot = A[row][col];
        for (int c = 0; c < n; ++c) A[row][c] /= pivot;
        // Eliminate all other entries in this column
        for (int r = 0; r < m; ++r) if (r != row) {
            double factor = A[r][col];
            if (fabs(factor) < eps) continue;
            for (int c = 0; c < n; ++c) A[r][c] -= factor * A[row][c];
        }
        pivot_col.push_back(col);
        ++row;
    }
    return {(int)pivot_col.size(), pivot_col};
}

int main() {
    // Solve a 2x3 system with infinitely many solutions:
    //    x + y +  z = 2
    //   2x + y + 3z = 5
    // Build the augmented matrix [A|b] of size m x (n+1)
    vector<vector<double>> Aug = {
        {1, 1, 1, 2},
        {2, 1, 3, 5}
    };
    auto res = rref(Aug);

    // Print RREF
    cout.setf(std::ios::fixed);
    cout << setprecision(6);
    cout << "RREF([A|b]):\n";
    for (auto& row : Aug) {
        for (double v : row) cout << setw(10) << v << ' ';
        cout << '\n';
    }

    int m = (int)Aug.size();
    int n_aug = (m ? (int)Aug[0].size() : 0);
    int n = n_aug - 1; // number of variables

    // Check consistency: a row [0 ... 0 | c] with c != 0 means inconsistent
    bool inconsistent = false;
    for (int i = 0; i < m; ++i) {
        bool allZero = true;
        for (int j = 0; j < n; ++j) allZero &= fabs(Aug[i][j]) < 1e-9;
        if (allZero && fabs(Aug[i][n]) > 1e-9) inconsistent = true;
    }

    if (inconsistent) {
        cout << "System is inconsistent (no solution).\n";
        return 0;
    }

    // Identify free variables (columns without pivots)
    vector<int> isPivot(n, 0);
    for (int c : res.pivot_col) if (c < n) isPivot[c] = 1;
    vector<int> freeVars;
    for (int j = 0; j < n; ++j) if (!isPivot[j]) freeVars.push_back(j);

    cout << "Rank = " << res.rank << ", variables = " << n << ", free variables: ";
    for (int j : freeVars) cout << "x" << (j + 1) << ' ';
    cout << "\n";

    // Extract one particular solution by setting free vars = 0 and reading pivots from RREF
    vector<double> x(n, 0.0);
    for (int i = 0; i < m; ++i) {
        // The pivot is the row's leading 1: entries before it are zero, so the
        // first entry equal to 1 (within tolerance) is the pivot column.
        int pc = -1;
        for (int j = 0; j < n; ++j) if (fabs(Aug[i][j] - 1.0) < 1e-9) { pc = j; break; }
        if (pc == -1) continue;
        double rhs = Aug[i][n];
        for (int j = 0; j < n; ++j) if (j != pc) rhs -= Aug[i][j] * x[j];
        x[pc] = rhs; // with free variables set to zero
    }

    cout << "One particular solution (free vars = 0): [";
    for (int i = 0; i < n; ++i) cout << x[i] << (i + 1 == n ? "]\n" : ", ");
    return 0;
}

This program computes the RREF of an augmented matrix to diagnose consistency and reveal free variables. It prints the RREF, identifies free-variable columns, and constructs one particular solution by setting free variables to zero. You can parameterize the full solution by assigning arbitrary values to free variables and solving for pivots.

Time: O(m n^2) for an m × n matrix (excluding the augmented column); Space: O(m n)
#systems-of-linear-equations #gaussian-elimination #row-echelon-form #rref #lu-decomposition #partial-pivoting #forward-substitution #back-substitution #matrix-solve #numerical-stability #rank #determinant #condition-number #augmented-matrix #c++-linear-algebra