How I Study AI - Learn AI Papers & Lectures the Easy Way

State Space Models (SSM)

Key Points

  • A State Space Model (SSM) describes a dynamical system using a state vector x(t) that evolves via a first-order matrix differential equation and produces outputs y(t).
  • Continuous-time linear time-invariant (LTI) SSMs have the form \(\dot{x}(t) = Ax(t) + Bu(t),\; y(t) = Cx(t) + Du(t)\).
  • The solution uses the matrix exponential: \(x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau\).
  • You can discretize continuous SSMs for simulation and digital control using zero-order hold to get \(x_{k+1} = A_d x_k + B_d u_k\).
  • Controllability and observability determine whether you can steer internal states with inputs or reconstruct them from outputs.
  • Numerical simulation typically uses Runge–Kutta (RK4) or other ODE solvers and must choose stable step sizes.
  • SSMs unify and generalize transfer functions; the transfer function is \(G(s) = C(sI-A)^{-1}B + D\).
  • SSMs are widely used in robotics, circuits, aerospace, and signal processing, and enable modern tools like Kalman filtering and LQR.

Prerequisites

  • Linear Algebra (matrices, vectors, eigenvalues) — SSMs use matrix equations, stability from eigenvalues, and rank conditions for controllability/observability.
  • Ordinary Differential Equations (ODEs) — Continuous-time SSMs are first-order ODEs; understanding solutions and numerical methods is essential.
  • Numerical Methods — Simulation and discretization rely on numerical integration and stable algorithms for matrix exponentials.
  • Signals and Systems / Laplace Transform — Connects state space to transfer functions and frequency-domain reasoning.
  • C++ Programming Basics — Implementing simulations and matrix utilities requires comfort with arrays, loops, and functions.

Detailed Explanation


01 Overview

A State Space Model (SSM) is a standard way to represent dynamical systems using vectors and matrices. It breaks the system into an internal state x(t), an input u(t), and an output y(t). For linear time-invariant (LTI) systems, the state evolves according to a first-order differential equation \(\dot{x}(t) = Ax(t) + Bu(t)\) while the output is computed as \(y(t) = Cx(t) + Du(t)\). Here A, B, C, and D are constant matrices describing how states interact, how inputs affect states, how states map to outputs, and how inputs feed through to outputs.

This formulation is compact, scalable to high dimensions, and mathematically robust. It allows you to analyze stability (via eigenvalues of A), controllability and observability (via ranks of special matrices), and performance (via norms and responses). It also connects directly to controller design (e.g., LQR, pole placement) and estimation (e.g., Kalman filtering). Computationally, SSMs are convenient because they can be simulated with numerical ODE solvers and discretized for digital implementation. They generalize and systematize many familiar engineering models such as mass-spring-damper systems, RLC circuits, and aircraft attitude dynamics. In short, an SSM is a powerful, general-purpose framework for modeling, analyzing, and controlling dynamical systems.

02 Intuition & Analogies

Imagine tracking a car on a highway. You care about position and velocity, but your GPS only gives noisy position. The car’s true motion is governed by physics: acceleration changes velocity, velocity changes position. That invisible duo—position and velocity—is the system’s state. The gas pedal is your input u(t), and your speedometer or GPS reading is the output y(t). The state space model captures this with a simple rule: the rate of change of the state (how position and velocity evolve right now) equals some combination of the current state and input. If you knew the exact rules (the matrices A and B), then given the current state and the driver’s actions, you could predict the future state. To turn internal state into something you can measure, you apply another rule (matrices C and D) to produce an output.

Another analogy: think of a bakery oven. The oven’s internal temperature is the state; the heating element power is the input; the thermometer reading is the output. The oven doesn’t jump straight to your setpoint—it evolves over time, typically in a smooth way described by a differential equation. The state space model is like a recipe telling you how the oven’s temperature changes with heater power and how the thermometer reflects temperature.

Why is this useful? First, it separates what’s happening internally (state) from what you can do (input) and what you can see (output). Second, it scales: whether you have 2 states or 200, the same matrix equations work. Third, it plugs into powerful tools: checking if you can drive the oven’s temperature anywhere (controllability), whether your thermometer is good enough to reconstruct the temperature (observability), and how to design inputs to hit targets smoothly (control) or estimate states from noisy readings (filtering).

03 Formal Definition

A continuous-time linear time-invariant (LTI) State Space Model over real vector spaces is given by \[ \dot{x}(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t), \] where \(x(t) \in \mathbb{R}^n\) is the state, \(u(t) \in \mathbb{R}^m\) is the input, \(y(t) \in \mathbb{R}^p\) is the output, and \(A \in \mathbb{R}^{n \times n}\), \(B \in \mathbb{R}^{n \times m}\), \(C \in \mathbb{R}^{p \times n}\), \(D \in \mathbb{R}^{p \times m}\) are constant matrices. The system’s solution for initial state \(x(0) = x_0\) and input \(u(\cdot)\) is \[ x(t) = e^{At}x_0 + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau, \qquad y(t) = Cx(t) + Du(t). \] The matrix exponential is defined by the convergent series \(e^{At} = \sum_{k=0}^{\infty} \frac{(At)^k}{k!}\). Stability of the unforced system (\(u \equiv 0\)) is determined by the eigenvalues of A: asymptotic stability holds if and only if all eigenvalues of A have strictly negative real parts. Controllability of the pair (A, B) is equivalent to \(\operatorname{rank}(\mathcal{C}) = n\) where \(\mathcal{C} = [B, AB, \dots, A^{n-1}B]\). Observability of (A, C) is equivalent to \(\operatorname{rank}(\mathcal{O}) = n\) where \(\mathcal{O} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}\). The transfer function is \(G(s) = C(sI - A)^{-1}B + D\), mapping Laplace-domain inputs to outputs. Discretization with zero-order hold (ZOH) and step size \(\Delta t\) yields \(x_{k+1} = A_d x_k + B_d u_k\) with \(A_d = e^{A\Delta t}\) and \(B_d = \int_0^{\Delta t} e^{A\tau}\,d\tau\, B\).

04 When to Use

Use state space models when dynamics evolve continuously and depend on hidden internal quantities you want to predict, control, or estimate. They are ideal for modeling physical systems governed by Newton’s or Kirchhoff’s laws: mass–spring–damper mechanics, robotic manipulators, quadrotors, satellites, and RLC circuits. If you want to design modern controllers (LQR, pole placement, H∞) or estimators (Kalman filters), SSMs are the natural language. They are also helpful when you have multiple inputs and outputs (MIMO) because the matrix form scales cleanly. In digital control or signal processing, even if the plant is continuous, SSMs discretize cleanly to run on microcontrollers: use ZOH to compute \((A_d, B_d)\) for the chosen sampling time. In system identification, SSMs provide a structured model class to fit from experimental data. Choose SSMs over transfer functions when you need: (1) internal-state access (e.g., for state feedback), (2) time-domain simulation, (3) multivariable interactions, or (4) extensions to nonlinear/time-varying cases (by linearization around operating points). If your system is static, memoryless, or purely algebraic, an SSM may be unnecessary. If the system is strongly nonlinear and operates far from a linearization point, consider nonlinear state space models or piecewise-linear approximations.

⚠️Common Mistakes

• Mixing continuous- and discrete-time formulas: using \(A_d = e^{A\Delta t}\) in a continuous step, or using Euler updates while assuming exact ZOH discretization, can lead to stability or accuracy issues. Always be explicit about your time domain.
• Ignoring units and scaling: poorly scaled states and inputs produce ill-conditioned matrices and numerical instability in simulation or control design. Normalize variables when possible.
• Choosing an inappropriate step size: too-large steps in Euler/RK4 yield inaccurate or unstable simulations, especially for stiff systems; too-small steps waste computation and magnify rounding errors.
• Assuming controllability/observability without checking: attempting full-state feedback or state estimation on an uncontrollable/unobservable system fails. Always compute the ranks of \(\mathcal{C}\) and \(\mathcal{O}\).
• Confusing D matrix usage: D represents direct input-to-output feedthrough. Setting D incorrectly can add unrealistic algebraic loops or change high-frequency behavior.
• Overfitting in identification: high-order SSMs can fit noise. Prefer minimal realizations and validate on held-out data.
• Numerical pitfalls with matrix exponential: a naive series with large \(\|A\Delta t\|\) can be inaccurate. Use scaling-and-squaring/Padé or the Van Loan method for \(B_d\) if high accuracy is needed.
• Forgetting initial conditions: responses depend on \(x(0)\); neglecting or misestimating it skews simulations and Kalman filter initialization.

Key Formulas

Continuous-Time LTI SSM

\(\dot{x}(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t)\)

Explanation: Defines how the state changes over time and how outputs are produced from states and inputs using constant matrices.

State Transition Solution

\(x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau\)

Explanation: Gives the exact solution of the linear ODE in terms of the matrix exponential and a convolution-like integral with the input.

Matrix Exponential Series

\(e^{At} = \sum_{k=0}^{\infty} \frac{(At)^k}{k!}\)

Explanation: Defines the matrix exponential via a power series; useful for analysis and small-step approximations.

Transfer Function

\(G(s) = C(sI - A)^{-1}B + D\)

Explanation: Maps Laplace-domain inputs to outputs and connects state space to classical frequency-domain analysis.

Controllability Matrix

\(\mathcal{C} = [B, AB, \dots, A^{n-1}B], \qquad \operatorname{rank}(\mathcal{C}) = n\)

Explanation: A system is controllable if the controllability matrix has full row rank n; then all states are reachable.

Observability Matrix

\(\mathcal{O} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}, \qquad \operatorname{rank}(\mathcal{O}) = n\)

Explanation: A system is observable if the stacked observability matrix has full column rank n; then states can be reconstructed.

Zero-Order Hold Discretization

\(A_d = e^{A\Delta t}, \qquad B_d = \int_0^{\Delta t} e^{A\tau}\,d\tau\, B\)

Explanation: Exact discretization for piecewise constant inputs over each sampling interval of length \(\Delta t\).

Van Loan Method

\(\begin{bmatrix} A_d & B_d \\ 0 & I \end{bmatrix} = \exp\!\left(\begin{bmatrix} A & B \\ 0 & 0 \end{bmatrix} \Delta t\right)\)

Explanation: Computes both \(A_d\) and \(B_d\) via one block matrix exponential, improving numerical robustness for ZOH.

Stability Test

\(\text{Stable} \iff \Re(\lambda_i(A)) < 0 \;\; \forall i\)

Explanation: The continuous LTI system is asymptotically stable if all eigenvalues of A lie strictly in the left half-plane.

Controllability Gramian

\(W_c = \int_0^{\infty} e^{At} B B^{\top} e^{A^{\top} t}\,dt\)

Explanation: Quantifies input energy needed to reach states; solves the Lyapunov equation when A is stable.

Continuous-time Algebraic Riccati Equation (CARE)

\(A^{\top}P + PA - PBR^{-1}B^{\top}P + Q = 0\)

Explanation: Determines the optimal LQR cost-to-go matrix P, yielding state feedback \(K = R^{-1}B^{\top}P\).

PBH Test (Controllability)

\(\operatorname{rank}([\lambda I - A,\; B]) = n \quad \forall \lambda \in \mathbb{C}\)

Explanation: Alternative controllability check using eigenvalues; useful in theoretical analysis and numerical diagnostics.

Complexity Analysis

The computational cost of working with state space models depends on the chosen task and the system dimensions n (states), m (inputs), and p (outputs). For time-domain simulation using explicit Runge–Kutta 4 (RK4), one step requires a small, fixed number of matrix–vector multiplications. Each product Ax costs O(n^2) for dense A, so the per-step cost is O(n^2). For N time steps, total simulation cost is O(N n^2) time and O(n^2 + nm + pn) space to store A, B, C, D and the working vectors. If A is sparse with s nonzeros per row, the cost reduces to O(N s n).

Exact zero-order hold discretization requires evaluating the matrix exponential e^{AΔt} and the integral for B_d. High-accuracy methods like scaling-and-squaring with Padé approximation typically cost O(n^3) for dense matrices due to matrix multiplications and factorizations, with O(n^2) space. The Van Loan block exponential doubles the matrix dimension to (n+m), leading to O((n+m)^3) time. Series approximations to e^{AΔt} truncated at order K cost O(K n^3) if implemented via matrix–matrix products, or O(K n^2) per step if only matrix–vector products are needed.

Controllability/observability checks build matrices of size n × (nm) and (np) × n respectively, plus rank computations via Gaussian elimination or SVD. Dense rank via elimination costs roughly O(n^3) time and O(n^2) space; forming the block powers A^k B and C A^k costs O(n^3 m) and O(n^3 p) cumulatively for k up to n − 1 with dense A.

Advanced design/estimation adds further costs: solving the continuous Riccati equation (CARE) via Schur methods is O(n^3), and a single step of the (discrete) Kalman filter is O(n^3) if covariance updates use dense matrices. In large-scale problems, structure (sparsity, bandedness) or iterative methods (Krylov subspaces for matrix exponentials) are essential to keep time near-linear in the number of nonzeros and memory at O(nnz).

Code Examples

RK4 simulation of a continuous-time LTI state space model
#include <bits/stdc++.h>
using namespace std;

// Basic linear algebra helpers for small dense matrices
using Mat = vector<vector<double>>;
using Vec = vector<double>;

Vec matVec(const Mat& A, const Vec& x){
    size_t n = A.size(), m = x.size();
    Vec y(n, 0.0);
    for(size_t i = 0; i < n; ++i)
        for(size_t j = 0; j < m; ++j) y[i] += A[i][j]*x[j];
    return y;
}

Vec vecAdd(const Vec& a, const Vec& b){ Vec c = a; for(size_t i = 0; i < a.size(); ++i) c[i] += b[i]; return c; }
Vec vecScale(const Vec& a, double s){ Vec c = a; for(double& v : c) v *= s; return c; }

// Derivative: dx/dt = A x + B u(t)
Vec f(const Mat& A, const Mat& B, const Vec& x, const Vec& u){
    Vec dx = matVec(A, x);
    for(size_t i = 0; i < A.size(); ++i)
        for(size_t j = 0; j < B[0].size(); ++j)
            dx[i] += B[i][j]*u[j];
    return dx;
}

int main(){
    // Example: mass-spring-damper m*xdd + c*xd + k*x = u
    // State x = [position, velocity]^T
    double m = 1.0, c = 0.5, k = 4.0;
    Mat A = {{0.0, 1.0}, {-k/m, -c/m}};  // 2x2
    Mat B = {{0.0}, {1.0/m}};            // 2x1
    Mat C = {{1.0, 0.0}};                // output is position
    Mat D = {{0.0}};

    // Input: step force u(t) = 1 for t >= 0
    auto u_of_t = [&](double t) -> Vec { (void)t; return Vec{1.0}; };

    // RK4 parameters
    double dt = 0.001;   // time step (s)
    int steps = 10000;   // simulate 10 s
    Vec x = {0.0, 0.0};  // initial position, velocity

    cout << fixed << setprecision(6);
    cout << "# t position velocity output\n";
    double t = 0.0;
    for(int kstep = 0; kstep < steps; ++kstep){
        Vec u = u_of_t(t);
        // RK4 stages
        Vec k1 = f(A, B, x, u);
        Vec k2 = f(A, B, vecAdd(x, vecScale(k1, dt*0.5)), u);
        Vec k3 = f(A, B, vecAdd(x, vecScale(k2, dt*0.5)), u);
        Vec k4 = f(A, B, vecAdd(x, vecScale(k3, dt)), u);
        Vec dx = vecScale(vecAdd(vecAdd(k1, vecScale(vecAdd(k2, k3), 2.0)), k4), dt/6.0);
        // Update state
        x = vecAdd(x, dx);
        // Output y = Cx + Du
        Vec y = matVec(C, x);
        for(size_t i = 0; i < D.size(); ++i)
            for(size_t j = 0; j < D[0].size(); ++j)
                y[i] += D[i][j]*u[j];
        if(kstep % 100 == 0)
            cout << t << " " << x[0] << " " << x[1] << " " << y[0] << "\n";
        t += dt;
    }
    return 0;
}

Simulates a 2D state (position, velocity) under a unit step force using RK4. The code defines basic dense linear algebra, computes the derivative f(x,u) = Ax + Bu, advances the state, and prints outputs periodically. This pattern generalizes to any (A,B,C,D).

Time: O(N n^2), where N is the number of time steps and n is the state dimension.
Space: O(n^2 + nm + pn) to store A, B, C, D and working vectors.
Zero-Order Hold discretization via truncated series
#include <bits/stdc++.h>
using namespace std;
using Mat = vector<vector<double>>;

Mat eye(size_t n){ Mat I(n, vector<double>(n, 0.0)); for(size_t i = 0; i < n; ++i) I[i][i] = 1.0; return I; }

Mat matMul(const Mat& A, const Mat& B){
    size_t n = A.size(), k = A[0].size(), m = B[0].size();
    Mat C(n, vector<double>(m, 0.0));
    for(size_t i = 0; i < n; ++i)
        for(size_t t = 0; t < k; ++t){
            double a = A[i][t];
            for(size_t j = 0; j < m; ++j) C[i][j] += a*B[t][j];
        }
    return C;
}
Mat matAdd(const Mat& A, const Mat& B){ Mat C = A; for(size_t i = 0; i < A.size(); ++i) for(size_t j = 0; j < A[0].size(); ++j) C[i][j] += B[i][j]; return C; }
Mat matScale(const Mat& A, double s){ Mat C = A; for(auto& r : C) for(double& v : r) v *= s; return C; }

// Truncated series: A_d = sum_{k=0}^{K} (A dt)^k / k!
//                   B_d = sum_{k=0}^{K} (A dt)^k * dt/(k+1)! * B
pair<Mat, Mat> discretizeZOH_series(const Mat& A, const Mat& B, double dt, int K){
    size_t n = A.size(), m = B[0].size();
    Mat A_dt = matScale(A, dt);

    // Series for A_d
    Mat Ad = eye(n);
    Mat term = eye(n);   // (A dt)^k, starting at k = 0
    double fact = 1.0;   // k!
    for(int k = 1; k <= K; ++k){
        term = matMul(term, A_dt);
        fact *= k;
        Ad = matAdd(Ad, matScale(term, 1.0/fact));
    }

    // Series for B_d
    Mat Bd(n, vector<double>(m, 0.0));
    term = eye(n); fact = 1.0;  // reset to (A dt)^0 and 0!
    for(int k = 0; k <= K; ++k){
        double coeff = dt / (fact * (k + 1));  // dt / (k+1)!
        Mat add = matMul(matScale(term, coeff), B);
        for(size_t i = 0; i < n; ++i)
            for(size_t j = 0; j < m; ++j) Bd[i][j] += add[i][j];
        term = matMul(term, A_dt);  // next power
        fact *= (k + 1);            // next factorial
    }
    return {Ad, Bd};
}

int main(){
    // Same mass-spring-damper as before
    double m = 1.0, c = 0.5, k = 4.0;
    Mat A = {{0.0, 1.0}, {-k/m, -c/m}};
    Mat B = {{0.0}, {1.0/m}};

    double dt = 0.01;  // sample time
    int K = 6;         // truncation order (increase for accuracy)
    auto [Ad, Bd] = discretizeZOH_series(A, B, dt, K);

    cout << fixed << setprecision(6);
    cout << "Ad =\n";
    for(auto& r : Ad){ for(double v : r) cout << setw(12) << v << ' '; cout << '\n'; }
    cout << "Bd =\n";
    for(auto& r : Bd){ for(double v : r) cout << setw(12) << v << ' '; cout << '\n'; }

    // One-step response to unit input from zero state: x1 = Ad x0 + Bd u0 = Bd u0
    vector<double> u0 = {1.0};
    vector<double> x1(2, 0.0);
    for(size_t i = 0; i < 2; ++i) x1[i] += Bd[i][0]*u0[0];
    cout << "x1 from rest with u=1: [" << x1[0] << ", " << x1[1] << "]\n";
    return 0;
}
64

Implements ZOH discretization using truncated power series: Ad ≈ ∑ (AΔt)^k/k! and Bd ≈ ∑ (AΔt)^k Δt/(k+1)! B. This is educational and works well when ∥AΔt∥ is small and K is large enough. For production, prefer scaling-and-squaring/Padé or the Van Loan block exponential.

Time: O(K n^3 + K n^2 m) for dense matrices (matrix–matrix products dominate).
Space: O(n^2 + nm) for storing A, Ad, powers, and Bd.
Controllability and Observability rank checks
#include <bits/stdc++.h>
using namespace std;
using Mat = vector<vector<double>>;

Mat matMul(const Mat& A, const Mat& B){
    size_t n = A.size(), k = A[0].size(), m = B[0].size();
    Mat C(n, vector<double>(m, 0.0));
    for(size_t i = 0; i < n; ++i)
        for(size_t t = 0; t < k; ++t){
            double a = A[i][t];
            for(size_t j = 0; j < m; ++j) C[i][j] += a*B[t][j];
        }
    return C;
}
Mat horzCat(const Mat& A, const Mat& B){ Mat C = A; for(size_t i = 0; i < A.size(); ++i) C[i].insert(C[i].end(), B[i].begin(), B[i].end()); return C; }
Mat vertCat(const Mat& A, const Mat& B){ Mat C = A; for(const auto& r : B) C.push_back(r); return C; }

// Row-reduce a copy of M with partial pivoting and count pivots (= rank)
int rankGauss(Mat M, double eps = 1e-9){
    int n = M.size(), m = M[0].size(), r = 0;
    for(int c = 0; c < m && r < n; ++c){
        int piv = r;
        for(int i = r; i < n; ++i) if(fabs(M[i][c]) > fabs(M[piv][c])) piv = i;
        if(fabs(M[piv][c]) < eps) continue;
        swap(M[piv], M[r]);
        double div = M[r][c];
        for(int j = c; j < m; ++j) M[r][j] /= div;
        for(int i = 0; i < n; ++i) if(i != r){
            double f = M[i][c];
            if(fabs(f) < eps) continue;
            for(int j = c; j < m; ++j) M[i][j] -= f*M[r][j];
        }
        ++r;
    }
    return r;
}

// C = [B, AB, ..., A^{n-1}B]
Mat controllabilityMatrix(const Mat& A, const Mat& B){
    size_t n = A.size();
    Mat Cn = B, Ak = A;  // Ak holds A^k
    for(size_t k = 1; k < n; ++k){
        Cn = horzCat(Cn, matMul(Ak, B));
        Ak = matMul(Ak, A);  // next power
    }
    return Cn;
}

// O = [C; CA; ...; CA^{n-1}]
Mat observabilityMatrix(const Mat& A, const Mat& C){
    size_t n = A.size();
    Mat On = C, Ak = A;
    for(size_t k = 1; k < n; ++k){
        On = vertCat(On, matMul(C, Ak));
        Ak = matMul(Ak, A);
    }
    return On;
}

int main(){
    // Example 1: controllable and observable mass-spring-damper
    double m = 1.0, c = 0.5, k = 4.0;
    Mat A1 = {{0.0, 1.0}, {-k/m, -c/m}}, B1 = {{0.0}, {1.0/m}}, C1 = {{1.0, 0.0}};
    cout << "Rank(C1)=" << rankGauss(controllabilityMatrix(A1, B1))
         << ", Rank(O1)=" << rankGauss(observabilityMatrix(A1, C1)) << " (n=2)\n";

    // Example 2: uncontrollable system: dx1 = x2, dx2 = 0, input has no effect (B = 0)
    Mat A2 = {{0.0, 1.0}, {0.0, 0.0}}, B2 = {{0.0}, {0.0}}, C2 = {{1.0, 0.0}};
    cout << "Rank(C2)=" << rankGauss(controllabilityMatrix(A2, B2))
         << ", Rank(O2)=" << rankGauss(observabilityMatrix(A2, C2)) << " (n=2)\n";
    return 0;
}
50

Builds the controllability matrix [B, AB, ..., A^{n-1}B] and the observability matrix [C; CA; ...; CA^{n-1}], then computes ranks using Gaussian elimination with partial pivoting. The first system is fully controllable/observable; the second is uncontrollable.

Time: O(n^3 m + n^3 p) for dense matrices (forming powers and rank factorization).
Space: O(n^2 m + n^2 p) to store the controllability and observability block matrices.
#state space #matrix exponential #controllability #observability #zero-order hold #runge–kutta #transfer function #kalman filter #lqr #riccati equation #lyapunov #linear systems #discretization #numerical stability #eigenvalues