Chapter 3: Linear transformations and matrices | Essence of Linear Algebra | How I Study AI
📚 Essence of Linear Algebra · 4 / 12
Beginner
3Blue1Brown Korean
AI Basics · YouTube

Key Summary

  • This lesson explains linear transformations: special functions that move every point in space to a new point while keeping straight lines straight and keeping the origin fixed. You learn why not all transformations are linear and how these two rules act like a “truth test.” Examples include scaling (stretching/shrinking) and rotation, which are linear, and translation, which is not because it moves the origin.
  • A key idea is that a linear transformation in 2D is fully determined by what it does to the two standard basis vectors, i-hat and j-hat. If you know where i-hat (1,0) and j-hat (0,1) go, you can figure out where any vector goes because any vector is made of some amount of i-hat plus some amount of j-hat. This turns a big problem into a small one: track just two vectors.
  • Matrices are the language for linear transformations. Put the transformed i-hat as the first column and the transformed j-hat as the second column to build a 2x2 matrix. Then use matrix–vector multiplication to send any vector through the transformation.
  • Matrix–vector multiplication gives the transformed coordinates using the columns as building blocks. The numbers of your input vector are the weights for those columns, and adding those weighted columns gives the output vector. This column view makes the geometry very clear.
  • Scaling is linear because lines stay straight and the origin doesn’t move. Doubling all vectors keeps directions the same while changing lengths. Rotations about the origin are also linear since they keep the origin fixed and map lines to lines.
  • Translations are not linear because they move the origin. Shifting every point by (1,1) changes where the origin lands, breaking the key rule. Even though translations keep lines straight, they fail the origin test.
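The two-rule “truth test” can be spot-checked numerically. Below is a minimal sketch, not from the lesson: a hypothetical helper `is_linear_2d` that checks the origin rule plus a scaling probe on a few sample vectors.

```python
def is_linear_2d(T, samples=((1, 0), (0, 1), (2, 3), (-1, 4)), tol=1e-9):
    """Heuristic linearity check: origin stays fixed and scaling by 2 carries through."""
    # Rule: the origin must map to the origin.
    ox, oy = T((0.0, 0.0))
    if abs(ox) > tol or abs(oy) > tol:
        return False
    # Linearity implies T(2*u) = 2*T(u) for every sample u.
    for (x, y) in samples:
        tx, ty = T((x, y))
        sx, sy = T((2 * x, 2 * y))
        if abs(sx - 2 * tx) > tol or abs(sy - 2 * ty) > tol:
            return False
    return True

scale = lambda v: (2 * v[0], 2 * v[1])       # linear: scaling by 2
translate = lambda v: (v[0] + 1, v[1] + 1)   # not linear: moves the origin
print(is_linear_2d(scale), is_linear_2d(translate))  # → True False
```

Scaling passes both probes; the translation fails immediately because (0,0) lands at (1,1).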

Why This Lecture Matters

Linear transformations and matrices are the bridge between geometric intuition and algebraic calculation. If you work in graphics, robotics, physics, or data analysis, you constantly need to rotate, scale, or otherwise re-express vectors. Knowing that every linear transformation is determined by its action on i-hat and j-hat lets you build and read 2x2 matrices with confidence. This saves time and prevents mistakes when designing or debugging systems that move points around, animate objects, or change coordinate systems. In real projects, you can quickly model a desired effect—like a tilt, a stretch, or a projection—by choosing where the basis vectors should go, writing the matrix, and applying it to your data. This knowledge solves common problems: deciding if a transformation can be represented by a simple matrix, computing outputs for many points at once, and understanding the large-scale effect on space. It boosts your career by strengthening a core skill used across STEM fields. Industry relies on these tools because they scale: once you grasp 2D, the same ideas extend to 3D and beyond, and to more advanced topics like combining transformations (matrix multiplication), undoing them (inverses), and measuring area changes (determinants). Mastering this foundation gives you a reliable mental model and a practical toolkit you will use again and again.

Lecture Summary


01 Overview

This lesson teaches the core idea that links geometry and algebra: a linear transformation is a special kind of function that moves every point in space to a new point while preserving straightness of lines and keeping the origin fixed. You will see how to represent any such transformation in two dimensions using a 2x2 matrix, and how matrix–vector multiplication describes where any vector lands after the transformation. The central simplification is that to describe the whole transformation, you only need to know where two special vectors—the standard basis vectors i-hat and j-hat—are sent. Once you know those two destinations, you can assemble a matrix whose columns are exactly those two vectors, and then compute the image of any vector by multiplying the matrix by that vector.

The lesson is designed for beginners to linear algebra and anyone who wants a deeply intuitive understanding of matrices. You don’t need advanced math to follow along. It helps if you know what vectors are, what coordinates like (x, y) mean, and how to do basic arithmetic. Some familiarity with the idea that a vector can be written as a combination of i-hat (1,0) and j-hat (0,1) will make things even smoother, but the lesson also explains this gently.

After completing this lesson, you will be able to: (1) recognize whether a transformation is linear by checking two simple geometric rules—lines stay lines, and the origin stays fixed; (2) build the 2x2 matrix of a linear transformation by recording where i-hat and j-hat go; (3) apply a linear transformation to any vector using matrix–vector multiplication; and (4) understand geometric effects like scaling, rotation, and projection as matrices acting on vectors. These are practical skills, because matrices are used everywhere—from computer graphics to physics, from robotics to data science—to describe and compute changes of position, orientation, and scale.

The structure of the lesson flows from big-picture intuition to concrete calculation. It starts by defining linear transformations with two geometric rules that are easy to visualize. Then it shows examples of linear transformations (scaling and rotation) and a non-linear transformation (translation) to sharpen your sense of what counts. Next, it explains the crucial role of the standard basis vectors i-hat and j-hat. You learn that a 2D vector (x, y) is x copies of i-hat plus y copies of j-hat, which makes it enough to track where i-hat and j-hat go under the transformation. From there, the lesson introduces how to construct the matrix by placing those images as the columns of a 2x2 array. Finally, it demonstrates matrix–vector multiplication as the process that takes in the input vector’s coordinates and outputs the transformed coordinates, tying the algebraic steps back to the geometric motion you can imagine in the plane.

By the end, you have both a picture in your mind—how the whole grid moves under scaling, rotation, or projection—and a recipe in your hands—how to write down the matrix and compute the result for any vector. You also practice with specific numeric examples to ground the ideas: mapping i-hat to (3,0) and j-hat to (1,2), and mapping both i-hat and j-hat to (1,1). These anchor your understanding and show that the columns of a matrix are more than just numbers—they are the images of the basis vectors, the building blocks of the transformation.
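The two numeric anchors above can be reproduced in a few lines. A sketch, assuming the hypothetical helper `apply_matrix` takes the matrix as a pair of columns (the images of i-hat and j-hat):

```python
def apply_matrix(columns, v):
    """Apply a 2x2 matrix given as (image_of_ihat, image_of_jhat) to vector v."""
    (a, b), (c, d) = columns   # columns are the images of i-hat and j-hat
    x, y = v
    return (a * x + c * y, b * x + d * y)

# i-hat -> (3, 0), j-hat -> (1, 2): the matrix [[3, 1], [0, 2]]
print(apply_matrix(((3, 0), (1, 2)), (2, 3)))   # → (9, 6)
# both basis vectors -> (1, 1): everything lands on the line y = x
print(apply_matrix(((1, 1), (1, 1)), (3, -1)))  # → (2, 2)
```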

Key Takeaways

  • ✓Always test linearity with two checks: lines must stay straight and the origin must stay fixed. If either fails, the rule is not a pure linear transformation. This quick filter saves you from trying to force a matrix onto a non-linear action. Use it before building any matrix model.
  • ✓To build the 2x2 matrix of a transformation, first find where i-hat and j-hat go. Put the image of i-hat as the first column and the image of j-hat as the second column. This completely defines the transformation. It’s the fastest, most reliable construction method.
  • ✓Read matrix–vector multiplication as mixing columns using the input’s coordinates as weights. This picture helps detect arithmetic mistakes and explains why the formulas ax + cy and bx + dy appear. When stuck, draw the columns and scale/add them by x and y. Seeing the geometry clarifies the numbers.
  • ✓Use simple test inputs to validate your matrix: multiply by (1,0) and (0,1) and verify you get the columns you intended. If not, you misplaced numbers in the matrix. This habit catches row/column mix-ups early. It keeps your models trustworthy.
  • ✓Remember that translations are not linear because they move the origin. Don’t try to represent a pure translation with a 2x2 matrix acting on vectors alone. Recognize when you need affine tools instead. This prevents confusion and wrong computations.
  • ✓Practice with concrete examples to build intuition: pick destinations for i-hat and j-hat, make the matrix, and apply it to several vectors. Sketch the grid before and after to see lines tilt and stretch. Repetition with different matrices cements understanding. Geometry plus computation is a powerful combo.
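The “simple test inputs” habit above is easy to automate. A minimal sketch (the helper name `columns_of` is illustrative): multiply by (1,0) and (0,1) and confirm you get back the columns you intended.

```python
def columns_of(M):
    """Recover a 2x2 matrix's columns by multiplying it with (1,0) and (0,1)."""
    def matvec(M, v):
        return (M[0][0] * v[0] + M[0][1] * v[1],
                M[1][0] * v[0] + M[1][1] * v[1])
    return matvec(M, (1, 0)), matvec(M, (0, 1))

M = [[3, 1], [0, 2]]
print(columns_of(M))  # → ((3, 0), (1, 2)), the intended images of i-hat and j-hat
```

If the recovered pair differs from what you planned, you misplaced numbers (usually a row/column mix-up).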

Glossary

Linear transformation

A special rule that moves every vector to another vector while keeping straight lines straight and keeping the origin fixed. It is like stretching, shrinking, or rotating the whole plane around the origin. The rule treats the space evenly and doesn’t bend or curve lines. Because of these properties, we can describe it with a matrix. This makes both thinking and computing much simpler.

Origin

The special point at the center of the coordinate system, located at (0,0) in 2D. It’s the fixed reference point from which all positions are measured. For linear transformations, the origin must stay in place. If the origin moves, the transformation is not linear. Think of it as the anchor pin holding the grid still.

Straight line

A path in the plane where direction does not change. In linear transformations, straight lines must remain straight, though they may tilt or stretch. If lines bend into curves, the transformation is not linear. This property keeps the structure of space simple. It lets us use matrices to describe the action.

i-hat (standard basis vector)

The vector pointing one unit along the x-axis, written as (1,0). It represents a single step in the horizontal direction. Together with j-hat, it can build any 2D vector. Knowing where i-hat goes under a transformation is half the information you need. It becomes the first column of the matrix.

Tags: linear transformation, matrix, matrix–vector multiplication, basis vectors, i-hat, j-hat, scaling, rotation, translation, projection, origin, straight lines, 2x2 matrix, column interpretation, linear combination, plane, identity matrix, weighted sum, geometric intuition, standard basis
  • If i-hat goes to (a,b) and j-hat goes to (c,d), the matrix of the transformation is [[a c],[b d]]. Multiply this matrix by any vector (x,y) to get the new vector. The arithmetic is ax + cy for the new x, and bx + dy for the new y.
  • A concrete example: if i-hat maps to (3,0) and j-hat maps to (1,2), the matrix is [[3,1],[0,2]]. Plugging in (2,3) yields (9,6). This shows how columns act like the images of the basis vectors.
  • Another example: if both i-hat and j-hat map to (1,1), the matrix is [[1,1],[1,1]]. This squeezes the whole plane onto the line y = x, like a projection. Everything collapses onto that diagonal line.
  • The lesson builds intuition by linking geometry and algebra. The geometric rules (lines stay straight, origin fixed) match the algebraic rules behind matrices and linear combinations. This harmony lets you see transformations and compute them easily.
  • You practice reading matrices as actions: columns show where the basis vectors go, and multiplying applies the action to any vector. This is the foundation for understanding more complex ideas like matrix multiplication and composition of transformations later.
  • The main takeaway: linear transformations are simple to describe and compute once you think in terms of basis vectors and matrices. Remember the two rules and the “columns are images of i-hat and j-hat” idea. With those, you can model many geometric effects using just a small 2x2 grid of numbers.
02 Key Concepts

    • 01

      Definition of a linear transformation: A linear transformation is a function that moves every point in space to another point while keeping straight lines straight and keeping the origin fixed. This definition is geometric and easy to picture: no bending or curving of the grid, and (0,0) stays where it is. It captures a broad class of familiar moves like stretching and rotating. The key is that the transformation acts evenly across the plane. If you imagine the whole coordinate grid, each straight line stays a straight line after the move, and the center does not drift.

    • 02

      Two key rules: lines stay lines, and the origin stays fixed: These two rules are the quick test for linearity. Many transformations keep lines straight, but if they move the origin, they fail the test. Likewise, some transformations keep the origin, but if they bend lines into curves, they fail. Together, the rules ensure the transformation respects the straight structure of the plane. They also match the algebraic idea of preserving addition and scaling, though the lesson focuses on the geometric view.

    • 03

      Why translation is not linear: A translation shifts every point by a fixed amount, like adding (1,1) to all coordinates. While it keeps lines straight, it moves the origin to (1,1), violating the origin rule. Therefore, translations are not linear transformations. This makes them a different class, usually called affine transformations. The distinction matters because matrices alone represent linear transformations, not pure translations.

    • 04

      Scaling as a linear transformation: Scaling stretches or shrinks vectors, for example doubling all lengths. Straight lines remain straight under scaling, and the origin remains in place. This makes scaling a textbook example of a linear transformation. The effect is uniform: every vector scales by the same factor in the same direction. This uniformity is part of why matrices can describe it cleanly.

    • 05

      Rotation as a linear transformation: Rotating the plane around the origin by a fixed angle moves points while keeping the origin exactly where it is. Straight lines rotate to new straight lines. Directions change, but straightness and the origin are preserved, which is why rotation is linear. This geometric motion can be encoded by a specific matrix depending on the angle. Though the exact angle formula isn’t needed here, the concept of rotation fits perfectly into the linear framework.

    • 06

      Standard basis vectors i-hat and j-hat: The vectors i-hat and j-hat, written as (1,0) and (0,1), are the building blocks of all vectors in the plane. Any vector (x,y) can be made by taking x copies of i-hat and y copies of j-hat. This idea is called a linear combination, using x and y as weights. Tracking how a transformation moves i-hat and j-hat is enough to know how it moves everything. That’s why the matrix can be built from their images.

    • 07

      Matrix as a record of where i-hat and j-hat go: To form the matrix of a linear transformation, place the image of i-hat as the first column and the image of j-hat as the second column. This creates a 2x2 array that fully captures the transformation. The columns are not arbitrary—they are directly meaningful geometric vectors. Reading a matrix this way turns it into a picture: the first column is where (1,0) lands, and the second column is where (0,1) lands. This column view is a key habit to develop.

    • 08

      Matrix–vector multiplication as building with columns: When you multiply the matrix by a vector (x,y), you are taking x copies of the first column and y copies of the second column, then adding them. The result is the transformed vector. This connects algebra to geometry because you are literally reconstructing the output from the images of the basis. It also explains why the arithmetic formula ax + cy and bx + dy appears: those are just the coordinates of the weighted column sum. Seeing it this way makes the computation feel natural.

    • 09

      Computing a transformed vector: Given a 2x2 matrix with columns (a,b) and (c,d), the image of (x,y) is (ax + cy, bx + dy). Each output coordinate is a weighted sum of a row against the input vector. This is another way to compute the same result as the column view. Both perspectives agree, so you can pick the one that makes more sense to you. In practice, thinking in columns often builds better intuition.

    • 10

      Example with i-hat→(3,0) and j-hat→(1,2): If i-hat goes to (3,0) and j-hat goes to (1,2), the matrix is [[3, 1], [0, 2]]. Multiplying this matrix by (2,3) gives (9,6). This shows that 2 copies of (3,0) plus 3 copies of (1,2) equals (6,0) + (3,6) = (9,6). It cements the idea that the input vector’s coordinates are weights on the matrix columns. You can visualize how the grid is stretched and tilted by this mapping.

    • 11

      Example with both i-hat and j-hat going to (1,1): If both basis vectors land on (1,1), the matrix is [[1, 1], [1, 1]]. This collapses the plane onto the line y = x, like squashing a sheet onto a diagonal thread. Every vector gets sent to some point on that line. The two columns being identical is a visual clue that the output can only lie along that single direction. This is a vivid picture of a projection-like effect.

    • 12

      Why the origin staying fixed matters: The origin is the anchor of the coordinate system. If it moves, the transformation behaves like a shift plus something else, which is not purely linear. Keeping the origin fixed means the transformation is centered and evenly applied. It also guarantees that the image of the zero vector is still the zero vector. This property keeps computations consistent and predictable.

    • 13

      Why lines staying lines matters: Straight lines represent constant-direction paths in the plane. If a transformation bends them, it introduces curving and warping that linear algebra does not model with simple matrices. Keeping lines straight preserves the structure of space in a way matrices can capture. It means the transformation respects how vectors add along a line. This straightness is the hallmark of linear behavior.

    • 14

      Vectors as linear combinations of i-hat and j-hat: Writing (x,y) as x·i-hat + y·j-hat is more than a convenience; it’s the key that unlocks matrix representation. Because a linear transformation respects how vectors combine, knowing where i-hat and j-hat go tells you where any combination goes. The weights x and y simply carry through to the images. This is why matrices are compact yet complete descriptions. The concept also generalizes to higher dimensions with more basis vectors.

    • 15

      Reading a matrix like a map: Think of each column as a destination for a compass direction: east (i-hat) and north (j-hat). The matrix records: “east goes here,” “north goes there.” To see where any point goes, mix those two destinations according to how much east and north the point has. This is a simple, reliable way to mentally simulate transformations. It builds strong intuition for future topics like matrix multiplication.

    • 16

      Geometric meaning of identical columns: When both columns of a matrix are the same, the transformation sends the whole plane onto a single line. No matter what the input is, the output lies along that shared column direction. This reveals a loss of dimensionality—many inputs map to the same output. It’s like squashing a wide sheet into a narrow strip. The example with [[1, 1], [1, 1]] makes this effect very clear.

    • 17

      Practical computation habit: Use the column picture to check your work. Take x copies of column 1 and y copies of column 2, then add. If your arithmetic result doesn’t match that picture, re-check the numbers. This method helps prevent sign mistakes and misordered operations. It also keeps the geometry in mind while you calculate. Over time, it becomes second nature.

    • 18

      Visualizing with the coordinate grid: Picture how the standard grid lines (vertical and horizontal) move. Vertical lines x = c become the images of all points with the same weighted i-hat amount, while horizontal lines y = c track the j-hat amount. Under linear transformations, these lines stay straight but tilt and stretch. This mental movie shows the whole action, not just single points. It also reinforces that the origin grid crossing stays put.

    • 19

      From geometric rules to algebraic form: The geometric definition (lines stay lines, origin fixed) matches the algebraic form of a matrix acting on vectors. Matrices naturally keep the origin fixed and map straight lines to straight lines. That’s why they’re the perfect tool for linear transformations. The move from pictures to numbers is smooth and consistent. This alignment is the heart of linear algebra.

    03 Technical Details

    Below is a complete, step-by-step guide to understanding, building, and using linear transformations and their matrices, written so you can follow and compute everything yourself.

    1. Linear transformation: what, why, and how
    • 🎯 One-line definition: A linear transformation is a rule that moves every vector to a new vector while keeping straight lines straight and keeping the origin fixed. For example, scaling or rotating the plane around the origin fits this rule.
    • 🏠 Everyday analogy: It’s like stretching or rotating a rubber sheet pinned at its center dot: the sheet may change shape, but lines drawn on it remain straight, and the pin at the center never moves. For instance, imagine a square grid drawn on a stretchy fabric with a pin through the exact center.
    • 🔧 Technical explanation: Formally, in 2D, the transformation sends each vector (x, y) to another vector in the plane while preserving straightness and fixing the origin. For example, if T is a scaling by 2, then T(x, y) = (2x, 2y); with (3, -1) as input, the output is (6, -2).
    • 💡 Why it matters: Without these rules, we can’t use simple matrices to describe the action; curving or moving the origin would require more complicated math. Keeping the origin fixed and lines straight makes the behavior predictable and computable.
    • 📝 Concrete example: Doubling all vectors in the plane turns (1, 4) into (2, 8) and (-3, 2) into (-6, 4).
    2. The two geometric rules in detail
    • 🎯 One-line definition: Rule 1, straight lines remain straight; Rule 2, the origin stays fixed. For instance, a rotation by a fixed angle obeys both rules.
    • 🏠 Everyday analogy: Think of moving furniture on a wooden floor with straight planks. If the planks (lines) stay straight, and the nail at the exact center of the room (origin) doesn’t move, you’re doing a “linear-friendly” move.
    • 🔧 Technical explanation: For any line in the plane, its image is also a line, and the special point (0, 0) must map to (0, 0). For example, if T(x, y) = (x + y, x + y), then the point (1, 2) on the line y = 2x maps to (3, 3), which lies on the line y = x.
    • 💡 Why it matters: These conditions ensure we can represent the transformation with a matrix, making calculations simple and reliable.
    • 📝 Concrete example: A rotation by 90° around the origin sends (2, 0) to (0, 2), still keeping the origin fixed.
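The line-to-line rule can be checked numerically for the example map T(x, y) = (x + y, x + y): sample points on the line y = 2x, transform them, and test that the images are collinear. A minimal sketch (the sample points are illustrative):

```python
def T(x, y):
    """The section's example map: T(x, y) = (x + y, x + y)."""
    return (x + y, x + y)

# Three points on the line y = 2x.
points = [(t, 2 * t) for t in (0, 1, 2)]
images = [T(x, y) for (x, y) in points]

# Collinearity test: the cross product of the two difference vectors is zero.
(x0, y0), (x1, y1), (x2, y2) = images
cross = (x1 - x0) * (y2 - y0) - (y1 - y0) * (x2 - x0)
print(images, cross == 0)  # → [(0, 0), (3, 3), (6, 6)] True
```

The images all land on y = x, so the straight line maps to a straight line, exactly as the rule demands.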
    3. Standard basis vectors (i-hat and j-hat)
    • 🎯 One-line definition: The standard basis vectors are i-hat = (1, 0) and j-hat = (0, 1); they generate every 2D vector. For example, (4, -2) = 4·i-hat + (-2)·j-hat.
    • 🏠 Everyday analogy: They are like the east and north steps on a grid map: any location can be reached by taking some number of steps east and some number of steps north.
    • 🔧 Technical explanation: Any vector (x, y) equals x·i-hat + y·j-hat. For example, (-1, 3) = (-1)·(1, 0) + 3·(0, 1).
    • 💡 Why it matters: If a transformation is linear, knowing where i-hat and j-hat go determines where every vector goes.
    • 📝 Concrete example: If T(i-hat) = (2, 1) and T(j-hat) = (-1, 3), then T(4, -2) = 4·(2, 1) + (-2)·(-1, 3) = (10, -2).
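The concrete example above can be verified directly from the decomposition v = x·i-hat + y·j-hat. A sketch (the helper name `apply_from_basis` is illustrative):

```python
def apply_from_basis(Ti, Tj, v):
    """Image of v = (x, y) as x copies of T(i-hat) plus y copies of T(j-hat)."""
    x, y = v
    return (x * Ti[0] + y * Tj[0], x * Ti[1] + y * Tj[1])

# The section's example: T(i-hat) = (2, 1), T(j-hat) = (-1, 3), input (4, -2).
out = apply_from_basis((2, 1), (-1, 3), (4, -2))
print(out)  # → (10, -2)
```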
    4. Building the matrix from i-hat and j-hat
    • 🎯 One-line definition: Put T(i-hat) as the first column and T(j-hat) as the second column to form the 2x2 matrix of T. If T(i-hat) = (a, b) and T(j-hat) = (c, d), the matrix is [[a, c], [b, d]]. For instance, if T(i-hat) = (3, 0) and T(j-hat) = (1, 2), then the matrix is [[3, 1], [0, 2]].
    • 🏠 Everyday analogy: It’s like making a recipe card: column 1 tells you where a unit east step ends up, and column 2 tells you where a unit north step ends up.
    • 🔧 Technical explanation: Because any vector is x·i-hat + y·j-hat, linearity implies T(x, y) = x·T(i-hat) + y·T(j-hat), which is exactly the matrix [[a, c], [b, d]] applied to (x, y). For example, with the matrix [[3, 1], [0, 2]] and input (2, 3), the output is (9, 6).
    • 💡 Why it matters: This compresses the whole description of T into four numbers arranged meaningfully. It gives a fast, precise way to compute outputs.
    • 📝 Concrete example: If T(i-hat) = (1, 1) and T(j-hat) = (1, 1), then the matrix is [[1, 1], [1, 1]]; applying it to (3, -1) yields (2, 2).
    5. Matrix–vector multiplication as weighted columns
    • 🎯 One-line definition: Multiplying a matrix by a vector forms a weighted sum of the matrix’s columns using the vector’s entries as weights. For example, [[3, 1], [0, 2]] times (2, 3) equals 2·(3, 0) + 3·(1, 2) = (9, 6).
    • 🏠 Everyday analogy: It’s like mixing two paint colors: the columns are your base colors, and the vector’s numbers tell you how much of each color to use. The final color is their blend.
    • 🔧 Technical explanation: If A = [[a, c], [b, d]] and v = (x, y), then Av = x·(a, b) + y·(c, d) = (ax + cy, bx + dy). For example, with A = [[2, -1], [1, 3]] and v = (4, 5), the result is (3, 19).
    • 💡 Why it matters: This interpretation makes computations transparent and helps you catch mistakes by comparing the arithmetic to the geometric mixing of columns.
    • 📝 Concrete example: Take A = [[0, 1], [1, 0]] and v = (7, -2). Then Av = (-2, 7), which swaps the coordinates.
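Both examples above can be checked with a column-wise multiply. A minimal sketch (the helper name `matvec` is illustrative):

```python
def matvec(A, v):
    """A @ v read column-wise: v[0] copies of column 0 plus v[1] copies of column 1."""
    col0 = (A[0][0], A[1][0])
    col1 = (A[0][1], A[1][1])
    return (v[0] * col0[0] + v[1] * col1[0],
            v[0] * col0[1] + v[1] * col1[1])

print(matvec([[2, -1], [1, 3]], (4, 5)))  # → (3, 19)
print(matvec([[0, 1], [1, 0]], (7, -2)))  # → (-2, 7): swaps the coordinates
```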
    6. Recognizing linear vs. non-linear actions
    • 🎯 One-line definition: Linear transformations preserve straightness and the origin; non-linear ones fail one or both conditions. For example, translations fail because they move the origin.
    • 🏠 Everyday analogy: If you slide an entire drawing across a page (translation), the crosshair you drew at the center won’t be at the center anymore.
    • 🔧 Technical explanation: A pure translation by (p, q) sends (x, y) to (x + p, y + q), moving the origin to (p, q). For example, translating by (1, 1) moves (0, 0) to (1, 1).
    • 💡 Why it matters: Distinguishing these helps you know when matrices alone are enough to model a transformation.
    • 📝 Concrete example: (2, 3) under translation by (1, 1) becomes (3, 4), but this action cannot be captured by a 2x2 matrix acting on vectors alone.
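The origin test makes the failure concrete: any 2x2 matrix sends (0, 0) to (0, 0), but a translation does not. A sketch:

```python
translate = lambda v: (v[0] + 1, v[1] + 1)  # shift every point by (1, 1)

# Every 2x2 matrix maps (0, 0) to (0, 0); this translation moves it instead.
origin_image = translate((0, 0))
print(origin_image)  # → (1, 1): the origin moves, so no 2x2 matrix can represent this
```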
    7. Scaling and rotation as core linear examples
    • 🎯 One-line definition: Scaling changes sizes uniformly and rotation turns the plane around the origin; both keep lines straight and the origin fixed. For instance, doubling lengths or rotating by 90° are linear.
    • 🏠 Everyday analogy: Scaling is like zooming a photo in or out from its exact center; rotation is like spinning a wheel around its axle.
    • 🔧 Technical explanation: A simple scaling by a factor k maps (x, y) to (kx, ky); for example, k = 2 sends (3, -1) to (6, -2). A 90° rotation sends (x, y) to (-y, x); for example, (4, 1) goes to (-1, 4).
    • 💡 Why it matters: These show how common geometric moves fit into the linear framework and can be computed with matrices.
    • 📝 Concrete example: The scaling matrix [[2, 0], [0, 3]] sends (1, 4) to (2, 12).
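The lesson doesn’t derive the general angle formula, but the 90° case can be checked against the standard rotation matrix, whose columns are the images of i-hat and j-hat. A sketch under that assumption:

```python
import math

def rotation_matrix(theta):
    """Columns are the images of i-hat and j-hat under rotation by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matvec(A, v):
    return (A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1])

R = rotation_matrix(math.pi / 2)   # 90 degrees
x, y = matvec(R, (4, 1))
print(round(x, 9), round(y, 9))    # → -1.0 4.0, matching (x, y) -> (-y, x)
```

Rounding absorbs the tiny floating-point residue from cos(π/2), which is not exactly zero in floats.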
    1. Projection-like compression onto a line
    • 🎯 One-line definition: Sending both i-hat and j-hat to the same vector collapses the plane onto the line in that vector’s direction, like squashing onto y=xy = xy=x. For example, columns both equal to (11)\begin{pmatrix} 1 \\ 1 \end{pmatrix}(11​) do this.
    • 🏠 Everyday analogy: Imagine pushing a wide blanket so it lies along a single piece of string; everything ends up on that line.
    • 🔧 Technical explanation: With A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, any input (x, y) maps to (x + y, x + y), which lies on y = x. For instance, (3, -1) goes to (2, 2).
    • 💡 Why it matters: This shows how matrices can reduce dimensionality and shape space in simple, visual ways.
    • 📝 Concrete example: (-2, 5) maps to (3, 3), again on the line y = x.
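A quick sketch of the collapse (NumPy; the loop inputs are my own samples) — every output has equal coordinates, i.e. it lies on y = x:

```python
import numpy as np

# Identical columns (1, 1): the plane collapses onto the line y = x.
A = np.array([[1, 1],
              [1, 1]])

for v in [(3, -1), (-2, 5), (10, 0)]:
    out = A @ np.array(v)
    print(v, "->", out)        # both coordinates equal x + y
    assert out[0] == out[1]    # so the image lies on y = x
```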
    1. Step-by-step implementation guide for computations
    • Step 1: Identify the action on i-hat and j-hat. For example, suppose T(i-hat) = (4, 1) and T(j-hat) = (-2, 3).
    • Step 2: Build the matrix with these images as its columns: A = \begin{pmatrix} 4 & -2 \\ 1 & 3 \end{pmatrix}.
    • Step 3: For any input vector (x, y), compute A(x, y) = (4x - 2y, x + 3y). For example, the input (2, 5) gives the output (-2, 17).
    • Step 4: Interpret the result geometrically: we mixed the destination of i-hat (four units right, one up) with the destination of j-hat (two left, three up). For example, 2 copies of (4, 1) plus 5 copies of (-2, 3) equals (-2, 17).
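The four steps above can be sketched directly in NumPy (variable names `T_i`, `T_j` are my own labels for the basis images):

```python
import numpy as np

# Step 1: where the basis vectors land (values from the steps above).
T_i = np.array([4, 1])    # image of i-hat
T_j = np.array([-2, 3])   # image of j-hat

# Step 2: stack them as the columns of the matrix.
A = np.column_stack([T_i, T_j])

# Step 3: apply the matrix to an input vector.
x = np.array([2, 5])
print(A @ x)              # [-2 17]

# Step 4: the same answer as a weighted sum of the columns.
print(2 * T_i + 5 * T_j)  # [-2 17]
```

Getting the same vector both ways is exactly the column picture: the input's entries are the weights on the columns.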
    1. Tips and warnings
    • Keep the origin fixed: If your rule doesn’t send (0, 0) to (0, 0), it’s not a linear transformation. For example, translation by (1, 1) moves the origin to (1, 1).
    • Use the column picture: Always read matrices as images of i-hat and j-hat. For instance, the matrix \begin{pmatrix} 3 & 1 \\ 0 & 2 \end{pmatrix} says i-hat maps to (3, 0) and j-hat maps to (1, 2).
    • Double-check arithmetic with small test vectors: Try (1, 0) and (0, 1) first; the outputs should match your matrix columns. For example, \begin{pmatrix} 3 & 1 \\ 0 & 2 \end{pmatrix} sends (1, 0) to (3, 0) and (0, 1) to (1, 2).
    • Beware of swapping rows vs. columns: The images of basis vectors go in columns, not rows. For example, if T(i-hat) = (a, b) and T(j-hat) = (c, d), the matrix is \begin{pmatrix} a & c \\ b & d \end{pmatrix}, not \begin{pmatrix} a & b \\ c & d \end{pmatrix}.
    • Think geometrically: Visualize how the grid tilts and stretches; it helps prevent mistakes. For example, identical columns signal a collapse onto a line such as y = x.
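The "truth test" from the tips can be automated with a numeric spot-check (a sketch, not a proof; `looks_linear` is a name I made up). It samples random vectors and verifies a fixed origin, additivity, and homogeneity:

```python
import numpy as np

def looks_linear(T, trials=50):
    """Numerically spot-check additivity, homogeneity, and a fixed origin."""
    rng = np.random.default_rng(0)
    if not np.allclose(T(np.zeros(2)), np.zeros(2)):
        return False                      # the origin must stay put
    for _ in range(trials):
        u, v = rng.normal(size=2), rng.normal(size=2)
        c = rng.normal()
        if not np.allclose(T(u + v), T(u) + T(v)):
            return False                  # additivity fails
        if not np.allclose(T(c * u), c * T(u)):
            return False                  # homogeneity fails
    return True

A = np.array([[3, 1], [0, 2]])
print(looks_linear(lambda v: A @ v))            # True: matrix maps pass
print(looks_linear(lambda v: v + np.ones(2)))   # False: translation fails
```

Passing such a random test doesn't prove linearity, but failing it definitively rules it out.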
    1. Putting it all together with core examples
    • Example A (given): T(i-hat) = (3, 0) and T(j-hat) = (1, 2), so A = \begin{pmatrix} 3 & 1 \\ 0 & 2 \end{pmatrix}. For the input (2, 3), the output is (9, 6).
    • Example B (given): T(i-hat) = (1, 1) and T(j-hat) = (1, 1), so A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}. Any (x, y) maps to (x + y, x + y), which lies on y = x.

    With these steps, definitions, and examples, you can recognize linear transformations, construct their matrices from the images of i-hat and j-hat, and compute where any vector goes using matrix–vector multiplication. The geometry (lines stay straight, origin fixed) and the algebra (columns record T(i-hat) and T(j-hat)) match perfectly, giving you both intuition and a calculation method you can trust.

    04Examples

    • 💡

      Scaling example: Input is vector (2, -3) and the rule is “double every vector.” Processing means computing 2 times each coordinate, giving (4, -6). The output is the new vector (4, -6). The key point is that scaling keeps lines straight and the origin fixed, so it’s linear.

    • 💡

      Rotation example: Rotate every vector by 90 degrees counterclockwise around the origin. Input (3, 1) is processed by swapping and negating the first coordinate, giving (-1, 3). Output is (-1, 3). This shows rotation is linear and keeps the origin fixed.

    • 💡

      Translation not linear: Shift every vector by (1,1). Input (0,0) is processed by adding (1,1), sending it to (1,1). Output is (1,1), proving the origin moved, so this rule is not linear.

    • 💡

      Matrix from basis images (given): i-hat goes to (3,0) and j-hat goes to (1,2). Build matrix \begin{pmatrix} 3 & 1 \\ 0 & 2 \end{pmatrix} and multiply by input (2,3). Output is (9,6). This demonstrates that the columns equal the images of the basis vectors.

    • 💡

      Projection-like compression (given): Both i-hat and j-hat go to (1,1). Build matrix \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} and apply to (3,-1). Output is (2,2), which lies on the line y = x. This shows collapsing the plane to a single line.

    • 💡

      Column-mixing view: Let matrix be \begin{pmatrix} 2 & -1 \\ 1 & 3 \end{pmatrix} and input be (4,5). Processing takes 4 copies of column 1 (2,1) and 5 copies of column 2 (-1,3), summing to (3,19). Output is (3,19). This example reinforces the weighted-column interpretation.

    • 💡

      Identity action: Take matrix \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, which leaves every vector unchanged. Input (7,-2) is multiplied to get (7,-2) again. Output is identical to input. This shows a transformation that trivially preserves lines and the origin.

    • 💡

      Coordinate swap: Matrix \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} swaps x and y. Input (2,9) becomes (9,2). Output is (9,2). This visualizes a flip across the line y = x while keeping the origin.

    • 💡

      Anisotropic scaling: Matrix \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix} stretches x by 2 and y by 3. Input (1,4) becomes (2,12). Output is (2,12). This shows lines stay lines even with different stretch factors per axis.

    • 💡

      Mixture with tilt: Matrix \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix} sends i-hat to (1,0) and j-hat to (2,1). Input (3,2) maps to (1·3 + 2·2, 0·3 + 1·2) = (7,2). Output is (7,2). This shows how adding a portion of j-hat’s image to i-hat’s direction tilts the grid.

    • 💡

      Zeroing one direction: Matrix \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} keeps x and kills y. Input (5,-3) goes to (5,0). Output lies on the x-axis. This example shows collapsing one dimension entirely while keeping lines straight.

    • 💡

      Multiple inputs through one matrix: With \begin{pmatrix} 3 & 1 \\ 0 & 2 \end{pmatrix}, test inputs (1,0), (0,1), and (2,3). Outputs are (3,0), (1,2), and (9,6). This confirms columns are basis images and any vector is a blend of them.
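This example is easy to reproduce in NumPy (a sketch with the same matrix and inputs): feeding in the basis vectors returns the matrix's own columns, and any other input is a blend of them.

```python
import numpy as np

A = np.array([[3, 1],
              [0, 2]])

# Feeding in the basis vectors returns the matrix's own columns.
print(A @ np.array([1, 0]))  # [3 0] -- the first column
print(A @ np.array([0, 1]))  # [1 2] -- the second column

# Any other vector is a blend of those columns: (2, 3) = 2*i-hat + 3*j-hat.
print(A @ np.array([2, 3]))  # [9 6] == 2*(3, 0) + 3*(1, 2)
```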

    • 💡

      Testing linearity: Propose rule T(x,y) = (x + y, x + y) and try two inputs (1,2) and (3,0). Compute outputs (3,3) and (3,3), and also check the sum input (4,2) → (6,6), which matches (3,3) + (3,3). Outputs lie on y = x and the origin stays fixed, indicating linear behavior that matches a matrix with identical columns.

    • 💡

      Non-linear curve maker: Consider the squaring rule S(x,y) = (x^2, y). Input (2,3) becomes (4,3). The origin stays fixed, but a tilted line such as y = x bends: its points (t, t) map to (t^2, t), which trace the parabola x = y^2. This breaks the “lines stay lines” rule, showing non-linear behavior.
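The bending can be confirmed numerically (a sketch; the collinearity test via a 2D cross product is my own check, not from the lesson): three collinear points on y = x have non-collinear images under S.

```python
# The squaring rule S(x, y) = (x**2, y) keeps the origin fixed but bends lines.
def S(x, y):
    return (x * x, y)

# Three collinear points on the line y = x and their images.
p, q, r = S(1, 1), S(2, 2), S(3, 3)   # (1, 1), (4, 2), (9, 3)

# Cross product of (q - p) and (r - p): zero would mean still collinear.
cross = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
print(cross)  # -2: the images are NOT collinear, so the line bent
```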

    • 💡

      Visual grid effect: Take matrix \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} and imagine the whole grid. Vertical lines x = c and horizontal lines y = c all map onto the single diagonal line y = x, so every input ends up on that line. This highlights the large-scale geometry change caused by identical columns.

    05Conclusion

    This lesson connected geometry and algebra through the idea of linear transformations and their matrix representations. A linear transformation is defined by two simple geometric rules: it keeps straight lines straight and it keeps the origin fixed. From this foundation, you learned that the images of the two standard basis vectors, i-hat and j-hat, completely determine the transformation in 2D. Placing these images as the columns of a 2x2 matrix creates a compact record of the entire action. Matrix–vector multiplication then becomes a clear geometric process: take x copies of the first column and y copies of the second column, and add them to get the transformed vector.

    You saw examples that make the ideas stick: scaling and rotation are linear because they obey both rules, while translation is not linear because it moves the origin. A vivid case where both columns are identical showed how a matrix can collapse the plane onto a single line, like a projection onto y = x. The core habit to build is reading matrices as actions: columns are where the basis vectors go, and any vector’s image is just a weighted mix of those columns.

    For practice, try inventing your own transformations by choosing destinations for i-hat and j-hat, building the matrix, and applying it to several vectors. Sketch how the grid would look after the transformation to reinforce your geometric intuition. A great mini-project is to pick three or four different matrices—like a scaling, a rotation by 90 degrees, a column-duplicate matrix, and a coordinate-swap matrix—and apply each to a set of sample points and to the unit square. Compute the outputs and draw them to see the tilt, stretch, or collapse.

    As next steps, study how to combine transformations using matrix multiplication and how composition corresponds to multiplying matrices. This naturally leads to exploring inverses (undoing transformations) and determinants (area scaling factors), topics that build directly on the column-based, geometric view you practiced here. Throughout, keep the core message in mind: linear transformations are powerful because they preserve simple structure—straight lines and the origin—and matrices capture that power with just a few meaningful numbers. When you think in terms of basis vectors and columns, you can both see and compute what’s happening, turning abstract math into a clear mental picture and a reliable calculation tool.

  • ✓Use identical columns to model a collapse onto a single line like y = x. Recognize that this reduces dimensionality—many inputs map to the same output. Expect outputs to align with that repeated column direction. This helps you predict and explain results.
  • ✓Be consistent about columns vs. rows: columns are basis images, rows compute each output coordinate. Mixing them up causes wrong matrices and outputs. Keep this mental label every time you write a matrix. It avoids a very common beginner error.
  • ✓Think of the identity matrix as a baseline: it leaves vectors unchanged. Modifying columns from (1,0) and (0,1) shows how the transformation deviates from identity. Small changes to columns mean gentle tilts or stretches. This incremental view aids design and debugging.
  • ✓If a transformation feels confusing, track just what happens to i-hat and j-hat. These two arrows tell the whole story. Build the matrix, then try a few inputs to confirm your understanding. This reduces complex problems to two simple vectors.
  • ✓Interpret outputs both arithmetically and geometrically: compute numbers, then imagine weighted columns. If both views match, you likely did it right. If not, re-check the placements and signs. Dual-checks build confidence and accuracy.
  • ✓Keep the big picture: linear transformations are valuable because they preserve structure. They don’t bend lines or drift the center, so they’re easy to model and compute. This structural respect is why matrices are so powerful. Carry this principle into every new topic you study.
  • j-hat (standard basis vector)

    The vector pointing one unit along the y-axis, written as (0,1). It represents a single step in the vertical direction. Along with i-hat, it forms a pair that can express any 2D vector. Knowing where j-hat goes under a transformation completes the matrix. It becomes the second column.

    Matrix

    A rectangular array of numbers that records where basis vectors go under a linear transformation. In 2D, it has two columns that are the images of i-hat and j-hat. Multiplying a matrix by a vector applies the transformation. It’s a compact, powerful way to represent the whole action. Reading columns as images builds strong intuition.

    2x2 matrix

    A matrix with 2 rows and 2 columns. In this context, it represents a linear transformation of the 2D plane. The first column is where (1,0) goes, and the second column is where (0,1) goes. It fully describes how any vector moves. It’s the basic building block for 2D linear actions.

    Matrix–vector multiplication

    The process of applying a linear transformation to a vector by multiplying the matrix by that vector. Geometrically, it’s making a weighted sum of the matrix’s columns using the vector’s entries. Algebraically, each output coordinate is a weighted sum of a row with the vector. It’s how we compute the new position of any vector. It ties numbers to geometry in a clean way.
