- Natural numbers ($\mathbb N$) are all **integers** greater than zero.
- Integers ($\mathbb Z$) are all whole numbers, including negatives and zero.
- Rational numbers ($\mathbb Q$) are all numbers representable as a fraction of two integers.
- Irrational numbers are all **real** numbers not representable as a fraction.
- Real numbers ($\mathbb R$) are all rational or irrational numbers.
The **subset sign** ($\subseteq$) indicates that every element of one **set** is also in another. The **not subset sign** ($\not\subseteq$) indicates that at least one element in the first set is not in the second.
!!! example
- Natural numbers are a subset of integers, or $\mathbb N \subseteq \mathbb Z$.
- Integers are not a subset of natural numbers, or $\mathbb Z \not\subseteq \mathbb N$.
!!! warning
The subset sign is not to be confused with the **element of** sign ($\in$), as the former only applies to sets while the latter only applies to elements.
Sets can be subtracted with a **backslash** (\\), returning a set with all elements in the first set not in the second.
!!! example
The set of irrational numbers can be represented as the difference between the real and rational number sets, or:
$$\mathbb R \backslash \mathbb Q$$
## Complex numbers
A complex number can be represented in the form:
$$x+yj$$
where $x$ and $y$ are real numbers, and $j$ is the imaginary $\sqrt{-1}$ (also known as $i$ outside of engineering). This implies that every real number is also in the set of complex numbers as $y$ can be set to zero.
!!! definition
- $Re(z)$ is the **real component** of complex number $z$.
- $Im(z)$ is the **imaginary component** of complex number $z$.
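!!! example
As an illustration with an arbitrarily chosen value: for $z = 3 - 2j$, $Re(z) = 3$ and $Im(z) = -2$.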
These numbers can be treated effectively like any other number.
### Properties of complex numbers
All of these properties can be derived from expanding the standard forms.
The **modulus** of a complex number is represented by the absolute value sign. It is equal to the magnitude of the complex number if it were treated as a vector.
$$|z| = \sqrt{x^2+y^2}$$
!!! example
The modulus of complex number $2-j$ is:
$$
\begin{align*}
|2-j|&=\sqrt{2^2+(-1)^2} \\
&= \sqrt{5}
\end{align*}
$$
If there is no imaginary component, a complex number's modulus is its absolute value.
$$z\in\mathbb R: |z|=|Re(z)|$$
Complex numbers cannot be directly compared, as there is no ordering of the imaginary numbers, but their moduli can be — the modulus of one complex number can be greater than another's.
#### Properties of moduli
These can also be derived manually.
If the modulus is zero, the complex number is zero.
$$|z|=0 \iff z=0$$
The modulus of the conjugate is equal to the modulus of the original, and moduli distribute over products and quotients.
$$
\begin{align*}
|\overline{z}| &= |z| \\
\biggl|\frac{z}{w}\biggr| &= \frac{|z|}{|w|}, w \neq 0 \\
|zw| &= |z||w|
\end{align*}
$$
The modulus of a sum is at most the sum of the moduli of the individual numbers — this is also known as the triangle inequality theorem.
$$|z+w| \leq |z|+|w|$$
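!!! example
With arbitrarily chosen values $z = 3$ and $w = 4j$:
$$|z+w| = |3+4j| = \sqrt{3^2+4^2} = 5 \leq |z|+|w| = 3+4 = 7$$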
### Geometry
In setting the x- and y-axes to the real and imaginary components of a complex number, complex numbers can be represented almost as vectors.
<img src="https://upload.wikimedia.org/wikipedia/commons/6/69/Complex_conjugate_picture.svg">(Source: Wikimedia Commons, GNU FDL 1.2 or later)</img>
The complex number $x+yj$ will be on the point $(x, y)$, and the modulus is the magnitude of the vector. Complex number moduli can be compared graphically: if a circle centred on the origin passing through one point contains another point, the first number has the greater modulus.
### Polar form
The variable $r$ is equal to the modulus of the complex number, $|z|$. From the Pythagorean theorem, the polar form of a complex number can be expressed using $r$ and the angle $\theta$ between the modulus and the real axis:
$$z=r(\cos\theta + j\sin\theta)$$
Trigonometry can be used to calculate $\cos\theta$ and $\sin\theta$ as $\cos\theta = \frac{x}{r}$ and $\sin\theta = \frac{y}{r}$.
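!!! example
With the arbitrarily chosen complex number $z = 1 + j$: $r = \sqrt{1^2+1^2} = \sqrt 2$ and $\cos\theta = \sin\theta = \frac{1}{\sqrt 2}$, so $\theta = \frac{\pi}{4}$ and:
$$z = \sqrt 2\left(\cos\frac{\pi}{4} + j\sin\frac{\pi}{4}\right)$$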
## Vectors
Please see [SL Math - Analysis and Approaches 2#Vectors](/g11/mcv4u7/#vectors) and [SL Physics 1#1.3 - Vectors and scalars](/g11/sph3u7/#13-vectors-and-scalars) for more information.
Vectors of different dimensions cannot be compared — the missing dimensions cannot be treated as 0.
The standard form of a vector is written as the difference between two points: $\vec{OA}$, where $O$ is the origin and $A$ is any point. More generally, $\vec{AB}$ is the vector from point $A$ to point $B$.
If a vector can be expressed as the sum of scalar multiples of other vectors, that vector is the **linear combination** of those vectors. Formally, $\vec{y}$ is a linear combination of $\vec{a}, \vec{b}, \vec{c}$ if and only if there are **real** constants that multiply each vector to return $\vec{y}$:
$$\vec y = c_1\vec a + c_2\vec b + c_3\vec c, \quad c_1, c_2, c_3 \in \mathbb R$$
The **norm** of a vector is its magnitude or distance from the origin, represented by double absolute values. In $\mathbb R^2$ and $\mathbb R^3$, the Pythagorean theorem can be used.
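!!! example
With the arbitrarily chosen vector $\vec x = (3, 4)$ in $\mathbb R^2$:
$$\|\vec x\| = \sqrt{3^2 + 4^2} = 5$$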
Please see [SL Math - Analysis and Approaches 2#Cross product](/g11/mcv4u7/#cross-product) for more information.
### Vector equations
Please see [SL Math - Analysis and Approaches 2#Vector line equations in two dimensions](/g11/mcv4u7/#vector-line-equations-in-two-dimensions) for more information.
### Vector planes
Please see [SL Math - Analysis and Approaches 2#Vector planes](/g11/mcv4u7/#vector-planes) for more information.
!!! definition
- A **hyperplane** is an $\mathbb R^{n-1}$ plane in an $\mathbb R^n$ space.
The **scalar equation** of a plane shows the normal vector $\vec{n}$ and a point on the plane $P(a,b,c)$, which can be condensed into the constant $d$.
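!!! example
Assuming the common convention $\vec n\cdot(x, y, z) = d$, with arbitrarily chosen normal $\vec n = (1, 2, 3)$ and point $P(0, 0, 1)$: $d = 1(0) + 2(0) + 3(1) = 3$, so the scalar equation is $x + 2y + 3z = 3$.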
The **reduced row echelon form** of a matrix makes a system even more rapidly solvable by performing further elimination on the system such that each **leading entry** is equal to one and is the only non-zero entry in its column of the coefficient matrix.
In addition, the system is consistent for every possible resultant vector with $m$ dimensions if and only if $\text{rank}(A) = m$.
Each variable $x_n$ is a **leading variable** if its column in $A$ contains a leading entry. Otherwise, it is a **free variable**. Systems with free variables have infinite solutions and can be represented by a vector **parameter**.
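!!! example
In the arbitrarily chosen augmented matrix in RREF below, $x_1$ and $x_2$ are leading variables and $x_3$ is free, so the system has infinite solutions parameterised by $x_3 = t$:
$$\left[\begin{array}{ccc|c} 1 & 0 & 2 & 4 \\ 0 & 1 & -1 & 3 \end{array}\right] \implies \vec x = \begin{bmatrix}4\\3\\0\end{bmatrix} + t\begin{bmatrix}-2\\1\\1\end{bmatrix}, t \in \mathbb R$$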
- $M_{m\times n}(\mathbb R)$ is the set of all real $m\times n$ matrices.
- A **square matrix** has $m=n$.
- The **zero matrix** $0_{m\times n}$ has every entry equal to 0.
In a $m\times n$ matrix $A$, $a_{ij}$ or $(A)_{ij}$ represents the entry in the $i$th row and $j$th column.
$$A=[a_{ij}]$$
Two matrices with size $m\times n$, $[a_{ij}]$ and $[b_{ij}]$, are equal if and only if $a_{ij} = b_{ij}$ for every $i$ and $j$ (formally, for every $i=1, ..., m$ and $j = 1, ..., n$).
Properties of matrices include:
- $(A+B)_{ij} = (A)_{ij} + (B)_{ij}$
- $(cA)_{ij} = c(A)_{ij}, c\in\mathbb R$
- $A-B=A+(-1)B$
The **matrix transpose** $A^T$ is the matrix satisfying $(A^T)_{ij}=(A)_{ji}$, as if it were reflected along the primary diagonal.
A matrix is **symmetric** if $A^T = A$, implying a square matrix.
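!!! example
With an arbitrarily chosen matrix:
$$A = \begin{bmatrix}1 & 2 & 3\\4 & 5 & 6\end{bmatrix} \implies A^T = \begin{bmatrix}1 & 4\\2 & 5\\3 & 6\end{bmatrix}$$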
In an augmented matrix, the system is consistent **if and only if** the resultant vector is a linear combination of the columns of the coefficient matrix.
In a **homogeneous system** ($\vec{b} = \vec{0}$), any linear combinations of the solutions to the system ($\vec{x}_1, ... \vec{x}_n$) are also solutions to the system.
The identity matrix ($I_n$) is a **square matrix** of size $n$ with the value 1 along the main diagonal and 0 everywhere else. The $i$th column is equal to the $i$th row, which is known as $\vec{e}_i$.
- A **probability vector** $\vec s$ has only **non-negative** entries that sum to 1.
- A **stochastic** matrix has only probability vectors as its columns.
- A **state vector** $\vec s_k$ in a Markov chain represents the state of the system at step $k$.
A Markov chain is a sequence of probability vectors $\vec s_0, \vec s_1, ...$ and stochastic matrix $P$ such that:
$$\vec s_{k+1} = P\vec s_k$$
for any non-negative integer $k$.
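!!! example
With an arbitrarily chosen stochastic matrix and initial state:
$$P = \begin{bmatrix}0.8 & 0.3\\0.2 & 0.7\end{bmatrix},\ \vec s_0 = \begin{bmatrix}1\\0\end{bmatrix} \implies \vec s_1 = P\vec s_0 = \begin{bmatrix}0.8\\0.2\end{bmatrix}$$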
The state vector $\vec s$ is the **steady-state vector for $P$** if $P\vec s = \vec s$. Every stochastic matrix has at least one steady-state vector.
A stochastic matrix is **regular** if some power $P^n$ contains only positive entries. Regular matrices converge to exactly one steady-state vector.
In order to determine the steady state for any stochastic matrix, solve the homogeneous system $(P - I)\vec s = \vec 0$ and scale the solution so that its entries sum to 1.
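!!! example
With the same arbitrarily chosen matrix $P$ as above:
$$P = \begin{bmatrix}0.8 & 0.3\\0.2 & 0.7\end{bmatrix} \implies (P-I)\vec s = \begin{bmatrix}-0.2 & 0.3\\0.2 & -0.3\end{bmatrix}\vec s = \vec 0$$
Solving gives $\vec s = t(1.5, 1)$; scaling so the entries sum to 1 yields $\vec s = (0.6, 0.4)$, and indeed $P\vec s = \vec s$.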
The **conjugate** of a matrix is the conjugate of each of its elements.
$$\overline A = [\overline a_{ij}]$$
Conjugates are distributive, i.e. $\overline{A\vec z} = \overline A \ \overline{\vec{z}}$.
### Matrix inversion
The **unique** inverse matrix $A^{-1}$ of $A$ is such that $AA^{-1} = I = A^{-1}A$. Both matrices must be square for this to work, and $A$ must have rank equal to its size (i.e., full rank).
Properties of inverse matrices:
- $(cA)^{-1} = \frac{1}{c}A^{-1}$
- $(ABCD)^{-1} = D^{-1}C^{-1}B^{-1}A^{-1}$
- $(A^k)^{-1} = (A^{-1})^k$ if $k>0$
- $(A^T)^{-1} = (A^{-1})^T$
To determine an inverse matrix, row reduce the augmented matrix formed by it and the identity matrix.
$$\begin{bmatrix}A\ |\ I\end{bmatrix}$$
If $A$ can be row reduced to the identity matrix, the other side will contain the inverse.
$$\begin{bmatrix}I\ |\ A^{-1}\end{bmatrix}$$
If it cannot be reduced to the identity matrix (i.e., the system has free variables), it is not invertible.
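!!! example
With an arbitrarily chosen invertible matrix:
$$\left[\begin{array}{cc|cc}1 & 2 & 1 & 0\\3 & 4 & 0 & 1\end{array}\right] \to \left[\begin{array}{cc|cc}1 & 0 & -2 & 1\\0 & 1 & \frac{3}{2} & -\frac{1}{2}\end{array}\right] \implies A^{-1} = \begin{bmatrix}-2 & 1\\\frac{3}{2} & -\frac{1}{2}\end{bmatrix}$$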
If a matrix is invertible, $A\vec x = \vec b$ is **guaranteed to have a unique solution** for any $\vec b$.
- A **network** is a system of junctions connected by directed lines, similar to a directed graph.
In a **junction**, the flow in must equal the flow out. A network that follows the junction rule is at **equilibrium**.
In an electrical diagram, if a reference direction is selected, flow going opposite the reference direction is negative.
Matrices can be applied by using the junction rule on each of the **smaller systems** with equal flow in and flow out (i.e., writing one equation per junction rather than trying to meet every point).
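!!! example
As a minimal sketch with arbitrary flows: at a junction with flow $f_1$ entering and flows $f_2$ and $f_3$ leaving, the junction rule gives $f_1 = f_2 + f_3$, or $f_1 - f_2 - f_3 = 0$ as one row of the system's matrix.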
The span of a finite set of vectors in $\mathbb R^n$ is the infinite set of all linear combinations of those vectors, such that **Span $B$ is spanned by $B$** and **$B$ is a spanning set for Span $B$**.
The set $B=\{\vec v_1, \vec v_2, \vec v_3\}$ can be represented as matrix $A=[\vec v_1, \vec v_2, \vec v_3]$. A vector $\vec x$ is in Span $B$ if and only if $A\vec c = \vec x$ is consistent — which is to say that if it can be expressed as a linear combination, it is in the span.
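!!! example
With arbitrarily chosen vectors, to test whether $\vec x = (3, 5)$ is in Span$\{(1, 1), (1, 2)\}$:
$$\left[\begin{array}{cc|c}1 & 1 & 3\\1 & 2 & 5\end{array}\right] \to \left[\begin{array}{cc|c}1 & 0 & 1\\0 & 1 & 2\end{array}\right]$$
The system is consistent ($c_1 = 1$, $c_2 = 2$), so $\vec x$ is in the span.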
### Linear independence
A set is:
- linearly **dependent** if at least one non-trivial linear combination of the set (not all coefficients zero) is equal to $\vec 0$.
- linearly **independent** if the only solution is setting all coefficients to zero.
Effectively, if there is at least one vector in the set that is a linear combination of the other elements, it is redundant and thus the set is **linearly dependent.**
This can be tested by checking whether the homogeneous system has free variables — any vector whose column is free is dependent on the others.
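!!! example
With arbitrarily chosen vectors, $\{(1, 0), (0, 1), (1, 1)\}$ is linearly dependent because $(1, 0) + (0, 1) - (1, 1) = \vec 0$ is a non-trivial combination.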
!!! warning
- Any set with the zero vector will be a linearly **dependent** set.
- The empty set is linearly **independent**.
Subsets can be proven to be dependent via contradiction.
!!! example
To prove $\{\vec v_1 ... \vec v_{k-1}\}$ is LI given $\{\vec v_1 ... \vec v_k\}$ is LI, assume that the former is LD; this would force the latter to be LD, which cannot be true, so the former must be LI.
## Subspaces
A subset $\mathbb S$ of $\mathbb R^n$ is a subspace of $\mathbb R^n$ if and only if:
- the zero vector is in the set: $\vec 0 \in \mathbb S$
- it is closed under addition: if $\vec x, \vec y \in \mathbb S$, then $\vec x + \vec y \in \mathbb S$
- it is closed under scalar multiplication: if $\vec x \in \mathbb S$ and $c \in \mathbb R$, then $c\vec x \in \mathbb S$
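!!! example
With an arbitrarily chosen set: $\mathbb S = \{(x, 2x) : x \in \mathbb R\}$ is a subspace of $\mathbb R^2$, as $(0, 0) \in \mathbb S$, $(x, 2x) + (y, 2y) = (x+y, 2(x+y)) \in \mathbb S$, and $c(x, 2x) = (cx, 2(cx)) \in \mathbb S$.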
A **basis** $B$ of subspace $\mathbb S$ is a **linearly independent** set whose span is equal to the subspace. Every element in that subspace must be a unique linear combination of the elements in $B$, so the rank of a matrix formed from the basis is always the number of vectors.
The **dimension** of a subspace is equal to the number of vectors in any of its bases.
The **column space** of a matrix is the set of all linear combinations of its columns, which can be found by taking a linearly independent subset of the columns (those that are non-free in RREF).
The **row space** of a matrix is the set of all linear combinations of its rows, which can be found by taking each non-zero row from RREF.
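!!! example
With an arbitrarily chosen matrix:
$$A = \begin{bmatrix}1 & 2 & 3\\2 & 4 & 6\end{bmatrix} \to \text{RREF} = \begin{bmatrix}1 & 2 & 3\\0 & 0 & 0\end{bmatrix}$$
Only the first column has a leading entry, so $\text{Col}(A) = \text{Span}\left\{\begin{bmatrix}1\\2\end{bmatrix}\right\}$ and $\text{Row}(A) = \text{Span}\{(1, 2, 3)\}$.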
The row spaces of two matrices are equal if and only if one matrix can be manipulated into the other via elementary row operations. This indicates that solutions to the homogeneous system for one apply to the other as well.
A **linear transformation** is a function that preserves linear combinations, i.e., $f(c_1\vec x + c_2\vec y) = c_1f(\vec x) + c_2f(\vec y)$. If the transformation does not change the dimension of the vector, the function is a **linear operator**. Matrix transformations **preserve** linear combinations — that is, every matrix transformation is a linear transformation.
Its **standard matrix** is found by applying the transformation to each column of the identity matrix, i.e., $[f] = [f(\vec e_1)\ \cdots\ f(\vec e_n)]$.
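!!! example
With an arbitrarily chosen linear transformation $f(x, y) = (x + y, 2y)$:
$$f(\vec e_1) = \begin{bmatrix}1\\0\end{bmatrix},\ f(\vec e_2) = \begin{bmatrix}1\\2\end{bmatrix} \implies [f] = \begin{bmatrix}1 & 1\\0 & 2\end{bmatrix}$$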