Integration is an operation that finds the **net** area under a curve, and is the opposite operation of differentiation. As such, it is also known as **anti-differentiation**.
The area under a curve over the interval of x-values $[a,b]$ is:
$$A=\lim_{n\to\infty}\sum^n_{i=1}f(x_i)\Delta x$$
which is written in integral notation as follows, where $dx$ indicates that integration should be performed with respect to $x$:
$$A=\int^b_a f(x)dx$$
While $\Sigma$ refers to a finite sum, $\int$ refers to the limit of a sum.
As integration is the opposite operation of differentiation, they can cancel each other out.
$$\frac{d}{dx}\int f(x)dx=f(x)$$
The **integral** or **anti-derivative** of a function is represented by its capitalised letter by convention. Where $C$ is an unknown constant:
$$\int f(x)dx=F(x)+C$$
When integrating, there is always an unknown constant $C$ as there are infinitely many possible functions that have the same rate of change but have different vertical translations.
!!! definition
- $C$ is known as the **constant of integration**.
- $f(x)$ is the **integrand**.
### Integration rules
$$
\begin{align*}
&\int 1dx &= &&x+C \\
&\int (ax^n)dx, n\neq -1 &=&&\frac{a}{n+1}x^{n+1} + C \\
&\int (x^{-1})dx&=&&\ln|x|+C \\
&\int (ax+b)^{-1}dx&=&&\frac{\ln|ax+b|}{a}+C \\
&\int (ae^{kx})dx &= &&\frac{a}{k}e^{kx} + C \\
&\int (\sin kx)dx &= &&\frac{-\cos kx}{k}+C \\
&\int (\cos kx)dx &= &&\frac{\sin kx}{k}+C \\
\end{align*}
$$
Similar to differentiation, integration allows for constant multiples to be brought out and terms to be considered individually.
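!!! example
    Integrating term by term, with constant multiples brought out:
    $$\int (6x^2+\cos x)dx=6\int x^2dx+\int \cos x\ dx=2x^3+\sin x+C$$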
Similar to limit evaluation, the substitution of complex expressions involving $x$ and $dx$ with $u$ and $du$ is generally used to work with the chain rule.
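!!! example
    To integrate $\int 2x\cos(x^2)dx$, let $u=x^2$ so that $du=2x\ dx$:
    $$\int 2x\cos(x^2)dx=\int \cos u\ du=\sin u+C=\sin(x^2)+C$$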
Regions **under** the x-axis are treated as negative while those above are positive, cancelling each other out, so the definite integral finds the **net** area over an interval.
If $f(x)$ is continuous on $[a,b]$ and $F(x)$ is its anti-derivative, the definite integral is equal to:
$$\int^b_a f(x)dx=F(x)\biggr]^b_a=F(b)-F(a)$$
As such, it can be evaluated manually by finding the anti-derivative and subtracting its values at the two limits.
!!! warning
If $u$-substitution is used, the limits of integration must be adjusted accordingly.
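!!! example
    For $\int^1_0 2x(x^2+1)^3dx$, substituting $u=x^2+1$ and $du=2x\ dx$ changes the limits to $u(0)=1$ and $u(1)=2$:
    $$\int^1_0 2x(x^2+1)^3dx=\int^2_1 u^3du=\frac{u^4}{4}\biggr]^2_1=4-\frac{1}{4}=\frac{15}{4}$$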
To find the total **area** enclosed between the x-axis, $x=a$, $x=b$, and $f(x)$, the function needs to be split at each x-intercept and the absolute value of each definite integral in those intervals summed.
$$A=\int^b_a \big|f(x)\big| dx$$
### Properties of definite integration
The following rules only apply while $f(x)$ and $g(x)$ are continuous in the interval $[a,b]$ and $c$ is a constant.
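$$
\begin{align*}
\int^a_a f(x)dx&=0 \\
\int^b_a f(x)dx&=-\int^a_b f(x)dx \\
\int^b_a cf(x)dx&=c\int^b_a f(x)dx \\
\int^b_a [f(x)\pm g(x)]dx&=\int^b_a f(x)dx\pm\int^b_a g(x)dx \\
\int^b_a f(x)dx&=\int^k_a f(x)dx+\int^b_k f(x)dx
\end{align*}
$$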
To find the area enclosed between two curves, the graph should be sketched if possible and the points of intersection determined to identify which function is greater than the other over each interval. An interval chart may be helpful. For each section, where $f(x)$ is always greater than $g(x)$ in the interval $[a,b]$:
$$A=\int^b_a [f(x)-g(x)]dx, f(x)\geq g(x)\text{ in } [a,b]$$
If the limits of integration are not given, they are the outermost points of intersection of the two curves.
Shapes formed by rotating a line or curve about a fixed axis, such as cones, spheres, and cylinders, are known as **solids of revolution**. By slicing each shape into infinitely many infinitely thin disks, the cylinder volume formula can be used to find the volume of the solid.
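For rotation about the x-axis between $x=a$ and $x=b$, this yields:
$$V=\pi\int^b_a [f(x)]^2 dx$$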
Events $A$ and $B$ are **disjoint** or mutually exclusive if they have no outcomes in common and can never happen simultaneously. As such, the probability of either event happening is equal to the sum of their individual probabilities.
$$
P(A\cup B)=P(A)+P(B) \\
P(A\cap B)=0
$$
Events $A$ and $B$ are **exhaustive** if their union includes all possible outcomes in the sample space: $A\cup B=U$.
### Probability distributions and discrete random variables
The **discrete random variable**, $X$, represents a **quantifiable**, measurable, discrete quantity. The lowercase $x$ represents a possible value of $X$.
The probability that $X$ takes on any one of the specific possible outcomes is written as $P(X=x)$. The sum of the probabilities of all possible outcomes must still remain $1$:
$$\Sigma P(X=x)=1$$
!!! example
In an experiment of tossing a coin twice, possible values of $X$ include $0,1,2$ so $x\in\{0, 1, 2\}$.
A **probability distribution** is a distribution of outcomes and their probabilities. Events/outcomes are placed on the top row while probability is provided on the bottom in the form of a fraction. Probability distributions can also be graphed with the outcomes on the x-axis and their probabilities on the y-axis, with lines similar to a bar graph sitting on the grid lines to represent each probability.
!!! example
    For the coin toss experiment in the previous example, where $X$ is the number of tails when tossing a coin twice:
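    | $x$ | $0$ | $1$ | $2$ |
    | --- | --- | --- | --- |
    | $P(X=x)$ | $\frac{1}{4}$ | $\frac{1}{2}$ | $\frac{1}{4}$ |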
The **expected value** of an experiment or the "expectation of $X$" is the mean value of $X$ that is expected to be obtained over many trials. It is equal to the sum of each outcome's value multiplied by its probability.
$$
\begin{align*}
E(X)&=\Sigma P(X=x)x \\
&=\mu=x_1p_1+x_2p_2+...+x_kp_k
\end{align*}
$$
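!!! example
    For the coin toss distribution above:
    $$E(X)=0\left(\frac{1}{4}\right)+1\left(\frac{1}{2}\right)+2\left(\frac{1}{4}\right)=1$$
    so one tail is expected on average over many pairs of tosses.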
!!! warning
    It is possible that the expected value will not be a value in the set, and the expected value should **not be mistaken** for the outcome with the highest probability.
**Bernoulli trials** are a fixed number of trials that are independent of each other and identical, each with only two possible outcomes — a success or a failure.
Where $r$ is the number of successes across $n$ Bernoulli trials, $p$ is the probability of success, and $q=1-p$ is the probability of failure:
$$P(X=r)={n\choose r}p^rq^{n-r}$$
where ${n\choose r}=\frac{n!}{r!(n-r)!}$
A **binomial distribution** is the probability distribution of the number of successes in a sequence of Bernoulli trials. The distribution is defined by the number of trials, $n$, and the probability of a success, $p$. The probability of failure is defined as $q=1-p$.
$X\sim$ denotes that the random variable $X$ is distributed in a certain way. Therefore, the binomial distribution of $X$ is expressed as:
$$X\sim B(n, p)$$
In a binomial distribution, the expected value and **variance** are as follows:
$$
E(X)=np \\
Var(X)=npq
$$
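!!! example
    If a fair coin is tossed ten times and $X$ is the number of tails, then $X\sim B(10, 0.5)$, so:
    $$P(X=3)={10\choose 3}(0.5)^3(0.5)^7=\frac{120}{1024}\approx 0.117$$
    $$E(X)=10(0.5)=5 \qquad Var(X)=10(0.5)(0.5)=2.5$$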
On a graphing display calculator, where $r$ is the number of successes:
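- `binompdf(n, p, r)` returns $P(X=r)$
- `binomcdf(n, p, r)` returns $P(X\leq r)$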
Also known as the **Gaussian distribution**, or in its graphical form a normal or bell curve, the normal distribution is a **continuous** probability distribution for a random variable $X$.
- The normal curve is bell-shaped and symmetric about the mean.
- The area under the curve is equal to one.
- The normal curve approaches but does not touch the x-axis as it approaches $\pm \infty$.
From $\mu-\sigma$ to $\mu+\sigma$, the curve is concave down; $\mu\pm\sigma$ are the **inflection points** of the graph.
~68%, ~95%, and ~99.7% of the data is found within one, two, and three standard deviations of the mean, respectively.
### Standard normal distribution
The **standard normal distribution** has a mean of 0 and standard deviation of 1. The horizontal scale of the standard normal curve corresponds to **$z$-scores** that represent the number of standard deviations away from the mean. To convert an $x$-score to a $z$-score:
$$z=\frac{x-\mu}{\sigma}$$
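!!! example
    A test score of $85$ where $\mu=70$ and $\sigma=10$ has a $z$-score of $z=\frac{85-70}{10}=1.5$, i.e., it is $1.5$ standard deviations above the mean.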
A **Standard Normal Table** can be used to determine the cumulative area under the standard normal curve to the left of $z$-scores from $-3.49$ to $3.49$. The area to the *right* of a score is equal to $1$ minus the area to its left. The area *between* two $z$-scores is the difference between their two areas.
To standardise a normal random variable, it should be converted from the form $X\sim N(\mu,\sigma^2)$ to $Z\sim N(0,1)$ via the formula to convert between x- and z-scores.
The probability that a $z$-score is less than a value can be written using $\phi$:
$$P(z<a)=\phi(a)$$
Some $z$-score rules, which follow from probability rules and the symmetry of the normal curve:
$$
\begin{align*}
P(z>-a)&=P(z<a) \\
1-P(z>a)&=P(z<a)
\end{align*}
$$
On a graphing display calculator:
The `normalcdf` command can be used to find the cumulative probability in a normal distribution in the format $\text{normalcdf}(a,b,\mu,\sigma)$, which will solve for $P(a<x<b)$. $-1000$ is generally a sufficiently low value to solve for just $P(x<b)$.
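!!! example
    $P(z<1.5)$ in the standard normal distribution can be found with $\text{normalcdf}(-1000, 1.5, 0, 1)\approx 0.933$.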
Please see [SL Physics 1#1.3 - Vectors and Scalars](/sph3u7/#13-vectors-and-scalars) for more information.
A vector can be represented in a variety of ways. The algebraic form $(1, 2)$ can also be represented in the alternate algebraic forms $[1, 2]$ and ${1\choose 2}$.
Where $v$ is the vector, $A$ is the initial and $B$ is the terminal point of the vector, a vector can be identified by any of the following symbols:
- $\vec{AB}$
- $\vec{v}$
- $\boldsymbol{v}$ (bolded)
The special **zero vector** $\vec{0}$ is a vector of undefined direction and zero magnitude.
Vectors with the same magnitude but opposite directions are the negatives of one another: $\vec{u}=-\vec{v}$.
**Colinear** vectors are those that are parallel with one another — that is, with identical or opposite directions. Vectors that are colinear must also be **scalar multiples** of each other:
$$\vec{u}=k\vec{v}$$
**Position** vectors are vectors where the initial point is at the origin — where the terminal point is $A$, a position vector can be written as $\vec{OA}$.
**Colinear points** are points that lie on the same straight line. If two colinear vectors that share a common point can be formed between three points, those points are colinear.
Please see [SL Physics 1#Adding/subtracting vectors diagrammatically](/sph3u7/#addingsubtracting-vectors-diagrammatically) for more details. The sum of two vectors is known as the **resultant** while the negative (opposite direction) version of that vector is known as the **equilibrant**.
Also known as the scalar product, the dot product of two vectors returns a **scalar** value representing the product of one vector's magnitude with the scalar projection of the other onto it. Where $\theta$ is the angle contained between the vectors $\vec{u}$ and $\vec{v}$ when arranged tail-to-tail:
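$$\vec{u}\bullet\vec{v}=|\vec{u}||\vec{v}|\cos\theta$$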
The vector equation for a straight line solves for an unknown position vector $\vec{r}$ on the line using a known position vector $\vec{r_0}$ on the line, a direction vector parallel to the line $\vec{m}$, and the variable **parameter** $t$. It is roughly similar to $y=b+xm$.
$$\vec{r}=\vec{r_0}+t\vec{m},t\in\mathbb{R}$$
The equation can be rewritten in algebraic form as:
$$[x,y]=[x_0,y_0]+t[m_1,m_2], t\in\mathbb{R}$$
The direction vector is effectively the slope of a line.
To determine if a point lies along a line defined by a vector equation, the parameter $t$ should be checked to be the same for the $x$ and $y$ coordinates of the point.
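!!! example
    The point $(3, 5)$ lies on the line $[x,y]=[1,1]+t[1,2]$: solving $3=1+t$ gives $t=2$, and $5=1+2t$ also gives $t=2$, so the point is on the line.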
!!! warning
Vector equations are **not unique** — there can be different position vectors and direction vectors that return the same line.
The **parametric** form of a line breaks the vector form into components.
$$
\begin{align*}
x&=x_0+tm_1 \\
y&=y_0+tm_2,t\in\mathbb{R}
\end{align*}
$$
The **symmetric** form of the equation takes the parametric form, isolates $t$ in each equation, and equates them:
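$$\frac{x-x_0}{m_1}=\frac{y-y_0}{m_2}$$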
Two lines are parallel if their direction vectors are scalar multiples of each other.
$$\vec{m_1}=k\vec{m_2},k\in\mathbb{R}$$
Two lines are coincident if they are parallel and share at least one point. Otherwise, they are distinct.
If two lines are not parallel and are in two dimensions, they intersect. To solve for the point of intersection, the $x$ and $y$ variables in the parametric forms of the two lines can be equated and the parameters solved for.
In three dimensions, there is a final possibility should the lines not be parallel: the lines may be *skew*. To determine if the lines are skew, the $x$, $y$, and $z$ variables of **two** of the parametric equations should be equated to their counterparts in the other line as if the lines intersect. The resulting $t$ and $s$ from the first and second line, respectively, should be substituted into the third equation and an equality check performed. If no solution fulfills the third equation, the lines are skew; otherwise, they intersect.
If two vectors $\vec{a}$ and $\vec{b}$ are placed tail-to-tail, the **component** of $\vec{a}$ in the direction of $\vec{b}$ is known as the **vector projection of $\vec{a}$ onto $\vec{b}$**. Represented by $\text{proj}_{\vec{b}}\ \vec{a}$, its magnitude is called the **scalar projection**:
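$$\text{proj}_{\vec{b}}\ \vec{a}=\left(\frac{\vec{a}\bullet\vec{b}}{|\vec{b}|^2}\right)\vec{b} \qquad \big|\text{proj}_{\vec{b}}\ \vec{a}\big|=\frac{|\vec{a}\bullet\vec{b}|}{|\vec{b}|}$$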
The cross product or **vector product** is a vector that is perpendicular to two vectors that are not colinear. Where $u_1, u_2, u_3$ represent the x, y, and z components of the vector $\vec{u}$, respectively:
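$$\vec{u}\times\vec{v}=[u_2v_3-u_3v_2,\ u_3v_1-u_1v_3,\ u_1v_2-u_2v_1]$$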
- associative over scalars: $m(\vec{u}\times\vec{v})=(m\vec{u})\times\vec{v}=\vec{u}\times(m\vec{v})$
The **magnitude** of a cross product mirrors that of the dot product, but with sine instead of cosine. Where $\theta$ is the smaller angle between the two vectors ($0\leq\theta\leq180^\circ$):
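$$|\vec{u}\times\vec{v}|=|\vec{u}||\vec{v}|\sin\theta$$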
A **triple scalar product** is the result of a cross product performed first, then put into a dot product.
$$\vec{c}\bullet(\vec{a}\times\vec{b})$$
In a **parallelepiped**, a three-dimensional shape with six faces that are each a parallelogram with an identical one opposite it, the volume is equal to the absolute value of the triple scalar product of the three distinct vectors that form its side lengths:
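$$V=\big|\vec{c}\bullet(\vec{a}\times\vec{b})\big|$$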
For an object moving at **constant velocity in 2D space**, where $\vec{s}$ is its displacement, $\vec{s}_0$ is its initial displacement at $t=0$, $t$ is the time elapsed, and $\vec{v}$ is its velocity:
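$$\vec{s}=\vec{s}_0+t\vec{v}$$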
**Torque** ($\vec{\tau}$ or $\vec{M}$) is the ability to rotate an object — effectively angular/rotational force — and is the cross product of the **outward-pointing radius vector** ($\vec{r}$) and the **force** vector ($\vec{F}$).
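$$\vec{\tau}=\vec{r}\times\vec{F}$$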
**Force** and **velocity** are vectors with magnitude and direction. See [SL Physics 1#Force diagrams](/sph3u7/#force-diagrams) and [SL Physics 1#Velocity](/sph3u7/#velocity) for more information.
If **Cartesian vectors** (see [SL Physics 1#Adding/subtracting vectors algebraically](/sph3u7/#addingsubtracting-vectors-algebraically) for more details) cannot be used, the **sine and cosine laws** can be used, which are, respectively:
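$$\frac{a}{\sin A}=\frac{b}{\sin B}=\frac{c}{\sin C}$$
$$c^2=a^2+b^2-2ab\cos C$$
where each uppercase letter represents the angle opposite the side with the corresponding lowercase letter.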
A line intersects a plane at exactly one point if the dot product of its direction vector and the plane's normal vector is not zero; substituting the line's parametric equations into the plane's scalar equation and solving for the parameter gives the point of intersection. Otherwise, once the equations are substituted into each other, if the resulting statement is true, the line and plane are **parallel and coincident**; if not, they are parallel and distinct.
The shortest distance between two **skew lines** $L_1$ and $L_2$ is equal to:
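$$d=\frac{\big|\vec{P_1P_2}\bullet(\vec{m_1}\times\vec{m_2})\big|}{|\vec{m_1}\times\vec{m_2}|}$$
where $\vec{m_1}$ and $\vec{m_2}$ are the direction vectors of the two lines and $P_1$ and $P_2$ are any points on $L_1$ and $L_2$, respectively.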
If the normal vectors of two planes are not scalar multiples of each other, the planes intersect along a line whose direction vector is equal to the cross product of the two normal vectors.
$$\vec m=\vec n_1\times\vec n_2$$
An initial point vector can be solved by setting any of the variables ($x,y,z$) to zero and solving for the others. Alternatively, the parameter $t$ can be set equal to one of the variables instead and the parametric equation derived that way.
The **angle between two planes** is equal to the angle between their normal direction vectors, which can be determined using the dot product formula.
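$$\cos\theta=\frac{\vec{n_1}\bullet\vec{n_2}}{|\vec{n_1}||\vec{n_2}|}$$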
If all three normals are scalar multiples of each other:

- If all three $D$-values are the same scalar multiples, the planes are parallel and coincident, with an infinite number of points of intersection along the plane equation.
- Otherwise, there are no solutions: either all three planes are parallel and distinct, or two are parallel and coincident while the third is parallel and distinct.
If two normals are scalar multiples:
- If the two parallel planes are coincident with the same $D$-values, there will be a line of intersection much like solving for intersection between two planes.
- Otherwise, the two parallel planes are distinct, forming a Z-pattern with the third plane and so there is no solution.
If no normals are scalar multiples:
- If the triple scalar product of the three normal vectors is **not** equal to zero, the normal vectors are not coplanar, and so there will be a single point of intersection.
- Alternatively, by solving the scalar equations for the planes, if:
- the result is a contradiction (e.g., $0 = 3$), there is no solution
    - the result is true with no variable (e.g., $0 = 0$), there is an infinite number of solutions along a line
- the result contains a variable (e.g., $t = 4$), there is a single point of intersection at the parameter $t$.
A **matrix** is a two-dimensional array with rows and columns, represented by a capital letter and a grid denoted by square brackets.
$$
A=
\begin{bmatrix}
1 & 2 & 3 \\
4 & 5 & 6
\end{bmatrix}
$$
$A_{ij}$ represents the element in the $i$th row and the $j$th column.
A **coefficient matrix** contains coefficients of variables.
$$
A=
\begin{bmatrix}
1 & 2 & 3 \\
4 & 5 & 6
\end{bmatrix}
$$
An **augmented matrix** also contains constants, separated by a vertical line.
$$
A=
\left[\begin{array}{rrr|r}
1 & 2 & 3 & 5 \\
4 & 5 & 6 & 10
\end{array}\right]
$$
!!! example
The equation system
$$
x+2y-4z=3 \\
-2x+y+3z=4 \\
4x-3y-z=-2
$$
can be written as the matrix
$$
A=
\left[\begin{array}{rrr|r}
1 & 2 & -4 & 3 \\
-2 & 1 & 3 & 4 \\
4 & -3 & -1 & -2
\end{array}\right]
$$
### Gaussian elimination
Gaussian elimination is used to solve a system of linear relations, such as that of plane equations. It aims to reduce a matrix into its **row echelon form** shown below to solve for each variable.
$$
A=
\left[\begin{array}{rrr|r}
a & b & c & d \\
0 & e & f & g \\
0 & 0 & h & i
\end{array}\right]
$$
The following **row operations** can be performed on the matrix to achieve this state:
- swapping (interchanging) the position of two rows
- $R_a \leftrightarrow R_b$
- multiplying a row by a non-zero constant
    - $kR_a \to R_a$
- adding/subtracting rows, overwriting the destination row
- $R_a\pm R_b\to R_b$
- multiplying a row by a non-zero constant and then adding/subtracting it to another row
    - $kR_a + R_b \to R_b$
!!! example
In the matrix from the previous example, by performing $R_1\leftrightarrow R_2$:
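    $$
    A=
    \left[\begin{array}{rrr|r}
    -2 & 1 & 3 & 4 \\
    1 & 2 & -4 & 3 \\
    4 & -3 & -1 & -2
    \end{array}\right]
    $$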