# ECE 204: Numerical Methods
## Linear regression
Given a regression $y=mx+b$ and a data set $(x_i, y_i)$, $i=1\ldots n$, the **residual** is the difference between the actual and regressed data:
$$E_i=y_i-b-mx_i$$
### Method of least squares
This method minimises the sum of the square of residuals.
$$\boxed{S_r=\sum^n_{i=1}E_i^2}$$
$m$ and $b$ can be found by taking the partial derivative and solving for them:
$$\frac{\partial S_r}{\partial m}=0, \frac{\partial S_r}{\partial b}=0$$
This returns, where $\overline x$ and $\overline y$ are the means of the actual $x$- and $y$-values:
$$
\boxed{m=\frac{n\sum^n_{i=1}x_iy_i-\sum^n_{i=1}x_i\sum^n_{i=1}y_i}{n\sum^n_{i=1}x_i^2-\left(\sum^n_{i=1}x_i\right)^2}} \\
b=\overline y-m\overline x
$$
The total sum of squares around the mean is based on the actual data:
$$\boxed{S_t=\sum(y_i-\overline y)^2}$$
Error is measured with the **coefficient of determination** $r^2$ — the closer the value is to 1, the lower the error.
$$
r^2=\frac{S_t-S_r}{S_t}
$$
If the line is forced through the **origin** ($b=0$), $m$ reduces to a simpler form:
$$m=\frac{\sum^n_{i=1}x_iy_i}{\sum^n_{i=1}x_i^2}$$
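As a quick sketch, these formulas translate directly into plain Python (the function names here are illustrative, not from the course):

```python
def linear_fit(xs, ys):
    """Least-squares slope m and intercept b for y = mx + b."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    m = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    b = sy / n - m * sx / n          # b = y-bar - m * x-bar
    return m, b

def r_squared(xs, ys, m, b):
    """Coefficient of determination r^2 = (St - Sr) / St."""
    y_bar = sum(ys) / len(ys)
    s_t = sum((y - y_bar) ** 2 for y in ys)
    s_r = sum((y - b - m * x) ** 2 for x, y in zip(xs, ys))
    return (s_t - s_r) / s_t
```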
## Non-linear regression
### Exponential regression
Solving the same partial derivatives returns the constants, although a numerical method such as bisection may be required for the exponent coefficient ($b$ in $e^{bx}$). Instead, **linearising** the model by taking the natural logarithm of both sides may make things easier. Afterward, solving as if it were in the form $y=mx+b$ returns the correct constants.
!!! example
    $y=ax^b\implies\ln y = \ln a + b\ln x$
### Polynomial regression
The residual is the difference between the data and the fitted polynomial.
$$y=a+bx+cx^2+E$$
Taking the relevant partial derivatives returns a system of equations which can be solved in a matrix.
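For instance, the normal equations can be assembled and solved with NumPy; this is a sketch, with `poly_fit` an illustrative name:

```python
import numpy as np

def poly_fit(xs, ys, degree):
    """Least-squares coefficients [a, b, c, ...] of y = a + bx + cx^2 + ...
    found by solving the normal equations (A^T A) beta = A^T y."""
    A = np.vander(xs, degree + 1, increasing=True)  # columns: 1, x, x^2, ...
    return np.linalg.solve(A.T @ A, A.T @ np.asarray(ys, dtype=float))
```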
## Interpolation
Interpolation ensures that the fitted curve passes through every data point.
### Direct method
To interpolate $n+1$ data points, you need a polynomial of degree **up to $n$**, and points that enclose the desired value. Substituting the $x$ and $y$ values forms a system of equations for a polynomial of degree one less than the number of points chosen.
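A minimal sketch of the direct method, assuming NumPy (`np.vander` builds the coefficient matrix of the system):

```python
import numpy as np

def direct_interpolate(xs, ys, x):
    """Fit the exact polynomial through every chosen point, then evaluate at x."""
    A = np.vander(xs, increasing=True)   # one equation per data point
    coeffs = np.linalg.solve(A, np.asarray(ys, dtype=float))
    return sum(c * x ** k for k, c in enumerate(coeffs))
```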
### Newton's divided difference method
This method interpolates using estimated slopes between known points. Where $x_0$ is an existing point:
$$\boxed{f(x)=b_0+b_1(x-x_0)}$$
The constant is an existing y-value and the slope is an average.
$$
\begin{align*}
b_0&=f(x_0) \\
b_1&=\frac{f(x_1)-f(x_0)}{x_1-x_0}
\end{align*}
$$
This extends to a quadratic, where the second coefficient is the difference between the first two slopes divided by the full interval:
$$\boxed{f(x)=b_0+b_1(x-x_0)+b_2(x-x_0)(x-x_1)}$$
$$
b_2=\frac{\frac{f(x_2)-f(x_1)}{x_2-x_1}-\frac{f(x_1)-f(x_0)}{x_1-x_0}}{x_2-x_0}
$$
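The same pattern extends to any order by building a divided-difference table; a sketch in Python (the function name is illustrative):

```python
def newton_interpolate(xs, ys, x):
    """Evaluate Newton's divided-difference polynomial at x."""
    n = len(xs)
    b = list(ys)                         # b[i] becomes the i-th coefficient
    for order in range(1, n):
        # Each pass turns b into the next column of the table, in place
        for i in range(n - 1, order - 1, -1):
            b[i] = (b[i] - b[i - 1]) / (xs[i] - xs[i - order])
    # Horner-style evaluation of b0 + b1(x - x0) + b2(x - x0)(x - x1) + ...
    result = b[-1]
    for i in range(n - 2, -1, -1):
        result = result * (x - xs[i]) + b[i]
    return result
```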
## Derivatives
Derivatives are estimated based on first principles with a small, finite $h$:
$$f'(x)\approx\frac{f(x+h)-f(x)}{h}$$
### Derivatives of continuous functions
At a desired $x$ for $f'(x)$:
1. Choose an arbitrary $h$
2. Calculate derivative via first principles
3. Shrink $h$ and recalculate derivative
4. If the answer is drastically different, repeat step 3
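A sketch of this procedure in Python, halving $h$ until successive estimates agree (the tolerance and starting $h$ are arbitrary choices):

```python
import math

def derivative(f, x, h=0.1, tol=1e-6, max_steps=50):
    """First-principles derivative, shrinking h until the estimate settles."""
    prev = (f(x + h) - f(x)) / h
    for _ in range(max_steps):
        h /= 2
        curr = (f(x + h) - f(x)) / h
        if abs(curr - prev) < tol:   # not drastically different: accept
            return curr
        prev = curr
    return prev                      # give up after max_steps halvings

print(derivative(math.exp, 0.0))     # ≈ 1.0
```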
### Derivatives of discrete functions
Guesses are made based on the average slope between two points.
$$f'(x_i)\approx\frac{f(x_{i+1})-f(x_i)}{x_{i+1}-x_i}$$
### Divided differences
- Using the next term, or a $\Delta x > 0$ indicates a **forward divided difference (FDD)**.
- Using the previous term, or a $\Delta x < 0$ indicates a **backward divided difference (BDD)**.
The **central divided difference (CDD)** averages both when the $h$ or $\Delta x$ of the forward and backward DDs are equal:
$$f'(x)\approx\text{CDD}=\frac{f(x+h)-f(x-h)}{2h}$$
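In code, the three differences are one-liners (a sketch):

```python
def fdd(f, x, h):
    return (f(x + h) - f(x)) / h             # forward: uses the next point

def bdd(f, x, h):
    return (f(x) - f(x - h)) / h             # backward: uses the previous point

def cdd(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)   # central: average of FDD and BDD
```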
### Higher order derivatives
Taking the Taylor expansion of the function or discrete set and rearranging it as necessary can return any order of derivative. The same expansion applies for $x-h$ if positive and negative signs are alternated.
$$f(x+h)=f(x)+f'(x)h+\frac{f''(x)}{2!}h^2+\frac{f'''(x)}{3!}h^3+\ldots$$
!!! example
    To find second order derivatives:
    \begin{align*}
    f''(x)&=\frac{2f(x+h)-2f(x)-2f'(x)h}{h^2} \\
    &=\frac{2f(x+h)-2f(x)-(f(x+h)-f(x-h))}{h^2} \\
    &=\frac{f(x+h)-2f(x)+f(x-h)}{h^2}
    \end{align*}
!!! example
    $f''(3)$ if $f(x)=2e^{1.5x}$ and $h=0.1$:
    \begin{align*}
    f''(3)&=\frac{f(3.1)-2f(3)+f(2.9)}{0.1^2} \\
    &=405.08
    \end{align*}
For discrete data:
- If the desired point does not exist, interpolating a polynomial through the surrounding points and differentiating it may be close enough.
!!! example
    | t | 0 | 10 | 15 | 20 | 22.5 | 30 |
    | --- | --- | --- | --- | --- | --- | --- |
    | v(t) | 0 | 227.04 | 362.78 | 517.35 | 602.47 | 901.67 |

    $v'(16)$ with FDD, using points $t=15,t=20$:
    \begin{align*}
    v'(t)&=\frac{v(t+h)-v(t)}{h} \\
    &=\frac{v(15+5)-v(15)}{5} \\
    &=\frac{517.35-362.78}{5} \\
    &=30.914
    \end{align*}
    $v'(16)$ with Newton's first-order interpolation:
    \begin{align*}
    v(t)&=v(15)+\frac{v(20)-v(15)}{20-15}(t-15) \\
    &=362.78+30.914(t-15) \\
    &=-100.93+30.914t \\
    v'(t)&=\frac{v(t+h)-v(t-h)}{2h} \\
    &=\frac{v(16.1)-v(15.9)}{0.2} \\
    &=30.914
    \end{align*}
- If the spacing is not equal (making DD impossible), again creating an interpolation may be close enough.
- If data is noisy, regressing and then solving reduces random error.
## Integrals
If you can represent a function as an $n$-th order polynomial, you can approximate the integral with the integral of that polynomial.
### Trapezoidal rule
The **trapezoidal rule** approximates each segment of the function with a first-order polynomial. From $a$ to $b$, if there are $n$ trapezoidal segments, where $h=\frac{b-a}{n}$ is the width of each segment:
$$\int^b_af(x)dx=\frac{b-a}{2n}[f(a)+2(\sum^{n-1}_{i=1}f(a+ih))+f(b)]$$
The error for the $i$th trapezoidal segment is $|E_i|=\frac{h^3}{12}|f''(\xi_i)|$ for some $\xi_i$ in the segment. This can be bounded with a maximum value $M$ of $|f''|$:
$$\boxed{|E_T|\leq(b-a)\frac{h^2}{12}M}$$
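A sketch of the composite rule in Python:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal segments."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))   # weight 2 points
    return h / 2 * (f(a) + 2 * interior + f(b))
```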
### Simpson's 1/3 rule
This uses the second-order polynomial with **two segments**. Three points are usually used: $a,\frac{a+b}{2},b$. Thus for two segments:
$$\int^b_af(x)dx\approx\frac h 3\left[f(a)+4f\left(\frac{a+b}{2}\right)+f(b)\right]$$
For an arbitrary number of segments, as long as there are an **even number** of **equal** segments:
$$\int^b_af(x)dx=\frac{b-a}{3n}\left[f(x_0)+4\sum^{n-1}_{\substack{i=1 \\ \text{i is odd}}}f(x_i)+2\sum^{n-2}_{\substack{i=2 \\ \text{i is even}}}f(x_i)+f(x_n)\right]$$
The error is bounded similarly, where $M$ is now the maximum value of $|f^{(4)}|$:
$$|E_T|\leq(b-a)\frac{h^4}{180}M$$
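A sketch of the composite rule, which requires an even number of equal segments:

```python
def simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    odd = sum(f(a + i * h) for i in range(1, n, 2))    # weight 4 points
    even = sum(f(a + i * h) for i in range(2, n, 2))   # weight 2 points
    return h / 3 * (f(a) + 4 * odd + 2 * even + f(b))
```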
## Ordinary differential equations
### Initial value problems
These problems specify all conditions at a single value of $x$.
**Euler's method** converts the function to the form $f(x,y)$, where $y'=f(x,y)$.
!!! example
    $y'+2y=1.3e^{-x},y(0)=5\implies f(x,y)=1.3e^{-x}-2y,y(0)=5$
Where $h$ is the width of each estimation (lower is better):
$$y_{n+1}=y_n+hf(x_n,y_n)$$
!!! example
    If $f(x,y)=2xy,h=0.1$, $y_{n+1}=y_n+h2x_ny_n$:
    $$
    y(1.1)=y(1)+0.1×2×1×\underbrace{y(1)}_{1\text{ via IVP}}=1.2 \\
    y(1.2)=y(1.1)+0.1×2×1.1×\underbrace{y(1.1)}_{1.2}=1.464
    $$
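A sketch of Euler's method reproducing the example above:

```python
def euler(f, x0, y0, h, steps):
    """Euler's method: y_{n+1} = y_n + h * f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

# f(x, y) = 2xy with y(1) = 1 and h = 0.1, as in the example
print(euler(lambda x, y: 2 * x * y, 1.0, 1.0, 0.1, 2))   # y(1.2) ≈ 1.464
```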
**Heun's method** uses Euler's formula as a predictor. Where $y^*$ is the Euler solution:
$$y_{n+1}=y_n+h\frac{f(x_n,y_n)+f(x_{n+1},y^*_{n+1})}{2}$$
!!! example
    For $f(x,y)=2xy,h=0.1, y(1)=1$:
    Euler's formula returns $y^*_{n+1}=y_n+2hx_ny_n\implies y^*(1.1)=1.2$.
    Applying Heun's correction:
    \begin{align*}
    y(1.1)&=y(1)+0.1\frac{2×1×y(1)+2×1.1×y^*(1.1)}{2} \\
    &=1+0.1\frac{2×1×1+2×1.1×1.2}{2} \\
    &=1.232
    \end{align*}
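A sketch of Heun's method, again matching the example:

```python
def heun(f, x0, y0, h, steps):
    """Heun's method: Euler predictor, then trapezoidal corrector."""
    x, y = x0, y0
    for _ in range(steps):
        y_star = y + h * f(x, y)                    # Euler prediction y*
        y += h * (f(x, y) + f(x + h, y_star)) / 2   # corrected step
        x += h
    return y

print(heun(lambda x, y: 2 * x * y, 1.0, 1.0, 0.1, 1))   # y(1.1) ≈ 1.232
```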
The **Runge-Kutta fourth-order method** is the most accurate of the three methods:
$$y_{n+1}=y_n+\frac 1 6(k_1+2k_2+2k_3+k_4)$$
- $k_1=hf(x_n,y_n)$
- $k_2=hf(x_n+\tfrac 1 2h,y_n+\tfrac 1 2k_1)$
- $k_3=hf(x_n+\tfrac 1 2 h, y_n+\tfrac 1 2 k_2)$
- $k_4=hf(x_n+h,y_n+k_3)$
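A sketch of a single step:

```python
def rk4_step(f, x, y, h):
    """One step of the fourth-order Runge-Kutta method."""
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
```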
### Higher order ODEs
Higher order ODEs can be solved by reducing them to a system of first order ODEs. For a second order ODE, let $y'=u$:
$$
y'=u \\
u'=f(x,y,u)
$$
Any of the previous methods can then be used on each ODE:
$$
y_{n+1}=y_n+hu_n \\
u_{n+1}=u_n+hf(x_n,y_n,u_n)
$$
!!! example
    For $y''+xy'+y=0,y(0)=1,y'(0)=2,h=0.1$:
    \begin{align*}
    y'&= u \\
    u'&=-xu-y \\
    y_1&=y_0+0.1u_0 \\
    &=1+0.1×2 \\
    &=1.2 \\
    \\
    u_1&=u_0+0.1×f(x_0,y_0,u_0) \\
    &=u_0+0.1(-x_0u_0-y_0) \\
    &=2+0.1(-0×2-1) \\
    &=1.9
    \end{align*}
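A sketch of the reduction with Euler's method, reproducing the example:

```python
def euler_second_order(f, x0, y0, u0, h, steps):
    """Step y'' = f(x, y, y') by reducing it to the system y' = u, u' = f."""
    x, y, u = x0, y0, u0
    for _ in range(steps):
        y, u = y + h * u, u + h * f(x, y, u)   # both updates use old values
        x += h
    return y, u

# y'' = -xy' - y with y(0) = 1, y'(0) = 2, h = 0.1, as in the example
print(euler_second_order(lambda x, y, u: -x * u - y, 0.0, 1.0, 2.0, 0.1, 1))
# (y_1, u_1) ≈ (1.2, 1.9)
```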
### Boundary value problems
The **finite difference method** divides the interval between the boundaries into $n$ sub-intervals, replacing derivatives with their first principles representations. Writing the equation at each of the $n-1$ interior points returns a proper system of equations.
!!! example
    For $y''+2y'+y=x^2, y(0)=0.2,y(1)=0.8,n=4\implies h=0.25$:
    $x_0=0,x_1=0.25,x_2=0.5,x_3=0.75,x_4=1$
    Replace with first principles:
    $$\frac{y_{i+1}-2y_i+y_{i-1}}{h^2}+2\frac{y_{i+1}-y_i}{h}+y_i=x_i^2$$
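A sketch of assembling and solving this system, assuming NumPy and the forward difference for $y'$ used above:

```python
import numpy as np

# y'' + 2y' + y = x^2 with y(0) = 0.2, y(1) = 0.8, n = 4 segments
n, h, y_left, y_right = 4, 0.25, 0.2, 0.8
x = np.linspace(0.0, 1.0, n + 1)

# Coefficients of y_{i-1}, y_i, y_{i+1} after substituting the differences
c_prev = 1 / h**2
c_mid = -2 / h**2 - 2 / h + 1
c_next = 1 / h**2 + 2 / h

A = np.zeros((n - 1, n - 1))
rhs = x[1:n] ** 2
for row in range(n - 1):
    A[row, row] = c_mid
    if row > 0:
        A[row, row - 1] = c_prev
    else:
        rhs[row] -= c_prev * y_left    # known boundary y(0) moves to the RHS
    if row < n - 2:
        A[row, row + 1] = c_next
    else:
        rhs[row] -= c_next * y_right   # known boundary y(1) moves to the RHS

print(np.linalg.solve(A, rhs))         # y at the interior x = 0.25, 0.5, 0.75
```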