From 4252a734e2e3d83cb09769c96469e38932ddf208 Mon Sep 17 00:00:00 2001
From: eggy
Date: Mon, 20 Nov 2023 16:01:05 -0500
Subject: [PATCH] ece204: add odes

---
 docs/2a/ece204.md | 120 ++++++++++++++++++++++++++++++++++++++++++++++
 docs/2a/ece205.md |   7 +++
 docs/2a/ece222.md |   4 ++
 3 files changed, 131 insertions(+)

diff --git a/docs/2a/ece204.md b/docs/2a/ece204.md
index 9de59df..306a154 100644
--- a/docs/2a/ece204.md
+++ b/docs/2a/ece204.md
@@ -171,4 +171,124 @@ For discrete data:
 - If the spacing is not equal (to make DD impossible), again creating an interpolation may be close enough.
 - If data is noisy, regressing and then solving reduces random error.
+
+## Integrals
+
+If you can represent a function as an $n$-th order polynomial, you can approximate the integral with the integral of that polynomial.
+
+### Trapezoidal rule
+
+The **trapezoidal rule** approximates the function with a first-order polynomial (a straight line) over each segment and integrates that instead.
+
+From $a$ to $b$, if there are $n$ trapezoidal segments, where $h=\frac{b-a}{n}$ is the width of each segment:
+
+$$\int^b_af(x)dx\approx\frac{b-a}{2n}\left[f(a)+2\sum^{n-1}_{i=1}f(a+ih)+f(b)\right]$$
+
+The error for the $i$th trapezoidal segment is $|E_i|=\frac{h^3}{12}|f''(\xi_i)|$ for some $\xi_i$ in that segment. The total error can be bounded with $M$, the maximum value of $|f''|$ on $[a,b]$:
+
+$$\boxed{|E_T|\leq(b-a)\frac{h^2}{12}M}$$
+
+### Simpson's 1/3 rule
+
+This fits a second-order polynomial across **two segments** at a time. Three points are usually used: $a,\frac{a+b}{2},b$. Thus for two segments, where $h=\frac{b-a}{2}$:
+
+$$\int^b_af(x)dx\approx\frac h 3\left[f(a)+4f\left(\frac{a+b}{2}\right)+f(b)\right]$$
+
+For an arbitrary number of segments, as long as there is an **even number** of **equal** segments:
+
+$$\int^b_af(x)dx\approx\frac{b-a}{3n}\left[f(x_0)+4\sum^{n-1}_{\substack{i=1 \\ \text{i is odd}}}f(x_i)+2\sum^{n-2}_{\substack{i=2 \\ \text{i is even}}}f(x_i)+f(x_n)\right]$$
+
+The error, where $M$ is the maximum value of $|f^{(4)}|$ on $[a,b]$, is bounded by:
+
+$$|E_T|\leq(b-a)\frac{h^4}{180}M$$
+
+## Ordinary differential equations
+
+### Initial value problems
+
+These problems provide all of their conditions at a single value of $x$ (the initial point).
+
+**Euler's method** first rewrites the ODE in the form $y'=f(x,y)$.
+
+!!! example
+    $y'+2y=1.3e^{-x},y(0)=5\implies f(x,y)=1.3e^{-x}-2y,y(0)=5$
+
+Where $h$ is the step size of each estimation (smaller is more accurate):
+
+$$y_{n+1}=y_n+hf(x_n,y_n)$$
+
+!!! example
+    If $f(x,y)=2xy,h=0.1,y(1)=1$, then $y_{n+1}=y_n+2hx_ny_n$:
+    $$
+    y(1.1)=y(1)+0.1×2×1×\underbrace{y(1)}_{1\text{ via IVP}}=1.2 \\
+    y(1.2)=y(1.1)+0.1×2×1.1×\underbrace{y(1.1)}_{1.2}=1.464
+    $$
+
+**Heun's method** uses Euler's formula as a predictor. Where $y^*$ is the Euler solution:
+
+$$y_{n+1}=y_n+h\frac{f(x_n,y_n)+f(x_{n+1},y^*_{n+1})}{2}$$
+
+!!! example
+    For $f(x,y)=2xy,h=0.1,y(1)=1$:
+
+    Euler's formula returns $y^*_{n+1}=y_n+2hx_ny_n\implies y^*(1.1)=1.2$.
+
+    Applying Heun's correction:
+
+    \begin{align*}
+    y(1.1)&=y(1)+0.1\frac{2×1×y(1)+2×1.1×y^*(1.1)}{2} \\
+    &=1+0.1\frac{2×1×1+2×1.1×1.2}{2} \\
+    &=1.232
+    \end{align*}
+
+The **Runge-Kutta fourth-order method** is the most accurate of the three methods:
+
+$$y_{n+1}=y_n+\frac 1 6(k_1+2k_2+2k_3+k_4)$$
+
+- $k_1=hf(x_n,y_n)$
+- $k_2=hf(x_n+\tfrac 1 2h,y_n+\tfrac 1 2k_1)$
+- $k_3=hf(x_n+\tfrac 1 2 h, y_n+\tfrac 1 2 k_2)$
+- $k_4=hf(x_n+h,y_n+k_3)$
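+
+As a rough sketch of how these single-step methods look in code (the helper names below are arbitrary, not from the course), one step of each is applied to the worked example $y'=2xy,y(1)=1,h=0.1$ and compared to the exact solution $y=e^{x^2-1}$:
+
+```python
+import math
+
+def euler_step(f, x, y, h):
+    """One Euler step: follow the slope at (x, y) for a width of h."""
+    return y + h * f(x, y)
+
+def heun_step(f, x, y, h):
+    """One Heun step: Euler predictor, then trapezoidal corrector."""
+    y_star = y + h * f(x, y)
+    return y + h * (f(x, y) + f(x + h, y_star)) / 2
+
+def rk4_step(f, x, y, h):
+    """One classical fourth-order Runge-Kutta step."""
+    k1 = h * f(x, y)
+    k2 = h * f(x + h / 2, y + k1 / 2)
+    k3 = h * f(x + h / 2, y + k2 / 2)
+    k4 = h * f(x + h, y + k3)
+    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
+
+f = lambda x, y: 2 * x * y       # y' = 2xy with y(1) = 1
+x, y, h = 1.0, 1.0, 0.1
+print(euler_step(f, x, y, h))    # 1.2
+print(heun_step(f, x, y, h))     # 1.232
+print(rk4_step(f, x, y, h))      # ~1.23367
+print(math.exp(1.1 ** 2 - 1))    # ~1.23368 (exact value of y(1.1))
+```
+
+Each step only needs evaluations of $f$, so the same structure works for any first-order ODE written in the form above.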
+
+### Higher order ODEs
+
+Higher order ODEs can be solved by reducing them to a system of first order ODEs. For a second order ODE $y''=f(x,y,y')$, let $y'=u$:
+
+$$
+y'=u \\
+u'=f(x,y,u)
+$$
+
+Any of the previous methods can then be applied to each equation. With Euler's method:
+
+$$
+y_{n+1}=y_n+hu_n \\
+u_{n+1}=u_n+hf(x_n,y_n,u_n)
+$$
+
+!!! example
+    For $y''+xy'+y=0,y(0)=1,y'(0)=2,h=0.1$:
+
+    \begin{align*}
+    y'&=u \\
+    u'&=-xu-y \\
+    y_1&=y_0+0.1u_0 \\
+    &=1+0.1×2 \\
+    &=1.2 \\
+    \\
+    u_1&=u_0+0.1×f(x_0,y_0,u_0) \\
+    &=u_0+0.1(-x_0u_0-y_0) \\
+    &=2+0.1(-0×2-1) \\
+    &=1.9
+    \end{align*}
+
+### Boundary value problems
+
+The **finite difference method** divides the interval between the boundaries into $n$ sub-intervals, replacing the derivatives with their first principles (finite difference) approximations. Writing the resulting equation at each of the $n-1$ interior points produces a system of equations that can be solved simultaneously.
+
+!!! example
+    For $y''+2y'+y=x^2, y(0)=0.2,y(1)=0.8,n=4\implies h=0.25$:
+    $x_0=0,x_1=0.25,x_2=0.5,x_3=0.75,x_4=1$
+
+    Replacing the derivatives with first principles:
+
+    $$\frac{y_{i+1}-2y_i+y_{i-1}}{h^2}+2\frac{y_{i+1}-y_i}{h}+y_i=x_i^2$$
diff --git a/docs/2a/ece205.md b/docs/2a/ece205.md
index 4216215..7188490 100644
--- a/docs/2a/ece205.md
+++ b/docs/2a/ece205.md
@@ -225,6 +225,13 @@ Two boundary conditions are requred to solve the problem for all $t>0$ — that
 - $u(x,0)=f(x),0\leq x\leq L$
 - e.g., $u(0,t)=u(L,t)=0,t>0$
 
+Thus the general solution is:
+
+$$
+\boxed{u(x,t)=\sum^\infty_{n=1}a_ne^{-\left(\frac{n\pi a}{L}\right)^2t}\sin\left(\frac{n\pi x}{L}\right)} \\
+f(x)=\sum^\infty_{n=1}a_n\sin\left(\frac{n\pi x}{L}\right)
+$$
+
 ### Periodicity
 
 The **period** of a function is an increment that always returns the same value: $f(x+T)=f(x)$, and its **fundamental period** of a function is the smallest possible period.
diff --git a/docs/2a/ece222.md b/docs/2a/ece222.md
index 5676eec..02931b6 100644
--- a/docs/2a/ece222.md
+++ b/docs/2a/ece222.md
@@ -124,3 +124,7 @@ $$\text{time}=n_{instructions}\times\underbrace{\frac{\text{cycles}}{\text{instr
 Pipelining changes the granularity of a clock cycle to be per step, instead of per-instruction. This allows multiple instructions to be processed concurrently.
 
 (Source: Wikimedia Commons)
+
+### Data forwarding
+
+If data needs to be used from a prior operation, a pipeline stall would normally be required to remove the hazard and wait for the desired result (a **read-after-write** data hazard). However, a processor can mitigate this hazard by allowing the dependent instruction to read the prior instruction's result directly instead of stalling.
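+
+As a toy illustration of the idea (this is not a model of any real processor's pipeline; the names and values below are made up), the check below supplies a source operand from the previous instruction's still-in-flight result rather than from the register file:
+
+```python
+# Toy model of forwarding around a read-after-write hazard.
+regfile = {"r1": 5, "r2": 0, "r3": 0}        # r2 has not been written back yet
+ex_stage = {"dest": "r2", "result": 10}      # previous instruction: r2 = r1 + r1
+
+def read_operand(reg):
+    """Forward from the EX stage if it is about to write this register."""
+    if ex_stage["dest"] == reg:
+        return ex_stage["result"]            # forwarded value, no stall needed
+    return regfile[reg]                      # otherwise read the register file
+
+# Dependent instruction: r3 = r2 + r1 needs r2 before it is written back.
+r3 = read_operand("r2") + read_operand("r1")
+print(r3)                                    # 15, using the forwarded r2 = 10
+```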