# Numerical Instability

## Numerical Instability

Loss of accuracy does not necessarily happen all at once. Sometimes it happens through a process that amplifies errors at each stage until the error overwhelms the computation.

Consider the integral $I_n:=\int_0^1 x^n e^{-x}\, dx$. For $n=0$ this is easy to compute analytically: $I_0=\int_0^1 e^{-x}\, dx=1-e^{-1}$. Using integration by parts, we can create a recursive algorithm for computing these values:
$$\begin{aligned} I_{n+1} & =\int_0^1 x^{n+1} e^{-x}\, dx=-\int_0^1 x^{n+1} \frac{d}{dx}\left(e^{-x}\right) dx \\ & =-\left.x^{n+1} e^{-x}\right|_{x=0}^{x=1}+\int_0^1 \frac{d}{dx}\left(x^{n+1}\right) e^{-x}\, dx \\ & =-e^{-1}+(n+1) I_n . \end{aligned}$$
With this recurrence we can compute $I_1, I_2, \ldots$ However, problems become apparent using this scheme by the time we get to $I_{20}$, as can be seen in Table 1.4.2, which shows the computed values in double precision.
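The recurrence is easy to run in double precision. The following sketch (function and variable names are our own, not from the text) reproduces the forward scheme and exhibits the blow-up:

```python
import math

def forward_integrals(n_max):
    """Compute I_0, ..., I_n_max via the forward recurrence
    I_{n+1} = (n+1) * I_n - exp(-1), starting from I_0 = 1 - exp(-1)."""
    I = [1.0 - math.exp(-1.0)]
    for n in range(n_max):
        I.append((n + 1) * I[n] - math.exp(-1.0))
    return I

I = forward_integrals(20)
# The true values satisfy 0 < I_n < 1/(n+1) for all n, yet the computed
# I_20 is wildly wrong: the initial rounding error is amplified roughly
# by 20! by the time the recurrence reaches it.
print(I[20])
```

Running this, the computed $I_{20}$ lands far outside the interval $(0, 1/21)$ that the true value must lie in.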

The results are evidently wrong for $n \geq 17$, since $I_n>0$ for all $n$. Furthermore, $I_n \sim 1 /(e(n+1))$ as $n \rightarrow \infty$. The error grows before it becomes evident: $I_5-\widehat{I}_5 \approx 4.1 \times 10^{-15}$, $I_{10}-\widehat{I}_{10} \approx 1.2 \times 10^{-10}$, and $I_{15}-\widehat{I}_{15} \approx 4.4 \times 10^{-5}$. To see why, consider
$$\begin{aligned} \widehat{I}_{n+1} & =\mathrm{fl}\left((n+1) \widehat{I}_n-e^{-1}\right) \\ & =\left((n+1) \widehat{I}_n\left(1+\epsilon_{n, 1}\right)-e^{-1}\right)\left(1+\epsilon_{n, 2}\right) \quad \text { while } \\ I_{n+1} & =(n+1) I_n-e^{-1} . \end{aligned}$$
So
$$I_{n+1}-\widehat{I}_{n+1}=(n+1)\left(I_n-\widehat{I}_n\right)+\left[\epsilon_{n, 1}(n+1) \widehat{I}_n-\epsilon_{n, 2} e^{-1}\right].$$
Assuming the quantity in $[\cdots]$ is $\mathcal{O}(\mathbf{u})$, which is reasonable before $\widehat{I}_n$ “blows up”, the error in the results grows according to
$$\begin{aligned} I_{n+1}-\widehat{I}_{n+1} & =(n+1)\left(I_n-\widehat{I}_n\right)+\mathcal{O}(\mathbf{u}), \quad \text { so } \\ I_n-\widehat{I}_n & =\mathcal{O}(n !\, \mathbf{u}) . \end{aligned}$$
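Unrolling this error recurrence makes the factorial growth explicit. Writing $E_n := I_n-\widehat{I}_n$ (this intermediate step is our own addition, not from the text):
$$E_n = n\, E_{n-1} + \mathcal{O}(\mathbf{u}) = n(n-1)\, E_{n-2} + \mathcal{O}(n\, \mathbf{u}) = \cdots = n!\, E_0 + \sum_{k=1}^{n} \frac{n!}{k!}\,\mathcal{O}(\mathbf{u}) = \mathcal{O}(n!\, \mathbf{u}),$$
since $\sum_{k=1}^{n} n!/k! \leq e \cdot n!$ and $E_0 = \mathcal{O}(\mathbf{u})$.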

## Adding Many Numbers

The apparently trivial problem of adding many numbers reveals surprising depth in numerical computation. If we aim for maximal accuracy, such as when implementing a special mathematical function for many people to use, or when doing some other high-precision computation, we want to know how to add numbers with the least error.

The standard, or naive, algorithm for adding an array of numbers is shown in Algorithm 6.
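Algorithm 6 is not reproduced here; the following Python sketch shows the standard left-to-right loop it describes (the function name is our own):

```python
def naive_sum(a):
    """Sum the entries of a from left to right (the standard algorithm)."""
    s = 0.0
    for ai in a:
        s = s + ai  # each addition commits one rounding error
    return s
```

Each pass through the loop corresponds to line 4 of Algorithm 6, where one rounding error is introduced.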

For the error analysis of Algorithm 6, we use the notation $\widehat{s}_i$ for the computed value of $s$ after adding $a_i$ on line 4. Let $\widehat{s}_0=0$. Then $\widehat{s}_i=\mathrm{fl}\left(\widehat{s}_{i-1}+a_i\right)=\left(\widehat{s}_{i-1}+a_i\right)\left(1+\epsilon_i\right)$ with $\left|\epsilon_i\right| \leq \mathbf{u}$. If $n=\operatorname{length}(\boldsymbol{a})$, the returned value is
$$\begin{aligned} \widehat{s}_n & =\sum_{i=1}^n a_i \prod_{j=i}^n\left(1+\epsilon_j\right), \quad \text { which leads to the bound } \\ \left|\widehat{s}_n-\sum_{i=1}^n a_i\right| & \leq \sum_{i=1}^n\left|a_i\right|(n-i+1) \mathbf{u}+\mathcal{O}\left((n \mathbf{u})^2\right) . \end{aligned}$$
Since the earliest terms accumulate the most rounding factors, this means that for maximum accuracy the numbers should be added from smallest to largest in magnitude.
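A contrived example (our own, not from the text) makes the ordering effect visible: adding ten copies of $10^{-16}$ to $1.0$ loses them entirely if the large term comes first, because each tiny addend falls below half a unit in the last place of the running sum; added first, their contribution survives.

```python
def naive_sum(a):
    s = 0.0
    for ai in a:
        s = s + ai
    return s

small = [1e-16] * 10
big_first = naive_sum([1.0] + small)    # each 1e-16 is rounded away
small_first = naive_sum(small + [1e0])  # tiny terms accumulate to ~1e-15 first

print(big_first, small_first)
```

Here `big_first` comes out exactly `1.0`, while `small_first` exceeds `1.0`, matching the analysis above.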

But our analysis of the naive Algorithm 6 indicates that the more additions applied after a given term $a_i$ enters the sum, the larger the bound on its roundoff error. We can think of this in terms of the depth of a term in the sum:
$$\left(\left(\cdots\left(\left(\left(a_1+a_2\right)+a_3\right)+a_4\right) \cdots+a_{n-2}\right)+a_{n-1}\right)+a_n .$$
If we reduce the depth of the terms, we have the possibility of reducing the error in the sum. One way of reducing the maximum depth is to split sums in the middle: $\sum_{\ell=i}^j a_{\ell}=\sum_{\ell=i}^m a_{\ell}+\sum_{\ell=m+1}^j a_{\ell}$ with $m=\lfloor(i+j) / 2\rfloor$, for example. The maximum depth is then $\left\lceil\log _2 n\right\rceil$, and we can get bounds on the rounding error of $\approx\left(\log _2 n\right) \mathbf{u} \sum_{i=1}^n\left|a_i\right|$. Pseudo-code for this can be found in Algorithm 2.
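A recursive rendering of this midpoint splitting (our own sketch; Algorithm 2 in the text may differ in details):

```python
def pairwise_sum(a, i=0, j=None):
    """Sum a[i:j] by splitting at the midpoint m = (i + j) // 2, so each
    term sits at depth about log2(n) rather than up to n - 1."""
    if j is None:
        j = len(a)
    if j - i <= 2:
        return sum(a[i:j])  # base case: at most two terms, one addition
    m = (i + j) // 2
    return pairwise_sum(a, i, m) + pairwise_sum(a, m, j)

# Small integers are summed exactly in double precision regardless of
# order, so this checks correctness rather than accuracy.
print(pairwise_sum([float(k) for k in range(1, 101)]))
```

This is the strategy used internally by some library summation routines; the recursion can be cut off and replaced by a simple loop below some block size to reduce call overhead.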

Finally, the pseudo-random character of roundoff error should remind us that sometimes a statistical analysis can be beneficial for understanding the behavior of roundoff error for long sums of terms of similar magnitude, as can occur solving ordinary differential equations.
