# Numerical Analysis Exam Help | CIVL2060

## Rational Function Interpolation

Splines are widely used in engineering and in mathematics. They are among the most important tools in approximation theory, and are often used in the finite element method for solving partial differential equations. They’re important in computer graphics. Their utility goes well beyond making a smooth plot out of discrete data. However, we now turn to other techniques.

We came to splines as we sought to improve upon the weaknesses of polynomial interpolation. We noted that we had to either give up “polynomial” or give up “interpolation” to get a better approximant. Techniques that give up interpolation can be used to get good approximations. Splines gave up polynomial in favor of piecewise polynomial. Another approach is to turn to rational functions, which are defined as ratios of polynomials
$$\frac{P_n(x)}{Q_d(x)}=\frac{p_n x^n+p_{n-1} x^{n-1}+\ldots+p_1 x+p_0}{q_d x^d+q_{d-1} x^{d-1}+\ldots+q_1 x+q_0}$$
where $Q_d(x)$ is not the zero polynomial. We assume the two polynomials have no roots in common, that is, the ratio is in lowest terms. Note that every polynomial is a rational function, as we can take $Q_d(x)=1$.

The degree of a rational function is defined to be $n+d$ and its degree type is the pair $(n, d)$. Interpolating a rational function to a set of data is known as rational function interpolation. Typically $n$ and $d$ are equal or nearly so, and are chosen in advance by the user. If $n \leq d$ we say that the rational function is proper.
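As a concrete illustration of these definitions, the sketch below evaluates a rational function of degree type $(n,d)$ from its two coefficient lists. The helper name `rational_eval` and the specific example coefficients are ours, chosen only for illustration.

```python
import numpy as np

def rational_eval(p, q, x):
    """Evaluate R(x) = P_n(x) / Q_d(x), with coefficient arrays
    p and q ordered highest degree first (numpy.polyval convention)."""
    return np.polyval(p, x) / np.polyval(q, x)

# Degree type (1, 1), so the degree is 1 + 1 = 2, and R is proper (n <= d):
p = [2.0, 1.0]   # P_1(x) = 2x + 1
q = [1.0, 3.0]   # Q_1(x) = x + 3
print(rational_eval(p, q, 0.0))   # R(0) = 1/3
```

Since $Q_1(-3)=0$ and $P_1(-3)=-5\neq 0$, this $R$ has a pole at $x=-3$, which no polynomial could model.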

An advantage of rational function interpolation is that a function $R(x)=$ $P_n(x) / Q_d(x)$ may have a pole, that is, a point $x_0$ where $Q_d\left(x_0\right)=0$, leading to a singularity in the function at that point (possibly cancellable by corresponding zeros of $P_n(x)$ ). This allows us to model a broader class of functions. Additionally, if we attempt to approximate a real function $f(x)$ by a polynomial interpolant, poles of $f(x)$ off the real line (in the complex plane) can lead to a poor approximation on the real line, as they do for power series; rational functions avoid these issues and can give good approximations under such circumstances. In fact, rational function interpolation and approximation are commonly employed for theoretical work in complex analysis.
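One standard way to compute a rational interpolant, sketched below under our own choice of normalization, is to linearize the conditions $P_n(x_i)=f_i\,Q_d(x_i)$ at the $n+d+1$ nodes and solve the resulting linear system with $q_0=1$. This is only a sketch: the linearized system can be singular or ill-conditioned for some data, and production methods (e.g. barycentric or continued-fraction forms) are more robust.

```python
import numpy as np

def rational_interp_coeffs(x, f, n, d):
    """Solve the linearized conditions P_n(x_i) = f_i * Q_d(x_i),
    normalizing q_0 = 1.  Coefficients are returned lowest degree
    first.  Caution: the system can be singular for some data."""
    x = np.asarray(x, dtype=float)
    f = np.asarray(f, dtype=float)
    m = n + d + 1                 # unknowns p_0..p_n, q_1..q_d
    assert len(x) == m            # one condition per unknown
    A = np.zeros((m, m))
    for i in range(m):
        A[i, : n + 1] = x[i] ** np.arange(n + 1)             # p_0 .. p_n
        A[i, n + 1 :] = -f[i] * x[i] ** np.arange(1, d + 1)  # q_1 .. q_d
    c = np.linalg.solve(A, f)
    return c[: n + 1], np.concatenate(([1.0], c[n + 1 :]))

# Data sampled from f(x) = 1/(1+x); a (1,1) interpolant recovers it:
p, q = rational_interp_coeffs([0.0, 1.0, 2.0], [1.0, 0.5, 1/3], 1, 1)
print(p, q)   # p ≈ [1, 0], q ≈ [1, 1], i.e. R(x) = 1/(1+x)
```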

## The Best Approximation Problem

Let’s discuss the approximation problem in a more general sense than just interpolation. What is the best way to approximate a function $f$ by a simpler function $g$? There is no single answer. First we must specify what we mean by a “simple” function, say $P_2$, the set of all polynomials of degree at most two. But even then we must address the question: “best” in what sense? Best might mean easiest to manipulate, easiest to calculate, closest to the function, or some compromise among these goals, depending on the application.

Even something as seemingly simple as asking that the approximant $g$ be the closest to $f$ of all functions in the set of simple functions from which $g$ is drawn doesn’t define “best” unambiguously. By closest do we mean minimizing the $L_{\infty}$ norm of the difference $f-g$
$$\max |f(x)-g(x)| \tag{6.1}$$
over the interval of interest $[a, b]$, or minimizing the $L_1$ norm of the difference
$$\int_a^b|f(x)-g(x)| d x \tag{6.2}$$
or minimizing the $L_2$ norm of the difference
$$\left(\int_a^b(f(x)-g(x))^2 d x\right)^{1 / 2} \tag{6.3}$$
or perhaps something else? (These Hölder norms on function spaces are defined in analogy with the corresponding vector norms.) In many areas of scientific computing we are asked by users for the best way to do something, and can spend a lot of time getting at what the user really means by best.
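The three norms above are easy to estimate numerically. The sketch below does so on a dense uniform grid with the composite trapezoid rule; the helper name `holder_norms` and the grid-based approach are our own simplifications (a real code would use adaptive quadrature).

```python
import numpy as np

def holder_norms(f, g, a, b, m=100_001):
    """Estimate the L_inf, L_1, and L_2 norms of f - g on [a, b]
    using a uniform grid and the composite trapezoid rule."""
    x = np.linspace(a, b, m)
    h = x[1] - x[0]
    r = np.abs(f(x) - g(x))
    trap = lambda y: (y[:-1] + y[1:]).sum() * h / 2.0  # trapezoid rule
    return r.max(), trap(r), np.sqrt(trap(r ** 2))

# Example: f(x) = x against g(x) = 0 on [0, 1] gives
# L_inf = 1, L_1 = 1/2, L_2 = 1/sqrt(3).
print(holder_norms(lambda x: x, lambda x: np.zeros_like(x), 0.0, 1.0))
```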

For example, consider the functions $f_1(x)=x^{-3}$ and $f_2(x)=0$ on $[.1,1]$. With respect to the $L_{\infty}$ norm of Eq. (6.1), the functions are far apart, for
$$\max \left|f_1(x)-f_2(x)\right|=1000$$
over [.1,1]; but with respect to the $L_1$ norm of Eq. (6.2), they are much closer,
$$\int_{.1}^1\left|f_1(x)-f_2(x)\right| d x \doteq 49.5$$
(the area between the curves is only about 50 units). If we hope to use $f_2$ to approximate the values of $f_1$ then we must expect errors on the order of $10^3$, but if we hope to use $f_2$ to approximate integrals of $f_1$ then we should expect errors on the order of $10^1$.
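The two numbers in this example can be checked directly. The short script below recomputes both quantities on a fine grid; the trapezoid-rule estimate of the integral is an assumption of convenience, since the exact value $\left[-x^{-2}/2\right]_{.1}^{1}=49.5$ is available in closed form.

```python
import numpy as np

# f1(x) = x**(-3) and f2(x) = 0 on [0.1, 1]
x = np.linspace(0.1, 1.0, 200_001)
h = x[1] - x[0]
r = x ** (-3.0)                        # |f1(x) - f2(x)|

linf = r.max()                         # maximum occurs at x = 0.1
l1 = (r[:-1] + r[1:]).sum() * h / 2.0  # composite trapezoid rule
print(linf, l1)                        # ≈ 1000, ≈ 49.5
```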
