## Numerical Eigenproblems

We’ve focused on solving the linear system $A x=b$ in this and the previous chapter. There are other types of matrix computations that are of interest, however. The two primary types of matrix problems are those of solving the linear system $A x=b$ and of finding the eigenvalues of $A$, and most other matrix problems are solved in the service of one of these two goals.

Recall that the spectrum of $A, \operatorname{sp}(A)$, is the set of all eigenvalues of $A$, that is, the set of all complex numbers $\lambda$ such that
$$A x=\lambda x$$
for some nonzero vector $x$, called an eigenvector of $A$ associated with $\lambda$. These are of importance in many physical and mathematical applications, such as determining whether a control system is stable, or finding the singular values and hence condition number of a matrix (since the nonzero singular values of $A$ are the square roots of the eigenvalues of $A^T A$; see Sec. 2.10).
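For instance, the singular-value relation from Sec. 2.10 can be checked numerically. The following is a minimal sketch using NumPy (the library choice is mine, not the text's) on an arbitrary small matrix:

```python
import numpy as np

# Arbitrary nonsymmetric example matrix.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# Singular values computed directly ...
sigma = np.linalg.svd(A, compute_uv=False)    # sorted descending

# ... and as square roots of the eigenvalues of A^T A.
eig_AtA = np.linalg.eigvalsh(A.T @ A)         # real, sorted ascending
sqrt_eigs = np.sqrt(eig_AtA)[::-1]            # reverse to descending

print(np.allclose(sigma, sqrt_eigs))   # True
```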

All methods for finding the eigenvalues of a general matrix will be iterative methods, for finding the eigenvalues of an $n \times n$ matrix is equivalent to finding the roots of its characteristic polynomial
$$c(\lambda)=a_n \lambda^n+a_{n-1} \lambda^{n-1}+\ldots+a_1 \lambda+a_0$$
(where $a_n$ is nonzero), and there is no way to find the roots of a general polynomial of degree 5 or higher without using iterative methods${ }^9$. Since finding eigenvalues is equivalent to finding the roots of $c(\lambda)$, the same must be true of eigenvalue problems. (In fact, we often find the zeroes of a polynomial by constructing a matrix that has it as its characteristic polynomial and then finding the eigenvalues of that matrix.) Hence, we will not have a finite-step method like Gaussian elimination for numerical eigenproblems; unlike the case for linear systems, we cannot choose between direct methods and iterative methods.
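The equivalence can be seen numerically. A sketch in NumPy (my choice of tool, with an arbitrary $2 \times 2$ matrix): the eigenvalues of $A$ are exactly the roots of its characteristic polynomial.

```python
import numpy as np

# Arbitrary small example matrix.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Coefficients of the characteristic polynomial det(lambda*I - A),
# highest-degree term first.
coeffs = np.poly(A)

eigs = np.sort(np.linalg.eigvals(A).real)
roots = np.sort(np.roots(coeffs).real)

print(np.allclose(eigs, roots))   # True: same numbers either way
```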

## The QR Method of Francis

Inverse iteration finds only one eigenpair at a time. There is a Krylov subspace generalization of the power method that locates several eigenpairs of a matrix. The method generates a set of vectors called a Krylov sequence
$$\left\{v_0, A v_0, A^2 v_0, \ldots, A^{k-1} v_0\right\}$$
in which we effectively perform the power method computations but retain previously computed vectors. Next, we find an orthonormal basis $\left\{q_1, \ldots, q_k\right\}$ for the Krylov subspace corresponding to this Krylov sequence, that is, we form the vector space
$$K_k=\operatorname{span}\left\{v_0, A v_0, A^2 v_0, \ldots, A^{k-1} v_0\right\}$$
then find an orthonormal basis such that
$$K_k=\operatorname{span}\left\{q_1, \ldots, q_k\right\}$$
which is called Arnoldi’s method (or Arnoldi iteration) in this context. This is followed by a matrix version of the Rayleigh quotient. This is primarily just an efficient implementation of the power method, but well worth the effort.
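The steps above (Krylov sequence, orthonormal basis, projected Rayleigh-quotient matrix) can be sketched as follows. This is a minimal Arnoldi iteration in NumPy (an assumption, since the text names no software), demonstrated on an arbitrary symmetric test matrix with known eigenvalues:

```python
import numpy as np

def arnoldi(A, v0, k):
    """Build an orthonormal basis Q for the Krylov subspace
    span{v0, A v0, ..., A^(k-1) v0} by modified Gram-Schmidt,
    along with the (k+1) x k upper Hessenberg matrix H satisfying
    A Q[:, :k] = Q H."""
    n = A.shape[0]
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ Q[:, j]                    # next power-method step
        for i in range(j + 1):             # orthogonalize against all previous q_i
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:            # breakdown: Krylov subspace is invariant
            return Q[:, :j + 1], H[:j + 1, :j]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

# Demonstration on a symmetric matrix with known eigenvalues 1, 2, ..., 100.
rng = np.random.default_rng(0)
n, k = 100, 30
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.arange(1.0, n + 1)) @ U.T
Q, H = arnoldi(A, rng.standard_normal(n), k)

# Eigenvalues of the small projected matrix H[:k, :k] (the Ritz values)
# approximate the extremal eigenvalues of A.
ritz = np.linalg.eigvalsh(H[:k, :k])
print(ritz.max())   # close to the largest eigenvalue of A, which is 100
```

The loop over previous basis vectors is what distinguishes this from the plain power method: each new direction is orthogonalized against everything already computed, so no information from earlier iterates is discarded.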

There is also the method of simultaneous iteration (or the block power method) that applies the power method to several vectors at once; that is, it computes $$A^k\left[\begin{array}{llll} v_0 & w_0 & \ldots & z_0 \end{array}\right]$$
where the vectors are normalized appropriately. Once again an orthogonalization procedure must be applied. This method yields the first several eigenvectors of $A$ (that is, those associated with the eigenvalues of largest magnitude). For many physical applications this is enough: when the largest eigenvalues indicate the speed at which the solution to a differential equation is decreasing (because the solution is a combination of exponentials of the form $\exp \left(c_i \lambda_i t\right)$ for some $c_i$, possibly all unity, where the $\lambda_i$ are eigenvalues), knowing the largest few suffices, as the remaining eigenvalues give negligible contributions to the sum.
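A minimal sketch of simultaneous iteration in NumPy (library and test matrix are my own illustrative choices), using a reduced QR factorization as the orthogonalization step:

```python
import numpy as np

def simultaneous_iteration(A, p, iters=200, seed=0):
    """Block power method: apply A to p vectors at once, re-orthonormalizing
    the block with a reduced QR factorization at each step. Returns
    approximations to the p eigenvalues of A of largest magnitude, taken
    from the projected matrix Q^T A Q (a matrix version of the Rayleigh
    quotient)."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((n, p)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(A @ Q)     # power step, then orthonormalize
    return np.linalg.eigvals(Q.T @ A @ Q)

# Test matrix with known eigenvalues 10, 6, 2, 1, 0.5.
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((5, 5)))
A = U @ np.diag([10.0, 6.0, 2.0, 1.0, 0.5]) @ U.T

theta = simultaneous_iteration(A, 2)
print(np.sort(theta.real)[::-1])       # approximately [10, 6]
```

Without the QR step all columns would collapse onto the dominant eigenvector; re-orthonormalizing each iteration is what keeps the block spanning a $p$-dimensional subspace.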

But what if we want all the eigenvalues of a matrix? To take just one example, the roots command uses the eigenvalues of a matrix to find all roots of a polynomial that is the characteristic polynomial of that matrix. Certainly we want all of the eigenvalues in this case. There are many others.
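NumPy's `roots` works the same way as the `roots` command mentioned here: per its documentation, it forms the companion matrix of the polynomial and computes that matrix's eigenvalues. A hand-built sketch for $p(\lambda)=\lambda^3-6\lambda^2+11\lambda-6=(\lambda-1)(\lambda-2)(\lambda-3)$:

```python
import numpy as np

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3), coefficients
# listed highest degree first.
coeffs = [1.0, -6.0, 11.0, -6.0]

# Companion matrix of the monic polynomial: its characteristic
# polynomial is p, so its eigenvalues are the roots of p.
C = np.array([[6.0, -11.0, 6.0],
              [1.0,   0.0, 0.0],
              [0.0,   1.0, 0.0]])

print(np.sort(np.linalg.eigvals(C).real))   # roots 1, 2, 3
print(np.sort(np.roots(coeffs).real))       # same answer via np.roots
```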

The natural idea, at least in the case where $A$ is diagonalizable (meaning that it has $n$ linearly independent eigenvectors), is to look at the eigenvalue decomposition
$$A=P D P^{-1}$$
where $D$ is a diagonal matrix with the eigenvalues of $A$ on the main diagonal and $P$ is a nonsingular matrix with the corresponding eigenvectors as its columns. Recall that in general the diagonal elements of $D$ and corresponding columns of $P$ may be complex even though $A$ is real.
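A small NumPy check of the decomposition (the example matrix is mine), chosen so that $D$ and $P$ are genuinely complex even though $A$ is real: a 90-degree rotation has eigenvalues $\pm i$.

```python
import numpy as np

# Real matrix with a complex eigenvalue decomposition:
# this 90-degree rotation has eigenvalues +i and -i.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

w, P = np.linalg.eig(A)    # eigenvalues w, eigenvectors as columns of P
D = np.diag(w)

# Reassemble A = P D P^{-1}.
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True
print(np.sort_complex(w))                         # a conjugate pair, -1j and 1j
```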

