## Algorithms based on matrix vector products

If we are in a situation where it is hard to deal with the whole square matrix $A$, but we are able to compute a matrix-vector product $A \mathbf{v}$, are we still able to compute eigenvalues of $A$, or to solve an equation $A \mathbf{x}=\mathbf{b}$? Examples of such a situation include

• A sparse matrix $A$; that is, a matrix with relatively few nonzero entries. While the matrix may be huge, computing a product $A \mathbf{v}$ may be doable.
• A situation where the matrix $A$ represents the action of some system in which we can give inputs and measure outputs. If the input is $\mathbf{u}$ and the output is $\mathbf{y}$, then by giving the system the input $\mathbf{u}$ and measuring the output $\mathbf{y}$ we would in effect be computing the product $\mathbf{y}=A \mathbf{u}$. In this situation we would not know the (complete) inner workings of the system, but we assume (or just guess as a first try) that it can be modeled or approximated by a simple matrix multiplication.
• The matrices $M$ and $P$ from Section 7.1.
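To make the first bullet concrete, here is a minimal sketch of a sparse matrix-vector product. The triple-list representation and the helper `sparse_matvec` are illustrative choices, not from the text; the point is that the cost scales with the number of nonzeros rather than with $n^2$.

```python
import numpy as np

# Hypothetical sparse format: store only the nonzero entries
# of A as (row, col, value) triples.
def sparse_matvec(triples, v, n):
    """Compute y = A v touching only the nonzero entries of A."""
    y = np.zeros(n)
    for i, j, a in triples:
        y[i] += a * v[j]
    return y

# A 3x3 matrix with only three nonzero entries
triples = [(0, 0, 2.0), (1, 2, -1.0), (2, 1, 4.0)]
v = np.array([1.0, 2.0, 3.0])
y = sparse_matvec(triples, v, 3)  # -> [2., -3., 8.]
```

In practice one would use a compressed storage scheme (e.g. CSR), but the principle is the same: the full matrix is never formed, only the action $\mathbf{v} \mapsto A\mathbf{v}$.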
Here is a first algorithm that computes the eigenvalue of largest modulus, in the case where it has geometric multiplicity 1.
Theorem 7.2.1 (Power method) Let $A \in \mathbb{C}^{n \times n}$ have eigenvalues $\{\lambda_1, \ldots, \lambda_n\}$ with $\lambda_1 > \max_{j=2, \ldots, n}\left|\lambda_j\right|$. Let $\mathbf{v}$ be such that $\mathbf{v} \notin \operatorname{Ker} \prod_{j=2}^n\left(A-\lambda_j I\right)$. Then the iteration
$$\mathbf{v}_0:=\mathbf{v}, \quad \mathbf{v}_{k+1}=\frac{1}{\left\|A \mathbf{v}_k\right\|} A \mathbf{v}_k, \quad \mu_k:=\frac{\mathbf{v}_k^* A \mathbf{v}_k}{\mathbf{v}_k^* \mathbf{v}_k}, \quad k=1,2, \ldots,$$
has the property that $\lambda_1=\lim_{k \rightarrow \infty} \mu_k$ and $\mathbf{w}:=\lim_{k \rightarrow \infty} \mathbf{v}_k$ is a unit eigenvector at $\lambda_1$; thus $A \mathbf{w}=\lambda_1 \mathbf{w}$ and $\|\mathbf{w}\|=1$.
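The iteration in Theorem 7.2.1 uses $A$ only through products $A\mathbf{v}_k$, so it fits all the situations listed above. A minimal sketch (the function name and the test matrix are illustrative choices, not from the text):

```python
import numpy as np

def power_method(matvec, v, iters=200):
    """Power method: needs only the action v -> A v, never A itself."""
    v = v / np.linalg.norm(v)
    for _ in range(iters):
        w = matvec(v)          # the only access to A
        v = w / np.linalg.norm(w)
    # Rayleigh quotient mu_k = (v* A v) / (v* v) approximates lambda_1
    mu = (v @ matvec(v)) / (v @ v)
    return mu, v

# Example: eigenvalues 2 and 1, dominant eigenvector (1, 0)
A = np.array([[2.0, 1.0],
              [0.0, 1.0]])
mu, w = power_method(lambda x: A @ x, np.array([1.0, 1.0]))
# mu -> 2.0, w -> [1., 0.]
```

The error contracts like $(|\lambda_2|/\lambda_1)^k$, so convergence is fast when the dominant eigenvalue is well separated.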

## Why use matrices when computing roots

We saw in Section 5.4 that computing the QR factorization of a matrix requires only simple arithmetic: addition, subtraction, multiplication, division, and the taking of square roots. Amazingly, applying it repeatedly in a clever way provides an excellent method for computing the eigenvalues of a matrix. This is surprising, since finding roots of a polynomial is not as easy as performing simple algebraic operations (other than for polynomials of degree 1, 2, 3, and 4, using the quadratic formula for degree 2 and its generalizations; for polynomials of degree 5 and higher, Niels Henrik Abel showed in 1823 that no algebraic formula for the roots exists). In fact, the method works so well that to find the roots of a polynomial one can simply build its companion matrix and then apply the QR algorithm to compute its eigenvalues, which are exactly those roots. Let us give an example.

Example 7.3.1 Let $p(t)=t^3-6 t^2+11 t-6=(t-1)(t-2)(t-3)$. Its companion matrix is
$$A=\left(\begin{array}{ccc} 0 & 0 & 6 \\ 1 & 0 & -11 \\ 0 & 1 & 6 \end{array}\right) .$$
Computing its QR factorization, we find
$$A=Q R=\left(\begin{array}{lll} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array}\right)\left(\begin{array}{ccc} 1 & 0 & -11 \\ 0 & 1 & 6 \\ 0 & 0 & 6 \end{array}\right) .$$
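The basic (unshifted) QR iteration repeatedly factors $A_k = Q_k R_k$ and sets $A_{k+1} = R_k Q_k$; each step is a similarity transform, so the eigenvalues are preserved, and the iterates converge to an upper triangular matrix whose diagonal carries the eigenvalues. A minimal sketch on the companion matrix of Example 7.3.1 (a practical implementation would add shifts and deflation):

```python
import numpy as np

# Companion matrix of p(t) = t^3 - 6t^2 + 11t - 6, as in Example 7.3.1
A = np.array([[0.0, 0.0,   6.0],
              [1.0, 0.0, -11.0],
              [0.0, 1.0,   6.0]])

for _ in range(500):
    Q, R = np.linalg.qr(A)
    A = R @ Q  # similar to the previous A: same eigenvalues

roots = sorted(np.diag(A))  # -> approximately [1.0, 2.0, 3.0]
```

The diagonal of the limit recovers the roots $1, 2, 3$ of $p$, ordered along the diagonal by decreasing modulus.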

