## The dual space

Let $V$ be a vector space over the field $\mathbb{F}$. A linear map $f: V \rightarrow \mathbb{F}$ taking values in the underlying field is called a linear functional. Linear functionals, like all functions with values in a field, admit addition and scalar multiplication:
$$(f+g)(\mathbf{v}):=f(\mathbf{v})+g(\mathbf{v}), \qquad (c f)(\mathbf{v}):=c f(\mathbf{v}) .$$
With these operations the linear functionals form a vector space $V^{\prime}$, the dual space of $V$. Thus
$$V^{\prime}=\{f: V \rightarrow \mathbb{F}: f \text { is linear }\} .$$
The first observation is that the dual space of a finite-dimensional space $V$ has the same dimension as $V$.
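Before proving this, it may help to see a linear functional concretely. A minimal numerical sketch (assuming NumPy; the row vector $a$ below is an arbitrary illustrative choice, not from the text): every linear functional on $\mathbb{R}^n$ acts as $f(\mathbf{v}) = a\mathbf{v}$ for some fixed row vector $a$.

```python
import numpy as np

# A linear functional on R^3 is determined by a fixed row vector a:
# f(v) = a . v.  The choice of a here is arbitrary and illustrative.
a = np.array([2.0, -1.0, 3.0])

def f(v):
    """Linear functional f: R^3 -> R given by the dot product with a."""
    return a @ v

u = np.array([1.0, 0.0, 1.0])
w = np.array([0.0, 2.0, -1.0])
c = 5.0

# Linearity: f(u + w) = f(u) + f(w) and f(c u) = c f(u).
assert np.isclose(f(u + w), f(u) + f(w))
assert np.isclose(f(c * u), c * f(u))
```

The same identification works over any $\mathbb{F}^n$: the functionals are exactly the $1 \times n$ row vectors, which already hints at why $\dim V^{\prime} = \dim V$.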

Proposition 6.3.1 Let $V$ be a finite-dimensional space, and $V^{\prime}$ be its dual space. Then
$$\operatorname{dim} V=\operatorname{dim} V^{\prime} .$$
When $\{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$ is a basis for $V$, then a basis for $V^{\prime}$ is given by $\{f_1, \ldots, f_n\}$, where $f_j \in V^{\prime}$, $j=1, \ldots, n$, is such that
$$f_j\left(\mathbf{v}_k\right)= \begin{cases}0 & \text { if } k \neq j, \\ 1 & \text { if } k=j .\end{cases}$$
The basis $\{f_1, \ldots, f_n\}$ above is called the dual basis of $\{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$.

Proof. When $\mathbf{v}=\sum_{k=1}^n c_k \mathbf{v}_k$, then $f_j(\mathbf{v})=c_j$, yielding a well-defined linear functional on $V$. Let us show that $\{f_1, \ldots, f_n\}$ is linearly independent. For this, suppose that $d_1 f_1+\cdots+d_n f_n=\mathbf{0}$. Then
$$0=\mathbf{0}\left(\mathbf{v}_k\right)=\left(\sum_{j=1}^n d_j f_j\right)\left(\mathbf{v}_k\right)=\sum_{j=1}^n d_j f_j\left(\mathbf{v}_k\right)=d_k, \quad k=1, \ldots, n,$$
showing linear independence. Next, we need to show that $\operatorname{Span}\{f_1, \ldots, f_n\}=V^{\prime}$, so let $f \in V^{\prime}$ be arbitrary. We claim that
$$f=f\left(\mathbf{v}_1\right) f_1+\cdots+f\left(\mathbf{v}_n\right) f_n .$$
Indeed, for $k=1, \ldots, n$, we have that
$$f\left(\mathbf{v}_k\right)=f\left(\mathbf{v}_k\right) f_k\left(\mathbf{v}_k\right)=\sum_{j=1}^n f\left(\mathbf{v}_j\right) f_j\left(\mathbf{v}_k\right) .$$
Since $f$ and $\sum_{j=1}^n f(\mathbf{v}_j) f_j$ agree on a basis, they agree on all of $V$, proving the claim.
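The dual-basis condition $f_j(\mathbf{v}_k)=\delta_{jk}$ can also be checked numerically. If the basis vectors are the columns of a matrix $B$ and each $f_j$ is represented by a row vector, stacking those rows into a matrix $F$ turns the condition into $FB=I$, i.e. $F=B^{-1}$. A minimal sketch (assuming NumPy; the basis $B$ is an arbitrary invertible example, not from the text):

```python
import numpy as np

# Basis vectors v_1, v_2, v_3 of R^3, stacked as the columns of B
# (an arbitrary invertible example).
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# If f_j is represented by the j-th row of F, then f_j(v_k) = (F @ B)[j, k],
# so the dual-basis condition F @ B = I forces F = B^{-1}.
F = np.linalg.inv(B)

# Check f_j(v_k) = delta_{jk}.
assert np.allclose(F @ B, np.eye(3))

# As in the proof, f_j extracts the j-th coordinate:
# for v = 2 v_1 - v_2 + 3 v_3 we recover the coefficients (2, -1, 3).
v = B @ np.array([2.0, -1.0, 3.0])
coords = F @ v
assert np.allclose(coords, [2.0, -1.0, 3.0])
```

This mirrors the first line of the proof: applying $f_j$ to $\mathbf{v}=\sum_k c_k \mathbf{v}_k$ returns the coordinate $c_j$.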

## Matrices you can’t write down, but would still like to use

In previous chapters we have done computations with matrices to learn the concepts, and they were all small matrices (at most $8 \times 8$). Bigger matrices (say, with rows and columns numbering in the thousands) you may not want to deal with by hand, but working with them in a spreadsheet or other software seems doable. But what do we do when matrices are simply too big to store anywhere (say, if the number of rows or columns runs into the billions), or if it is simply impossible to gather all the data? Can we still work with the matrix?
Here are two examples to begin with, both used in search engines:

• A matrix $P$ where there is a row and column for every existing web page, and the $(i, j)$ th entry $p_{i j}$ represents the probability that you go from web page $i$ to web page $j$. Currently (October 2015), there are about $4.76$ billion indexed web pages, so this matrix is huge. However, if you have a way of looking at a page $i$ and determining all the probabilities $p_{i j}$, then determining a row of this matrix is not a big deal.
• A matrix $M$ where there is a row for every web page, and a column for every search word. The $(i, j)$ th entry $m_{i j}$ of this matrix is set to be 1 if search word $j$ appears on page $i$, and 0 otherwise. Again, this matrix is huge, but determining row $i$ is easily done by looking at this particular page.

One big difference between these two matrices is obvious: $P$ is square and $M$ is not. Thus $P$ has eigenvectors, and $M$ does not. In fact, it is the eigenvector of $P^T$ at the eigenvalue 1 that is of interest. Notice that for these matrices it may not be convenient to use the numbers $1,2, \ldots$ as indices for the rows and columns, as we usually do.
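The observation that a row of $P$ can be generated on demand is exactly what makes such matrices usable: the eigenvector of $P^T$ at the eigenvalue 1 can be approximated by power iteration touching only one row at a time, without ever forming $P$ as a whole. A toy sketch (assuming NumPy; the 4-page transition matrix and the `row` accessor are invented stand-ins for a real crawler):

```python
import numpy as np

# Toy transition probabilities for a 4-page "web" (invented example).
# In practice P is far too large to store, so we only ever produce one
# row at a time.
P_rows = {
    0: np.array([0.0, 0.5, 0.5, 0.0]),
    1: np.array([0.0, 0.0, 1.0, 0.0]),
    2: np.array([0.5, 0.0, 0.0, 0.5]),
    3: np.array([0.0, 0.0, 1.0, 0.0]),
}

def row(i):
    """Stand-in for 'look at page i and determine its probabilities'."""
    return P_rows[i]

def step(x, n):
    """One power-iteration step x <- x P, accumulated row by row,
    i.e. without ever holding P as a single array."""
    y = np.zeros(n)
    for i in range(n):
        y += x[i] * row(i)   # x P = sum_i x_i * (row i of P)
    return y / y.sum()       # keep x a probability vector

x = np.full(4, 0.25)         # start from the uniform distribution
for _ in range(100):
    x = step(x, 4)

# x now approximates the stationary distribution: x P = x, i.e. an
# eigenvector of P^T at the eigenvalue 1.
assert np.allclose(x, step(x, 4))
```

For the real web-scale matrix the loop over rows would be replaced by streaming rows from crawl data, and pages would be keyed by URL rather than by the indices $1, 2, \ldots$, in line with the remark above.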

