# Numerical Analysis: Wiener Processes

## Wiener Processes

Our understanding of Brownian motion was developed by Einstein, Smoluchowski, and Norbert Wiener. It was Wiener who put it on a rigorous foundation, which is why we often refer to Wiener processes. The first property of a Wiener process is that it has independent increments: for $a<b<c$ the random variables $\boldsymbol{W}_c-\boldsymbol{W}_b$ and $\boldsymbol{W}_b-\boldsymbol{W}_a$ are independent. The second property is translation invariance: $\boldsymbol{W}_b-\boldsymbol{W}_a$ has the same distribution as $\boldsymbol{W}_{b-a}-\boldsymbol{W}_0$. The third property is that $\boldsymbol{W}_b-\boldsymbol{W}_a$ has finite variance. Because of the independent increments property and finite variance,
$$\begin{aligned} \operatorname{Var}\left[\boldsymbol{W}_c-\boldsymbol{W}_a\right] & =\operatorname{Var}\left[\left(\boldsymbol{W}_c-\boldsymbol{W}_b\right)+\left(\boldsymbol{W}_b-\boldsymbol{W}_a\right)\right] \\ & =\operatorname{Var}\left[\boldsymbol{W}_c-\boldsymbol{W}_b\right]+\operatorname{Var}\left[\boldsymbol{W}_b-\boldsymbol{W}_a\right]. \end{aligned}$$
Combined with translation invariance, we see that $\operatorname{Var}\left[\boldsymbol{W}_c-\boldsymbol{W}_a\right]$ must be proportional to $c-a$. The final property is that $\operatorname{Var}\left[\boldsymbol{W}_1-\boldsymbol{W}_0\right]=I$. Then $\operatorname{Var}\left[\boldsymbol{W}_c-\boldsymbol{W}_a\right]=(c-a) I$. It is a standard Wiener process if, in addition, $\boldsymbol{W}_0=\mathbf{0}$ with probability one.

The Central Limit Theorem (Theorem 7.4) implies that each $\boldsymbol{W}_t$ and difference $\boldsymbol{W}_b-\boldsymbol{W}_a$ must be normally distributed.

We can create approximate Wiener processes by selecting a time-step $h>0$ and taking $\boldsymbol{W}_{h, t}$ to be the linear interpolant over $t$ of $\boldsymbol{W}_{h, k h}$, $k=0,1,2, \ldots$, with $\boldsymbol{W}_{h, 0}=\mathbf{0}$ and $\boldsymbol{W}_{h,(k+1) h}=\boldsymbol{W}_{h, k h}+\sqrt{h} \boldsymbol{Z}_k^{(h)}$, where $\boldsymbol{Z}_k^{(h)} \sim \operatorname{Normal}(\mathbf{0}, I)$ is an independent increment. The trouble with this approach is that we cannot take meaningful limits as $h \downarrow 0$: unless we have some relationship between $\boldsymbol{Z}_k^{(h)}$ and $\boldsymbol{Z}_{2 k}^{(h / 2)}$, for example, we cannot expect the $\boldsymbol{W}_{h, t}$ to converge as $h \downarrow 0$.
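The fixed-step construction above can be sketched directly: sum independent $\sqrt{h}\,\operatorname{Normal}(0,1)$ increments starting from zero. The function name and NumPy details below are our own choices, not from the text; the scalar case is shown.

```python
import numpy as np

def wiener_path(h, n_steps, rng):
    """Approximate scalar Wiener process sampled on the grid t = 0, h, 2h, ...

    Each increment W_{h,(k+1)h} - W_{h,kh} is sqrt(h) * Z_k with
    Z_k ~ Normal(0, 1), independent across k.
    """
    increments = np.sqrt(h) * rng.standard_normal(n_steps)
    return np.concatenate([[0.0], np.cumsum(increments)])  # W_{h,0} = 0

rng = np.random.default_rng(0)
h, n = 0.01, 100                       # paths on [0, 1]
endpoints = np.array([wiener_path(h, n, rng)[-1] for _ in range(20000)])
var_w1 = endpoints.var()               # should be close to Var[W_1] = 1
```

With many sampled paths, the empirical variance of $W_{h,1}$ is close to $1$, matching $\operatorname{Var}[W_1-W_0]=1$.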

To see how to create a convergent sequence $\boldsymbol{W}_{h, k h}$, we focus on the scalar case. The vector case can be dealt with by treating each component independently.

Start with the values $W_{1,0}=0$ and $W_{1,1} \sim \operatorname{Normal}(0,1)$ and “fill in” the values between $t=0$ and $t=1$. For $h=1 / 2$ we set $W_{1 / 2,0}=W_{1,0}$ and $W_{1 / 2,1}=W_{1,1}$. However, we need to determine a value $W_{1 / 2,1 / 2}$ so that $W_{1 / 2,1 / 2}-W_{1 / 2,0}$ and $W_{1 / 2,1}-W_{1 / 2,1 / 2}$ are independent and are both distributed as $\operatorname{Normal}(0,1 / 2)$. We generate an independent normally distributed sample $U_1^{(1 / 2)} \sim \operatorname{Normal}(0,1 / 4)$ and set $W_{1 / 2,1 / 2}=\frac{1}{2}\left(W_{1,0}+W_{1,1}\right)+U_1^{(1 / 2)}$. We chose $U_1^{(1 / 2)}$ to have variance $1 / 4$ so that the variances add properly:
$$\frac{1}{2}=\operatorname{Var}\left[W_{1 / 2,1 / 2}\right]=\operatorname{Var}\left[\frac{1}{2}\left(W_{1,0}+W_{1,1}\right)\right]+\operatorname{Var}\left[U_1^{(1 / 2)}\right]=\frac{1}{4}+\operatorname{Var}\left[U_1^{(1 / 2)}\right].$$
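This midpoint "fill in" step can be repeated at every scale: over an interval of length $h$, the midpoint given the endpoints is their average plus an independent $\operatorname{Normal}(0, h/4)$ perturbation, so the two new half-increments are independent $\operatorname{Normal}(0, h/2)$ samples. A minimal sketch of the repeated refinement (names and the number of refinement levels are our own):

```python
import numpy as np

def refine(path, h, rng):
    """Halve the grid spacing by filling in midpoints.

    Each midpoint is the average of its neighbours plus an independent
    Normal(0, h/4) term, so the two new increments on either side are
    independent Normal(0, h/2) samples, as required.
    """
    mid = 0.5 * (path[:-1] + path[1:]) \
        + np.sqrt(h / 4.0) * rng.standard_normal(len(path) - 1)
    out = np.empty(2 * len(path) - 1)
    out[0::2] = path                      # keep the existing grid values
    out[1::2] = mid                       # insert the new midpoints
    return out                            # grid spacing is now h / 2

rng = np.random.default_rng(1)
w = np.array([0.0, rng.standard_normal()])  # W_{1,0} = 0, W_{1,1} ~ Normal(0,1)
h = 1.0
for _ in range(10):                       # refine down to h = 2**-10
    w = refine(w, h, rng)
    h /= 2.0
```

Because each refinement keeps all previously generated values, the paths at successive spacings $h, h/2, h/4, \ldots$ are consistent with one another, which is exactly the relationship missing from the naive construction.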

## Itô Stochastic Differential Equations

Interpreting an ordinary differential equation (ODE)
$$\frac{d \boldsymbol{x}}{d t}=\boldsymbol{f}(t, \boldsymbol{x}(t)), \quad \boldsymbol{x}\left(t_0\right)=\boldsymbol{x}_0$$
can be done in terms of integrals:
$$\boldsymbol{x}(t)=\boldsymbol{x}_0+\int_{t_0}^t \boldsymbol{f}(s, \boldsymbol{x}(s))\, d s \quad \text { for all } t \geq t_0 .$$
Interpreting the stochastic differential equation (SDE)
$$d \boldsymbol{X}_t=\boldsymbol{f}\left(t, \boldsymbol{X}_t\right) d t+\sigma\left(t, \boldsymbol{X}_t\right) d \boldsymbol{W}_t$$ in the sense of Itô in terms of integrals $$\boldsymbol{X}_t=\boldsymbol{X}_{t_0}+\int_{t_0}^t\left[\boldsymbol{f}\left(s, \boldsymbol{X}_s\right) d s+\sigma\left(s, \boldsymbol{X}_s\right) d \boldsymbol{W}_s\right]$$ is a more difficult task, as we need to interpret the integral $\int_{t_0}^t \sigma\left(s, \boldsymbol{X}_s\right) d \boldsymbol{W}_s$, which involves products of random quantities. A summary of Itô calculus can be found in [191].
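One way to make the integral form concrete numerically is the Euler–Maruyama scheme, which advances the solution step by step, replacing $d\boldsymbol{W}_t$ over a step of length $\Delta t$ by a $\sqrt{\Delta t}\,\operatorname{Normal}(0,1)$ increment. A minimal scalar sketch; the drift and diffusion used in the demo are an example of our own (an Ornstein–Uhlenbeck-type equation), not from the text:

```python
import numpy as np

def euler_maruyama(f, sigma, x0, t0, t1, n, rng):
    """Endpoint of one sample path of dX_t = f(t,X_t) dt + sigma(t,X_t) dW_t.

    Each Wiener increment dW over a step of length dt is drawn as
    sqrt(dt) * Normal(0, 1), matching the Wiener-increment distribution.
    """
    dt = (t1 - t0) / n
    t, x = t0, x0
    for _ in range(n):
        dw = np.sqrt(dt) * rng.standard_normal()
        x = x + f(t, x) * dt + sigma(t, x) * dw
        t += dt
    return x

# Demo: f(t, x) = -x, sigma = 1, X_0 = 1, for which E[X_1] = exp(-1).
rng = np.random.default_rng(2)
samples = [euler_maruyama(lambda t, x: -x, lambda t, x: 1.0,
                          1.0, 0.0, 1.0, 200, rng) for _ in range(5000)]
mean_x1 = float(np.mean(samples))
```

Averaging many sample endpoints recovers the known mean $\mathbb{E}[X_1]=e^{-1}$ for this example, up to Monte Carlo and discretization error.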

A specific example we can consider is $\int_0^t W_s\, d W_s$. If we used standard methods from calculus, the change of variable $u(s)=\frac{1}{2} W_s^2$ would give $d u=W_s\, d W_s$ and so $\int_0^t W_s\, d W_s=\left.\frac{1}{2} W_s^2\right|_{s=0} ^{s=t}=\frac{1}{2} W_t^2$. Note that its expectation is $\frac{1}{2} t$. On the other hand, if we approximate the integral by the sum $$\sum_{k=0}^{n-1} W_{h k}\left(W_{h(k+1)}-W_{h k}\right)$$
where $t=n h$, we get a random quantity whose expectation is zero: $W_{h(k+1)}-W_{h k}$ is independent of $W_{h k}=W_{h k}-W_0$ by the independent increments property, so $\mathbb{E}\left[W_{h k}\left(W_{h(k+1)}-W_{h k}\right)\right]=\mathbb{E}\left[W_{h k}\right] \mathbb{E}\left[W_{h(k+1)}-W_{h k}\right]=0$. Taking the limit as $h \rightarrow 0$ gives $\mathbb{E}\left[\int_0^t W_s\, d W_s\right]=0$, not $\frac{1}{2} t$. Indeed, the Itô integral evaluates to $\int_0^t W_s\, d W_s=\frac{1}{2} W_t^2-\frac{1}{2} t$, which is consistent with a zero expectation since $\mathbb{E}\left[W_t^2\right]=t$.
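This discrepancy is easy to check by Monte Carlo: the left-endpoint (Itô) sums have sample mean near $0$, while the calculus-style answer $\frac{1}{2} W_t^2$ has sample mean near $\frac{1}{2} t$. A small sketch with parameter choices of our own:

```python
import numpy as np

def ito_sum(n, t, rng):
    """Left-endpoint (Ito) sum sum_k W_{hk} (W_{h(k+1)} - W_{hk}), h = t/n."""
    h = t / n
    w = np.concatenate([[0.0], np.cumsum(np.sqrt(h) * rng.standard_normal(n))])
    return np.sum(w[:-1] * np.diff(w)), w[-1]

rng = np.random.default_rng(3)
t, n, trials = 1.0, 256, 20000
results = [ito_sum(n, t, rng) for _ in range(trials)]
sums = np.array([s for s, _ in results])
ends = np.array([w for _, w in results])
mean_ito = float(sums.mean())                 # near 0, not t/2
mean_naive = float(0.5 * (ends**2).mean())    # near E[W_t^2]/2 = t/2 = 0.5
```

The gap between the two sample means is close to $\frac{1}{2}t$, matching the correction term in $\int_0^t W_s\, dW_s = \frac{1}{2}W_t^2 - \frac{1}{2}t$.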
