# Multivariate Statistical Analysis (OLET5610)

## The Multinormal Distribution

The multinormal distribution with mean $\mu$ and covariance $\Sigma>0$ has the density
$$f(x)=|2 \pi \Sigma|^{-1 / 2} \exp \left\{-\frac{1}{2}(x-\mu)^{\top} \Sigma^{-1}(x-\mu)\right\} .$$
We write $X \sim N_p(\mu, \Sigma)$.
How is this multinormal distribution with mean $\mu$ and covariance $\Sigma$ related to the multivariate standard normal $N_p\left(0, \mathcal{I}_p\right)$ ? Through a linear transformation using the results of Sect. $4.3$, as shown in the next theorem.
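As a quick numerical sketch (not from the text; NumPy is assumed), the density formula above can be coded directly and sanity-checked: in one dimension it reduces to the familiar $N(0,1)$ density, and for a diagonal $\Sigma$ it factors into a product of univariate normal densities.

```python
import numpy as np

def multinormal_pdf(x, mu, sigma):
    """N_p(mu, Sigma) density: |2*pi*Sigma|^{-1/2} exp(-1/2 (x-mu)' Sigma^{-1} (x-mu))."""
    diff = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    det = np.linalg.det(2 * np.pi * np.asarray(sigma, dtype=float))
    quad = diff @ np.linalg.solve(np.asarray(sigma, dtype=float), diff)
    return det ** -0.5 * np.exp(-0.5 * quad)

# sanity check: for p = 1, mu = 0, Sigma = 1 this is the N(0,1) density
val = multinormal_pdf([0.0], [0.0], [[1.0]])
print(val)  # ≈ 0.3989 = 1/sqrt(2*pi)

# for a diagonal Sigma the joint density factors into univariate normals
mu = np.array([1.0, -1.0])
sigma = np.diag([2.0, 0.5])
x = np.array([0.3, 0.7])
joint = multinormal_pdf(x, mu, sigma)

def phi(z, m, v):
    # univariate N(m, v) density
    return np.exp(-0.5 * (z - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

product = phi(x[0], mu[0], 2.0) * phi(x[1], mu[1], 0.5)
print(joint, product)  # the two values agree
```

The function name `multinormal_pdf` and the particular $\mu$, $\Sigma$, and $x$ are illustrative choices, not anything defined in the text.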

Theorem 4.5 Let $X \sim N_p(\mu, \Sigma)$ and $Y=\Sigma^{-1 / 2}(X-\mu)$ (Mahalanobis transformation). Then
$$Y \sim N_p\left(0, \mathcal{I}_p\right),$$
i.e., the elements $Y_j \in \mathbb{R}$ are independent, one-dimensional $N(0,1)$ variables.

Proof Note that $(X-\mu)^{\top} \Sigma^{-1}(X-\mu)=Y^{\top} Y$. Application of (4.45) gives $\mathcal{J}=\Sigma^{1 / 2}$, hence
$$f_Y(y)=(2 \pi)^{-p / 2} \exp \left(-\frac{1}{2} y^{\top} y\right),$$
which is by (4.47) the pdf of a $N_p\left(0, \mathcal{I}_p\right)$.

Note that the above Mahalanobis transformation in fact yields a random variable $Y=\left(Y_1, \ldots, Y_p\right)^{\top}$ composed of independent one-dimensional $Y_j \sim N_1(0,1)$, since
$$\begin{aligned} f_Y(y) &=\frac{1}{(2 \pi)^{p / 2}} \exp \left(-\frac{1}{2} y^{\top} y\right) \\ &=\prod_{j=1}^p \frac{1}{\sqrt{2 \pi}} \exp \left(-\frac{1}{2} y_j^2\right) \\ &=\prod_{j=1}^p f_{Y_j}\left(y_j\right) . \end{aligned}$$
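Theorem 4.5 can be checked by simulation (an illustrative sketch, not from the text; NumPy is assumed): draw a large sample from $N_p(\mu, \Sigma)$, apply $Y=\Sigma^{-1/2}(X-\mu)$, and verify that $Y$ has mean approximately $0$ and covariance approximately $\mathcal{I}_p$.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 200_000

mu = np.array([1.0, -2.0, 0.5])
A = rng.normal(size=(p, p))
sigma = A @ A.T + p * np.eye(p)  # an arbitrary positive definite covariance

# symmetric inverse square root Sigma^{-1/2} via the spectral decomposition
vals, vecs = np.linalg.eigh(sigma)
sigma_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T

X = rng.multivariate_normal(mu, sigma, size=n)  # rows are draws X_i ~ N_p(mu, Sigma)
Y = (X - mu) @ sigma_inv_sqrt                   # Mahalanobis transformation, row-wise

print(np.round(Y.mean(axis=0), 2))  # ≈ zero vector
print(np.round(np.cov(Y.T), 2))     # ≈ identity matrix I_p
```

Because `sigma_inv_sqrt` is symmetric, multiplying each centered row on the right is the same as applying $\Sigma^{-1/2}$ on the left of each column vector $x_i-\mu$.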

## Sampling Distributions and Limit Theorems

In multivariate statistics, we observe the values of a multivariate random variable $X$ and obtain a sample $\left\{x_i\right\}_{i=1}^n$, as described in Chap. 3. Under random sampling, these observations are considered to be realizations of a sequence of i.i.d. random variables $X_1, \ldots, X_n$, where each $X_i$ is a $p$-variate random variable which replicates the parent or population random variable $X$. Some notational confusion is hard to avoid: $X_i$ is not the $i$-th component of $X$, but rather the $i$-th replicate of the $p$-variate random variable $X$ which provides the $i$-th observation $x_i$ of our sample.

For a given random sample $X_1, \ldots, X_n$, the idea of statistical inference is to analyze the properties of the population variable $X$. This is typically done by analyzing some characteristic $\theta$ of its distribution, like the mean, covariance matrix, etc. Statistical inference in a multivariate setup is considered in more detail in Chaps. 6 and 7.

Inference can often be performed using some observable function of the sample $X_1, \ldots, X_n$, i.e., a statistic. Examples of such statistics were given in Chap. 3: the sample mean $\bar{x}$ and the sample covariance matrix $\mathcal{S}$. To get an idea of the relationship between a statistic and the corresponding population characteristic, one has to derive the sampling distribution of the statistic. The next example gives some insight into the relation of $(\bar{x}, \mathcal{S})$ to $(\mu, \Sigma)$.

Example $4.15$ Consider an i.i.d. sample of $n$ random vectors $X_i \in \mathbb{R}^p$ where $\mathrm{E}\left(X_i\right)=$ $\mu$ and $\operatorname{Var}\left(X_i\right)=\Sigma$. The sample mean $\bar{x}$ and the covariance matrix $\mathcal{S}$ have already been defined in Sect. 3.3. It is easy to prove the following results:
$$\begin{aligned} \mathrm{E}(\bar{x}) &= n^{-1} \sum_{i=1}^n \mathrm{E}\left(X_i\right)=\mu \\ \operatorname{Var}(\bar{x}) &= n^{-2} \sum_{i=1}^n \operatorname{Var}\left(X_i\right)=n^{-1} \Sigma=\mathrm{E}\left(\bar{x} \bar{x}^{\top}\right)-\mu \mu^{\top} \end{aligned}$$
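The two identities above, $\mathrm{E}(\bar{x})=\mu$ and $\operatorname{Var}(\bar{x})=n^{-1}\Sigma$, can be illustrated by Monte Carlo (a sketch under assumed parameter values, not from the text; NumPy is assumed): draw many samples of size $n$, compute $\bar{x}$ for each, and compare the empirical mean and covariance of the replicated $\bar{x}$'s to $\mu$ and $\Sigma/n$.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, reps = 2, 50, 20_000  # dimension, sample size, Monte Carlo replications

mu = np.array([1.0, -1.0])
sigma = np.array([[2.0, 0.5], [0.5, 1.0]])

# draw `reps` independent samples of size n; shape (reps, n, p)
samples = rng.multivariate_normal(mu, sigma, size=(reps, n))
xbars = samples.mean(axis=1)  # one sample mean x_bar per replication

print(np.round(xbars.mean(axis=0), 2))   # ≈ mu, illustrating E(x_bar) = mu
print(np.round(np.cov(xbars.T) * n, 2))  # ≈ Sigma, illustrating Var(x_bar) = Sigma / n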

