# Generalized Linear Model (MAST30025)

## Least-Squares Regression

The model just discussed can be expressed in matrix form by noting that
$$\begin{aligned} Y_1 &=\beta_0+\beta_1 x_{11}+\cdots+\beta_{p-1} x_{1, p-1}+E_1 \\ Y_2 &=\beta_0+\beta_1 x_{21}+\cdots+\beta_{p-1} x_{2, p-1}+E_2 \\ &\;\;\vdots \\ Y_n &=\beta_0+\beta_1 x_{n 1}+\cdots+\beta_{p-1} x_{n, p-1}+E_n \end{aligned}$$
or
$$\mathbf{Y}=\mathbf{X} \boldsymbol{\beta}+\mathbf{E}$$
where the $n \times 1$ random vector $\mathbf{Y}=\left(Y_1, \ldots, Y_n\right)^{\prime}$, the $p \times 1$ vector $\boldsymbol{\beta}=\left(\beta_0, \beta_1, \ldots, \beta_{p-1}\right)^{\prime}$, the $n \times 1$ random vector $\mathbf{E}=\left(E_1, \ldots, E_n\right)^{\prime}$, and the $n \times p$ matrix
$$\mathbf{X}=\left[\begin{array}{cccc} 1 & x_{11} & \cdots & x_{1, p-1} \\ 1 & x_{21} & \cdots & x_{2, p-1} \\ \vdots & \vdots & & \vdots \\ 1 & x_{n 1} & \cdots & x_{n, p-1} \end{array}\right]$$
Furthermore, $\mathrm{E}\left(E_i\right)=0$ for all $i=1, \ldots, n$ implies $\mathrm{E}(\mathbf{E})=\mathbf{0}_{n \times 1}$. Therefore $\mathrm{E}(\mathbf{Y})=\mathbf{X} \boldsymbol{\beta}$. For the present, assume that the $E_i$'s are independent, identically distributed random variables with $\operatorname{var}\left(E_i\right)=\sigma^2$ for all $i=1, \ldots, n$. Since the $E_i$'s are independent, $\operatorname{cov}\left(E_i, E_j\right)=0$ for all $i \neq j$. Therefore, the covariance matrix of $\mathbf{E}$ is given by $\boldsymbol{\Sigma}=\operatorname{cov}(\mathbf{E})=\sigma^2 \mathbf{I}_n$. In later sections of this chapter more complicated error structures are considered.
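As a concrete sketch of this setup (all numbers below are made up for illustration; the design, $\boldsymbol{\beta}$, and $\sigma$ are assumptions, not values from the text), one can build $\mathbf{X}$ with a leading column of ones, draw iid errors with $\operatorname{cov}(\mathbf{E})=\sigma^2\mathbf{I}_n$, and form $\mathbf{Y}=\mathbf{X}\boldsymbol{\beta}+\mathbf{E}$, then recover $\boldsymbol{\beta}$ approximately with the least-squares formula $\hat{\boldsymbol{\beta}}=(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{Y}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: n = 50 observations, p = 3 coefficients (beta_0, beta_1, beta_2)
n, p = 50, 3
beta = np.array([2.0, -1.0, 0.5])   # hypothetical "true" coefficient vector
sigma = 0.3

# Design matrix X: a column of ones for the intercept, then p - 1 predictor columns
X = np.column_stack([np.ones(n), rng.uniform(-1.0, 1.0, size=(n, p - 1))])

E = rng.normal(0.0, sigma, size=n)  # iid errors: E(E) = 0, cov(E) = sigma^2 I_n
Y = X @ beta + E                    # the model Y = X beta + E in matrix form

# Least-squares estimate beta_hat = (X'X)^{-1} X'Y, computed via a linear solve
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
print(beta_hat)                     # close to (2.0, -1.0, 0.5) for this noise level
```

Using `np.linalg.solve` on the normal equations avoids explicitly inverting $\mathbf{X}'\mathbf{X}$, which is both cheaper and numerically safer.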

Note that $\boldsymbol{\Sigma}$ has been used to represent the covariance matrix of the $n \times 1$ random error vector $\mathbf{E}$. However, $\boldsymbol{\Sigma}$ is also the covariance matrix of the $n \times 1$ random vector $\mathbf{Y}$ since
$$\begin{aligned} \operatorname{cov}(\mathbf{Y}) &=\mathrm{E}\left[(\mathbf{Y}-\mathbf{X} \boldsymbol{\beta})(\mathbf{Y}-\mathbf{X} \boldsymbol{\beta})^{\prime}\right] \\ &=\mathrm{E}\left[\mathbf{E} \mathbf{E}^{\prime}\right] \\ &=\boldsymbol{\Sigma} \end{aligned}$$
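The identity $\operatorname{cov}(\mathbf{Y})=\boldsymbol{\Sigma}=\sigma^2\mathbf{I}_n$ can be checked by simulation: the fixed term $\mathbf{X}\boldsymbol{\beta}$ shifts $\mathbf{Y}$ but contributes nothing to its covariance. The sketch below uses a made-up toy design (all values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Small toy model: n = 4 observations, many independent realizations of Y
n, reps, sigma = 4, 200_000, 2.0
X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])
beta = np.array([1.0, 0.5])

E = rng.normal(0.0, sigma, size=(reps, n))  # reps independent draws of the error vector
Y = X @ beta + E                            # each row is one realization of Y = X beta + E

S = np.cov(Y, rowvar=False)                 # sample covariance matrix of Y (n x n)
print(np.round(S, 2))                       # approximately sigma^2 I_n = 4 I_4
```

The sample covariance matrix is close to $4\,\mathbf{I}_4$: diagonal entries near $\sigma^2$, off-diagonal entries near zero, exactly as the derivation predicts.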

## Best Linear Unbiased Estimators

In many problems it is of interest to estimate linear combinations of $\beta_0, \ldots, \beta_{p-1}$, say, $\mathbf{t}^{\prime} \boldsymbol{\beta}$, where $\mathbf{t}$ is any nonzero $p \times 1$ vector of known constants. In the next definition the “best” linear unbiased estimator of $\mathbf{t}^{\prime} \boldsymbol{\beta}$ is identified.

Definition 5.2.1 Best Linear Unbiased Estimator $(B L U E)$ of $\mathbf{t}^{\prime} \boldsymbol{\beta}$ : The best linear unbiased estimator of $\mathbf{t}^{\prime} \boldsymbol{\beta}$ is
(i) a linear function of the observed vector $\mathbf{Y}$, that is, a function of the form $\mathbf{a}^{\prime} \mathbf{Y}+a_0$ where $\mathbf{a}$ is an $n \times 1$ vector of constants and $a_0$ is a scalar and
(ii) the unbiased estimator of $\mathbf{t}^{\prime} \boldsymbol{\beta}$ with the smallest variance.
In the next important theorem $\mathbf{t}^{\prime} \hat{\boldsymbol{\beta}}=\mathbf{t}^{\prime}\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime} \mathbf{Y}$ is shown to be the BLUE of $\mathbf{t}^{\prime} \boldsymbol{\beta}$ when $\mathrm{E}(\mathbf{E})=\mathbf{0}$ and $\operatorname{cov}(\mathbf{E})=\sigma^2 \mathbf{I}_n$. The theorem is called the Gauss-Markov theorem.

Theorem 5.2.1 Let $\mathbf{Y}=\mathbf{X} \boldsymbol{\beta}+\mathbf{E}$ where $\mathrm{E}(\mathbf{E})=\mathbf{0}$ and $\operatorname{cov}(\mathbf{E})=\sigma^2 \mathbf{I}_n$. Then the least-squares estimator of $\mathbf{t}^{\prime} \boldsymbol{\beta}$ is given by $\mathbf{t}^{\prime} \hat{\boldsymbol{\beta}}=\mathbf{t}^{\prime}\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime} \mathbf{Y}$ and $\mathbf{t}^{\prime} \hat{\boldsymbol{\beta}}$ is the BLUE of $\mathbf{t}^{\prime} \boldsymbol{\beta}$.
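Before the proof, the theorem's conclusion can be illustrated numerically (all numbers below are made-up assumptions). An estimator $\mathbf{a}'\mathbf{Y}$ is unbiased for $\mathbf{t}'\boldsymbol{\beta}$ for every $\boldsymbol{\beta}$ exactly when $\mathbf{X}'\mathbf{a}=\mathbf{t}$; the least-squares choice corresponds to $\mathbf{a}^*=\mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{t}$, and any competitor is $\mathbf{a}^*+\mathbf{z}$ with $\mathbf{X}'\mathbf{z}=\mathbf{0}$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical design, t, and error variance (illustrative only)
n, p = 10, 3
X = np.column_stack([np.ones(n), rng.uniform(-1.0, 1.0, size=(n, p - 1))])
t = np.array([1.0, 2.0, -1.0])
sigma2 = 1.0

XtX_inv = np.linalg.inv(X.T @ X)

# BLUE: t' beta_hat = a_star' Y with a_star = X (X'X)^{-1} t
a_star = X @ XtX_inv @ t
var_blue = sigma2 * (a_star @ a_star)        # equals sigma^2 t'(X'X)^{-1} t

# A competing linear unbiased estimator: a = a_star + z with X'z = 0,
# so X'a = t still holds and a'Y remains unbiased for t'beta
z = rng.normal(size=n)
z -= X @ XtX_inv @ (X.T @ z)                 # project z onto the null space of X'
a_alt = a_star + z
var_alt = sigma2 * (a_alt @ a_alt)

print(var_blue <= var_alt)                   # Gauss-Markov: the BLUE has smaller variance
```

Because $\mathbf{a}^{*\prime}\mathbf{z}=\mathbf{t}'(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{z}=0$, the competitor's variance is $\sigma^2(\mathbf{a}^{*\prime}\mathbf{a}^*+\mathbf{z}'\mathbf{z})$, which exceeds the BLUE's variance whenever $\mathbf{z}\neq\mathbf{0}$.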

Proof: First, the least-squares estimator of $\mathbf{t}^{\prime} \boldsymbol{\beta}$ is shown to be $\mathbf{t}^{\prime} \hat{\boldsymbol{\beta}}$. Let $\mathbf{T}$ be a $p \times p$ nonsingular matrix such that $\mathbf{T}=\left(\mathbf{t} \mid \mathbf{T}_0\right)$, where $\mathbf{t}$ is a $p \times 1$ vector and $\mathbf{T}_0$ is a $p \times(p-1)$ matrix. If $\mathbf{R}=\left(\mathbf{T}^{\prime}\right)^{-1}$ then
$$\begin{aligned} \mathbf{Y} &=\mathbf{X} \boldsymbol{\beta}+\mathbf{E} \\ &=\mathbf{X R T}^{\prime} \boldsymbol{\beta}+\mathbf{E} \\ &=\mathbf{U} \boldsymbol{\omega}+\mathbf{E} \end{aligned}$$
where $\mathbf{U}=\mathbf{X R}$ and
$$\boldsymbol{\omega}=\mathbf{T}^{\prime} \boldsymbol{\beta}=\left[\begin{array}{c} \mathbf{t}^{\prime} \boldsymbol{\beta} \\ \mathbf{T}_0^{\prime} \boldsymbol{\beta} \end{array}\right]$$
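This reparameterization can be verified numerically. The sketch below (all values are illustrative; the particular completion $\mathbf{T}_0$ is one arbitrary valid choice) completes $\mathbf{t}$ to a nonsingular $\mathbf{T}=(\mathbf{t}\mid\mathbf{T}_0)$ and confirms that $\mathbf{X}\boldsymbol{\beta}=\mathbf{U}\boldsymbol{\omega}$ with $\mathbf{U}=\mathbf{X}\mathbf{R}$ and $\boldsymbol{\omega}=\mathbf{T}'\boldsymbol{\beta}$:

```python
import numpy as np

rng = np.random.default_rng(3)

n, p = 8, 3
X = np.column_stack([np.ones(n), rng.uniform(-1.0, 1.0, size=(n, p - 1))])
beta = np.array([1.0, -2.0, 0.5])
t = np.array([1.0, 1.0, 1.0])     # target: t'beta = beta_0 + beta_1 + beta_2

# Complete t to a nonsingular T = (t | T0); filling T0 with standard basis
# vectors works here (any T0 making T nonsingular would do)
T = np.column_stack([t, np.eye(p)[:, 1:]])
assert np.linalg.matrix_rank(T) == p

R = np.linalg.inv(T.T)            # R = (T')^{-1}, so R T' = I
U = X @ R
omega = T.T @ beta                # omega = T'beta; its first entry is t'beta

# X beta = X R T' beta = U omega: the reparameterized model is the same model
print(np.allclose(X @ beta, U @ omega), omega[0])
```

The point of the construction is that $\mathbf{t}'\boldsymbol{\beta}$ becomes the first coordinate of the new parameter vector $\boldsymbol{\omega}$, so estimating it reduces to estimating a single coordinate in the transformed model.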
