# Generalized Linear Models (STAT6175)

## WEIGHTED LEAST-SQUARES REGRESSION

In the first three sections of this chapter the model was confined to $\mathbf{Y}=\mathbf{X} \boldsymbol{\beta}+\mathbf{E}$ where $\mathrm{E}(\mathbf{E})=\mathbf{0}$ and $\operatorname{cov}(\mathbf{E})=\sigma^2 \mathbf{I}_n$. In this section, the model $\mathbf{Y}=\mathbf{X} \boldsymbol{\beta}+\mathbf{E}$ is considered when $\mathrm{E}(\mathbf{E})=\mathbf{0}$, $\operatorname{cov}(\mathbf{E})=\sigma^2 \mathbf{V}$, and $\mathbf{V}$ is an $n \times n$ symmetric, positive definite matrix of known constants. Because $\mathbf{V}$ is positive definite, there exists an $n \times n$ nonsingular matrix $\mathbf{T}$ such that $\mathbf{V}=\mathbf{T} \mathbf{T}^{\prime}$. Premultiplying both sides of the model $\mathbf{Y}=\mathbf{X} \boldsymbol{\beta}+\mathbf{E}$ by $\mathbf{T}^{-1}$ we obtain
$$\begin{aligned} \mathbf{T}^{-1} \mathbf{Y} &= \mathbf{T}^{-1} \mathbf{X} \boldsymbol{\beta}+\mathbf{T}^{-1} \mathbf{E} \\ \mathbf{Y}_{\mathrm{w}} &= \mathbf{X}_{\mathrm{w}} \boldsymbol{\beta}+\mathbf{E}_{\mathrm{w}} \end{aligned}$$
where $\mathbf{Y}_{\mathrm{w}}=\mathbf{T}^{-1} \mathbf{Y}$, $\mathbf{X}_{\mathrm{w}}=\mathbf{T}^{-1} \mathbf{X}$, and $\mathbf{E}_{\mathrm{w}}=\mathbf{T}^{-1} \mathbf{E}$. Therefore, $\mathrm{E}\left(\mathbf{E}_{\mathrm{w}}\right)=\mathbf{T}^{-1} \mathrm{E}(\mathbf{E})=\mathbf{0}_{n \times 1}$ and $\operatorname{cov}\left(\mathbf{E}_{\mathrm{w}}\right)=\operatorname{cov}\left(\mathbf{T}^{-1} \mathbf{E}\right)=\mathbf{T}^{-1}\left(\sigma^2 \mathbf{V}\right) \mathbf{T}^{-1 \prime}=\sigma^2 \mathbf{I}_n$. The weighted least-squares estimators of $\boldsymbol{\beta}$ and $\sigma^2$ are derived by applying the ordinary least-squares estimator formulas to the model $\mathbf{Y}_{\mathrm{w}}=\mathbf{X}_{\mathrm{w}} \boldsymbol{\beta}+\mathbf{E}_{\mathrm{w}}$. That is, the weighted least-squares estimators of $\boldsymbol{\beta}$ and $\sigma^2$ are given by
$$\begin{aligned} \hat{\boldsymbol{\beta}}_{\mathrm{w}} &=\left(\mathbf{X}_{\mathrm{w}}^{\prime} \mathbf{X}_{\mathrm{w}}\right)^{-1} \mathbf{X}_{\mathrm{w}}^{\prime} \mathbf{Y}_{\mathrm{w}} \\ &=\left(\mathbf{X}^{\prime} \mathbf{T}^{-1 \prime} \mathbf{T}^{-1} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime} \mathbf{T}^{-1 \prime} \mathbf{T}^{-1} \mathbf{Y} \\ &=\left(\mathbf{X}^{\prime} \mathbf{V}^{-1} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime} \mathbf{V}^{-1} \mathbf{Y} \end{aligned}$$
and
$$\begin{aligned} \hat{\sigma}_{\mathrm{w}}^2 &=\left(\mathbf{Y}_{\mathrm{w}}-\mathbf{X}_{\mathrm{w}} \hat{\boldsymbol{\beta}}_{\mathrm{w}}\right)^{\prime}\left(\mathbf{Y}_{\mathrm{w}}-\mathbf{X}_{\mathrm{w}} \hat{\boldsymbol{\beta}}_{\mathrm{w}}\right) /(n-p) \\ &=\left[\mathbf{Y}^{\prime}\left(\mathbf{V}^{-1}-\mathbf{V}^{-1} \mathbf{X}\left(\mathbf{X}^{\prime} \mathbf{V}^{-1} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime} \mathbf{V}^{-1}\right) \mathbf{Y}\right] /(n-p) \end{aligned}$$
The Gauss-Markov theorem can also be generalized for the model $\mathbf{Y}=\mathbf{X} \boldsymbol{\beta}+\mathbf{E}$ where $\mathrm{E}(\mathbf{E})=\mathbf{0}$ and $\operatorname{cov}(\mathbf{E})=\sigma^2 \mathbf{V}$. For this model, the weighted least-squares estimator of $\mathbf{t}^{\prime} \boldsymbol{\beta}$ is given by $\mathbf{t}^{\prime} \hat{\boldsymbol{\beta}}_{\mathrm{w}}=\mathbf{t}^{\prime}\left(\mathbf{X}^{\prime} \mathbf{V}^{-1} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime} \mathbf{V}^{-1} \mathbf{Y}$, and $\mathbf{t}^{\prime} \hat{\boldsymbol{\beta}}_{\mathrm{w}}$ is the BLUE of $\mathbf{t}^{\prime} \boldsymbol{\beta}$. The proof is left to the reader.
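As a quick numerical check, the two routes to $\hat{\boldsymbol{\beta}}_{\mathrm{w}}$ and $\hat{\sigma}_{\mathrm{w}}^2$ above (transform the model by $\mathbf{T}^{-1}$ and apply the ordinary least-squares formulas, or use the closed forms in $\mathbf{V}^{-1}$) can be compared directly. The sketch below uses made-up data and a Cholesky factor for $\mathbf{T}$; none of the numbers come from the text.

```python
import numpy as np

# Made-up illustration: n = 6 observations, p = 2 coefficients, and a known
# diagonal V (unequal error variances). Nothing here is from the text.
rng = np.random.default_rng(0)
n, p = 6, 2
X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])
V = np.diag([1.0, 2.0, 0.5, 1.5, 1.0, 3.0])
Y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)

# Route 1: factor V = T T' (Cholesky), transform the model, then use the
# ordinary least-squares formulas on (Y_w, X_w).
T = np.linalg.cholesky(V)
Tinv = np.linalg.inv(T)
Xw, Yw = Tinv @ X, Tinv @ Y
beta_w1 = np.linalg.solve(Xw.T @ Xw, Xw.T @ Yw)
sigma2_w1 = float((Yw - Xw @ beta_w1) @ (Yw - Xw @ beta_w1)) / (n - p)

# Route 2: the closed forms in V^{-1} derived above.
Vinv = np.linalg.inv(V)
beta_w2 = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ Y)
M = Vinv - Vinv @ X @ np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv)
sigma2_w2 = float(Y @ M @ Y) / (n - p)

print(np.allclose(beta_w1, beta_w2), np.isclose(sigma2_w1, sigma2_w2))  # → True True
```

Any factorization $\mathbf{V}=\mathbf{T}\mathbf{T}^{\prime}$ works here; the Cholesky factor is simply the most convenient choice in practice.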

## LACK OF FIT TEST

In this section assume that the $n \times 1$ random error vector $\mathbf{E} \sim \mathrm{N}_n\left(\mathbf{0}, \sigma^2 \mathbf{I}_n\right)$. It is of interest to check whether the proposed model adequately fits the data. This lack of fit test requires replicate observations at one or more of the combinations of the $x_1, x_2, \ldots, x_{p-1}$ values.

Since the elements of the $n \times 1$ random vector $\mathbf{Y}=\left(Y_1, \ldots, Y_n\right)^{\prime}$ can be listed in any order, we adopt the convention that sets of $Y_i$ values that share the same $x_1, \ldots, x_{p-1}$ values are listed next to each other in the $\mathbf{Y}$ vector. For example, in the data set from Table 5.1.1, the $10 \times 1$ vector is $\mathbf{Y}=(1.7,2.0,1.9,1.6,3.2,2.0,2.5,5.4,5.7,5.1)^{\prime}$, with $Y_1-Y_4$ sharing a speed equal to 20 and a speed $\times$ grade equal to $0$; $Y_5$ having a speed equal to 20 and a speed $\times$ grade equal to $120$; $Y_6-Y_7$ sharing a speed equal to 50 and a speed $\times$ grade equal to $0$; and $Y_8-Y_{10}$ sharing a speed equal to 50 and a speed $\times$ grade equal to $300$.

When replicate observations exist within combinations of the $x_1, \ldots, x_{p-1}$ values, the residual sum of squares can be partitioned into a sum of squares due to pure error plus a sum of squares due to lack of fit. The pure error component is a measure of the variation between $Y_i$ observations that share the same $x_1, \ldots, x_{p-1}$ values.
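This partition can be sketched numerically. In the code below, the design columns (intercept, speed, speed $\times$ grade) are an assumed form of the fitted model for the Table 5.1.1 data, chosen only for illustration; the partition itself holds for any design with replicated rows.

```python
import numpy as np

# Responses from Table 5.1.1 and a hypothetical design matrix
# (intercept, speed, speed*grade) -- the model form is an assumption.
Y = np.array([1.7, 2.0, 1.9, 1.6, 3.2, 2.0, 2.5, 5.4, 5.7, 5.1])
speed = np.array([20., 20., 20., 20., 20., 50., 50., 50., 50., 50.])
sg = np.array([0., 0., 0., 0., 120., 0., 0., 300., 300., 300.])
X = np.column_stack([np.ones(10), speed, sg])

# Residual sum of squares from the ordinary least-squares fit.
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
ss_res = float(np.sum((Y - X @ beta_hat) ** 2))

# Pure error: within-group variation among replicated design points.
groups = [slice(0, 4), slice(4, 5), slice(5, 7), slice(7, 10)]
ss_pe = sum(float(np.sum((Y[g] - Y[g].mean()) ** 2)) for g in groups)

# Lack of fit is the remainder; it is always nonnegative, because the
# cell-means (saturated) model nests any model that is constant within groups.
ss_lof = ss_res - ss_pe
```

Here `ss_pe` depends only on the grouping of replicated rows, while `ss_lof` measures how far the group means depart from the assumed regression surface.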

In the example data set from Table 5.1.1, the pure error sum of squares is the sum of squares of $Y_1-Y_4$ around their mean plus the sum of squares of $Y_5$ around its mean (zero in this case), plus the sum of squares of $Y_6-Y_7$ around their mean plus the sum of squares of $Y_8-Y_{10}$ around their mean.
In general the pure error sum of squares is given by
$$\mathrm{SS}(\text{pure error})=\mathbf{Y}^{\prime} \mathbf{A}_{\mathrm{pe}} \mathbf{Y}$$
where $\mathbf{A}_{\mathrm{pe}}$ is an $n \times n$ block diagonal matrix with the $j^{\text{th}}$ block equal to $\mathbf{I}_{r_j}-\frac{1}{r_j} \mathbf{J}_{r_j}$ for $j=1, \ldots, k$, where $k$ is the number of combinations of $x_1, \ldots, x_{p-1}$ values that contain at least one observation and $r_j$ is the number of $Y_i$ values in the $j^{\text{th}}$ combination, with $n=\sum_{j=1}^k r_j$. Note that $\mathbf{A}_{\mathrm{pe}}$ is an idempotent matrix of rank $n-k$ and $\mathbf{J}_n \mathbf{A}_{\mathrm{pe}}=\mathbf{0}_{n \times n}$. Furthermore, the first $r_1$ rows of the matrix $\mathbf{X}$ are the same, the next $r_2$ rows of $\mathbf{X}$ are the same, etc. Therefore, $\mathbf{A}_{\mathrm{pe}} \mathbf{X}=\mathbf{0}_{n \times p}$. In balanced data structures $r_1=r_2=\cdots=r_k=r$, $n=rk$, and the $n \times n$ pure error matrix $\mathbf{A}_{\mathrm{pe}}$ can be expressed as the Kronecker product $\mathbf{I}_k \otimes\left(\mathbf{I}_r-\frac{1}{r} \mathbf{J}_r\right)$.
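The claimed properties of $\mathbf{A}_{\mathrm{pe}}$ (idempotent, rank $n-k$, annihilated by $\mathbf{J}_n$, and the Kronecker form in the balanced case) can be checked numerically. A sketch for the group sizes $r=(4,1,2,3)$ of the Table 5.1.1 example, so $n=10$ and $k=4$:

```python
import numpy as np

def pure_error_matrix(r):
    """Block diagonal A_pe with j-th block I_{r_j} - (1/r_j) J_{r_j}."""
    n = sum(r)
    A = np.zeros((n, n))
    start = 0
    for rj in r:
        A[start:start + rj, start:start + rj] = np.eye(rj) - np.ones((rj, rj)) / rj
        start += rj
    return A

Y = np.array([1.7, 2.0, 1.9, 1.6, 3.2, 2.0, 2.5, 5.4, 5.7, 5.1])
A_pe = pure_error_matrix([4, 1, 2, 3])           # n = 10, k = 4

ss_pe = float(Y @ A_pe @ Y)                      # SS(pure error) = Y'A_pe Y
idem = np.allclose(A_pe @ A_pe, A_pe)            # A_pe is idempotent
rank = int(round(np.trace(A_pe)))                # trace = rank = n - k = 6
kills_J = np.allclose(np.ones((10, 10)) @ A_pe, 0.0)  # J_n A_pe = 0

# Balanced case: A_pe equals the Kronecker product I_k ⊗ (I_r - J_r / r).
k, r = 3, 2
balanced = pure_error_matrix([r] * k)
kron = np.kron(np.eye(k), np.eye(r) - np.ones((r, r)) / r)
print(round(ss_pe, 6), idem, rank, kills_J, np.allclose(balanced, kron))
# → 0.405 True 6 True True
```

The value $0.405$ is exactly the sum of the four within-group sums of squares described above ($0.10 + 0 + 0.125 + 0.18$).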

For the fuel, speed, grade data set, the $10 \times 10$ pure error sum of squares matrix $\mathbf{A}_{\text {pe }}$ is derived in the next example.

