PARTITIONING THE SUM OF SQUARES REGRESSION

In Table 5.3.1 the sum of squares regression was expressed with $p-1$ degrees of freedom. This sum of squares represented the total influence of the variables $x_1, \ldots, x_{p-1}$ in the ordinary least-squares regression. It is often of interest to check the contribution of a particular variable (or variables) given that other variables are already in the model. Such contributions can be calculated by partitioning the $n \times p$ matrix $\mathbf{X}$ as
$$\mathbf{X}=\left(\mathbf{X}_1\left|\mathbf{X}_2\right| \cdots \mid \mathbf{X}_m\right)$$ where $\mathbf{X}_j$ is an $n \times p_j$ matrix for $j=1, \ldots, m$, $p=\sum_{j=1}^m p_j$, and $\mathbf{X}_1=\mathbf{1}_n$. If $\mathbf{R}_1=\mathbf{X}_1$, $\mathbf{R}_2=\left(\mathbf{X}_1 \mid \mathbf{X}_2\right)$, $\ldots$, $\mathbf{R}_{m-1}=\left(\mathbf{X}_1\left|\mathbf{X}_2\right| \cdots \mid \mathbf{X}_{m-1}\right)$, and $\mathbf{R}_m=\mathbf{X}$, then the sum of squares due to the $p_j$ variables in $\mathbf{X}_j$, given that $\mathbf{X}_1, \mathbf{X}_2, \ldots, \mathbf{X}_{j-1}$ are already in the model, is given by
$$\operatorname{SS}\left(\mathbf{X}_j \mid \mathbf{X}_1, \mathbf{X}_2, \ldots, \mathbf{X}_{j-1}\right)=\mathbf{Y}^{\prime}\left[\mathbf{R}_j\left(\mathbf{R}_j^{\prime} \mathbf{R}_j\right)^{-1} \mathbf{R}_j^{\prime}-\mathbf{R}_{j-1}\left(\mathbf{R}_{j-1}^{\prime} \mathbf{R}_{j-1}\right)^{-1} \mathbf{R}_{j-1}^{\prime}\right] \mathbf{Y} .$$
Such conditional sums of squares are often called Type I sums of squares. The entire ANOVA table with the Type I sums of squares is presented in Table 5.6.1.

Note that the sums of squares due to all sources of variation still add up to the total sum of squares $\mathbf{Y}^{\prime} \mathbf{Y}$.
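As a quick numerical check, the Type I (sequential) sums of squares can be computed directly from the nested projection matrices $\mathbf{R}_j\left(\mathbf{R}_j^{\prime} \mathbf{R}_j\right)^{-1} \mathbf{R}_j^{\prime}$. The sketch below uses simulated stand-in data, not the fuel, speed, grade values from Table 5.1.1, and partitions $\mathbf{X}$ into three single-column blocks with $\mathbf{X}_1=\mathbf{1}_n$.

```python
import numpy as np

# Illustrative data (assumed, not from the book's Table 5.1.1).
rng = np.random.default_rng(0)
n = 12
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
Y = 2.0 + 1.5 * x1 - 0.5 * x2 + rng.normal(scale=0.1, size=n)

# Partition X as (X1 | X2 | X3) with X1 = 1_n.
blocks = [np.ones((n, 1)), x1.reshape(-1, 1), x2.reshape(-1, 1)]

def proj(M):
    """Orthogonal projection onto the column space of M."""
    return M @ np.linalg.inv(M.T @ M) @ M.T

ss = []                                # Type I sums of squares
P_prev = np.zeros((n, n))              # projection for the empty model
R = np.empty((n, 0))
for Xj in blocks:
    R = np.hstack([R, Xj])             # R_j = (X1 | ... | Xj)
    P = proj(R)
    ss.append(float(Y @ (P - P_prev) @ Y))  # SS(Xj | X1,...,X_{j-1})
    P_prev = P

# Residual SS, so that all sources of variation add up to Y'Y.
ss_residual = float(Y @ (np.eye(n) - P_prev) @ Y)
total = sum(ss) + ss_residual
assert np.isclose(total, float(Y @ Y))
```

The final assertion is exactly the remark above: the Type I sums of squares plus the residual sum of squares reproduce the total sum of squares $\mathbf{Y}^{\prime}\mathbf{Y}$.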

The Type I sums of squares for the fuel, speed, grade data set are given below; the corresponding output appears in Appendix 1.

Example 5.6.1 Using the example data set from Table 5.1.1, the Type I sums of squares are provided for the overall mean, for the speed variable $x_1$ given the overall mean, and for the speed $\times$ grade variable $x_2$ given the overall mean and $x_1$.

THE MODEL $\mathbf{Y}=\mathbf{X} \boldsymbol{\beta}+\mathbf{E}$ IN COMPLETE, BALANCED FACTORIALS

The experiment presented in Section $4.1$ has $b$ random blocks, $t$ fixed treatments, and $r$ random replicates nested in each block-treatment combination. The $btr \times 1$ random vector $\mathbf{Y}=\left(Y_{111}, \ldots, Y_{11r}, \ldots, Y_{bt1}, \ldots, Y_{btr}\right)^{\prime} \sim \mathrm{N}_{btr}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$, where the $btr \times 1$ mean vector and the $btr \times btr$ covariance matrix are given by $\boldsymbol{\mu}=\mathbf{1}_b \otimes\left(\mu_1, \ldots, \mu_t\right)^{\prime} \otimes \mathbf{1}_r$ and $\boldsymbol{\Sigma}=\sigma_B^2\left[\mathbf{I}_b \otimes \mathbf{J}_t \otimes \mathbf{J}_r\right]+\sigma_{BT}^2\left[\mathbf{I}_b \otimes\left(\mathbf{I}_t-\frac{1}{t} \mathbf{J}_t\right) \otimes \mathbf{J}_r\right]+\sigma_{R(BT)}^2\left[\mathbf{I}_b \otimes \mathbf{I}_t \otimes \mathbf{I}_r\right]$. This experiment can be characterized by the general linear model $\mathbf{Y}=\mathbf{X} \boldsymbol{\beta}+\mathbf{E}$. First, $\operatorname{cov}(\mathbf{E})$ equals the $btr \times btr$ covariance matrix $\boldsymbol{\Sigma}$. Next, the $btr \times 1$ vector $\boldsymbol{\mu}$ must be reconciled with the $btr \times 1$ mean vector $E(\mathbf{Y})=\mathbf{X} \boldsymbol{\beta}$ from the general linear model. Note that the $btr \times 1$ mean vector $\boldsymbol{\mu}$ is a function of the $t$ unknown parameters $\mu_1, \ldots, \mu_t$. Therefore, the general linear model mean vector $\mathbf{X} \boldsymbol{\beta}$ must also be written as a function of $\mu_1, \ldots, \mu_t$. One simple approach is to let the $t \times 1$ vector $\boldsymbol{\beta}=\left(\mu_1, \ldots, \mu_t\right)^{\prime}$ and let the $btr \times t$ matrix $\mathbf{X}=\mathbf{1}_b \otimes \mathbf{I}_t \otimes \mathbf{1}_r$.
Then the $btr \times 1$ mean vector of the general linear model is
$$\begin{aligned} \mathbf{X} \boldsymbol{\beta} &=\left(\mathbf{1}_b \otimes \mathbf{I}_t \otimes \mathbf{1}_r\right)\left(\mu_1, \ldots, \mu_t\right)^{\prime} \\ &=\left(\mathbf{1}_b \otimes \mathbf{I}_t \otimes \mathbf{1}_r\right)\left[1 \otimes\left(\mu_1, \ldots, \mu_t\right)^{\prime} \otimes 1\right] \\ &=\mathbf{1}_b \otimes\left(\mu_1, \ldots, \mu_t\right)^{\prime} \otimes \mathbf{1}_r=\boldsymbol{\mu} . \end{aligned}$$
The preceding example suggests a general approach for writing the mean vector $\boldsymbol{\mu}$ as $\mathbf{X} \boldsymbol{\beta}$ for complete, balanced factorial experiments. First, if $\boldsymbol{\mu}$ is a function of $p$ unknown parameters, then let $\boldsymbol{\beta}$ be a $p \times 1$ vector whose elements are the $p$ unknown parameters in $\boldsymbol{\mu}$. In general these elements will be subscripted, such as $\mu_{ijk}$. The elements of $\boldsymbol{\beta}$ should be ordered so that the last subscript changes first, the second-to-last subscript changes next, and so on. The corresponding $\mathbf{X}$ matrix can then be constructed using a simple algorithm. The previous experiment is used to develop the algorithm rules.
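The Kronecker construction $\mathbf{X}=\mathbf{1}_b \otimes \mathbf{I}_t \otimes \mathbf{1}_r$ can be verified numerically. In the sketch below the values of $b$, $t$, $r$ and the treatment means are illustrative assumptions, not values from the text.

```python
import numpy as np

# Assumed dimensions and treatment means for illustration only.
b, t, r = 2, 3, 2
mu_t = np.array([10.0, 12.0, 15.0])   # beta = (mu_1, ..., mu_t)'

one_b = np.ones((b, 1))
one_r = np.ones((r, 1))

# X = 1_b (x) I_t (x) 1_r is btr x t.
X = np.kron(np.kron(one_b, np.eye(t)), one_r)
assert X.shape == (b * t * r, t)

# X beta reproduces mu = 1_b (x) (mu_1,...,mu_t)' (x) 1_r,
# i.e. each treatment mean repeated r times within each of the b blocks.
mu = np.kron(np.kron(np.ones(b), mu_t), np.ones(r))
assert np.allclose(X @ mu_t, mu)
```

The row ordering of `X @ mu_t` matches the ordering of $\mathbf{Y}=\left(Y_{111}, \ldots, Y_{11r}, \ldots, Y_{btr}\right)^{\prime}$: the replicate subscript changes fastest, then the treatment subscript, then the block subscript.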
