## Least Squares Estimation

When the techniques of regression analysis are used for analyzing data from experimental designs, we find that the elements of $\mathbf{X}$ are 0 or 1 (Chapter 8), and the columns of $\mathbf{X}$ are usually linearly dependent. We now give such an example.

EXAMPLE 3.6 Consider the randomized block design with two treatments and two blocks: namely,
$$Y_{i j}=\mu+\alpha_i+\tau_j+\varepsilon_{i j} \quad(i=1,2 ; j=1,2),$$

$$\left(\begin{array}{c} Y_{11} \\ Y_{12} \\ Y_{21} \\ Y_{22} \end{array}\right)=\left(\begin{array}{ccccc} 1 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 0 & 1 \end{array}\right)\left(\begin{array}{c} \mu \\ \alpha_1 \\ \alpha_2 \\ \tau_1 \\ \tau_2 \end{array}\right)+\left(\begin{array}{c} \varepsilon_{11} \\ \varepsilon_{12} \\ \varepsilon_{21} \\ \varepsilon_{22} \end{array}\right)$$
or $\mathbf{Y}=\mathbf{X} \boldsymbol{\beta}+\boldsymbol{\varepsilon}$, where, for example, the first column of $\mathbf{X}$ is the sum of the second and third columns (and also of the fourth and fifth), so the columns are linearly dependent.
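The rank deficiency is easy to verify numerically. The following sketch (using numpy, with the design matrix written out exactly as in Example 3.6) confirms that $\mathbf{X}$ has five columns but rank only three.

```python
import numpy as np

# Design matrix of Example 3.6: columns correspond to
# (mu, alpha_1, alpha_2, tau_1, tau_2).
X = np.array([
    [1, 1, 0, 1, 0],   # Y_11
    [1, 1, 0, 0, 1],   # Y_12
    [1, 0, 1, 1, 0],   # Y_21
    [1, 0, 1, 0, 1],   # Y_22
])

# Column 1 is the sum of the alpha columns, and also of the tau columns,
# so only three columns are linearly independent.
assert np.array_equal(X[:, 0], X[:, 1] + X[:, 2])
assert np.array_equal(X[:, 0], X[:, 3] + X[:, 4])
print(np.linalg.matrix_rank(X))  # 3
```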

In Section 3.1 we developed a least squares theory which applies whether or not $\mathbf{X}$ has full rank. If $\mathbf{X}$ is $n \times p$ of rank $r$, where $r<p$, we saw in Section 3.1 that $\hat{\boldsymbol{\beta}}$ is no longer unique. In fact, $\hat{\boldsymbol{\beta}}$ should be regarded as simply *a* solution of the normal equations [e.g., $(\mathbf{X}^{\prime} \mathbf{X})^{-} \mathbf{X}^{\prime} \mathbf{Y}$], which then enables us to find $\hat{\mathbf{Y}}=\mathbf{X} \hat{\boldsymbol{\beta}}$, $\hat{\mathbf{e}}=\mathbf{Y}-\mathbf{X} \hat{\boldsymbol{\beta}}$, and $\mathrm{RSS}=\hat{\mathbf{e}}^{\prime} \hat{\mathbf{e}}$, all of which are unique. We note that the normal equations $\mathbf{X}^{\prime} \mathbf{X} \boldsymbol{\beta}=\mathbf{X}^{\prime} \mathbf{Y}$ always have a solution for $\boldsymbol{\beta}$, since $\mathcal{C}(\mathbf{X}^{\prime})=\mathcal{C}(\mathbf{X}^{\prime} \mathbf{X})$ (by A.2.5). Our focus now is on methods for finding $\hat{\boldsymbol{\beta}}$.
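The non-uniqueness of $\hat{\boldsymbol{\beta}}$, together with the uniqueness of $\hat{\mathbf{Y}}$ and RSS, can be illustrated directly. In this sketch (using the design matrix of Example 3.6 and arbitrary simulated responses) a second solution of the normal equations is built by adding a null-space vector of $\mathbf{X}$ to the Moore-Penrose solution; the fitted values and RSS are unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[1, 1, 0, 1, 0],
              [1, 1, 0, 0, 1],
              [1, 0, 1, 1, 0],
              [1, 0, 1, 0, 1]], dtype=float)
Y = rng.normal(size=4)  # arbitrary responses for illustration

# One solution of the normal equations: beta1 = X^+ Y.
beta1 = np.linalg.pinv(X) @ Y

# A second solution: add any vector in the null space of X
# (here (1, -1, -1, 0, 0), since column 1 = column 2 + column 3).
null_vec = np.array([1.0, -1.0, -1.0, 0.0, 0.0])
beta2 = beta1 + 2.7 * null_vec

# Both satisfy X'X beta = X'Y ...
assert np.allclose(X.T @ X @ beta1, X.T @ Y)
assert np.allclose(X.T @ X @ beta2, X.T @ Y)

# ... and the fitted values and RSS are identical.
assert np.allclose(X @ beta1, X @ beta2)
rss1 = np.sum((Y - X @ beta1) ** 2)
rss2 = np.sum((Y - X @ beta2) ** 2)
assert np.isclose(rss1, rss2)
```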

So far in this chapter our approach has been to replace $\mathbf{X}$ by an $n \times r$ matrix $\mathbf{X}_1$ which has the same column space as $\mathbf{X}$. Very often the simplest way of doing this is to select $r$ appropriate columns of $\mathbf{X}$, which amounts to setting some of the $\beta_i$ in $\mathbf{X} \beta$ equal to zero. Algorithms for carrying this out are described in Section 11.9.
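The column-selection approach above can be sketched for Example 3.6 (a numpy illustration; the particular choice of columns, keeping $\mu$, $\alpha_1$, and $\tau_1$, is just one valid selection): dropping columns so that $\mathbf{X}_1$ has full column rank makes $\mathbf{X}_1^{\prime}\mathbf{X}_1$ invertible, and the fitted values agree with those from the full, rank-deficient $\mathbf{X}$.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[1, 1, 0, 1, 0],
              [1, 1, 0, 0, 1],
              [1, 0, 1, 1, 0],
              [1, 0, 1, 0, 1]], dtype=float)
Y = rng.normal(size=4)

# Keep r = 3 linearly independent columns: (mu, alpha_1, tau_1),
# which amounts to setting alpha_2 = tau_2 = 0 in X beta.
X1 = X[:, [0, 1, 3]]
assert np.linalg.matrix_rank(X1) == 3  # full column rank

# X1'X1 is now invertible, so the least squares solution is unique.
beta_sub = np.linalg.solve(X1.T @ X1, X1.T @ Y)

# X1 spans the same column space as X, so the fitted values match
# those from any solution based on the full matrix X.
assert np.allclose(X1 @ beta_sub, X @ (np.linalg.pinv(X) @ Y))
```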

In the past, two other methods have been used. The first consists of imposing identifiability constraints, say $\mathbf{H} \boldsymbol{\beta}=\mathbf{0}$, which take up the "slack" in $\boldsymbol{\beta}$ so that there is now a unique $\boldsymbol{\beta}$ satisfying $\boldsymbol{\theta}=\mathbf{X} \boldsymbol{\beta}$ and $\mathbf{H} \boldsymbol{\beta}=\mathbf{0}$. This approach is described by Scheffé [1959: p. 17]. The second method involves computing a generalized inverse. In Section 3.1 we saw that one solution $\hat{\boldsymbol{\beta}}$ is given by $(\mathbf{X}^{\prime} \mathbf{X})^{-} \mathbf{X}^{\prime} \mathbf{Y}$, where $(\mathbf{X}^{\prime} \mathbf{X})^{-}$ is a suitable generalized inverse of $\mathbf{X}^{\prime} \mathbf{X}$. One commonly used generalized inverse of a matrix $\mathbf{A}$ is the Moore-Penrose inverse $\mathbf{A}^{+}$, which is unique (see A.10).
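Both methods can be illustrated on Example 3.6. The sketch below (a numpy illustration; the usual sum-to-zero constraints $\alpha_1+\alpha_2=0$, $\tau_1+\tau_2=0$ are one convenient choice of $\mathbf{H}$) imposes the constraints by solving least squares on the stacked system $[\mathbf{X}; \mathbf{H}]$, which has full column rank, and compares the result with the Moore-Penrose solution: the two solutions differ, but their fitted values agree.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[1, 1, 0, 1, 0],
              [1, 1, 0, 0, 1],
              [1, 0, 1, 1, 0],
              [1, 0, 1, 0, 1]], dtype=float)
Y = rng.normal(size=4)

# Sum-to-zero identifiability constraints H beta = 0:
# alpha_1 + alpha_2 = 0 and tau_1 + tau_2 = 0.
H = np.array([[0, 1, 1, 0, 0],
              [0, 0, 0, 1, 1]], dtype=float)

# [X; H] has full column rank (5), so least squares on the augmented
# system yields the unique beta satisfying H beta = 0 exactly.
A = np.vstack([X, H])
b = np.concatenate([Y, np.zeros(2)])
beta_c, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(H @ beta_c, 0)

# The Moore-Penrose solution X^+ Y is a different solution (it has
# minimum norm instead), but the fitted values coincide.
beta_mp = np.linalg.pinv(X) @ Y
assert np.allclose(X @ beta_c, X @ beta_mp)
```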

## Estimable Functions

Since $\hat{\boldsymbol{\beta}}$ is not unique, $\boldsymbol{\beta}$ itself is not estimable. The question then arises: what can we estimate? Since each element $\theta_i$ of $\boldsymbol{\theta}(=\mathbf{X} \boldsymbol{\beta})$ is estimated by the $i$th element of $\hat{\boldsymbol{\theta}}=\mathbf{P Y}$, every linear combination of the $\theta_i$, say $\mathbf{b}^{\prime} \boldsymbol{\theta}$, is also estimable. This means that the functions $\theta_i=\mathbf{x}_i^{\prime} \boldsymbol{\beta}$, where $\mathbf{x}_i^{\prime}$ is the $i$th row of $\mathbf{X}$, generate a linear subspace of estimable functions. Usually, we define estimable functions formally as follows.

Definition 3.1 The parametric function $\mathbf{a}^{\prime} \boldsymbol{\beta}$ is said to be estimable if it has a linear unbiased estimate, say $\mathbf{b}^{\prime} \mathbf{Y}$.

We note that if $\mathbf{a}^{\prime} \boldsymbol{\beta}$ is estimable, then $\mathbf{a}^{\prime} \boldsymbol{\beta}=E\left[\mathbf{b}^{\prime} \mathbf{Y}\right]=\mathbf{b}^{\prime} \boldsymbol{\theta}=\mathbf{b}^{\prime} \mathbf{X} \boldsymbol{\beta}$ identically in $\boldsymbol{\beta}$, so that $\mathbf{a}^{\prime}=\mathbf{b}^{\prime} \mathbf{X}$ or $\mathbf{a}=\mathbf{X}^{\prime} \mathbf{b}$ (A.11.1). Hence $\mathbf{a}^{\prime} \boldsymbol{\beta}$ is estimable if and only if $\mathbf{a} \in \mathcal{C}\left(\mathbf{X}^{\prime}\right)$.

EXAMPLE 3.8 If $\mathbf{a}^{\prime} \boldsymbol{\beta}$ is estimable, and $\hat{\boldsymbol{\beta}}$ is any solution of the normal equations, then $\mathbf{a}^{\prime} \hat{\boldsymbol{\beta}}$ is unique. To show this we first note that $\mathbf{a}=\mathbf{X}^{\prime} \mathbf{b}$ for some $\mathbf{b}$, so that $\mathbf{a}^{\prime} \boldsymbol{\beta}=\mathbf{b}^{\prime} \mathbf{X} \boldsymbol{\beta}=\mathbf{b}^{\prime} \boldsymbol{\theta}$. Similarly, $\mathbf{a}^{\prime} \hat{\boldsymbol{\beta}}=\mathbf{b}^{\prime} \mathbf{X} \hat{\boldsymbol{\beta}}=\mathbf{b}^{\prime} \hat{\boldsymbol{\theta}}$, which is unique. Furthermore, by Theorem 3.2, $\mathbf{b}^{\prime} \hat{\boldsymbol{\theta}}$ is the BLUE of $\mathbf{b}^{\prime} \boldsymbol{\theta}$, so that $\mathbf{a}^{\prime} \hat{\boldsymbol{\beta}}$ is the BLUE of $\mathbf{a}^{\prime} \boldsymbol{\beta}$.
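The criterion $\mathbf{a} \in \mathcal{C}(\mathbf{X}^{\prime})$ and the invariance of $\mathbf{a}^{\prime}\hat{\boldsymbol{\beta}}$ can both be checked numerically. In this sketch (using the design of Example 3.6; the rank test for membership of the row space is one standard way to check estimability) the treatment contrast $\alpha_1-\alpha_2$ is estimable, while $\alpha_1$ on its own is not.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[1, 1, 0, 1, 0],
              [1, 1, 0, 0, 1],
              [1, 0, 1, 1, 0],
              [1, 0, 1, 0, 1]], dtype=float)
Y = rng.normal(size=4)

def estimable(a, X):
    """a'beta is estimable iff a lies in C(X'), i.e. appending a
    as a row of X does not increase the rank."""
    return (np.linalg.matrix_rank(np.vstack([X, a]))
            == np.linalg.matrix_rank(X))

a_contrast = np.array([0, 1, -1, 0, 0])  # alpha_1 - alpha_2
a_single = np.array([0, 1, 0, 0, 0])     # alpha_1 alone
assert estimable(a_contrast, X)
assert not estimable(a_single, X)

# For an estimable function, a'beta_hat agrees across all solutions
# of the normal equations.
beta1 = np.linalg.pinv(X) @ Y
beta2 = beta1 + np.array([1.0, -1.0, -1.0, 0.0, 0.0])  # plus a null vector
assert np.isclose(a_contrast @ beta1, a_contrast @ beta2)
```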

