# Linear Regression (CS6140)

## Response Transformations

Application $3.1$ was suggested by Olive (2004b, 2013b) for additive error regression models $Y=m(\boldsymbol{x})+e$. An advantage of this graphical method is that it works for linear models: that is, for multiple linear regression and for many experimental design models. Notice that if the plotted points in the transformation plot follow the identity line, then the plot is also a response plot. The method is also easily performed for MLR methods other than least squares.

A variant of the method would plot the residual plot, or both the response and the residual plot, for each of the seven values of $\lambda$. Residual plots are also useful, but they do not distinguish between nonlinear monotone relationships and nonmonotone relationships. See Fox (1991, p. 55).
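The graphical procedure above can be sketched in code. This is a minimal illustration, assuming the usual modified power transformations and a seven-value coarse grid; the function names are invented here. For each $\lambda$ one regresses $W = t_\lambda(Z)$ on $\boldsymbol{x}$ and, in practice, plots the fitted values against $W$, keeping the $\lambda$ whose plot tracks the identity line:

```python
import numpy as np

# Coarse grid of "meaningful" powers often used for response transformations
COARSE_GRID = (-1.0, -0.5, -1.0 / 3.0, 0.0, 1.0 / 3.0, 0.5, 1.0)

def power_transform(z, lam):
    """Modified power transformation t_lambda(z), with the log for lambda = 0."""
    return np.log(z) if abs(lam) < 1e-12 else (z ** lam - 1.0) / lam

def transformation_plots(x, z, grid=COARSE_GRID):
    """For each lambda in the grid, regress W = t_lambda(Z) on x by OLS and
    return (lambda, fitted values, W).  One would plot fitted vs W for each
    lambda and keep the one whose points follow the identity line."""
    X = np.column_stack([np.ones_like(x), x])
    out = []
    for lam in grid:
        w = power_transform(z, lam)
        beta, *_ = np.linalg.lstsq(X, w, rcond=None)
        out.append((lam, X @ beta, w))
    return out

# Toy illustration: the true transformation is the log (lambda_0 = 0)
x = np.linspace(0.01, 1.0, 100)
z = np.exp(1.0 + 3.0 * x)          # noiseless, so lambda = 0 is exactly linear
corrs = {lam: np.corrcoef(fit, w)[0, 1]
         for lam, fit, w in transformation_plots(x, z)}
```

Here the correlation between fitted and transformed values is used only as a crude numerical stand-in for eyeballing the identity line; the method itself is graphical.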

Cook and Olive (2001) also suggest a graphical method for selecting and assessing response transformations under model (3.2). Cook and Weisberg (1994) show that a plot of $Z$ versus $\boldsymbol{x}^T \hat{\boldsymbol{\beta}}$ (the transformation plot for $\lambda=1$ with its axes swapped) can be used to visualize $t$ if $Y=t(Z)=\boldsymbol{x}^T \boldsymbol{\beta}+e$, suggesting that $t^{-1}$ can be visualized in a plot of $\boldsymbol{x}^T \hat{\boldsymbol{\beta}}$ versus $Z$.

If there is nonlinearity present in the scatterplot matrix of the nontrivial predictors, then transforming the predictors to remove the nonlinearity will often be a useful procedure. More will be said about response transformations for experimental designs in Section 5.4.

There has been considerable discussion on whether the response transformation parameter $\lambda$ should be selected with maximum likelihood (see Bickel and Doksum 1981), or selected by maximum likelihood and then rounded to a meaningful value on a coarse grid $\Lambda_L$ (see Box and Cox 1982 and Hinkley and Runger 1984). Suppose that no strong nonlinearities are present among the predictors $\boldsymbol{x}$ and that if predictor transformations were used, then the transformations were chosen without examining the response. Also assume that
$$Y=t_{\lambda_o}(Z)=\boldsymbol{x}^T \boldsymbol{\beta}+e .$$
Suppose that a transformation $t_\lambda$ is chosen without examining the response. Results in Li and Duan (1989), Chen and Li (1998), and Chang and Olive (2010) suggest that if $\boldsymbol{x}$ has an approximate elliptically contoured distribution, then the OLS ANOVA $F$, partial $F$, and Wald $t$ tests will have the correct level asymptotically, even if $\hat{\lambda} \neq \lambda_o$.
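The "maximum likelihood, then round to a coarse grid" strategy discussed above can be sketched as follows. This is an illustrative implementation, not the authors' code: it maximizes the standard Box-Cox profile log-likelihood over a fine grid of $\lambda$ values and then rounds the maximizer to the nearest value of a coarse grid $\Lambda_L$:

```python
import numpy as np

def boxcox_profile_loglik(z, X, lam):
    """Profile log-likelihood of the Box-Cox parameter lambda for the model
    t_lambda(Z) = X beta + e with iid normal errors (additive constants dropped)."""
    n = len(z)
    w = np.log(z) if abs(lam) < 1e-12 else (z ** lam - 1.0) / lam
    beta, *_ = np.linalg.lstsq(X, w, rcond=None)
    rss = np.sum((w - X @ beta) ** 2)
    # (lam - 1) * sum(log z) is the Jacobian term of the transformation
    return -0.5 * n * np.log(rss / n) + (lam - 1.0) * np.sum(np.log(z))

def select_lambda(z, X, coarse=(-1.0, -0.5, -1.0 / 3.0, 0.0, 1.0 / 3.0, 0.5, 1.0)):
    """Maximize the profile likelihood over a fine grid, then round the MLE
    to the nearest value of the coarse grid Lambda_L."""
    fine = np.arange(-100, 101) / 100.0      # lambda in [-1, 1], step 0.01
    ll = [boxcox_profile_loglik(z, X, lam) for lam in fine]
    lam_hat = fine[int(np.argmax(ll))]
    return lam_hat, min(coarse, key=lambda c: abs(c - lam_hat))

# Toy data generated with lambda_0 = 0, i.e. log(Z) = 1 + 2 x + e
rng = np.random.default_rng(0)
x = np.linspace(0.01, 1.0, 200)
z = np.exp(1.0 + 2.0 * x + rng.normal(0.0, 0.05, size=200))
X = np.column_stack([np.ones_like(x), x])
lam_hat, lam_rounded = select_lambda(z, X)
```

Rounding to $\Lambda_L$ trades a small loss in likelihood for an interpretable transformation, which is the practical position of Box and Cox (1982) and Hinkley and Runger (1984) noted above.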

## Variable Selection and Multicollinearity

The literature on numerical methods for variable selection in the OLS multiple linear regression model is enormous. Three important papers are Jones (1946), Mallows (1973), and Furnival and Wilson (1974). Chatterjee and Hadi (1988, pp. 43-47) give a nice account of the effects of overfitting on the least squares estimates. Ferrari and Yang (2015) give a method for testing whether a model is underfitting. Section 3.4.1 followed Olive (2016a) closely. See Olive (2016b) for more on prediction regions. Also see Claeskens and Hjort (2003), Hjort and Claeskens (2003), and Efron et al. (2004). Texts include Burnham and Anderson (2002), Claeskens and Hjort (2008), and Linhart and Zucchini (1986).

Cook and Weisberg (1999a, pp. 264-265) give a good discussion of the effect of deleting predictors on linearity and the constant variance assumption. Walls and Weeks (1969) note that adding predictors increases the variance of a predicted response; adding predictors also inflates $R^2$. See Freedman (1983).
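The point attributed to Freedman (1983) is easy to demonstrate numerically. The sketch below, a hypothetical illustration rather than Freedman's own experiment, regresses a response on an increasing number of pure-noise predictors: $R^2$ never decreases and becomes large even though every predictor is irrelevant:

```python
import numpy as np

def r_squared(X, y):
    """R^2 for an OLS fit that always includes an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)

# Response unrelated to the predictors: every predictor is pure noise
rng = np.random.default_rng(42)
n = 50
y = rng.normal(size=n)
X = rng.normal(size=(n, 40))

# R^2 as the first k noise predictors are added, k = 1, ..., 40
r2 = [r_squared(X[:, :k], y) for k in range(1, 41)]
```

With $n = 50$ cases and $40$ noise predictors, the expected $R^2$ is roughly $p/(n-1) \approx 0.8$, which is why a large $R^2$ alone says little after aggressive variable addition.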

Discussions of the biases introduced by variable selection and data snooping include Hurvich and Tsai (1990), Leeb and Pötscher (2006), Selvin and Stuart (1966), and Hjort and Claeskens (2003). This theory assumes that the full model is known before collecting the data, but in practice the full model is often built after collecting the data. Freedman (2005, pp. 192-195) gives an interesting discussion on model building and variable selection.

The predictor variables can be transformed if the response is not used, and then inference can be done for the linear model. Suppose the $p$ predictor variables are fixed so $\boldsymbol{Y}=t(\boldsymbol{Z})=\boldsymbol{X} \boldsymbol{\beta}+\boldsymbol{e}$, and the computer program outputs $\hat{\boldsymbol{\beta}}$, after doing an automated response transformation and automated variable selection. Then the nonlinear estimator $\hat{\boldsymbol{\beta}}$ can be bootstrapped. See Olive (2016a). If data snooping, such as using graphs, is used to select the response transformation and the submodel from variable selection, then strong, likely unreasonable assumptions are needed for valid inference for the final nonlinear model.
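The idea of bootstrapping the nonlinear selection-plus-fit estimator $\hat{\boldsymbol{\beta}}$ can be sketched as below. This is an illustrative stand-in, not Olive (2016a)'s implementation: a toy all-subsets selector scored by a Mallows $C_p$ criterion plays the role of the automated variable selection, and a cases (pairs) bootstrap redoes the selection inside every resample:

```python
import numpy as np
from itertools import combinations

def fit_ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def select_and_fit(X, y):
    """Toy selection-plus-fit estimator: all-subsets OLS scored by a Mallows
    Cp criterion, always keeping the intercept (column 0).  Returns a
    full-length coefficient vector with exact zeros for dropped predictors."""
    n, p = X.shape
    sigma2 = np.sum((y - X @ fit_ols(X, y)) ** 2) / (n - p)  # full-model MSE
    best_cp, best = np.inf, None
    for k in range(1, p + 1):
        for S in combinations(range(p), k):
            if 0 not in S:
                continue
            b = fit_ols(X[:, S], y)
            cp = np.sum((y - X[:, S] @ b) ** 2) / sigma2 - n + 2 * k
            if cp < best_cp:
                best_cp, best = cp, (S, b)
    beta = np.zeros(p)
    beta[list(best[0])] = best[1]
    return beta

def pairs_bootstrap(X, y, B=200, seed=1):
    """Cases (pairs) bootstrap of the whole procedure: variable selection is
    redone inside every resample, so the bootstrap distribution reflects the
    nonlinear estimator, not OLS on one fixed submodel."""
    rng = np.random.default_rng(seed)
    n = len(y)
    boot = np.empty((B, X.shape[1]))
    for i in range(B):
        idx = rng.integers(0, n, size=n)
        boot[i] = select_and_fit(X[idx], y[idx])
    return boot

# Toy data: only x1 matters; x2 and x3 are irrelevant predictors
rng = np.random.default_rng(1)
n = 100
x1, x2, x3 = rng.uniform(size=(3, n))
y = 1.0 + 2.0 * x1 + rng.normal(0.0, 0.5, size=n)
X = np.column_stack([np.ones(n), x1, x2, x3])
boot = pairs_bootstrap(X, y, B=200)
```

Because selection is repeated in each resample, the bootstrap replicates for an irrelevant predictor are a mixture of exact zeros (when it is dropped) and nonzero fits, which is exactly the nonlinearity that invalidates naive post-selection inference on a single chosen submodel.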
