Mathematical Foundations of Computing (CSMAX170)

Inferring Population Parameters from Sample Parameters

Thus far, we have focused on statistics that describe a sample in various ways. A sample, however, is usually only a subset of the population. Given the statistics of a sample, what can we infer about the corresponding population parameters? If the sample is small or if the population is intrinsically highly variable, there is not much we can say about the population. However, if the sample is large, there is reason to hope that the sample statistics are a good approximation to the population parameters. We now quantify this intuition.

Our point of departure is the central limit theorem, which states that the sum of $n$ independent random variables, for large $n$, is approximately normally distributed (see Section 1.7.5). Suppose that we collect a set of $m$ samples, each with $n$ elements, from some population. (In the rest of the discussion, we will assume that $n$ is large enough that the central limit theorem applies.) If the elements of each sample are independently and randomly selected from the population, we can treat the sum of the elements of each sample as the sum of $n$ independent and identically distributed random variables $X_{1}, X_{2}, \ldots, X_{n}$. That is, the first element of the sample is the value assumed by the random variable $X_{1}$, the second element is the value assumed by the random variable $X_{2}$, and so on. From the central limit theorem, the sum of these random variables is normally distributed. The mean of each sample is this sum divided by a constant, $n$, so the mean of each sample is also normally distributed. This fact allows us to determine a range of values within which, with high confidence, the population mean can be expected to lie.

To make this more concrete, refer to Figure 2.3 and consider sample 1. The mean of this sample is $\overline{x_{1}}=\frac{1}{n} \sum_{i} x_{1 i}$. Similarly, $\overline{x_{2}}=\frac{1}{n} \sum_{i} x_{2 i}$, and, in general, $\overline{x_{k}}=\frac{1}{n} \sum_{i} x_{k i}$.
Define the random variable $\bar{X}$ as taking on the values $\overline{x_{1}}, \overline{x_{2}}, \ldots, \overline{x_{m}}$. The distribution of $\bar{X}$ is called the sampling distribution of the mean. From the central limit theorem, $\bar{X}$ is approximately normally distributed. Moreover, if the elements are drawn from a population with mean $\mu$ and variance $\sigma^{2}$, we have already seen that $E(\bar{X})=\mu$ (Equation 2.6) and $V(\bar{X})=\sigma^{2} / n$ (Equation 2.9).
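These two facts can be checked numerically. The sketch below is not from the text: the uniform population and the sample counts are arbitrary choices for illustration. It draws $m$ samples of $n$ elements each from a uniform population on $[0, 1)$, for which $\mu = 0.5$ and $\sigma^{2} = 1/12$, and compares the mean and variance of the sample means against $\mu$ and $\sigma^{2}/n$.

```python
import random
import statistics

random.seed(42)

n = 100      # elements per sample (assumed large enough for the CLT)
m = 10_000   # number of samples

# Each sample mean is the average of n independent uniform draws.
sample_means = [
    statistics.fmean(random.random() for _ in range(n))
    for _ in range(m)
]

# The values of the random variable X-bar.
mean_of_means = statistics.fmean(sample_means)
var_of_means = statistics.pvariance(sample_means)

print(mean_of_means)  # close to mu = 0.5
print(var_of_means)   # close to sigma^2 / n = (1/12)/100 ~= 0.000833
```

Increasing $n$ shrinks the variance of $\bar{X}$ as $\sigma^{2}/n$, which is why larger samples pin down the population mean more tightly.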

Hypothesis Testing

Assertions about outcomes of an experiment can usually be reformulated in terms of testing a hypothesis: a speculative claim about the outcome of an experiment. The goal of an experiment is to show that either the hypothesis is unlikely to be true (i.e., we can reject the hypothesis), or the experiment is consistent with the hypothesis (i.e., the hypothesis need not be rejected).

This last statement bears some analysis. Suppose that we are asked to check whether a coin is biased. We start with the tentative hypothesis that the coin is unbiased: $\mathrm{P}(\text{heads})=\mathrm{P}(\text{tails})=0.5$. Suppose we then toss the coin three times and get three heads in a row. What does this say about our hypothesis? Conditional on the hypothesis being true, the probability of the observed outcome is $0.5 \times 0.5 \times 0.5 = 0.125$, or $12.5\%$. This is not too unlikely, so perhaps the three heads in a row were simply due to chance. At this point, all we can state is that the experimental outcome is consistent with the hypothesis.
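The conditional probability above can be reproduced directly; this fragment is only an illustration of the arithmetic, not part of the text.

```python
# Under the fair-coin hypothesis each toss lands heads with probability 0.5,
# and the tosses are independent, so three heads in a row has probability:
p_three_heads = 0.5 * 0.5 * 0.5
print(p_three_heads)  # 0.125, i.e. 12.5%
```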

Now, suppose that we flip the coin ten times and see that it comes up heads nine times. If our hypothesis were true, the probability of getting exactly nine heads in ten coin flips is given by the binomial distribution as $\binom{10}{9} 0.5^{9}\, 0.5^{1}=10 \times 0.5^{10}=10/1024 < 1\%$. Thus, if the hypothesis were true, this outcome is quite unlikely, setting the bar for “unlikeliness” at $1\%$. This is typically stated as: “We reject the hypothesis at the $1\%$ significance level.”
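The binomial computation can be checked with a few lines of Python (a sketch, not from the text), using the standard-library `math.comb` for the binomial coefficient:

```python
import math

# Binomial probability of exactly nine heads in ten fair tosses:
# C(10, 9) * 0.5^9 * 0.5^1 = 10 * 0.5^10 = 10/1024
p_nine_heads = math.comb(10, 9) * 0.5**9 * 0.5**1
print(p_nine_heads)         # 0.009765625
print(p_nine_heads < 0.01)  # True: below the 1% bar
```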

The probability of an outcome, computed on the assumption that a hypothesis is true, is called its $p$-value. If the outcome of an experiment has a $p$-value less than $1\%$ (or $5\%$), we would interpret the experiment as grounds for rejecting the hypothesis at the $1\%$ (or $5\%$) level.

It is important to realize that the nonrejection of a hypothesis does not mean that the hypothesis is valid. For example, instead of starting with the hypothesis that the coin was unbiased, we could have made the hypothesis that the coin was biased, with $\mathrm{P}(\text{heads})=0.9$. If we toss the coin three times and get three heads, the probability of that event, assuming the hypothesis is true, would be $0.9 \times 0.9 \times 0.9 \approx 0.73$. So, we cannot reject the hypothesis that the coin is biased either. Indeed, with such a small number of experiments, we cannot invalidate an infinite number of mutually incompatible hypotheses!
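The same arithmetic check works for the rival hypothesis (again, just an illustration, not part of the text):

```python
# Under the rival hypothesis P(heads) = 0.9, three heads in a row is quite
# likely, so the same outcome gives no grounds to reject this hypothesis either.
p_biased = 0.9 ** 3
print(round(p_biased, 3))  # 0.729
```

The same observed data is consistent with both the fair-coin and the biased-coin hypotheses, which is exactly why nonrejection cannot be read as acceptance.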

We are therefore led to two inescapable conclusions. First, even the most careful experiment may lead to an incorrect conclusion due to random errors. Such errors may result in rejecting a hypothesis that ought not to be rejected, or in failing to reject one that should be. Second, an experiment cannot result in the acceptance of a hypothesis, only in its rejection or nonrejection; nonrejection is not the same as acceptance. We deal with each conclusion in turn.

