# CS7641 Machine Learning

## Gaussian Distribution

The univariate Gaussian distribution (a.k.a. the normal distribution) is often used to describe a continuous random variable $X$ that can take any real value in $\mathbb{R}$. The general form of a Gaussian distribution is
$$\mathcal{N}\left(x \mid \mu, \sigma^2\right)=\frac{1}{\sqrt{2 \pi \sigma^2}} e^{-\frac{(x-\mu)^2}{2 \sigma^2}},$$
where $\mu$ and $\sigma^2$ are two parameters. We can summarize some key properties for the univariate Gaussian distribution as follows:

• Parameters: $\mu \in \mathbb{R}$ and $\sigma^2>0$.
• Support: The domain of the random variable is $x \in \mathbb{R}$.
• Mean and variance:
$$\mathbb{E}[X]=\mu \text { and } \operatorname{var}(X)=\sigma^2 .$$
• The sum-to-1 constraint:
$$\int_{-\infty}^{+\infty} \mathcal{N}\left(x \mid \mu, \sigma^2\right) d x=1 .$$
The Gaussian distribution is the well-known unimodal bell-shaped curve. As shown in Figure 2.10, the first parameter $\mu$ equals the mean, indicating the center of the distribution, whereas the second parameter $\sigma$ equals the standard deviation, indicating the spread of the distribution.
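As an illustrative sketch (not part of the text), the properties above can be checked numerically; the values of $\mu$ and $\sigma$ below are arbitrary example parameters:

```python
import numpy as np
from scipy.integrate import quad

mu, sigma = 1.5, 0.8  # arbitrary example parameters

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian density N(x | mu, sigma^2)."""
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

# Sum-to-1 constraint: the density integrates to 1 over the real line.
total, _ = quad(gaussian_pdf, -np.inf, np.inf, args=(mu, sigma))

# Mean and variance recovered by numerical integration match mu and sigma^2.
mean, _ = quad(lambda x: x * gaussian_pdf(x, mu, sigma), -np.inf, np.inf)
var, _ = quad(lambda x: (x - mean) ** 2 * gaussian_pdf(x, mu, sigma), -np.inf, np.inf)
```

Numerical integration here is only a sanity check; in practice one would use `scipy.stats.norm` directly.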

## Multivariate Gaussian Distribution

The multivariate Gaussian distribution extends the univariate Gaussian distribution to represent a joint distribution of multiple continuous random variables $\{X_1, X_2, \cdots, X_n\}$, each of which can take any real value in $\mathbb{R}$. If we arrange these random variables as an $n$-dimensional random vector, the multivariate Gaussian distribution takes the following compact form:
$$\mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma})=\frac{1}{\sqrt{(2 \pi)^n|\boldsymbol{\Sigma}|}} e^{-\frac{(\mathbf{x}-\boldsymbol{\mu})^{\top} \boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu})}{2}},$$
where the vector $\boldsymbol{\mu} \in \mathbb{R}^n$ and the symmetric matrix $\boldsymbol{\Sigma} \in \mathbb{R}^{n \times n}$ are the two parameters of the distribution. Note that the exponent in the multivariate Gaussian distribution is computed as follows:
$$\left[(\mathbf{x}-\boldsymbol{\mu})^{\top}\right]_{1 \times n}\left[\boldsymbol{\Sigma}^{-1}\right]_{n \times n}\left[\mathbf{x}-\boldsymbol{\mu}\right]_{n \times 1}=[\cdot]_{1 \times 1} .$$
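The shape bookkeeping of this quadratic form can be verified directly. A minimal sketch, assuming an arbitrary dimension $n = 3$ and a covariance matrix made positive definite by construction:

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
x = rng.normal(size=n)
mu = np.zeros(n)

# A @ A.T is symmetric positive semidefinite; adding n*I makes it positive definite.
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)

d = x - mu                                   # shape (n,): the vector x - mu
quad_form = d @ np.linalg.inv(Sigma) @ d     # (1 x n)(n x n)(n x 1) -> a single scalar
```

Since $\boldsymbol{\Sigma}$ is positive definite, so is $\boldsymbol{\Sigma}^{-1}$, and the quadratic form is strictly positive whenever $\mathbf{x} \neq \boldsymbol{\mu}$.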
We can summarize some key properties for the multivariate Gaussian distribution as follows:

• Parameters: $\boldsymbol{\mu} \in \mathbb{R}^n$; $\boldsymbol{\Sigma} \in \mathbb{R}^{n \times n}$ is symmetric and positive definite (and hence invertible).
• Support: The domain of all random variables is $\mathbf{x} \in \mathbb{R}^n$.
• Mean vector and covariance matrix:
$$\mathbb{E}[\mathbf{x}]=\boldsymbol{\mu} \text { and } \operatorname{cov}(\mathbf{x}, \mathbf{x})=\boldsymbol{\Sigma} \text {. }$$
Therefore, the first parameter $\boldsymbol{\mu}$ is called the mean vector, and the second parameter $\boldsymbol{\Sigma}$ is called the covariance matrix. The inverse covariance matrix $\boldsymbol{\Sigma}^{-1}$ is often called the precision matrix.
• The sum-to-1 constraint:
$$\int \mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}, \mathbf{\Sigma}) d \mathbf{x}=1$$
• Any marginal distribution or conditional distribution of these $n$ random variables is also Gaussian. (See Exercise Q2.8.)
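The properties above can be illustrated with a small bivariate sketch (the values of $\boldsymbol{\mu}$, $\boldsymbol{\Sigma}$, and the evaluation point are arbitrary): the density formula is compared against `scipy.stats.multivariate_normal`, and the marginal of $X_1$, obtained by integrating the joint density over $x_2$, is checked to be the Gaussian $\mathcal{N}(\mu_1, \Sigma_{11})$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import multivariate_normal, norm

def mvn_pdf(x, mu, Sigma):
    """Multivariate Gaussian density N(x | mu, Sigma), following the formula above."""
    n = len(mu)
    d = x - mu
    norm_const = np.sqrt((2 * np.pi) ** n * np.linalg.det(Sigma))
    return np.exp(-0.5 * (d @ np.linalg.solve(Sigma, d))) / norm_const

mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
x = np.array([0.5, -1.5])

# Our formula agrees with SciPy's reference implementation.
ours = mvn_pdf(x, mu, Sigma)
ref = multivariate_normal(mean=mu, cov=Sigma).pdf(x)

# Marginalizing out x2 numerically recovers the Gaussian marginal N(mu[0], Sigma[0, 0]).
marg_num, _ = quad(lambda x2: mvn_pdf(np.array([0.5, x2]), mu, Sigma), -np.inf, np.inf)
marg = norm(loc=mu[0], scale=np.sqrt(Sigma[0, 0])).pdf(0.5)
```

The last two lines demonstrate the closure property stated above: the marginal of a multivariate Gaussian is itself Gaussian, with mean and variance read directly off $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$.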

