# Statistics Homework Help | Stochastic Process | MATH3801

## Statistics Homework Help | Stochastic Process | Markov chains with covariate information

Markov chains are a common model for discrete, longitudinal study data, where the state of each subject changes over time according to a Markov chain. Usually, covariate information is available for each subject and two situations have been considered. First, when the evolution of the subjects can be modeled hierarchically by a set of related, homogeneous Markov chains and, second, when the parameters of the chains are allowed to vary (slowly) over time.

Consider the first case. Suppose that we have $M$ subjects and let $\mathbf{x}_m = \left(x_{m,0}, \ldots, x_{m,n_m}\right)$, where $x_{mj} \in \{1, \ldots, K\}$, be the sequence of observed states for subject $m$; the initial state is assumed known. Assume that covariate information $\mathbf{c}_m$ is available for individual $m$. Then, the transition probabilities $p_{mij} = P\left(X_{m,t+1} = j \mid X_{m,t} = i\right)$ are assumed to follow a polytomous regression model
$$\log \frac{p_{mij}}{p_{miK}} = \mathbf{c}_m \boldsymbol{\theta}_{ij} \tag{3.8}$$
so that $p_{mij} \propto \exp\left(\mathbf{c}_m \boldsymbol{\theta}_{ij}\right)$, where $\boldsymbol{\theta}_{ij}$ are unknown regression parameters. The regression parameters may be modeled with standard, hierarchical normal-Wishart prior distributions. Given the observed data, the logistic regression structure implies that the relevant conditional posterior distributions are log-concave, so standard Gibbs sampling techniques can be used to sample from the posterior distributions. Often, the complete paths will not be observed for all subjects. In such cases, the basic algorithm can be modified by conditioning on the missing data. Given the complete data, the Gibbs sampler for $\boldsymbol{\theta}$ proceeds as above. Given $\boldsymbol{\theta}$, the transition matrices for each subject are known, so the missing data can be sampled as in Section 3.3.7. In the second case, (3.8) can be extended so that
$$\log \frac{p_{mijn}}{p_{miKn}} = \mathbf{c}_m \boldsymbol{\theta}_{ijn},$$
where $\boldsymbol{\theta}_{in}$ develops over time according to a state space model, for example,
$$\boldsymbol{\theta}_{in} = \boldsymbol{\theta}_{i(n-1)} + \boldsymbol{\epsilon}_n$$
and $\boldsymbol{\epsilon}_n$ is a noise term. Again, using the standard normal-Wishart model, inference follows easily.
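As a concrete illustration, the subject-specific transition matrix implied by the polytomous regression above can be computed with a softmax over the linear predictors, fixing the reference category $K$'s predictor at zero. This is a minimal sketch with made-up dimensions; the function name `transition_matrix` and the layout of `theta` are our assumptions, not the text's notation.

```python
import numpy as np

def transition_matrix(c_m, theta, K):
    """Build the K x K transition matrix for one subject from its
    covariate vector c_m and regression coefficients theta, following
    log(p_ij / p_iK) = c_m . theta_ij with theta_iK fixed at zero.

    theta has shape (K, K-1, d): origin state i, destination j < K,
    covariate dimension d (an assumed layout for this sketch).
    """
    P = np.empty((K, K))
    for i in range(K):
        # linear predictors for destinations 1..K-1; reference class K gets 0
        eta = np.array([c_m @ theta[i, j] for j in range(K - 1)] + [0.0])
        eta -= eta.max()              # subtract max for numerical stability
        w = np.exp(eta)
        P[i] = w / w.sum()            # p_mij proportional to exp(c_m theta_ij)
    return P

rng = np.random.default_rng(0)
K, d = 3, 2
theta = rng.normal(size=(K, K - 1, d))
P = transition_matrix(rng.normal(size=d), theta, K)
# each row of P is a probability distribution over the next state
```

Within a Gibbs sampler, this map from $(\mathbf{c}_m, \boldsymbol{\theta})$ to transition matrices is what makes the missing-path sampling step possible once $\boldsymbol{\theta}$ is drawn.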

## Statistics Homework Help | Stochastic Process | Modeling the time series of wind directions

For these data, there is no particular evidence of any seasonal effects, as rose plots for different months of the year show very similar forms. This suggests that stationary time series models might be considered. We consider the following four possibilities:

1. An independent multinomial model, assuming that the wind direction, $\theta_n$ on day $n$ is independent of the wind directions on previous days so that $P\left(\theta_n=i \mid \pi\right)=\pi_i$ for $i=0, \ldots, 15$, where 0 represents North, $1 \mathrm{NNE}, 2 \mathrm{NE}, \ldots$, and $15 \mathrm{NNW}$.
2. A first-order Markov chain $P\left(\theta_n=j \mid \theta_{n-1}=i, \mathbf{P}\right)=p_{i j}$ for $i, j=0, \ldots, 15$
3. A parametric, wrapped Poisson HMM.
4. A semiparametric, multinomial HMM.
The first two models have been described previously. The following subsections outline the wrapped Poisson HMM and the multinomial HMM.
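Maximum-likelihood fitting of the first two models reduces to simple counting: relative frequencies for the multinomial model and row-normalized transition counts for the Markov chain. A minimal sketch (the helper name `fit_models` is ours, not the text's):

```python
import numpy as np

def fit_models(seq, K=16):
    """ML estimates for the two simplest wind-direction models:
    the independent multinomial (pi_i = relative frequency of state i)
    and the first-order Markov chain (p_ij = n_ij / n_i., where n_ij
    counts observed i -> j transitions)."""
    seq = np.asarray(seq)
    pi = np.bincount(seq, minlength=K) / len(seq)
    N = np.zeros((K, K))
    for a, b in zip(seq[:-1], seq[1:]):
        N[a, b] += 1
    rows = N.sum(axis=1, keepdims=True)
    # states never visited get a uniform row so P stays a valid transition matrix
    P = np.divide(N, rows, out=np.full((K, K), 1.0 / K), where=rows > 0)
    return pi, P

# toy three-state example
pi, P = fit_models([0, 1, 0, 1, 2, 0], K=3)
```

Comparing the log-likelihoods of the two fits gives a quick check on whether day-to-day dependence matters before moving to the HMMs.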
### The wrapped Poisson HMM and its inference
Before considering the wrapped Poisson HMM, we shall first consider inference for the rate parameter of a single wrapped Poisson distribution, $\theta \mid \lambda \sim \mathrm{WP}(k, \lambda)$, as defined in Appendix A. In our context, we shall assume $k=16$ to represent the 16 wind directions.

Note first that this model is equivalent to assuming that $Y \mid \lambda, Z=z \sim \operatorname{Po}(\lambda)$ where $Y=\theta+k Z$ and $Z=z \in \mathbb{Z}^{+}$is an unwrapping coefficient with probability
$$P(Z=z \mid \lambda)=\sum_{j=0}^{k-1} \frac{\lambda^{k z+j} e^{-\lambda}}{(k z+j) !} \quad z=0,1,2, \ldots$$
This implies that
$$P(Z=z \mid \lambda, \theta) \propto \frac{\lambda^{\theta+k z}}{(\theta+k z) !} .$$
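Numerically, both the wrapped Poisson pmf and the conditional distribution of the unwrapping coefficient are easiest to evaluate on the log scale, truncating the infinite sum over $z$. A small sketch under that assumption (the truncation point `zmax` and the function names are ours):

```python
import math

def wrapped_poisson_pmf(theta, lam, k=16, zmax=30):
    """P(theta | lam) for WP(k, lam): the Poisson mass accumulated over
    theta, theta + k, theta + 2k, ...; the sum over the unwrapping
    coefficient z is truncated at zmax, since the terms decay fast."""
    return sum(math.exp((theta + k * z) * math.log(lam) - lam
                        - math.lgamma(theta + k * z + 1))
               for z in range(zmax + 1))

def unwrap_posterior(theta, lam, k=16, zmax=30):
    """P(Z = z | lam, theta) proportional to lam^(theta + k z) / (theta + k z)!,
    normalized over z = 0, ..., zmax; computed in log space for stability."""
    logw = [(theta + k * z) * math.log(lam) - math.lgamma(theta + k * z + 1)
            for z in range(zmax + 1)]
    m = max(logw)
    w = [math.exp(lw - m) for lw in logw]
    s = sum(w)
    return [wi / s for wi in w]

# wrapping partitions the Poisson support, so the pmf sums to one over 0..k-1
total = sum(wrapped_poisson_pmf(t, 5.0) for t in range(16))
```

Sampling $Z$ from `unwrap_posterior` and then updating $\lambda$ given the unwrapped counts $Y = \theta + kZ$ is the natural data-augmentation step for this model.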

