State Space Representations of Time Series

What is a state space form or representation for a time series? We give the following definition.

Definition 8.2 A vector time series $\mathbf{Y}_t$ is said to have a state space form or representation if there is a state space model for $\mathbf{Y}_t$ as specified by Eqs. (8.1) and (8.2).

A great many time series can be represented in state space form, including all the processes generated by the SARIMA and VARMAX models of the previous chapters. Let us look at a few examples.

Example 8.2 (State Space Form of the MA(1) Model) Given the following MA(1) model
$$Y_t=\mu+\varepsilon_t+\theta \varepsilon_{t-1},$$
let the state variable be $X_{t+1}=\theta \varepsilon_t$; we then obtain the following representation of the MA(1) model:
$$\begin{aligned} X_{t+1} & =0 \cdot X_t+\theta \varepsilon_t, \\ Y_t & =\mu+X_t+\varepsilon_t . \end{aligned}$$
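As a quick numerical check of this representation, the sketch below simulates the MA(1) model directly and through its state space form with the same shock sequence; the two paths coincide. The parameter values and seed are illustrative, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, theta = 2.0, 0.5          # illustrative parameter values (assumed)
n = 200
eps = rng.normal(size=n + 1)  # eps[0] plays the role of epsilon_0

# Direct simulation of Y_t = mu + eps_t + theta * eps_{t-1}
y_direct = mu + eps[1:] + theta * eps[:-1]

# State space simulation: X_{t+1} = 0 * X_t + theta * eps_t, Y_t = mu + X_t + eps_t
x = theta * eps[0]            # X_1 = theta * eps_0
y_ss = np.empty(n)
for t in range(n):
    e = eps[t + 1]
    y_ss[t] = mu + x + e      # observation equation
    x = theta * e             # state transition (0 * X_t + theta * eps_t)

assert np.allclose(y_direct, y_ss)
```

The state carries exactly the lagged information ($\theta\varepsilon_{t-1}$) that the observation equation needs, which is why the transition coefficient on $X_t$ is zero.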
Example 8.3 (State Space Form of the ARMA(1,1) Model) Consider the following causal and invertible ARMA $(1,1)$ model
$$Y_t=\varphi_0+\varphi Y_{t-1}+\varepsilon_t+\theta \varepsilon_{t-1}. \tag{8.3}$$
Let the state variable be $X_t=Y_t-\varepsilon_t$. Then we have
$$\begin{aligned} X_{t+1} & =\varphi_0+\varphi X_t+(\theta+\varphi) \varepsilon_t, \\ Y_t & =X_t+\varepsilon_t . \end{aligned}$$
These two equations form a state space representation for the $\operatorname{ARMA}(1,1)$ model (8.3) and are equivalent to it.
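The equivalence can again be verified numerically. The sketch below simulates the ARMA(1,1) recursion directly and via the state space form $X_t = Y_t - \varepsilon_t$; parameter values, the seed, and the zero initial condition $Y_0 = 0$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
phi0, phi, theta = 0.3, 0.7, 0.4   # illustrative causal/invertible values (assumed)
n = 300
eps = rng.normal(size=n + 1)       # eps[0] plays the role of epsilon_0

# Direct simulation of Y_t = phi0 + phi*Y_{t-1} + eps_t + theta*eps_{t-1}, Y_0 = 0
y_direct = np.empty(n)
y_prev = 0.0
for t in range(n):
    y_direct[t] = phi0 + phi * y_prev + eps[t + 1] + theta * eps[t]
    y_prev = y_direct[t]

# State space form: X_{t+1} = phi0 + phi*X_t + (theta+phi)*eps_t, Y_t = X_t + eps_t
x = phi0 + theta * eps[0]          # X_1 = Y_1 - eps_1 with Y_0 = 0
y_ss = np.empty(n)
for t in range(n):
    e = eps[t + 1]
    y_ss[t] = x + e                          # observation equation
    x = phi0 + phi * x + (theta + phi) * e   # state transition

assert np.allclose(y_direct, y_ss)
```

Substituting $Y_t = X_t + \varepsilon_t$ into the ARMA recursion reproduces the transition equation, which is exactly what the matching simulated paths confirm.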

Kalman Recursions

In this section, for simplicity and to highlight the key ideas, we do not include the control variables in the state space model (8.1)-(8.2). That is, the state space model is now as follows:
$$\begin{gathered} \mathbf{X}_{t+1}=\mathbf{c}_t+\mathbf{F}_t \mathbf{X}_t+\mathbf{R}_t \boldsymbol{\eta}_t, \quad \boldsymbol{\eta}_t \sim \mathrm{N}\left(\mathbf{0}, \mathbf{Q}_t\right), \\ \mathbf{Y}_t=\mathbf{d}_t+\mathbf{Z}_t \mathbf{X}_t+\boldsymbol{\varepsilon}_t, \quad \boldsymbol{\varepsilon}_t \sim \mathrm{N}\left(\mathbf{0}, \mathbf{H}_t\right), \\ \mathbf{X}_1 \sim \mathrm{N}\left(\mathbf{a}_1, \mathbf{P}_1\right), \end{gathered}$$
which is adequate for most purposes. Once a state space model is specified for the observations $\mathbf{Y}_{1:t}=\left\{\mathbf{Y}_1, \mathbf{Y}_2, \cdots, \mathbf{Y}_t\right\}$, we can consider a number of important algorithms and their applications. These algorithms are all recursive and are often called the Kalman recursions, after the seminal papers by Kalman (1960) and Kalman and Bucy (1961). There are three fundamental problems associated with the state space model (8.7)-(8.9):

• Kalman filtering: Given $\mathbf{Y}_{1: t}$, to recover or estimate $\mathbf{X}_t$
• Kalman forecasting: Given $\mathbf{Y}_{1: t}$, to forecast $\mathbf{X}_s$, where $s>t$
• Kalman smoothing: Given $\mathbf{Y}_{1: t}$, to estimate $\mathbf{X}_s$, where $s<t$

Let $\mathbf{X}_{h \mid j}=\mathrm{E}\left(\mathbf{X}_h \mid \mathbf{Y}_{1: j}\right)$ and $\mathbf{P}_{h \mid j}=\operatorname{Var}\left(\mathbf{X}_h \mid \mathbf{Y}_{1: j}\right)=\mathrm{E}\left[\left(\mathbf{X}_h-\mathbf{X}_{h \mid j}\right)\left(\mathbf{X}_h-\mathbf{X}_{h \mid j}\right)^{\prime} \mid \mathbf{Y}_{1: j}\right]$ be, respectively, the conditional mean vector (viz., an estimate) and covariance matrix of $\mathbf{X}_h$ given $\mathbf{Y}_{1: j}$, where we define the starting values $\mathbf{X}_{1 \mid 0}=\mathbf{a}_1$ and $\mathbf{P}_{1 \mid 0}=\mathbf{P}_1$ for $h=1, j=0$. Note that if $\mathbf{X}_{h \mid j}$ is viewed as the estimate of $\mathbf{X}_h$ given $\mathbf{Y}_{1: j}$, then $\mathbf{P}_{h \mid j}$ is the covariance matrix of the estimation error. Furthermore, let $\boldsymbol{v}_h=\mathbf{Y}_h-\mathbf{Z}_h \mathbf{X}_{h \mid h-1}-\mathbf{d}_h$ be the estimate of $\boldsymbol{\varepsilon}_h$ and $\mathbf{V}_h=\mathbf{Z}_h \mathbf{P}_{h \mid h-1} \mathbf{Z}_h^{\prime}+\mathbf{H}_h$ the conditional covariance matrix of $\boldsymbol{v}_h$ given $\mathbf{Y}_{1:(h-1)}$. The following three theorems give a set of Kalman recursions that solve the three fundamental problems above.

Theorem 8.1 (Kalman Filtering) Kalman filtering has the recursive algorithm
$$\begin{aligned} \mathbf{X}_{t \mid t} & =\mathbf{X}_{t \mid t-1}+\mathbf{P}_{t \mid t-1} \mathbf{Z}_t^{\prime} \mathbf{V}_t^{-1} \boldsymbol{v}_t, \\ \mathbf{P}_{t \mid t} & =\mathbf{P}_{t \mid t-1}-\mathbf{P}_{t \mid t-1} \mathbf{Z}_t^{\prime} \mathbf{V}_t^{-1} \mathbf{Z}_t \mathbf{P}_{t \mid t-1}, \end{aligned}$$
where $\mathbf{X}_{t \mid t}$ are the filtered estimates and $\mathbf{P}_{t \mid t}$ are the corresponding error covariance matrices. Moreover, the conditional distribution of $\mathbf{X}_t$ given $\mathbf{Y}_{1: t}$ is $\mathrm{N}\left(\mathbf{X}_{t \mid t}, \mathbf{P}_{t \mid t}\right)$.
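The filtering update above can be sketched in code. The update step follows Theorem 8.1; to advance the recursion the sketch also uses the standard one-step prediction $\mathbf{X}_{t+1 \mid t}=\mathbf{c}_t+\mathbf{F}_t \mathbf{X}_{t \mid t}$ and $\mathbf{P}_{t+1 \mid t}=\mathbf{F}_t \mathbf{P}_{t \mid t} \mathbf{F}_t^{\prime}+\mathbf{R}_t \mathbf{Q}_t \mathbf{R}_t^{\prime}$. The system matrices are assumed constant over $t$ for brevity, and the local level example at the end (with its parameter values and seed) is illustrative, not from the text.

```python
import numpy as np

def kalman_filter(y, c, F, R, Q, d, Z, H, a1, P1):
    """Kalman filter for a time-invariant instance of model (8.7)-(8.9)."""
    n = len(y)
    x_pred, P_pred = a1.copy(), P1.copy()   # X_{1|0} = a_1, P_{1|0} = P_1
    x_filt = np.empty((n, a1.shape[0]))
    for t in range(n):
        # Innovation v_t and its conditional covariance V_t
        v = y[t] - Z @ x_pred - d
        V = Z @ P_pred @ Z.T + H
        K = P_pred @ Z.T @ np.linalg.inv(V)
        # Filtering update (Theorem 8.1)
        x_f = x_pred + K @ v
        P_f = P_pred - K @ Z @ P_pred
        x_filt[t] = x_f
        # One-step prediction for the next step of the recursion
        x_pred = c + F @ x_f
        P_pred = F @ P_f @ F.T + R @ Q @ R.T
    return x_filt

# Local level example: X_{t+1} = X_t + eta_t, Y_t = X_t + eps_t
rng = np.random.default_rng(2)
x_true = np.cumsum(rng.normal(scale=0.5, size=100))
y = (x_true + rng.normal(size=100)).reshape(-1, 1)
I = np.eye(1)
xf = kalman_filter(y, np.zeros(1), I, I, 0.25 * I, np.zeros(1), I, I,
                   np.zeros(1), 10.0 * I)
```

In this example the filtered estimates $\mathbf{X}_{t \mid t}$ track the latent level with a noticeably smaller error than the raw observations, as the theorem's conditional-distribution result suggests.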
