# Time-Series Analysis (STAT3040)

## FRACTIONAL DIFFERENCING AND LONG MEMORY

5.23 Our analysis has so far only considered cases where the order of differencing, $d$, is either zero, one, or possibly two. Concentrating on the first two cases, if $x_t \sim I(1)$ then its ACF declines linearly, whereas if $x_t \sim I(0)$ its ACF exhibits an exponential decline, so that observations far apart may be assumed to be independent, or at least nearly so. Many empirically observed time series, however, although appearing to satisfy the assumption of stationarity (perhaps after differencing), nevertheless seem to exhibit some dependence between distant observations that, although small, is by no means negligible. This may be termed long range persistence or dependence, although the term long memory is now popular. ${ }^6$

Such series have particularly been found in hydrology, where the long-range persistence of river flows is known as the Hurst effect (see, e.g., Mandelbrot and Wallis, 1969; Hosking, 1984), but many financial time series also exhibit similar characteristics of extremely long persistence. This may be characterized as a tendency for large values to be followed by further large values of the same sign, in such a way that the observations appear to go through a succession of “cycles,” including long cycles whose length is comparable to the total sample size.
5.24 The class of ARIMA processes may be extended to model this type of long-range persistence by relaxing the restriction to just integer values of $d$, so allowing fractional differencing within the class of AR-fractionally integrated-MA (ARFIMA) processes. This notion of fractional differencing/integration seems to have been proposed independently by Granger and Joyeux (1980) and Hosking (1981) and is made operational by considering the binomial series expansion of $\nabla^d$ for any real $d \geqslant-1$:
$$\begin{aligned} \nabla^d &=(1-B)^d=\sum_{k=0}^{\infty} \frac{d!}{(d-k)!\,k!}(-B)^k \\ &=1-d B+\frac{d(d-1)}{2!} B^2-\frac{d(d-1)(d-2)}{3!} B^3+\cdots \end{aligned}$$

With this expansion we may define the ARFIMA $(0, d, 0)$ process as
$$\nabla^d x_t=\left(1-\pi_1 B-\pi_2 B^2-\cdots\right) x_t=a_t$$
where, using the gamma function $\Gamma(n)=(n-1)!$, the $\pi$-weights are given by
$$\pi_j=\frac{\Gamma(j-d)}{\Gamma(-d) \Gamma(j+1)}$$
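As an illustrative check (the code, including the function name `pi_weights`, is my own sketch and not from the text), the coefficients of $B^j$ in the expansion of $(1-B)^d$ can be computed either directly from the gamma-function expression or by the recursion $\pi_j=\pi_{j-1}(j-1-d)/j$, which follows from $\Gamma(j-d)=(j-1-d)\,\Gamma(j-1-d)$:

```python
from math import gamma

def pi_weights(d, n):
    """First n coefficients of B^j in the binomial expansion of (1 - B)^d,
    i.e. pi_j = Gamma(j - d) / (Gamma(-d) * Gamma(j + 1)).

    Computed by the recursion pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - d) / j,
    which avoids evaluating the gamma function at large arguments."""
    w = [1.0]
    for j in range(1, n):
        w.append(w[-1] * (j - 1 - d) / j)
    return w

# Cross-check the recursion against the direct gamma-function formula
d = 0.4
recursive = pi_weights(d, 6)
direct = [gamma(j - d) / (gamma(-d) * gamma(j + 1)) for j in range(1, 6)]
```

For $d=0.4$ the first few weights are $1,\,-0.4,\,-0.12,\ldots$, matching the terms $1,\,-dB,\,+\frac{d(d-1)}{2!}B^2,\ldots$ of the binomial series; for fractional $d$ the weights decay hyperbolically rather than being truncated at a finite lag, which is the source of the long-memory behavior.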

## TESTING FOR FRACTIONAL DIFFERENCING

5.28 The “classic” approach to detecting the presence of long memory in a time series is to use the range over standard deviation or rescaled range $(R / S)$ statistic. This was originally developed by Hurst (1951) when studying river discharges and a revised form was later proposed in an economic context by Mandelbrot (1972). It is defined as the range of partial sums of deviations of a time series from its mean, rescaled by its standard deviation, i.e.,
$$R_0=\hat{\sigma}_0^{-1}\left[\max_{1 \leq i \leq T} \sum_{t=1}^i\left(x_t-\bar{x}\right)-\min_{1 \leq i \leq T} \sum_{t=1}^i\left(x_t-\bar{x}\right)\right] \quad \hat{\sigma}_0^2=T^{-1} \sum_{t=1}^T\left(x_t-\bar{x}\right)^2$$
The first term in brackets is the maximum of the partial sums of the first $i$ deviations of $x_t$ from the sample mean. Since the sum of all $T$ deviations of the $x_t \mathrm{~s}$ from their mean is zero, this maximum is always nonnegative. The second term is the minimum of the same sequence of partial sums, and hence is always nonpositive. The difference between the two quantities, called the “range” for obvious reasons, is therefore always nonnegative, so that $R_0 \geq 0$.
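As a minimal plain-Python illustration of this definition (the function name and implementation are my own, not from the source), $R_0$ can be computed directly from the partial sums of deviations:

```python
from itertools import accumulate
from math import sqrt

def rescaled_range(x):
    """Classical R/S statistic R_0: the range of the partial sums of
    deviations from the sample mean, rescaled by the standard deviation
    sigma_0 (computed with divisor T, as in the text)."""
    T = len(x)
    xbar = sum(x) / T
    # Partial sums of the first i deviations, i = 1, ..., T
    partial = list(accumulate(xi - xbar for xi in x))
    # The last partial sum is zero, so max(partial) >= 0 and min(partial) <= 0,
    # making the range nonnegative, as noted in the text
    rng = max(partial) - min(partial)
    sigma0 = sqrt(sum((xi - xbar) ** 2 for xi in x) / T)
    return rng / sigma0
```

For example, `rescaled_range([1, 2, 3, 4])` gives the range $2$ divided by $\hat{\sigma}_0=\sqrt{1.25}$.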
5.29 Although it has long been established that the $R / S$ statistic is certainly able to detect long-range dependence, it is nevertheless sensitive to short-run influences. Consequently, any incompatibility between the data and the predicted behavior of the $R / S$ statistic under the null of no long-run dependence need not come from long memory, but may merely be a symptom of short-run autocorrelation.

The $R / S$ statistic was, thus, modified by Lo (1991), who incorporated short-run dependence into the estimator of the standard deviation, replacing (5.23) with
$$R_q=\hat{\sigma}_q^{-1}\left[\max_{1 \leq i \leq T} \sum_{t=1}^i\left(x_t-\bar{x}\right)-\min_{1 \leq i \leq T} \sum_{t=1}^i\left(x_t-\bar{x}\right)\right]$$
where
$$\hat{\sigma}_q^2=\hat{\sigma}_0^2\left(1+\frac{2}{T} \sum_{j=1}^{q} w_{qj} r_j\right) \quad w_{qj}=1-\frac{j}{q+1}, \quad q<T$$
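A sketch of Lo's modified statistic, with $\hat{\sigma}_q^2$ written in the equivalent Bartlett-weighted autocovariance form used in Lo (1991); the function name and implementation details are illustrative assumptions, not from the source:

```python
from itertools import accumulate
from math import sqrt

def lo_modified_rs(x, q):
    """Lo (1991) modified rescaled-range statistic R_q.

    The numerator is the same range of partial sums as in R_0; the
    denominator replaces sigma_0 with a long-run standard deviation that
    adds sample autocovariances up to lag q, downweighted by the Bartlett
    weights w_qj = 1 - j/(q+1), so that short-run autocorrelation does
    not masquerade as long memory."""
    T = len(x)
    assert 0 <= q < T
    xbar = sum(x) / T
    dev = [xi - xbar for xi in x]
    partial = list(accumulate(dev))
    rng = max(partial) - min(partial)      # same numerator as R_0
    var = sum(d * d for d in dev) / T      # sigma_0^2
    for j in range(1, q + 1):
        w = 1 - j / (q + 1)                # Bartlett weight w_qj
        acov = sum(dev[t] * dev[t - j] for t in range(j, T)) / T
        var += 2 * w * acov
    return rng / sqrt(var)
```

Setting $q=0$ recovers the classical statistic $R_0$, since no autocovariance terms are added to the variance.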
