I would like to create an autocorrelated time series using an AR model. However, when I look at my simulated data, the lag effects do not seem to have the impact I expect.

```
set.seed(123)
x <- arima.sim(model = list(ar = .9, .9, .9), n = 100) # autocorrelated
y <- arima.sim(model = list(ar = .01), n = 100) # not very autocorrelated
z <- arima.sim(model = list(ar = .9, -.1, .9, -.1), n = 100) # weirdly autocorrelated
plot.ts(cbind(x,y,z))
```
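As a sanity check on what `arima.sim` actually receives from these model lists (a sketch; I'm assuming the function only reads the `ar` component of the list):

```
# In R, list(ar = .9, .9, .9) names only the FIRST element "ar";
# the other two values are left unnamed.
m <- list(ar = .9, .9, .9)
names(m)  # "ar" ""   ""
m$ar      # 0.9 -> only a single AR(1) coefficient reaches arima.sim

# Multiple AR coefficients would need to be a single vector, e.g.:
# arima.sim(model = list(ar = c(.5, .3, .1)), n = 100)
# (illustrative coefficients; ar = c(.9, .9, .9) would be rejected
#  as non-stationary because the coefficients sum to more than 1)
```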

Good! My time series seem strongly autocorrelated for x, very noisy/random for y, and somewhere in between for z.

```
par(mfrow = c(1, 3))
acf(x)
acf(y)
acf(z)
```

I'm surprised here, mostly by the last graph. I gave the even lags only very weak terms in my AR model for 'z', yet acf() seems to find a temporal pattern very similar to that of my first 'x' time series.
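For comparison, if only a single lag-1 coefficient of .9 were in play, the theoretical ACF would decay geometrically as .9^k, which looks a lot like what I see for both 'x' and 'z' (a sketch using base R's ARMAacf()):

```
# Theoretical ACF of an AR(1) with phi = 0.9: rho(k) = 0.9^k
round(ARMAacf(ar = .9, lag.max = 5), 3)
# lags 0..5: 1.000 0.900 0.810 0.729 0.656 0.590
```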

```
par(mfrow = c(1, 3))
pacf(x)
pacf(y)
pacf(z)
```

The pacf() function finds strong temporal correlation at the first lag for time series 'x' and then almost none. I suspect this is because the strong lag-1 autocorrelation is soaking up so much of the year-to-year variance that there is nothing left over for the second- and third-year lag effects to explain. Does anyone know if this is a reasonable way to think about it?
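My reading (which may be off) at least matches the theoretical PACF of a pure AR(1): everything past lag 1 is exactly zero, since lag 1 already accounts for the dependence at longer lags:

```
# Theoretical PACF of an AR(1) with phi = 0.9 cuts off after lag 1
round(ARMAacf(ar = .9, lag.max = 5, pacf = TRUE), 3)
# lags 1..5: 0.9 0.0 0.0 0.0 0.0
```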

I tried to test this with my 'z' model, where I created strong correlation between the present year (t) and the previous year (t-1), weak correlation between the previous year (t-1) and the second previous year (t-2), and strong correlation again between the second previous year (t-2) and the third previous year (t-3), and yet pacf() still does not seem to notice any relationship past the first year.
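A follow-up sketch (the coefficients c(.5, .1, .3) are just an illustrative stationary set, not my original values): when the lag coefficients are passed as one vector via c(), the partial autocorrelations do extend past the first lag, and the lag-3 partial equals the lag-3 coefficient:

```
set.seed(123)
# A genuine AR(3): strong lag 1, weak lag 2, strong lag 3
z2 <- arima.sim(model = list(ar = c(.5, .1, .3)), n = 1000)
pacf(z2)  # spikes visible out to lag 3

# Theoretical PACF confirms the cutoff after lag 3
round(ARMAacf(ar = c(.5, .1, .3), lag.max = 5, pacf = TRUE), 3)
# the lag-3 value is .3 (the lag-3 coefficient); lags 4 and 5 are zero
```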