Searching for Additive Outliers in Nonstationary Time Series.
Recently, Vogelsang (1999) proposed a method to detect outliers which explicitly imposes the null hypothesis of a unit root. It works in an iterative fashion to select multiple outliers in a given series. We show, via simulations, that under the null hypothesis of no outliers, the test has the correct size in finite samples when used to detect a single outlier, but when applied iteratively to select multiple outliers it exhibits severe size distortions towards finding an excessive number of outliers. We show that this iterative method is incorrect and derive the appropriate limiting distribution of the test at each step of the search.
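To fix ideas, the following is a minimal sketch of an iterative additive-outlier (AO) search of the general kind described above, not the authors' or Vogelsang's exact procedure. It assumes the series has a unit root, so an AO at a given date appears in the first-differenced series as a one-period blip; each round scans all candidate dates for the largest absolute t-statistic on an AO dummy, removes the detected outlier, and repeats. The names (`ao_tstat`, `iterative_ao_search`) and the fixed `CRITICAL_VALUE` are illustrative placeholders; the paper's point is precisely that a single fixed critical value is not valid across iterations.

```python
import numpy as np

CRITICAL_VALUE = 3.5  # placeholder cutoff; not taken from the paper


def ao_tstat(dy, t):
    """Estimate and t-test an AO dummy at position t in the differenced series.

    Under a unit root, an additive outlier of size delta at date t+1 shows up
    in dy as +delta at index t and -delta at index t+1, so the regressor is
    the first difference of a point dummy.
    """
    n = dy.shape[0]
    x = np.zeros(n)
    x[t] = 1.0
    if t + 1 < n:
        x[t + 1] = -1.0
    delta = (x @ dy) / (x @ x)          # OLS slope, no intercept
    resid = dy - delta * x
    s2 = (resid @ resid) / (n - 1)      # residual variance
    se = np.sqrt(s2 / (x @ x))
    return delta, delta / se


def iterative_ao_search(y, max_outliers=5):
    """Repeatedly flag the date with the largest |t| until none exceeds the cutoff."""
    y = np.asarray(y, dtype=float).copy()
    found = []
    for _ in range(max_outliers):
        dy = np.diff(y)
        stats = [ao_tstat(dy, t) for t in range(len(dy))]
        tvals = np.array([abs(t) for _, t in stats])
        t_star = int(tvals.argmax())
        if tvals[t_star] < CRITICAL_VALUE:
            break
        delta, _ = stats[t_star]
        found.append((t_star + 1, delta))   # dy index t corresponds to y index t+1
        y[t_star + 1] -= delta              # strip the estimated outlier and rescan
    return found
```

As the abstract argues, comparing the maximal statistic against the same critical value at every iteration (as this sketch does) is what produces the spurious detections; the correct procedure requires the limiting distribution appropriate to each step of the search.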