UC Riverside Electronic Theses and Dissertations
Essays on Forecasting Financial Markets Using Decomposition, Constraints and Extreme Learning Machine

Abstract

Chapters 1 and 2 discuss how to use a decomposition model to make a density forecast of financial returns and how to improve this density forecast by imposing moment-matching constraints. The density forecast model is based on a decomposition of financial returns into the absolute return and the sign of the return. We also use the maximum entropy principle for the out-of-sample density forecast, subject to a constraint that matches the mean forecasts from the decomposition model and a simple regression model.
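For reference, the maximum entropy density forecast under a mean-matching constraint takes the standard exponential-tilting form below; this is a generic sketch in our own notation, not necessarily the dissertation's exact formulation:

```latex
f^{*}(y) \;=\; \frac{f(y)\, e^{\gamma y}}{\int f(s)\, e^{\gamma s}\, ds},
\qquad \text{with } \gamma \text{ chosen so that } \int y\, f^{*}(y)\, dy \;=\; \mu^{*},
```

where f is the baseline density forecast from the decomposition model and \mu^{*} is the mean forecast to be matched; among all densities with mean \mu^{*}, this f^{*} is the one closest to f in Kullback-Leibler divergence.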

In Chapter 1 (joint with Professor Tae-Hwy Lee), we show that when the mean forecast from the decomposition model deviates from the mean return, imposing the matching-mean-forecast constraint tilts the density forecast of the decomposition model and improves on the original decomposition density forecast. In Chapter 2 (joint with Professor Tae-Hwy Lee and Ru Zhang), we further improve the decomposition model by using dependent copula functions, and we show that the risk forecasts produced by the decomposition density forecast model are superior to RiskMetrics, giving higher coverage probability and lower predictive quantile loss in extreme large-loss events for monthly returns.
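To make the decomposition-with-copula construction concrete, here is a minimal simulation sketch in Python, assuming (purely for illustration) a Gaussian copula linking the sign and the absolute return, a direction probability of 0.52, and a lognormal marginal for the absolute return; none of these choices or parameter values come from the dissertation.

```python
import numpy as np
from scipy import stats

# Hypothetical sketch: draw (sign, |return|) with dependence through a
# Gaussian copula, then read a predictive quantile (Value-at-Risk) off the
# simulated return distribution. All parameters are placeholders.

rng = np.random.default_rng(1)
n, rho = 100_000, -0.3                     # copula correlation between sign and size

# Correlated uniforms from a bivariate normal (Gaussian copula).
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = stats.norm.cdf(z)

sign = np.where(u[:, 0] < 0.52, 1.0, -1.0)                      # P(r > 0) = 0.52
abs_r = stats.lognorm(s=0.5, scale=np.exp(-4.5)).ppf(u[:, 1])   # |r| marginal

returns = sign * abs_r                     # decomposition: r = sign(r) * |r|
var_99 = np.quantile(returns, 0.01)        # 1% predictive quantile (99% VaR)
print(f"99% VaR (simulated): {var_99:.4%}")
```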

Chapters 3 and 4 (joint with Professor Tae-Hwy Lee and Ru Zhang) deal with testing for nonlinearity in time series data using artificial neural networks (ANN). In Chapter 3, we find that the original Lee, White and Granger (LWG, 1993) test is sensitive to the randomly generated activation parameters because it considers a fairly small number (10 or 20) of random hidden-unit activations. To solve this problem, we simply increase the number of randomized hidden-unit activations to a very large number (e.g., 1000). We show that using many randomly generated activation parameters robustifies the performance of the ANN test when it is applied to real empirical data. This robustification is reliable and useful in practice, and comes essentially for free, since increasing the number of random activations is almost costless given today's computer technology. In Chapter 4, we further consider different types of regularization of the dimensionality, such as principal component analysis (PCA), Lasso, Pretest, and partial least squares (PLS), among others. We demonstrate that while supervised regularization methods such as Lasso, Pretest, and PLS may be useful for forecasting, they should not be used for testing, because the supervised regularization creates a post-selection inference (PoSI) problem. Our Monte Carlo simulation shows that the PoSI problem is especially severe with PLS and Pretest, while it seems relatively mild or even negligible with Lasso.
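To illustrate the Chapter 3 idea, below is a minimal sketch of an LWG-style neglected-nonlinearity test using many (here 1000) randomly generated hidden-unit activations; the data-generating process, the logistic activation, the uniform draw range for the activation parameters, and the use of 10 principal components are all illustrative assumptions, not the chapters' exact designs.

```python
import numpy as np
from scipy import stats

# Hypothetical sketch of an LWG-type test: regress the residuals from a
# linear model on many randomly activated hidden units (reduced to a few
# principal components) and form a TR^2 chi-squared statistic.

rng = np.random.default_rng(2)
T, k, n_hidden, n_pc = 500, 2, 1000, 10

X = rng.standard_normal((T, k))
y = X @ np.array([1.0, -0.5]) + 0.3 * np.tanh(2.0 * X[:, 0]) + rng.standard_normal(T)

# Step 1: residuals from the linear model.
Xc = np.column_stack([np.ones(T), X])
beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
resid = y - Xc @ beta

# Step 2: many random logistic hidden-unit activations psi(X'gamma).
gammas = rng.uniform(-2.0, 2.0, size=(k + 1, n_hidden))
H = 1.0 / (1.0 + np.exp(-Xc @ gammas))

# Step 3: leading principal components of the activations, to avoid
# near-collinearity among 1000 highly correlated hidden units.
H_c = H - H.mean(axis=0)
_, _, Vt = np.linalg.svd(H_c, full_matrices=False)
P = H_c @ Vt[:n_pc].T

# Step 4: TR^2 from regressing the residuals on (X, components);
# asymptotically chi-squared with n_pc degrees of freedom under linearity.
Z = np.column_stack([Xc, P])
g, *_ = np.linalg.lstsq(Z, resid, rcond=None)
stat = T * (1.0 - np.sum((resid - Z @ g) ** 2) / np.sum(resid ** 2))
print(f"TR^2 = {stat:.2f}, p-value = {stats.chi2.sf(stat, df=n_pc):.4f}")
```

To illustrate the Chapter 4 point, the following small Monte Carlo sketch mimics a Pretest-like supervised selection: it picks the single activation most correlated with the residuals and then tests it as if it had been fixed in advance, which tends to over-reject at the nominal 5% level. Again, every design choice here is a simplified stand-in for the chapter's actual simulations.

```python
import numpy as np
from scipy import stats

# Hypothetical PoSI sketch: under a linear null, a supervised (Pretest-like)
# selection step followed by a naive chi-squared test over-rejects, because
# the test ignores that the regressor was chosen using the residuals.

rng = np.random.default_rng(3)
T, n_hidden, reps, rejections = 200, 50, 500, 0

for _ in range(reps):
    x = rng.standard_normal(T)
    y = 1.0 + 0.5 * x + rng.standard_normal(T)   # linear DGP: null is true

    Xc = np.column_stack([np.ones(T), x])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta

    gammas = rng.uniform(-2.0, 2.0, size=(2, n_hidden))
    H = np.tanh(Xc @ gammas)

    # Supervised selection: keep the activation most correlated
    # (up to scale) with the residuals.
    h = H[:, np.argmax(np.abs(H.T @ resid))]

    # Naive TR^2 test with df=1, ignoring the selection step.
    Z = np.column_stack([Xc, h])
    g, *_ = np.linalg.lstsq(Z, resid, rcond=None)
    r2 = 1.0 - np.sum((resid - Z @ g) ** 2) / np.sum(resid ** 2)
    rejections += (T * r2) > stats.chi2.ppf(0.95, df=1)

print(f"empirical size at nominal 5%: {rejections / reps:.1%}")
```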
