Refitted Cross-Validation in Ultra High Dimensional Regression
| Field | Value |
|---|---|
| Content Provider | Semantic Scholar |
| Author | Fan, Jianqing; Hansen, Christian |
| Copyright Year | 2010 |
| Abstract | This paper reviews Granger's contribution to the literature on aggregation and extends it by considering the case of large linear dynamic panels in which various interconnections between individual units are allowed. We notably permit distributed lags of all micro units to enter the individual micro relations, relax the independence assumption on the micro distributed-lag coefficients, and allow a general pattern of cross-section dependence of the micro innovations, which can be either strong or weak. Using Pesaran's (2003) forecasting approach to derive the optimal aggregate model, the paper obtains a factor-augmented VAR in the N cross-section units. The paper also discusses the aggregation error in this set-up, identifies some of the distributional features of the micro parameters from aggregate relations, and proves Granger's (1980) conjecture about the long-memory properties of aggregate variables from a large-scale dynamic econometric model. Monte Carlo experiments assess how these aggregate functions perform in small samples. "Selecting the Correct Number of Factors in Approximate Factor Models: The Large Panel Case with Bridge Estimators", Mehmet Caner (North Carolina State University, USA). Abstract: This paper proposes Bridge estimators to correctly select the number of factors in an approximate factor model when there are latent factors, a contribution to both the econometrics and the statistics literatures. Instead of the information-criterion penalty used in econometrics, we propose a penalty based on the factor loadings, extending Bridge estimators from the least-squares context to factor models. This is a new approach in factor models: we show that the oracle property of Bridge estimators is preserved, and hence the correct number of factors can be selected through high-dimensional reduction. Simulations show that our technique can do a better job than information-based criteria on both autocorrelated and cross-sectionally correlated data. A simple example on US macro factors over the last 25 years is also supplied in the paper. "Maximum Likelihood Estimation of Factor Models on Data Sets with Arbitrary Pattern of Missing Data", Marta Banbura (European Central Bank, Germany) and Michele Modugno (European Central Bank, Germany). Abstract: In this paper we propose a methodology to estimate a dynamic factor model on data sets with an arbitrary pattern of missing data. We modify the Expectation Maximisation (EM) algorithm, as proposed for a dynamic factor model by Watson and Engle (1983), to the case of a general pattern of missing data, and we extend the model to the case of a serially correlated idiosyncratic component. The framework allows one to handle, efficiently and automatically, sets of indicators characterized by different publication delays, frequencies and sample lengths. This can be relevant, for example, for young economies for which many indicators have been compiled only recently. We also show how to extract model-based news from a statistical data release within our framework, and we derive the relationship between the news and the resulting forecast revision. This can be used for interpretation, e.g. in nowcasting applications, as it allows one to determine the sign and size of the news as well as its contribution to the revision, in particular in the case of simultaneous data releases. We evaluate the methodology in a Monte Carlo experiment and apply it to nowcasting and backdating of euro area GDP. "Factor Based Identification-Robust Inference in IV", George Kapetanios (Queen Mary, University of London, UK), Lynda Khalaf (Carleton University, Canada) and M. Marcellino (EUI and Bocconi University, Italy). Abstract: Weak-instruments-robust methods raise important size/power trade-offs resulting from omitted-instrument biases. The popular Anderson-Rubin test has the right size when the underlying first-stage model (that is, the model linking the structural equation's right-hand-side endogenous variables to the available instruments) is closed or incomplete. Alternative methods are available that may outperform this statistic assuming a closed first-stage specification (that is, assuming that all instruments are accounted for). We show that information-reduction methods provide a useful and practical solution to this problem. Formally, we propose factor-based modifications to three popular weak-instruments-robust statistics and illustrate their validity asymptotically and in finite samples. Results are derived using asymptotic settings that are commonly used in both the factor-analysis and weak-instruments literatures. For the Anderson-Rubin statistic, we also provide analytical finite-sample results under the usual assumptions. An illustrative Monte Carlo study reveals the following: (1) our factor-based corrections circumvent the size problems resulting from instrument omissions and improve the power of the Anderson-Rubin statistic; (2) once corrected through factor reduction, all considered statistics perform equally well. Results suggest that factor reduction holds promise as a unifying solution to the many instruments … (Illustrative code sketches of several of these methods, and of the refitted cross-validation idea named in the title, follow the table below.) |
| File Format | PDF, HTM/HTML |
| Alternate Webpage(s) | http://ims.nus.edu.sg/files/jianqing_cl.pdf |
| Alternate Webpage(s) | http://www.stat.yale.edu/Conferences/ICSS2010/abstracts/Jianqing%20Fan.pdf |
| Alternate Webpage(s) | https://www.cass.city.ac.uk/__data/assets/pdf_file/0007/65635/Abstracts-1.pdf |
| Alternate Webpage(s) | http://www2.ims.nus.edu.sg/files/jianqing_cl.pdf |
| Alternate Webpage(s) | http://d3iovmfe1okdrz.cloudfront.net/cms/wp-content/uploads/2012/07/Dr.-Jianqing-Fan-10-15-2010-abstract.pdf |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |
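
The record supplies no abstract for the titled paper itself, so the following is only an orientation sketch of the refitted cross-validation (RCV) idea for estimating the noise variance in a sparse high-dimensional linear model: split the sample into two halves, select variables on one half, refit the selected variables by ordinary least squares on the other half, take that fit's residual variance, then swap the roles of the halves and average. The selector used here (marginal-correlation screening), the function names, and all tuning choices are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def rcv_sigma2(X, y, k_keep=10, seed=None):
    """Refitted cross-validation (RCV) estimate of the noise variance.

    Splits the sample in two; screens variables by marginal correlation on
    one half (a stand-in for a selector such as SIS or the lasso), refits
    the selected variables by OLS on the other half, and estimates sigma^2
    from that fit's residuals.  The two half-sample estimates are averaged.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.permutation(n)
    half_a, half_b = idx[: n // 2], idx[n // 2 :]

    def select(rows):
        # marginal-correlation screening on the selection half
        Xs, ys = X[rows], y[rows]
        score = np.abs((Xs - Xs.mean(0)).T @ (ys - ys.mean()))
        score /= Xs.std(0) + 1e-12
        return np.argsort(score)[-k_keep:]

    def refit_sigma2(rows, keep):
        # OLS refit on the *other* half; sigma^2 from the residual sum of squares
        Xr = np.column_stack([np.ones(len(rows)), X[rows][:, keep]])
        beta, *_ = np.linalg.lstsq(Xr, y[rows], rcond=None)
        resid = y[rows] - Xr @ beta
        return resid @ resid / (len(rows) - Xr.shape[1])

    return 0.5 * (refit_sigma2(half_b, select(half_a)) +
                  refit_sigma2(half_a, select(half_b)))
```

The reason for refitting on held-out data is that variables picked by a selection step tend to fit noise; estimating the variance from the same data that chose them biases the estimate downward, whereas the disjoint refit avoids that.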
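
The Caner abstract contrasts a loading-based Bridge penalty with the information-criterion approach that is standard in econometrics. The record does not reproduce the Bridge penalty itself, so the sketch below only illustrates that standard baseline: a Bai-Ng (2002) style criterion computed from principal components. The specific penalty variant (ICp2) and the assumption that the panel is already demeaned and standardised are my choices, not the paper's.

```python
import numpy as np

def ic_num_factors(X, k_max=10):
    """Select the number of factors with a Bai-Ng (2002) type criterion.

    X : (T x N) panel, assumed demeaned (and typically standardised).
    V(k) is the mean squared residual after removing the top-k principal
    components; IC(k) = ln V(k) + k * ((N + T)/(N * T)) * ln(min(N, T)).
    Returns the k in 1..k_max that minimises the criterion.
    """
    T, N = X.shape
    _, s, _ = np.linalg.svd(X, full_matrices=False)
    penalty = (N + T) / (N * T) * np.log(min(N, T))
    total_ss = (s ** 2).sum()
    ic = []
    for k in range(1, k_max + 1):
        resid_ss = total_ss - (s[:k] ** 2).sum()      # SSR left after removing k PCs
        v_k = max(resid_ss / (N * T), np.finfo(float).tiny)
        ic.append(np.log(v_k) + k * penalty)
    return int(np.argmin(ic)) + 1
```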
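
Banbura and Modugno's EM algorithm for dynamic factor models handles dynamics, a serially correlated idiosyncratic component, and the news decomposition; none of that is attempted here. The sketch below only illustrates the core fixed-point idea in a static simplification: alternate between fitting a low-rank principal-components approximation and replacing the missing entries with the current fit. Function names, the convergence rule, and the static simplification are assumptions.

```python
import numpy as np

def em_pca_impute(X, n_factors=2, tol=1e-6, max_iter=500):
    """EM-flavoured imputation for a static factor approximation.

    X : (T x N) array with np.nan marking missing observations.
    E-step: fill missing cells with the current low-rank fit.
    M-step: re-estimate the fit by an SVD of the completed, demeaned matrix.
    Returns the completed data matrix.
    """
    Z = np.asarray(X, dtype=float)
    missing = np.isnan(Z)
    Z = np.where(missing, np.nanmean(Z, axis=0), Z)     # start from column means

    prev_fill = Z[missing].copy()
    for _ in range(max_iter):
        mu = Z.mean(axis=0)
        U, s, Vt = np.linalg.svd(Z - mu, full_matrices=False)
        fit = (U[:, :n_factors] * s[:n_factors]) @ Vt[:n_factors] + mu
        Z[missing] = fit[missing]                        # update missing cells only
        if np.linalg.norm(Z[missing] - prev_fill) <= tol * (1.0 + np.linalg.norm(prev_fill)):
            break
        prev_fill = Z[missing].copy()
    return Z
```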
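
The Kapetanios-Khalaf-Marcellino abstract proposes compressing a large instrument set into a few estimated factors before computing weak-instruments-robust statistics. Below is a minimal sketch of that idea for the Anderson-Rubin statistic only: extract principal-component factors from the standardised instruments and compute the classical homoskedastic AR F-statistic with the factors in place of the original instruments. The number of factors, the standardisation, and the omission of included exogenous regressors are simplifying assumptions; this is not the authors' exact construction.

```python
import numpy as np
from scipy import stats

def factor_ar_test(y, Y_endog, Z, beta0, n_factors=2):
    """Anderson-Rubin test of H0: beta = beta0 using PCA-reduced instruments.

    y        : (n,) outcome (assumed demeaned; no exogenous regressors)
    Y_endog  : (n, m) right-hand-side endogenous regressors
    Z        : (n, L) large instrument set
    beta0    : (m,) hypothesised coefficient vector
    Returns the AR F-statistic and its F(r, n - r) p-value, where r is the
    number of extracted factors, under classical homoskedastic assumptions.
    """
    n = len(y)
    Zs = (Z - Z.mean(0)) / (Z.std(0) + 1e-12)        # standardise the instruments
    U, s, _ = np.linalg.svd(Zs, full_matrices=False)
    F = U[:, :n_factors] * s[:n_factors]             # r principal-component factors

    u0 = y - Y_endog @ beta0                         # restricted residuals under H0
    coef, *_ = np.linalg.lstsq(F, u0, rcond=None)    # project u0 onto the factor space
    fitted = F @ coef
    explained = fitted @ fitted                      # u0' P_F u0
    residual = u0 - fitted
    unexplained = residual @ residual                # u0' M_F u0
    r = F.shape[1]
    ar = (explained / r) / (unexplained / (n - r))
    return ar, stats.f.sf(ar, r, n - r)
```

Intuitively, reducing many instruments to a few factors keeps the numerator degrees of freedom small, which is the usual source of power loss for the AR statistic when the instrument set is large.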