
Exchange-Rates-Modelling

This repository contains the R code for my master's thesis, written in the spirit of Reproducible Research.

The idea was to analyze the foreign exchange disconnect puzzle, which claims a weak relation between macroeconomic fundamentals and FX rates. Given the small sample used by the reference papers of Meese and Rogoff (1983a, 1983b) that documented the phenomenon, I decided to investigate the puzzle further by collecting data for a 20-year period and to contribute to the literature by testing the performance of univariate and multivariate models. On top of that, I assessed the performance of the generalized tree-structured model of Audrino and Bühlmann (2001), which is similar to CART models.

Abstract

This study reports an estimation of the out-of-sample performance of univariate, multivariate, and non-linear parametric monetary models, contributing to the analysis of the Meese-Rogoff paradox for the empirical CHF/USD, GBP/USD and JPY/USD series. Twenty years of monthly observations and five models are analysed. In line with the academic literature, I find a random walk to be the best univariate model to capture the unit-root behaviour of FX rates and to forecast them out of sample. Nonetheless, in contrast with the general academic literature, I do not find multivariate models to underperform a random walk when forecasting FX rates out of sample. Rather, I find that a vector error correction model, representing the cointegration evidence between macroeconomic fundamentals and FX rates, marginally outperforms a random walk model for all country series and at all estimation lags. Finally, the considerable out-of-sample performance of the generalized tree-structured model points to non-linear macroeconomic effects, in line with the stable-relation properties of the cointegration analysis, suggesting a sluggish adjustment process toward the stable long-run equilibria rather than the gradual adjustment implied by the error correction model.

Code

The code is divided into chapters.

An R Markdown script was written for each step of the thesis.

  • 0 - Functions: contains globally specified functions that are used throughout the various scripts.
  • 1 - Data Import: uses the quantmod API to connect to the St. Louis Fed FRED database and download the macroeconomic series of interest (see the import sketch after this list).
  • 2 - Data Cleaning: the series are treated and checked for structural breaks, mean stationarity and variance stationarity. Moreover, the series are de-seasonalized according to their spectral peaks via a parametric Fourier estimation based on the classical trigonometric functions (see the de-seasonalization sketch below).
  • 2.1 - Cointegration Test: given the evidence from script 2 that the series are integrated of order 1, cointegration is tested according to the Johansen eigenvalue test statistic (see the Johansen sketch below).
  • 3 - Stationary Linear Modeling: fits two univariate models (structural OLS and transfer function models) to the treated stationary series.
  • 3.1 - Levels Linear Modeling: fits a vector error correction model and a VAR in differences (see the VECM sketch below).
  • 4 - Linear Fit Comparison: plots the out-of-sample performance of the univariate models. Moreover, it computes the bootstrapped p-values of the underlying superior predictive ability tests among the models, which are needed to derive a model confidence set as in Hansen (2011) (see the bootstrap sketch below).
  • 4.1 - Common Trend Plot and Linear Fit: repeats 4 for the multivariate models, also decomposing the cointegration relation and extracting the common trends as in Gonzalo and Granger (1995).
  • 5 - Generalized Tree Structured Model: fits the generalized tree-structured threshold model of Audrino and Bühlmann with local structural OLS fits and no lagged terms.
  • 5.1 - GTS VAR: analogous to 5 for a multivariate local parametric fit.
  • 6 - GTS Comparison: analogous to 4 for the generalized tree structured model.
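
As a flavour of the data import step, here is a minimal sketch using quantmod to pull series from FRED and align them at a monthly frequency. The FRED tickers (DEXSZUS, DEXUSUK, DEXJPUS) are illustrative choices for the CHF/USD, GBP/USD and JPY/USD rates, not necessarily the ones used in script 1.

```r
# Minimal sketch of the FRED download step; tickers are illustrative.
library(quantmod)

tickers <- c("DEXSZUS", "DEXUSUK", "DEXJPUS")
getSymbols(tickers, src = "FRED")                      # one xts object per ticker

# Aggregate the daily FX series to end-of-month observations and merge them
fx_monthly <- lapply(tickers, function(tk) {
  to.monthly(get(tk), OHLC = FALSE, indexAt = "lastof")
})
fx <- do.call(merge, fx_monthly)
colnames(fx) <- tickers
head(fx)
```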
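
The de-seasonalization mentioned in script 2 can be illustrated as follows: locate the dominant spectral peak with a periodogram and regress the series on the corresponding sine/cosine pair. This is a toy, self-contained sketch with a simulated series and a single harmonic, not the thesis settings.

```r
# Toy Fourier de-seasonalization of a monthly series with a yearly cycle.
set.seed(1)
y <- 10 + 2 * sin(2 * pi * (1:240) / 12) + rnorm(240)

# 1. Locate the dominant spectral peak in the periodogram
spec      <- spectrum(y, plot = FALSE)
peak_freq <- spec$freq[which.max(spec$spec)]           # cycles per observation

# 2. Regress on the matching trigonometric pair (one harmonic for brevity)
t_idx <- seq_along(y)
fit   <- lm(y ~ sin(2 * pi * peak_freq * t_idx) + cos(2 * pi * peak_freq * t_idx))

# 3. The residuals (recentred at the sample mean) form the de-seasonalized series
y_deseason <- residuals(fit) + mean(y)
```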
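
Script 2.1 relies on the Johansen procedure; a minimal sketch with the urca package is given below, assuming `levels_data` is a multivariate object holding the FX rate and the fundamentals in levels. The lag order K = 2 and the deterministic specification are assumptions, not the thesis choices.

```r
# Johansen maximal-eigenvalue test on a hypothetical `levels_data` object.
library(urca)

jo <- ca.jo(levels_data, type = "eigen", ecdet = "const", K = 2, spec = "longrun")
summary(jo)   # compare test statistics with the tabulated 10%/5%/1% critical values
```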
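
For script 3.1, a hedged sketch of the two levels models, reusing the `jo` object from the previous snippet and assuming a cointegration rank of one:

```r
# Illustrative estimation of the levels models, given the Johansen object `jo`.
library(urca)
library(vars)

# Vector error correction model with cointegration rank r = 1
vecm <- cajorls(jo, r = 1)
vecm$beta                      # cointegrating vector
summary(vecm$rlm)              # short-run dynamics

# VAR in first differences, which ignores the cointegration relation
var_diff <- VAR(diff(as.matrix(levels_data)), p = 2, type = "const")

# The VECM recast in its level-VAR form can be used directly for forecasting
vecm_var <- vec2var(jo, r = 1)
predict(vecm_var, n.ahead = 12)
```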
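
Script 4 derives a model confidence set as in Hansen (2011); the snippet below only illustrates the basic ingredient, a moving-block bootstrap p-value for the squared-error loss differential between two competing forecasts. All inputs are simulated placeholders, and the function is a hypothetical helper, not the thesis implementation.

```r
# Moving-block bootstrap p-value for the mean loss differential of two forecast error series.
boot_loss_diff <- function(e_a, e_b, block = 6, B = 2000) {
  d     <- e_a^2 - e_b^2                           # squared-error loss differential
  n     <- length(d)
  t_obs <- mean(d)
  t_boot <- replicate(B, {
    starts <- sample(seq_len(n - block + 1), ceiling(n / block), replace = TRUE)
    idx    <- as.vector(sapply(starts, function(s) s:(s + block - 1)))[1:n]
    mean(d[idx]) - t_obs                           # centre under the null of equal accuracy
  })
  mean(abs(t_boot) >= abs(t_obs))                  # two-sided bootstrap p-value
}

set.seed(42)
e_rw   <- rnorm(120, sd = 1.00)                    # toy random walk forecast errors
e_vecm <- rnorm(120, sd = 0.95)                    # toy VECM forecast errors
boot_loss_diff(e_rw, e_vecm)
```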

Notes

Coding the generalized tree-structured model was a valuable exercise given the complexity of the underlying algorithm: at each step the data set at each terminal node has to be partitioned, the local parametric residuals of that partition combined with the residuals of every other terminal node, and the result compared against all other candidate partitions in the partition space.

After heavy debugging and testing the code works, although it is certainly not efficient. Functions 9-15 fully implement the algorithm outlined by Audrino & Bühlmann.
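
As a rough illustration of the partition search described above, the hypothetical helper below performs a single split: for each candidate threshold variable and split point it fits a local OLS model in both resulting leaves, combines their residual sums of squares, and keeps the best split. It ignores the recursion over terminal nodes, the pruning step and the lagged terms of the full Audrino-Bühlmann algorithm.

```r
# Hypothetical one-step split search with local OLS fits; not the repository's functions 9-15.
best_split <- function(data, response, regressors, threshold_vars, min_obs = 20) {
  best <- list(sse = Inf)
  form <- reformulate(regressors, response)
  for (v in threshold_vars) {
    for (tau in sort(unique(data[[v]]))) {
      left  <- data[data[[v]] <= tau, ]
      right <- data[data[[v]] >  tau, ]
      if (nrow(left) < min_obs || nrow(right) < min_obs) next
      # Combined residual sum of squares of the two candidate leaves
      sse <- sum(residuals(lm(form, data = left))^2) +
             sum(residuals(lm(form, data = right))^2)
      if (sse < best$sse) best <- list(variable = v, threshold = tau, sse = sse)
    }
  }
  best
}
```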

At the moment the code only works for the data set created in this repository. It could, however, easily be generalized to accept a parametric model as an argument, together with the exogenous and endogenous variables.

Moreover, if you ever have the time and interest, it would be a good exercise to profile and optimize the code. You can compare the results of your model with those of the party package, which should accomplish a task similar to the one developed here. Because the package documentation is vague on how the model is fitted via likelihood, I preferred to implement the code myself as a valuable learning experience.
