Version 1.5 of Toolkit (Warning: Not backwards compatible!)

Three major but basic changes that make the VFI Toolkit much easier to use, but which are not backwards compatible, and so will break all your existing codes. Hopefully they make the VFI Toolkit enough easier to use going forwards to be worth the change! The changes are all about making it easier to use the VFI Toolkit on different hardware, and reducing how many technical options you have to declare.

The first major change is that the VFI Toolkit now detects whether you have a GPU (graphics card) and sets defaults accordingly. This means the same code runs on computers with a GPU, and on computers without one (albeit very slowly). A further advantage is that users need to do far less setting up of technical options and can focus on their economic models. I have mainly done this so that users can write codes on a laptop with no GPU (with small grids), and then run the same codes (with bigger grids) on a desktop or server with a GPU. Note that this does not mean all codes can run with just CPUs; some remain GPU-only (specifically almost anything with a finite-horizon value function, or OLG). It remains possible to set specific parallelization options exactly as before, e.g., vfoptions.parallel=2.
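For example, you can mimic what the toolkit now does by default, or override it, by setting the option yourself (a minimal sketch: gpuDeviceCount is a standard Parallel Computing Toolbox command; the value 2 for GPU is as just mentioned, while the value 1 for parallel CPUs is my assumption of the toolkit's convention):

% Check whether MATLAB can see a GPU, then set the parallelization option explicitly.
if gpuDeviceCount>0
    vfoptions.parallel=2; % GPU (the default whenever one is detected)
else
    vfoptions.parallel=1; % parallel CPUs (assumed value, shown for illustration)
end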

The second major change is that an initial guess for the value function is no longer required by the ‘ValueFnIter’ commands. All existing codes must therefore change any calls to these commands to remove the initial guess; e.g., change ValueFnIter_Case1(V0, n_d,…) to ValueFnIter_Case1(n_d,…). This was done because initial guesses are not widely used, and removing them makes switching between GPU and CPU implementations much easier. You can still set an initial guess using the new vfoptions.V0 (the default is an initial guess of zeros). It also means that the ‘HeteroAgentStationaryEqm’ commands no longer require an initial guess for the value function.
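In practice the change looks like the following sketch (grid sizes are illustrative, and the ‘…’ stands in for the rest of the usual inputs):

n_a=201; n_z=15; % illustrative grid sizes
vfoptions.V0=zeros([n_a,n_z]); % optional; an initial guess of zeros is the default anyway
% Old (pre-1.5): [V,Policy]=ValueFnIter_Case1(V0,n_d,…);
% New (v1.5):    [V,Policy]=ValueFnIter_Case1(n_d,…);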

The third major change is that for the ‘HeteroAgentStationaryEqm’ commands the inputs and outputs for equilibrium prices are now structures rather than vectors. This makes them easier to use, as you can add or remove conditions/prices without having to worry about causing errors elsewhere in your codes due to reordering. Likewise, for all ‘TransitionPath’ commands the input and output price (and parameter) paths are now structures.
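As a sketch of the idea (the field names r and w are just examples from a typical model, not required names):

% Before v1.5 prices were passed as a vector, so their ordering mattered, e.g., p0=[r0; w0].
% Now each price is a named field of a structure:
Price.r=0.04; % initial guess for the interest rate
Price.w=1;    % initial guess for the wage
% Adding or removing an equilibrium price/condition is now just adding or removing
% a field, with no reordering to break things elsewhere in your codes.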

These changes are reflected in all examples. I will be rolling them out to all replications over the coming weeks.

As this version is breaking backwards compatibility anyway, I have taken the opportunity to remove the SSvalues commands. Their functionality was already replaced by the EvalFnOnAgentDist commands (see the v1.4 notes below). I had intended to keep them around longer as legacy code to avoid breaking backwards compatibility, but since version 1.5 breaks most backwards compatibility, why not just break everything? 🙂

Because this release does break backwards compatibility, an archived copy of the last v1.4 is available as a zip-file.

As ever, if you find something that does not work, or there is a feature you think would really help improve the VFI Toolkit, please don’t hesitate to either send me an email or post on the forum.

Entry & Exit: Examples based on Hopenhayn & Rogerson (1993) and Restuccia & Rogerson (2008)

New example based on model of Hopenhayn & Rogerson (1993) – Job Turnover and Policy Analysis: A General Equilibrium Framework. This example illustrates how to solve (stationary) general equilibrium in models with endogenous entry and endogenous exit. The model itself is about how firing costs can cause factor misallocation, and the macroeconomic implications for consumption, productivity and employment.

Another example based on model of Restuccia & Rogerson (2008) – Policy distortions and aggregate productivity with heterogeneous establishments. This example illustrates how to solve (stationary) general equilibrium in models with endogenous entry and exogenous exit. The model itself is about how firm-level distortions can cause misallocation, and the macroeconomic implications for consumption, productivity and employment.

These examples demonstrate new features in the VFI Toolkit for solving models with (endogenous or exogenous) entry and exit. The features are implemented simply as options to the standard value function, stationary distribution, and general equilibrium commands (and to transition paths in the near future). This means that entry & exit, while requiring some additional setup, can then be used just like any other model feature.
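Conceptually the setup looks like the following sketch; the option names here are illustrative of the kind of switches involved (see the example codes for the exact names):

vfoptions.endogenousexit=1;     % tell the value function command that exit is endogenous
simoptions.agententryandexit=1; % tell the agent distribution command to handle entry & exit
% With the options set, the standard ValueFnIter, StationaryDist and
% HeteroAgentStationaryEqm commands are then called exactly as usual.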

For full details of the Hopenhayn & Rogerson (1993) model see the original paper. Code for example.

For full details of the Restuccia & Rogerson (2008) model see the original paper. Code for example.

Have also uploaded a replication of Hopenhayn & Rogerson (1993). There are some substantial differences from the original results because the original paper used rough grids on ‘number of employees’. With the increase in computing power over the past 25+ years it is nowadays easy to use much more accurate grids.

Have also uploaded a replication of Restuccia & Rogerson (2008). There are essentially no differences from the original results for Tables 1 to 9, except Table 7, which I failed to replicate, likely because I could not work out from the paper what the denominator should be in the ‘relative’ numbers. Note that the solution algorithm used is quite different from the original, which exploited many things unique to this model that the replication does not attempt to take advantage of.

There is also a (in progress) working paper detailing the framework for models with entry and exit implemented by the VFI Toolkit. Most importantly it provides an exact description of the timing assumptions around entry and exit. It describes the exact modeling framework being solved by the VFI Toolkit commands, as well as pseudocode for the algorithms.

An additional example based on Hopenhayn (1992) – Exit, selection, and the value of firms has also been uploaded. Although the timing in the model does not fit that used by the VFI Toolkit, this turns out to be largely irrelevant because the model contains no endogenous states, so the standard commands can still be used to solve it with a minor change to the options. The original paper is here, although this example is based on the calibration from lecture notes of Chris Edmond.

Chris Edmond’s lecture notes on Hopenhayn & Rogerson (1993) are also nicely done and provide an alternative calibration of the model. His alternative calibration is implemented in the replication codes.

Transition Paths: Example based on Guerrieri & Lorenzoni (2017)

New example based on the model of Guerrieri & Lorenzoni (2017) – Credit Crises, Precautionary Savings, and the Liquidity Trap. This example illustrates how to solve general equilibrium transition paths. The model itself looks at how interest rates, output, and employment respond to an (unexpected) credit crisis. Transitions are computed for both the flexible-price and the New Keynesian sticky-wage cases (the latter involves sticky wages and imposing a zero lower bound on nominal interest rates).

This example shows how the VFI Toolkit can be used to easily compute a general equilibrium transition path in response to a path for parameters (the ‘TransitionPath_Case1()’ command calculates the transition relating to the ‘ParamPath’ in the codes). It also demonstrates tools to analyse outputs along a specific transition path, such as ‘EvalFnOnTransPath_AggVars_Case1()’, or to simulate a panel data set corresponding to such a path with ‘SimPanelValues_TransPath_Case1()’.
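As a flavour of the workflow, declaring the path for a parameter is just filling in a structure (a sketch; the parameter name phi and all the numbers are purely illustrative):

T=50;                          % number of transition periods (illustrative)
phi_initial=1; phi_final=0.5;  % old and new values of the borrowing limit (illustrative numbers)
ParamPath.phi=[linspace(phi_initial,phi_final,6), phi_final*ones(1,T-6)]; % tightens over six periods, then stays put
% ParamPath, together with the initial and final stationary equilibria, is then
% passed to TransitionPath_Case1() to compute the equilibrium price path; the
% result can in turn be passed to EvalFnOnTransPath_AggVars_Case1() or
% SimPanelValues_TransPath_Case1().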

Among the transition paths these commands can solve and analyze are those known colloquially as “MIT-shocks”.

For full details of the model see the original paper. Code for example.

Have also uploaded a replication of Guerrieri & Lorenzoni (2017).


Main post ends here. The rest is extra background.


The codes implementing the model involve one very important decision: when discretizing the AR(1) process the Tauchen hyperparameter is chosen to match the variance and autocorrelation (essentially, the Tauchen-Hussey method), which is standard in the literature but provides a good example of why this method should no longer be used. The Tauchen-Hussey method is, in effect, the Tauchen method with the hyperparameter chosen to match the volatility; here that means the Tauchen hyperparameter takes a value of 2.1. I have written a short comment/code illustrating how this choice is key to the results, and explaining why it matters and why the use of the Tauchen-Hussey method is severely problematic in Economics. In defense of the authors, this approach to numerical quadrature is widespread and they were simply following standard practice in the literature; see the comment for a detailed explanation.

It is worth noting that while setting the Tauchen hyperparameter to 2.1 is key to the results, it is nowhere discussed in the paper, and it is likely none of the referees fully appreciated its importance (this is made clear by the ‘risk-aversion’ counterfactual, which is largely pointless since using Tauchen-Hussey made the model largely riskless). Discussion of how shocks are discretized is rare in the literature, despite the large role such hyperparameter choices play in driving results.
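To make the role of the hyperparameter concrete, here is a minimal self-contained implementation of the Tauchen method for an AR(1) process \(z'=\rho z+\epsilon\), \(\epsilon \sim N(0,\sigma_{\epsilon}^2)\) (a sketch for illustration, not the toolkit's internal code). The hyperparameter q determines how many unconditional standard deviations the grid covers, so a small value like 2.1 simply truncates the tails of the shock process, which is why the discretized model ends up with much less risk than the AR(1) it is meant to approximate.

function [z_grid,pi_z]=TauchenAR1(rho,sigma_e,znum,q)
% Discretize z'=rho*z+e, e~N(0,sigma_e^2), using the Tauchen method.
% q is the hyperparameter discussed above: the grid covers q unconditional
% standard deviations either side of the mean (GL2017 effectively use q=2.1).
sigma_z=sigma_e/sqrt(1-rho^2);                 % unconditional std deviation of z
z_grid=linspace(-q*sigma_z,q*sigma_z,znum)';   % evenly spaced grid
omega=z_grid(2)-z_grid(1);                     % gap between grid points
normcdf_e=@(x) 0.5*erfc(-x/(sqrt(2)*sigma_e)); % N(0,sigma_e^2) cdf, no toolboxes needed
pi_z=zeros(znum,znum);                         % pi_z(i,j)=Prob(z'=z_j | z=z_i)
for ii=1:znum
    pi_z(ii,1)=normcdf_e(z_grid(1)-rho*z_grid(ii)+omega/2);
    for jj=2:znum-1
        pi_z(ii,jj)=normcdf_e(z_grid(jj)-rho*z_grid(ii)+omega/2)-normcdf_e(z_grid(jj)-rho*z_grid(ii)-omega/2);
    end
    pi_z(ii,znum)=1-normcdf_e(z_grid(znum)-rho*z_grid(ii)-omega/2);
end
end

Comparing the grids and transition probabilities from, say, TauchenAR1(rho,sigma,21,2.1) and TauchenAR1(rho,sigma,21,3) makes the truncation of the tails immediately visible.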

The paper of Guerrieri & Lorenzoni (2017) makes a number of choices about how to present certain results, which I deliberately do not follow in the example. The model is quarterly, yet some of the Figures in the paper have ‘quarterly’ y-axes and ‘annual’ x-axes. One example is the top-left panel of Figure III. It is described as the ‘borrowing constraint’, but actually plots the ‘borrowing constraint as a fraction of annual output’ (not quarterly model output). I also follow standard practice and have period zero be the period in which the transition path is revealed; GL2017 have period zero being the initial stationary distribution, with the transition path only revealed in period 1.[1] In fact even this is not the full story, as the y-axis is mislabelled: what is plotted is just a straight line decreasing over six periods from the initial ‘lagged value of the borrowing constraint as a fraction of annual output’ to the final value of the same, because for all the intermediate periods it is not related to the current model value of annual output. The contents of the top-left panel of Figure III thus end up containing numbers that have little connection to what is happening in the model.[2] I therefore choose instead to plot the actual current-period parameter value of the borrowing constraint (\(\phi\)).

Further examples of where the example codes deliberately differ: the x-axis of Figure IV is labelled ‘b’, when in fact it is b divided by initial annual GDP; likewise the x-axis of Figure II is labelled as aggregate bond holdings but is in fact aggregate bond holdings divided by annual GDP. As a result of my decision not to follow this approach, many of the graphs in the example code will appear to have different x-axes from those in the original paper. The replication code produces both (my versions and the original paper versions).

A related caution: some of the parameter values reported in Table 1 are incorrect. For \(\psi\) the reported value is simply wrong. For \(B\) and \(\phi\) the reported values are not the quarterly model parameter values; they are the model values divided by annual GDP. While the article makes clear that these annual debt-to-GDP concepts are the targets, it does not mention that the values in Table 1 are of this form, rather than the actual quarterly model values like all the other parameter values in Table 1. (\(\rho\) is also chosen based on an annual target value, but for it the quarterly model parameter value is what is reported.)

I mention these (Figure and Table values) because, based solely on the contents of the paper, replication was impossible: there is no indication that the values in the last two lines of Table 1 are not actually the parameter values themselves, but rather the target values. By studying the codes provided by the authors the replication became possible. Those codes are a model of clarity and make it easy to uncover these presentation choices; the authors should be commended for providing such well-commented and easy-to-follow codes. That this replication was possible shows the payoff of doing so, and highlights the importance of providing codes.


Footnotes:
1. Page 1441 describes it as “In the top left panel, we plot the exogenous adjustment path for \(\phi_t\)” (more accurately: ‘as a fraction of annual output’, not \(\phi_t\) itself). The codes consider this to be a lag of the current value, as is evident in line 70 of the ‘compute_transition.m’ code which can be downloaded from the website of Lorenzoni. But the model description only ever calls the parameter \(\phi\) and does not specify whether it should be considered the time t or t+1 value, so this ‘lag’ in the timing convention of the code would be better considered as the currently relevant value (the code treats equation (1) of the paper as being in terms of \(\phi_{t+1}\); I feel \(\phi_t\) is more appropriate). Under this reading the value plotted really is the current value and not the lag, but this is not the interpretation taken by the code.
2. The codes of GL2017 make clear that the graph is in fact of \(\phi_t\) divided by initial annual output. But this is not a number that is actually relevant to the behaviour of the model. It is a concept one might want to plot, but it is not clear from the article that this is what is being plotted.

Version 1.4 of Toolkit

Two main changes: the first is a renaming of the ‘steady state’ commands to ‘evaluate on agent distribution’ commands, the second is improved support for using (parallel) CPUs without a GPU.

The renaming means commands like:
EvalFnOnAgentDist_AggVars_Case1()
have replaced those which used to be called:
SSvalues_AggVars_Case1()

To avoid breaking existing codes the ‘steady state’ commands continue to exist, but I expect to delete them in about two years’ time, and they will no longer be maintained/improved. This change reflects that these commands have many uses and applications involving agent distributions that have nothing to do with stationary distributions, and that ‘steady state’ was itself a misleading (essentially incorrect) description of the stationary distribution. The new naming better reflects their actual use. There have also been some internal improvements in how these commands work that help improve performance.

The improved support for CPUs is entirely behind the scenes, and was done in response to requests from users. While CPUs are slower, a number of people find them useful for creating slow-to-compute-but-easy-to-implement solutions to models, which they can then use to debug their own more sophisticated codes.

All the existing example codes have been updated to reflect the renaming.

To celebrate I have uploaded a replication: Castañeda, Díaz-Giménez & Ríos-Rull (2003) – Accounting for the US Earnings and Wealth Inequality. This model is a ‘Case 2’ problem.

As ever, if you find something that does not work, or there is a feature you think would really help improve the VFI Toolkit, please don’t hesitate to either send me an email or post on the forum.

OLG models: Example based on Huggett (1996)

New example based on model of Huggett (1996) – Wealth Distribution in Life-Cycle Economies. This example illustrates how to solve general equilibrium OLG models. The life-cycle problem itself involves exogenous earnings shocks, and endogenous asset choice. A large grid on assets is needed to allow for the model’s focus on inequality in the wealth distribution.

This is the first example showing how to deal with multiple general equilibrium conditions; here there are three: one for asset market clearance, one for accidental bequests, and one for government budget balance. It also shows how to use various commands relating to inequality and age-conditional moments.
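As a rough sketch of the idea (the names, formulas, and declaration format below are illustrative, not the exact toolkit syntax; see the example code for the real thing), each condition is an expression that equals zero in general equilibrium:

% Three general equilibrium conditions, each equal to zero in equilibrium (illustrative):
GEeqn_AssetMarket=@(r,K,L,alpha,delta) r-(alpha*(K^(alpha-1))*(L^(1-alpha))-delta); % interest rate = net marginal product of capital
GEeqn_Bequests=@(Beq,AssetsOfDeceased) Beq-AssetsOfDeceased;                        % bequests received = assets left by the deceased
GEeqn_GovBudget=@(TaxRevenue,G,Pensions) TaxRevenue-(G+Pensions);                   % government budget balance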

For full details of the model see the original paper. Code for example.

Have also uploaded a replication of Huggett (1996).

The example also previews the new feature of computing the agent distribution by iterating on sparse matrices (simoptions.parallel=3). This is slower than the default iteration on the GPU (simoptions.parallel=2), but uses much less memory, which was until now a bottleneck for this model; it means this example can be run on a laptop (previously it only ran on a more powerful server). ‘Previews’ because the feature has not yet been rolled out to all relevant toolkit commands; hopefully it will be in the coming weeks.
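Switching between the two approaches is a single option (the values are those just described); simoptions is then passed to the stationary distribution command as usual:

simoptions.parallel=2; % default: iterate the agent distribution on the GPU (fast, but memory-hungry)
% or
simoptions.parallel=3; % new: iterate using sparse matrices on the CPU (slower, but much less memory)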

Forum!!! Ask questions about codes and VFI Toolkit.

discourse.vfitoolkit.com

VFI Toolkit now has a Discourse Forum!!! Gives users a place to ask questions, get help, or even request features. Hopefully you find it useful.

Check it out. Ask a question.

If you are feeling ambitious report a bug 🙂 If you are feeling really ambitious report a bug fix 😀

One thing I personally would love to see is example or replication codes. If you have any, please feel free to upload them.

Computing, especially GPUs, for Economists

(Not directly VFI Toolkit related.)

For those who are more generally interested in parallel computing, including GPUs, the following is a link to some materials created as part of a recent workshop I gave on the topic. They provide an introduction to the concepts of parallelization in general, and GPUs in particular.
robertdkirkby.com/computing-especially-gpus-for-economists/

The conceptual aspects are largely independent of whether you use Matlab or any other language. Short Matlab codes illustrating some of the concepts are included.

To be able to run all of the example codes you need a computer with Matlab (including Parallel Computing Toolbox) and an NVIDIA graphics card. You will also need to make sure to install CUDA (the drivers for your graphics card also need to be installed, but this is typically already the case). The NVIDIA graphics card must be CUDA-enabled but this is true of almost any NVIDIA card sold in recent years.

Finite Horizon: Simulate Panel Data and Life-Cycle Profiles

Have added commands to simulate Panel Data and Life-Cycle Profiles based on Finite Horizon Value Function Problems! This is done using the commands SimPanelValues_FHorz_Case1 and SimLifeCycleProfiles_FHorz_Case1. These commands automate many aspects of working with finite horizon models.

The main inputs are the formulas for the variables for which panel data (or life-cycle profiles) should be generated, an ‘initial distribution’ from which agents are drawn/born, and the ‘Policy’ output by the ValueFnIter_Case1_FHorz command.

The outputs are a panel data set, or life-cycle profiles (mean, median, and some percentiles), for the variables whose formulas are input.
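Roughly, a call looks like the following sketch (the function signature, formula, and argument order are all illustrative; see the Documentation for the exact form):

FnsToEvaluate={@(aprime,a,z) a}; % e.g., report current asset holdings (inputs shown are illustrative)
% InitialDist is the distribution from which agents are drawn/born at age 1,
% Policy is the output of ValueFnIter_Case1_FHorz. A call then looks roughly like:
% SimPanel=SimPanelValues_FHorz_Case1(InitialDist,Policy,FnsToEvaluate,…);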

Their use is illustrated as part of the ‘extended’ example, which simulates panel data and life-cycle profiles from a basic ten-period consumption-savings problem where income is a combination of a deterministic age-dependent component and a stochastic component. A description of the commands can be found in the Documentation.

For more details on the use of the commands, their inputs, outputs, and internal options, see the VFI Toolkit Documentation.

These commands can also be seen in action as part of a replication of Hubbard, Skinner & Zeldes (1994) – “The importance of precautionary motives in explaining individual and aggregate saving”. The replication also takes advantage of the as-yet-undocumented permanent/fixed type commands (e.g., ValueFnIter_Case1_FHorz_PType) which allow for different permanent/fixed types of agent.

[It is known that these commands will not work in Matlab R2015a. They give an error relating to max() of two vectors that does not occur in R2017a. I have no idea exactly which version in between the two is required.]

Version 1.31 of VFI Toolkit

Trivial update. No actual change in terms of how to use the VFI Toolkit, just a renaming of ‘Market Clearance’ conditions to ‘General Equilibrium’ conditions. This better describes the role that these conditions play in many models. For example, models with taxation and government spending often have the Government Budget Constraint as one of the General Equilibrium conditions.

This change has been rolled out across all of the examples and replications.

Note: most older codes should still work without any renaming necessary.

Version 1.3 of VFI Toolkit

Minor update changing how parameters are passed when evaluating statistics of the stationary distribution of agents, as typically used in heterogeneous agent models; e.g., means, medians, Lorenz curves, etc. This change has three advantages. First, it allows these evaluations to be performed on the GPU. Second, it makes it easier to handle models with many different statistics of interest, each of which depends on different parameters. Third, it is closer to how you declare and pass parameters for the value function problem, hopefully making for ease of use.

For example, say you have two statistics you want to evaluate: the first depends on the parameters \(\theta\) and \(\delta\), the second on the parameters \(\alpha\) and \(\gamma\). You now declare SSvalueParamNames as taking a separate entry for each of the two statistics,
SSvalueParamNames(1).Names={'theta','delta'};
SSvalueParamNames(2).Names={'alpha','gamma'};
The precise formulation of the SSvaluesFn has also changed slightly to fit this new format. This can be seen in the Examples and Replications, or in the Documentation.
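As a hedged sketch of how the two pieces fit together (the state inputs and the formulas below are meaningless placeholders, not the exact required signature; see the Documentation for that):

% The corresponding functions to evaluate take the declared parameters as their
% final inputs, in the declared order (formulas here are placeholders only):
SSvaluesFn_1=@(aprime,a,z,theta,delta) theta*a-delta*aprime;
SSvaluesFn_2=@(aprime,a,z,alpha,gamma) alpha*z+gamma;
SSvaluesFn={SSvaluesFn_1,SSvaluesFn_2};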

The same change in how parameters are passed has also been implemented for the MarketClearance conditions, so that they too are treated analogously.

Numerous new replications have been added to give further illustrations of how the VFI Toolkit can be used to solve models and perform calculations common in the Quantitative Macroeconomics literature. This update to v1.3 of the VFI Toolkit substantially simplifies some steps that were often being performed as part of these replications.
