GMM Estimation of Life-Cycle Models

GMM estimation of Life-Cycle models is covered by Life-Cycle Models 45-50 in the
pdf: An Introduction to Life-Cycle Models,
and GMM is formally described in its appendix.

Once you can solve Life-Cycle models, the next obvious question is how to choose the parameter values. An important statistical estimator for Life-Cycle models is the Generalized Method of Moments (GMM). This involves estimating some parameters of the Life-Cycle model so that the model succeeds in matching some moments of the data, say mean earnings. Formally, we estimate the model parameters \(\theta\) as,


\( \theta^* = \arg\min_{\theta} \; (M^d - M^m(\theta))' \, W \, (M^d - M^m(\theta)) \)


where \(M^d\) are the data moments that we target, \(M^m(\theta)\) are the corresponding moments of the model, and \(W\) is a weighting matrix. You can think of this as choosing the model parameters \(\theta\) to minimize the (weighted) sum of squares of the differences between the data moments and the model moments, e.g., between mean earnings in the data and mean earnings in the model.
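To make the objective concrete, here is a minimal sketch of the quadratic form above in Matlab; the names (Md, W, modelmomentsfn) are purely illustrative and this is not a toolkit command.

% Minimal sketch of the GMM objective Q(theta)=(Md-Mm(theta))'*W*(Md-Mm(theta)).
% Md: column vector of data moments, W: weighting matrix,
% modelmomentsfn: function handle mapping parameters theta to the model moments.
gmmobjective = @(theta,Md,W,modelmomentsfn) ...
    (Md-modelmomentsfn(theta))'*W*(Md-modelmomentsfn(theta));
% With an identity weighting matrix this is just the sum of squared differences
% between the data moments and the model moments; it could then be minimized by, e.g.,
% theta_est=fminsearch(@(theta) gmmobjective(theta,Md,eye(length(Md)),modelmomentsfn),theta0);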

Life-Cycle Models 45-50 in the Introduction to Life-Cycle Models show how to perform GMM estimation, needing only around 10 additional lines of code once the model itself has been set up. Life-Cycle Model 45 estimates three preference parameters to match the age-conditional mean earnings. The focus is on the steps involved in performing GMM estimation with VFI Toolkit: setting up the weighting matrix, the target (data) moments, the covariance matrix of the target moments, and specifying the names of the parameters to be estimated; then just one command, EstimateLifeCycleModel_MethodOfMoments(), does the rest. Life-Cycle Model 46 shows how we can parametrize, and thus estimate, the initial distribution of agents and the exogenous shock processes; it also covers how to restrict parameters to be, e.g., positive or valued between 0 and 1. Life-Cycle Model 47 demonstrates how to calculate the data moments and their covariance matrix from real-world data, using the PSID panel data set¹ (taking advantage of the ImportPSIDdata() command). Life-Cycle Model 48 covers many of the options and looks at all the output, including the issues of identification and sensitivity. Life-Cycle Models 49 and 50 show how to use permanent types with EstimateLifeCycleModel_PType_MethodOfMoments(): how you can parametrize the permanent types to estimate unobserved heterogeneity, how to use parameters that depend on permanent type, and how to target moments that depend on permanent type.
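As a rough illustration of the empirical side (not toolkit code; the panel and all variable names here are made up for illustration), computing age-conditional mean earnings as the target moments, together with the covariance matrix of those moment estimates, could look like:

% Hypothetical balanced panel of earnings: N households by J ages.
earnings=exp(randn(1000,45)); % stand-in data, purely for illustration
N=size(earnings,1);
targetmoments=mean(earnings,1)'; % J-by-1 vector of age-conditional mean earnings
% With independent sampling across households, the covariance matrix of the
% estimated means is the cross-sectional covariance divided by N.
covarmatrix=cov(earnings)/N;
weightingmatrix=diag(1./diag(covarmatrix)); % e.g., a diagonal weighting matrix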

Traditionally, GMM estimation of Life-Cycle models has required a large amount of extra work and knowledge. With VFI Toolkit you can perform GMM estimation of Life-Cycle models in just ~10 lines of code (not counting the empirical work involved in estimating the target moments and their covariance matrix). Have fun! 😀

  1. Except for Life-Cycle Model 47, the other examples all use model-generated data, which has the advantage that we know the parameter values that generated the data and can therefore check whether we are estimating the true parameter values. ↩︎

Behavioural Life-Cycle Models

Impatience, Temptation and Self-Control, Loss Aversion and Ambiguity Aversion can now all be handled in life-cycle models. The Intro to Life-Cycle Models includes examples of each of these.

Impatience, modeled as Quasi-Hyperbolic Discounting, is demonstrated in Life-Cycle Model 36. Temptation and Self-Control, modeled as Gul-Pesendorfer preferences, is demonstrated in Life-Cycle Model 37. Loss Aversion, modeled as Prospect Theory, is demonstrated in Life-Cycle Model 38. Ambiguity Aversion, modeled as maximin over multiple priors, is demonstrated in Life-Cycle Model 39.

Behavioural economics became part of mainstream economics a decade or two ago, but it remains only occasionally seen in structural models and Macroeconomics. Assuming this is largely because it requires relearning how to solve models for each different setting, hopefully their implementation in VFI Toolkit will help them become more widely used.

All of these behavioural aspects can also be used in OLG models. When using behavioural preferences in an OLG model, the only part of the model that changes is the life-cycle problem of the households. The behavioural aspects determine the optimal policy functions, but once we have the policy nothing else about solving the OLG model changes.

The Appendix to the Intro to Life-Cycle Models contains explanations of how these preferences work, and how they are implemented in the codes. These behavioural life-cycle models tend to run slower than a standard life-cycle model, but almost always by no more than a factor of two, so they are easily usable.


If you are unfamiliar with these behavioural life-cycle models, you might find these related lecture slides useful.

VFI Toolkit also handles Epstein-Zin preferences.

Portfolio-Choice with Epstein-Zin preferences

Portfolio-choice models have households choosing both savings and the division of savings between safe and risky assets. Next period assets depend on these two decisions, as well as on a stochastic return to the risky asset. Version 2.1 of VFI Toolkit introduces riskyasset specifically for these problems, in which aprime(d,u): the next period endogenous state, aprime, depends on decision variable(s), d, and on an i.i.d. shock, u, that occurs between this period and next period.
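To illustrate what such an aprime(d,u) function looks like (the function name, argument names, and parameter values here are hypothetical, not taken from the toolkit examples), next period assets given a savings decision and a risky-share decision might be written as:

% Hypothetical aprime(d,u) for portfolio choice: d1 is savings, d2 is the share of
% savings put into the risky asset, u is the i.i.d. return shock realized between
% this period and next period; r_safe and mu_risky are illustrative parameters.
aprimeFn=@(d1,d2,u,r_safe,mu_risky) d1*((1-d2)*(1+r_safe)+d2*(1+mu_risky+u));
% E.g., saving 1.0 with half in the risky asset and a return shock of 0.03:
aprimeFn(1.0,0.5,0.03,0.02,0.06)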

There are four examples in the Intro to Life-Cycle Models, and an implementation of the baseline model of Cocco, Gomes & Maenhout (2005) – Consumption and Portfolio Choice over the Life Cycle. The four life-cycle models build up various aspects of portfolio-choice models. Life-Cycle Model 31 introduces portfolio-choice in an otherwise standard life-cycle model, showing how to set up a riskyasset to solve it. Life-Cycle Model 32 adds Epstein-Zin preferences, Life-Cycle Model 33 shows how to handle warm-glow of bequests, which is more complicated with Epstein-Zin preferences, and Life-Cycle Model 34 adds endogenous labor.

Epstein-Zin preferences are important in the Portfolio-Choice literature as they allow separating risk aversion (which matters for the division of savings between safe and risky assets) from the elasticity of intertemporal substitution (which matters for how much to save for retirement); both are controlled by the same parameter under standard (von Neumann-Morgenstern) preferences. As part of Version 2.1, Epstein-Zin preferences have undergone a (breaking) overhaul. This added two aspects to Epstein-Zin: (i) an option to choose between Epstein-Zin preferences in utility-units or the more traditional consumption-units, and (ii) handling of survival probabilities (a.k.a. mortality risk) and warm-glow of bequests. For more about Epstein-Zin preferences and how to use them see the Appendix of the Intro to Life-Cycle Models.
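For reference, a standard consumption-units formulation of the Epstein-Zin recursion (generic notation, not necessarily the exact formulation used in the toolkit appendix) is

\[ V_t = \Big[ (1-\beta)\, c_t^{1-1/\psi} + \beta \big( E_t\!\left[ V_{t+1}^{1-\gamma} \right] \big)^{\frac{1-1/\psi}{1-\gamma}} \Big]^{\frac{1}{1-1/\psi}} \]

where \(\gamma\) controls risk aversion and \(\psi\) the elasticity of intertemporal substitution; setting \(\gamma=1/\psi\) recovers standard CRRA expected utility.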

There is code implementing an example of the baseline model of Cocco, Gomes & Maenhout (2005) – Consumption and Portfolio Choice over the Life Cycle. This example uses portfolio-choice, Epstein-Zin preferences, and permanent types all together and shows how these models can be handled.


————————————-

Cocco, Gomes & Maenhout (2005) have permanent shocks in their model. The correct way to handle permanent shocks is a renormalization of the model that makes them disappear from the state-space. Instead, the example codes keep them as a state, which means that computationally the example code is actually solving a much more difficult problem. It also means that you can easily switch to a more modern earnings process (my reading of Gomes (2020) is that he views the recent evidence as clearly against using permanent shocks).
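To give the idea of the renormalization (a generic sketch, not the exact setup of the example codes): suppose income is \(y_t = P_t \varepsilon_t\) with permanent component \(P_t = \eta_t P_{t-1}\), utility is CRRA, and the budget constraint is \(a_{t+1}=(1+r)(a_t+y_t-c_t)\). Defining \(\hat{a}_t=a_t/P_t\) and \(\hat{c}_t=c_t/P_t\), homotheticity gives \(V_t(a,P)=P^{1-\sigma}\hat{V}_t(\hat{a})\) and the Bellman equation becomes

\[ \hat{V}_t(\hat{a}) = \max_{\hat{c}} \; \frac{\hat{c}^{1-\sigma}}{1-\sigma} + \beta\, E_t\!\left[ \eta_{t+1}^{1-\sigma}\, \hat{V}_{t+1}(\hat{a}') \right], \qquad \hat{a}' = \frac{(1+r)(\hat{a} + \varepsilon_t - \hat{c})}{\eta_{t+1}}, \]

so the permanent component \(P\) no longer appears as a state variable.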

1000 Downloads!!! The wrong way :P

All downloads are good downloads, but only some downloads count.

If you visit vfitoolkit.com you will be directed to github to download a copy of VFI Toolkit. Github does not count downloads, and so the number of times this has been done is unknown. But there is another way — the wrong way 😛 — to download, and that is via Matlab’s website. The number of downloads there just ticked past 1000!!!

If you are one of those thousand, thanks for using VFI Toolkit! And if you are one of the uncounted github downloaders, thanks to you too even though you aren’t counted. I have no idea how many people downloaded via github but guessing that for every ‘wrong’ download there are between one and nine ‘right’ downloads, it would be something between 2000 and 10,000!

Anyway, just happy to see people find VFI Toolkit useful 😀 Thanks for using! As ever, if you have any questions, or feature requests, come visit the forum, discourse.vfitoolkit.com, or email me.


There is of course no wrong way to download. You can download however you like 🙂

An Introduction to OLG Models

pdf: An Introduction to OLG Models
Codes for all the models can be found at: https://github.com/vfitoolkit/IntroToOLGModels

(or just use this link to download as a zip)

Overlapping-Generations (OLG) models are a workhorse model of Macroeconomics, containing many households and many firms, and are widely used to understand the importance of progressive taxation, demographic aging, and much more. We show how to easily build and solve OLG models over a series of examples, adding a feature each time. We begin with a deterministic OLG model and show how to add pensions, demographics, and government. We then switch to stochastic OLG models, introducing idiosyncratic shocks that help generate more realistic life-cycles and inequality. By the end we are solving OLG models with married-couple households, single male households, single female households, and heterogeneous firms. The intention is that you can go through the models one-by-one, first reading the pdf explanation of a given model and then running the codes and seeing how to implement it.

These OLG models can be easily used; all you need is Matlab and a GPU (preferably with 8+ GB of GPU memory). Households in an OLG model are based on life-cycle models, so if you are unfamiliar with these it may be worth first looking at the Introduction to Life-Cycle Models, but it is possible to skip straight to OLGs.

These codes take advantage of VFI Toolkit, all of which leaves you free to just get on with the economics and solving OLG models.

If you have any questions about the material, or spot a typo in the codes, or would just like to ask a clarifying question, etc., please use the forum: discourse.vfitoolkit.com
If you think there is anything important relating to OLG models that is not covered please let me know and I will think about adding another example.


Replication: Webinar Series and an opportunity to get your hands dirty

ReplicationWiki is organising a series of online seminars about replication in Economics. There will be nine webinars that you can select from, taking place from September 8th onwards. You can find the full list here, but I will highlight two in particular: the first is on “Why replication? How is it done? Where to find replication material?” and will be run by The ReplicationWiki on Sept 8th; the other is “Replication in Quantitative Macroeconomics” and will be run by Robert Kirkby, the lead developer of VFI Toolkit, on Sept 29th.

Announcement on INET.YSI is here, full information is here. The webinars will be run as a ‘flipped classroom’, meaning a video will be made available prior and then the actual webinar session will be used for discussion. I want to highlight one aspect which is that we encourage you to undertake your own replication, giving you feedback via mutual peer review and support from experts to submit completed replications to academic journals.

If you are interested specifically in replicating a paper using VFI Toolkit, likely something in heterogeneous agent incomplete markets macroeconomics, please feel free to contact me directly, robertdkirkby@gmail.com. I will take a look at the paper, let you know if VFI Toolkit is capable of solving that model, and give you an idea of what kind of hardware and run-times are likely to be required. If you want to get involved but don’t have a paper in mind, perhaps one of the following might interest you: Ventura (1999) – Flat tax reform: A quantitative exploration, or Attanasio, Low & Sanchez-Marcos (2008) – Explaining changes in female labor supply in a life-cycle model.

OLG Transition Paths: Example based on Conesa & Krueger (1999)

New example based on the model of Conesa & Krueger (1999) – Social Security Reform with Heterogeneous Agents. This example illustrates how to solve general equilibrium transition paths in OLG models. The model itself evaluates the economic impacts of a variety of possible reforms to the US Social Security (pension) system. Transitions are computed both for a reform that happens immediately and for a reform announced now but which will take place in the future.

This example shows how VFI Toolkit can be used to easily compute a general equilibrium transition path for OLG models in response to a path for parameters (the ‘TransitionPath_Case1_FHorz()’ command calculates the transition relating to the ‘ParamPath’ in the codes). It also demonstrates tools to analyse outputs along a specific transition path, such as ‘EvalFnOnTransPath_AggVars_Case1_FHorz()’, and to calculate the value function over the resulting price path with ‘ValueFnOnTransPath_Case1_FHorz()’ and use this for welfare analysis.

For full details of the model see the original paper. Code for the example.

Have also uploaded a replication of Conesa & Krueger (1999).


Main post ends here. The rest is extra background.

If you use transpathoptions.fastOLG=1, the codes will (additionally) parallelize over age j. This is much faster, but requires a large amount of GPU memory (GDDR memory) and so will only work on more powerful GPUs (within a few years this will no longer be relevant). In practice it is often a good idea to use fastOLG=1 to solve a version with smaller asset grids, and then use this as the initial guess for larger grids with fastOLG=0.

The transition path is solved using ‘shooting algorithms’. Essentially, you guess (a path for) prices, solve the model, generate new prices, and then iterate on this until the prices converge. The codes explain how this is done in terms of the general equilibrium conditions, and allow for different update weights for the different prices. This is the easiest approach I have been able to come up with.
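As a toy illustration of the iteration structure (everything below is made up purely for illustration; the toolkit command handles all of this internally), a dampened shooting update on an interest-rate path might look like:

% Toy shooting algorithm: guess a price path, compute the 'implied' path from a
% stand-in model mapping, and update with a dampening weight until convergence.
T=50;                 % length of the transition path
rpath=0.04*ones(T,1); % initial guess for the interest rate path
rfinal=0.03;          % (made-up) final steady-state interest rate
impliedpathfn=@(rpath) 0.5*rpath+0.5*rfinal; % stand-in for 'solve model, get new prices'
updateweight=0.5;     % dampening weight on the new path
tolerance=1e-6; dist=Inf;
while dist>tolerance
    rpathnew=impliedpathfn(rpath);                      % new prices implied by current guess
    dist=max(abs(rpathnew-rpath));                      % convergence in the price path
    rpath=updateweight*rpathnew+(1-updateweight)*rpath; % dampened update
end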

The model of Conesa & Krueger (1999) actually allows for a closed form expression for the labor supply in terms of the other state variables (including next period assets), and this could be implemented by placing that expression into the return function (and would be faster). This is not done here so as to make the codes easier to modify for other purposes.

Disclaimer: If you are willing to assume that models are linear in the aggregate you can use these transition paths as a way to solve and simulate models with aggregate shocks, see Boppart, Krusell & Mitman (2018). There are ways to further exploit this linearity assumption to massively speed up solutions, see Auclert, Bardóczy, Rognlie & Straub (2021), but since VFI Toolkit is about global non-linear solution methods there is no plan to implement these approaches. The BKM method in particular is very easy once you have solved the transition path and so you can implement it easily by building on the toolkit results.

Version 2 of VFI Toolkit

You can now just refer to parameters by name, and likewise for aggregate variables. All the examples are updated to Version 2 so you can see it in action. This makes larger models much easier to keep track of.

Sick of writing ‘ParamNames’? Good news: Version 2 does away with them. You no longer create ReturnFnParamNames at all; it is simply figured out internally. Likewise for FnsToEvaluate.

Even better, FnsToEvaluate is now created as a structure, where the field names are the names of the variables. The input arguments are the decision, endogenous state, and exogenous state variables in order, followed by any parameters. For example, in the Aiyagari (1994) model we would want aggregate capital K, so we set
FnsToEvaluate.K=@(kprime,k,z) k
If we needed some parameter, say we tax capital (wealth) at rate tau, then the tax revenue would be calculated as
FnsToEvaluate.TaxRevenue=@(kprime,k,z,tau) tau*k
You can see that this makes using parameters easy (VFI Toolkit will look for tau in the parameters structure, called Params in example codes).
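Putting those pieces together, a minimal sketch (the parameter value and the field name TaxRevenue are just illustrative) is

Params.tau=0.02; % illustrative tax rate; VFI Toolkit finds tau in Params by name
FnsToEvaluate.K=@(kprime,k,z) k; % aggregate capital
FnsToEvaluate.TaxRevenue=@(kprime,k,z,tau) tau*k; % revenue from the capital (wealth) tax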

But it gets better. Now imagine you want to use K in your general equilibrium condition. Again, let’s consider the Aiyagari (1994) model, where the general equilibrium condition is that the interest rate is equal to the marginal product of capital net of depreciation. So we would just set this up as
GeneralEqmEqns.CapitalMarket=@(r,K,alpha,delta) r-(alpha*K^(alpha-1)-delta);
where the inputs can be parameters (alpha, delta), general equilibrium prices (r), and even the aggregates of the FnsToEvaluate (K). Everything is just by name and in any order.

Better still, when you run the commands to solve the general equilibrium you get easy-to-understand feedback. At each iteration (while finding the general equilibrium prices) you will be told the current prices (r), aggregate variables (K), and general equilibrium conditions (CapitalMarket). Because everything is by name it is easy to follow what is happening and so to see where anything goes wrong.

Of course there are still more improvements. Because FnsToEvaluate contains the names of the variables, the output of all commands using them now uses these names. So for example calculating the aggregate capital in the Aiyagari (1994) model would be done as,
AggVars=EvalFnOnAgentDist_AggVars_Case1(StationaryDist, Policy, FnsToEvaluate,Params, [],n_d, n_a, n_z, d_grid, a_grid,z_grid);
and the output AggVars contains the aggregate values of the ‘FnsToEvaluate’ which are now referred to by name, so for example
AggVars.K.Mean
would be the aggregate value of K. The same is true of the commands for things like the median, standard deviation, Lorenz curve, etc.; they return all results by name. The big advantage, other than ease of reading the code, is that you can add or remove FnsToEvaluate without breaking code, as nothing depends on the number of functions to evaluate nor on their order.
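For instance, if the hypothetical FnsToEvaluate.TaxRevenue from the sketch above were included, its aggregate would be available by name with no other changes to the call:

AggVars.TaxRevenue.Mean % aggregate tax revenue, accessed by name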

Lastly, there is one thing that is broken by the update to Version 2, and that is transition paths (both the infinite and finite horizon transition path commands). This was a deliberate decision, as being able to refer to everything by name in Version 2 turns them from difficult into something easy enough to use. There are currently three examples available of how to compute transitions: two infinite horizon models, one of which extends the Aiyagari (1994) model while the other is based on Guerrieri & Lorenzoni (2017), and one OLG transition based on Conesa & Krueger (1999).

All up Version 2 should make it both much easier to write codes, and much easier to read and understand them. In my experience it also makes it much easier to debug and correct them; since everything is by name it is possible at a glance to see from the output where a model is going wrong and therefore how to correct it. The update is especially helpful for models with lots of parameters, functions to evaluate, and general equilibrium conditions, since it becomes easy to keep track of everything and trivial to add or remove aspects.

As always, any questions or comments please use the forum: discourse.vfitoolkit.com/ (or you can email me directly at robertdkirkby@gmail.com)

—————–
Comment: The permanent type ‘PType’ commands have also been updated to only work with Version 2, but since these are not yet used in the example codes it is not as noteworthy.

Comment: General equilibrium in the Aiyagari (1994) model is often described as being about getting K to match K; this is equivalent to the above where we get r to equal the marginal product of capital. I personally find it much more intuitive to think about the equilibrium in prices, but of course it is mathematically equivalent to consider it in prices or in quantities.

Comment: If you don’t want all that feedback on your general equilibrium, you can just use heteroagentoptions.verbose=0 to turn it off. verbose=0 is part of all the options so you can also set it for transpathoptions, etc.

Comment: Currently transition paths require a powerful GPU so may not be ‘available’ to everyone. But given two or three years they should become something just about anyone can easily do.

Disclaimer: If you are willing to assume that models are linear in the aggregate you can use these transition paths as a way to solve and simulate models with aggregate shocks, see Boppart, Krusell & Mitman (2018). There are ways to further exploit this linearity assumption to massively speed up solutions, see Auclert, Bardóczy, Rognlie & Straub (2021), who also provide a Python toolkit for this purpose. I just want to let people know that these much faster methods exist for those willing to assume linearity of the model in the aggregates.

An Introduction to Life-Cycle Models

pdf: An Introduction to Life-Cycle Models.
Codes for all the models can be found at: https://github.com/vfitoolkit/IntroToLifeCycleModels

(or just use this link to download as a zip)

Want to solve life-cycle models easily? Good news! Here are a series of life-cycle models that gradually build up to look at income, hours worked, consumption, and assets over the life-cycle. We will start with a deterministic life-cycle model in which people live for J periods and make decisions on how much to work. Our second model will then add a decision about how much to save (assets). Our third model will just use this model to draw a life-cycle profile. We will then step-by-step make additions to the model to understand how these help us create more realistic life-cycle profiles including idiosyncratic shocks. The intention is that you can go through the models one-by-one, first reading the pdf explanation of a given model and then running the codes and seeing how to implement it.

By the end we will have a life-cycle model in which people make consumption-savings and consumption-leisure choices, which has working age and retirement, in which earnings are hump-shaped over age (peaking around ages 45-55), the variance of both income and consumption increase with age, incomes grow in line with the deterministic economic growth of the economy as a whole, people have some assets left when they die, people face the risk of substantial medical costs when old, and where borrowing constraints and precautionary savings play an important role. And we will be able to use these to plot life-cycle profiles, including the mean, variance, and Gini coefficient of a variable conditional on age, and even on 5-year age-bins. We will also easily be able to simulate panel data sets from the model on which we could run regressions. There are also some models illustrating important concepts like the role of borrowing constraints and precautionary savings.

These life-cycle models can be used easily, requiring very little knowledge of numerical methods; all you need is Matlab and a GPU.

These codes take advantage of what will become Version 2 of VFI Toolkit. You just refer to parameters by name, and VFI Toolkit handles the rest. You create a life-cycle profile for ‘earnings’, and then just refer to it by name. When parameters depend on age this is handled automatically. All of which leaves you free to just get on with the economics and solving life-cycle models.

If you have any questions about the material, or spot a typo in the codes, or would just like to ask a clarifying question, etc., please use the forum: discourse.vfitoolkit.com
If you think there is anything important relating to life-cycle models that is not covered please let me know and I will think about adding another example.

Video about the Introduction to Life-Cycle models (24mins): vimeo.com/750251629 (slides)

Exotic Preferences: Epstein-Zin & Quasi-Hyperbolic

New example based on the model of Imrohoroglu, Imrohoroglu & Joines (1995) – A Life-Cycle Analysis of Social Security. This example solves the general equilibrium of an OLG model with standard expected utility preferences.

VFI Toolkit allows you to switch to ‘exotic’ preferences like Epstein-Zin and Quasi-Hyperbolic discounting with just a few lines of code. Here are examples that solve the exact same model again, but this time using Epstein-Zin preferences and Quasi-Hyperbolic discounting preferences respectively. The only differences in the codes are in the first few lines; everything after that is identical, demonstrating how easy it is to switch preferences. Two further examples show how to add endogenous labor, and how to use endogenous labor with Epstein-Zin preferences.

These examples demonstrate new features in VFI Toolkit for solving models with exotic preferences. These features are simply implemented as an option in the standard value function commands. Note that from the perspective of simulating agent distributions there is no difference (hence you must set the appropriate vfoptions, but there is no change to simoptions). General equilibrium commands automatically handle the exotic preferences.

Epstein-Zin preferences are useful as they separate ‘intertemporal substitution’ from ‘risk aversion’, both of which are determined by the same parameter in, e.g., a CES utility function with (standard) von Neumann-Morgenstern expected utility preferences. Quasi-Hyperbolic discounting captures ‘impatience’: you take actions today that are in your present interest, but are not in the longer-term interest of your future self. Both are explained in more detail in this pdf detailing the exact models that the Epstein-Zin and Quasi-Hyperbolic discounting examples are solving, as well as an explanation of their purpose. It also includes pseudo-code for the algorithms used by VFI Toolkit. Note that there are two types of Quasi-Hyperbolic discounting, naive and sophisticated; both are implemented and can be set using vfoptions as in the examples above, and naive is used by default if you do not specify.
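To fix ideas, under quasi-hyperbolic (beta-delta) discounting the stream of future utilities is weighted as (in generic notation, following the standard formulation rather than any toolkit-specific one)

\[ U_t = u(c_t) + \beta \sum_{s \geq 1} \delta^s\, u(c_{t+s}), \qquad 0 < \beta < 1, \]

so the discount factor between today and tomorrow, \(\beta\delta\), is smaller than the discount factor \(\delta\) between any two future periods. This is the source of the present-bias, and the naive versus sophisticated distinction is about whether the household anticipates that its future selves will discount in the same present-biased way.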

Have also uploaded a replication of Imrohoroglu, Imrohoroglu & Joines (1995).

One paper that uses Quasi-Hyperbolic discounting is Imrohoroglu, Imrohoroglu & Joines (2003). The model is similar to, but different from, their 1995 paper, and I link it mostly to give you a better understanding of how and where Quasi-Hyperbolic discounting might matter in terms of the Economics; beware, there is a typo in their formulation of the sophisticated quasi-hyperbolic discounter’s value function problem.

I have also uploaded some examples based on the infinite-horizon Aiyagari model. An example solving the original Aiyagari model is already available. I have added a version with Epstein-Zin preferences, a version with Quasi-Hyperbolic discounting, a version with endogenous labor, and a version with both endogenous labor and Epstein-Zin preferences. All of these models are explained in the aforementioned pdf.


All of the codes implementing the Aiyagari model and the IIJ1995 model, the variations using Epstein-Zin preferences, Quasi-Hyperbolic discounting, endogenous labor, and endogenous labor with Epstein-Zin preferences, as well as the pdf explaining them, can be found at: https://github.com/vfitoolkit/VFItoolkit-matlab-examples/tree/master/Exotic%20Preferences
