Economic Rationality, what is it?


Economics as a discipline is subject to fierce criticism these days. Most recently, Paul Romer provoked a serious discussion around the usefulness of contemporary macroeconomics. I think the critique is justified. DSGE models need to be updated, and banks as money creators should be included in the models. For reference, I recommend for example the following sources:

http://www.imf.org/external/pubs/ft/fandd/2016/03/kumhof.htm

http://www.bankofengland.co.uk/research/pages/workingpapers/2015/wp529.aspx

Ideally, economic business cycles should be modelled endogenously. This would mean that the system inherently produces oscillations, perhaps around some steady state. To my knowledge, contemporary DSGE models are instead driven by exogenous stochastic shocks. Maybe altogether better models can be developed, but I guess the point is that we need macroeconomic models that are ontologically sound and at the same time have microfoundations. This is no doubt very difficult.

However, today I am blogging about economic rationality and the theory of decision making in general. It seems to me that when people criticise economics, one quite often hears allegations of the type “man is not rational” or “markets are not efficient or rational”, or something of this sort. I would like to bring some structure into this debate here.

Choice theory, utility functions and convex programming

First of all, economic rationality is basically a set of axioms that we assume in order to establish a tractable model framework. This mostly means that if one wants, for example, a meaningful optimization scheme for consumer choice, one needs a differentiable and therefore well-behaved utility function, so that one can try to find a constrained maximum and thus an optimal choice. Microeconomics is basically convex non-linear programming.

Technically, these axioms say that the binary (preference) relation on some set of alternatives is a total preorder. They are:

  • reflexivity
  • totality
  • transitivity

Basically we assume that all pairs of alternatives can be compared and that there are no loops. This notion of rational preferences does not say anything about moral good or what is to be pursued on ethical grounds. It is merely a reasonable set of assumptions for a decision maker. So economic rationality does not rule out anything like genocide or crime or what have you.

Armed with these and some topological considerations, one can represent the preferences of the decision maker with a nice utility function. Then one can proceed with an optimization programme, given material constraints.
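To make this concrete, here is a minimal sketch (my own toy example, not from the text above) of the standard consumer problem: maximise a Cobb-Douglas utility subject to a budget constraint, solved numerically with SciPy. The preference parameter, prices and income are made up for illustration.

```python
# Consumer choice as convex programming: maximise u(x1, x2) subject to the budget p.x <= m.
# Cobb-Douglas utility and all parameters are illustrative assumptions, not taken from the text.
import numpy as np
from scipy.optimize import minimize

alpha = 0.6                      # preference weight on good 1
p = np.array([2.0, 3.0])         # prices
m = 100.0                        # income

def neg_utility(x):
    # Cobb-Douglas u(x1, x2) = x1^alpha * x2^(1 - alpha), negated because we use a minimiser.
    return -(x[0] ** alpha) * (x[1] ** (1 - alpha))

budget = {"type": "ineq", "fun": lambda x: m - p @ x}   # m - p.x >= 0
bounds = [(1e-9, None), (1e-9, None)]                   # non-negative consumption

res = minimize(neg_utility, x0=[1.0, 1.0], bounds=bounds, constraints=[budget])
print("optimal bundle:", np.round(res.x, 3))
# Closed-form Cobb-Douglas demands for comparison: x1 = alpha*m/p1, x2 = (1-alpha)*m/p2.
print("closed form:   ", [alpha * m / p[0], (1 - alpha) * m / p[1]])
```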

Choice under uncertainty

When one assumes that the outcomes are uncertain, things get more difficult and interesting. First of all, we are now making choices over some collection of probability spaces, so we need to pick the right, or most suitable, probability measure. In economics textbooks one usually talks about lotteries, but I think it is best to talk about random variables or probability measures.

So we assume the previous total preorder over the set of random variables/probability measures, and in addition we assume axioms of independence and continuity (topology again). Enter John von Neumann. What we have is the expected utility paradigm. The representation theorem says basically that if we assume these axioms over random variables, the preferences can be ranked according to

\int u(X)dP

which is just the expected utility given a utility function u(.). So this is how a rational decision maker picks among alternatives with uncertain payoffs. This is quite useful, only it is not really accurate empirically. According to the data, people tend to deviate from this kind of EU behaviour. We tend to overweight events with small probabilities, so we expect too much and too little. For details, read e.g. the book “Thinking, Fast and Slow” by Dr. Daniel Kahneman, a Nobel laureate.
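As a toy illustration of the ranking (my own numbers), here is a small sketch that compares two discrete lotteries by expected utility under a concave (logarithmic) Bernoulli utility.

```python
# Ranking two discrete lotteries by expected utility, E[u(X)] = sum_i p_i * u(x_i).
# The lotteries and the logarithmic utility are illustrative choices only.
import numpy as np

def expected_utility(outcomes, probs, u=np.log):
    return float(np.sum(np.asarray(probs) * u(np.asarray(outcomes, dtype=float))))

# Two lotteries with the same mean (100) but different dispersion.
A = ([90.0, 110.0], [0.5, 0.5])
B = ([10.0, 190.0], [0.5, 0.5])

print("E[u(X)] for A:", expected_utility(*A))
print("E[u(X)] for B:", expected_utility(*B))
# With a concave utility the less dispersed lottery A is ranked higher, although the means coincide.
```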

Maybe one should consider alternatives to EU behaviour. This could be straightforward, as the representation can be established quite easily using some basic tools of linear vector spaces, like hyperplane separation/the Hahn-Banach theorem and Riesz’s representation theorem in the case of a Hilbert space. I’m aware of prospect theory, rank-dependent utility and the dual theory of choice by Yaari. Maybe one could still improve?

Rational expectations, efficient markets and all that

What about rational expectations and the efficient market hypothesis? Rational expectations is the assumption that expectations are unbiased. In other words, we do not make systematic errors in forecasting. This seems plausible to me. The EMH (efficient market hypothesis) in turn assumes that all information is already priced into financial assets. Plausible?

The EMH does not imply that large deviations are precluded. The efficient market hypothesis is really about the price process being a martingale. So again, the expected price of tomorrow should be the same as today's. Actually it is rather hard to assume otherwise, because of arbitrage opportunities. If the expected price were higher than today's, one should buy a lot, which would drive up the price today, and vice versa. So actually the EMH is a rather plausible assumption.

In other words, as long as the expectation exists, we can have the EMH even with extreme events such as infinite variance, as in the case of alpha-stable distributions.

Once again: extreme variation in prices does not imply that EMH is wrong.
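A quick simulation sketch of this point (my own construction): a price process with alpha-stable increments has infinite variance and occasional huge jumps, yet it is still a martingale, so wild swings alone do not refute the EMH. The parameter alpha = 1.7 is an arbitrary illustrative choice.

```python
# A martingale price process with alpha-stable increments: infinite variance and occasional huge
# jumps, yet no systematic drift. Parameters (alpha = 1.7, symmetric) are illustrative choices.
import numpy as np
from scipy.stats import levy_stable

eps = levy_stable.rvs(1.7, 0.0, size=10_000)     # symmetric alpha-stable one-step price changes
price = 100 + np.cumsum(eps)                     # P_{t+1} = P_t + eps_{t+1}, with E[eps] = 0

print("largest single move:", np.max(np.abs(eps)))   # "extreme variation" shows up here
print("average single move:", np.mean(eps))          # close to zero: today's price is the best forecast
```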

Rationality is not a moral statement

In economics, anyway. Rationality in economics is about being coherent and consistent. In economics, human behaviour is modelled through an optimizing entity. Think of dynamic programming and optimal control. The Bellman optimality criterion says basically that if one travels from A to C through B and the route is optimal, then the route from B to C is also optimal. In game theory this is basically the same as subgame perfection; in other settings it is backward induction. Economics is about optimization, as we are maximising profit or welfare or utility.
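As a minimal sketch of Bellman's principle (a toy layered graph of my own, with made-up costs), backward induction shows that the optimal A-to-C route reuses the optimal continuation from B:

```python
# Bellman's principle by backward induction on a tiny layered graph A -> {B1, B2, B3} -> C.
# The costs are made up; the point is that the optimal A-to-C route reuses the optimal tails from B.
cost_A_to_B = {"B1": 4, "B2": 2, "B3": 5}
cost_B_to_C = {"B1": 3, "B2": 6, "B3": 1}

# Backward induction: the value of standing at a B-node is its (trivially optimal) cost to C.
value_at_B = dict(cost_B_to_C)

# Choice at A: minimise immediate cost plus the continuation value of the node reached.
best_B = min(value_at_B, key=lambda b: cost_A_to_B[b] + value_at_B[b])
total = cost_A_to_B[best_B] + value_at_B[best_B]
print(f"optimal route: A -> {best_B} -> C, total cost {total}")
# Subgame perfection in miniature: conditional on reaching best_B, continuing optimally to C is
# exactly what the full A-to-C optimum prescribes.
```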

Are we always selfish according to economics?

In short: no. Economics usually only assumes a total preorder on the set of alternatives. This does not in any way preclude choices where one gets utility from being unselfish. Again, economic rationality is about consistency.

There is, however, one interesting problem at the very foundations of economics. It is called the integrability problem. Basically the question is: given some observed actions/choices, can we construct a utility function for the decision maker that always rationalises those actions? I guess this might be interesting for some scholars of philosophy of science.

 

 

 

 

 

What is risk?


Introduction

In the fields of management, insurance and finance, systems engineering and even world politics, decision makers face difficult choices that may involve risk. We usually think that risk is something that includes a possibility of a loss or gain in terms of some abstract value. Finance is likely the field most exposed to the concept of risk. I will introduce here the main issues around risk and uncertainty, in order to facilitate further analytical discussion of (financial) risk and uncertainty.

Risk is inherent in this world of ours as we lack full information and the means to analyse it. Risk is about uncertainty from a subjective viewpoint. In economics and finance, risk is usually introduced through concavity of the utility function. More specifically, we have what is called the Arrow-Pratt coefficient of absolute risk aversion.

A=-\frac{u''(w)}{u'(w)}

This means that if the decision maker is risk averse and has some wealth w, losing x units leads to a larger loss in utility than the gain in utility from winning x units (decreasing marginal utility). So if one is risk averse, one should always reject fair gambles of this type. By specifying the Arrow-Pratt risk aversion measure, we can generate a family of corresponding utility functions.

Another important concept is the so-called certainty equivalent, which would leave the decision maker indifferent between accepting the gamble and receiving a certain monetary outcome. In the context of expected utility theory, we can define the certainty equivalent implicitly through

u(C)=\mathbb{E}[u(X)]

Certainty equivalent demonstrates clearly how riskiness depends on both the objective probability and risk preferences. Therefore a good risk measure should incorporate these two aspects.
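A short numerical sketch of both quantities (illustrative CRRA utility and gamble of my own choosing): the Arrow-Pratt coefficient at a given wealth and the certainty equivalent of a 50/50 gamble.

```python
# Arrow-Pratt absolute risk aversion A(w) = -u''(w)/u'(w) and the certainty equivalent u(CE) = E[u(X)],
# illustrated with a CRRA utility u(w) = w^(1-g)/(1-g). All numbers are illustrative.
import numpy as np

g = 2.0                                        # coefficient of relative risk aversion
u = lambda w: w ** (1 - g) / (1 - g)
u_inv = lambda v: ((1 - g) * v) ** (1 / (1 - g))

w0 = 100.0
print("Arrow-Pratt absolute risk aversion at w=100:", g / w0)   # for CRRA, -u''(w)/u'(w) = g/w

# A 50/50 gamble over end-of-period wealth 50 or 150 (expected wealth 100).
outcomes, probs = np.array([50.0, 150.0]), np.array([0.5, 0.5])
ce = u_inv(np.sum(probs * u(outcomes)))
print("expected wealth:      ", np.sum(probs * outcomes))
print("certainty equivalent: ", ce)   # below 100: the risk-averse agent would pay to avoid the gamble
```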

Quantitative risk management

In practical life, risk is usually quantified by utilising some concepts from the theory of probability and mathematical statistics. The simplest measure of risk can be identified with standard deviation. This is the risk measure used in the portfolio optimization model of Harry Markowitz. One obvious problem with standard deviation is that it penalises upside risk, i.e. exceptionally good outcomes increase measured risk. On the other hand, standard deviation is a simple risk measure and for simple distributions it works well.

One can improve on standard deviation and introduce, for example, various quantile-based risk measures. One of the most familiar risk measures is Value at Risk (VaR). VaR gives the loss associated with some probability. So it is in a way better than standard deviation, as good outcomes are not punished. Of course, when the random variable has finite support, one could also take the infimum or the minimum value of the random variable in question.

Artzner et al. introduced the concept of coherent risk measures, a set of plausible axioms that a risk measure should satisfy. Most importantly, a coherent risk measure is sub-additive and positively homogeneous, so that diversification pays off. Now it so happens that, some 17 years after the original article was published, the Basel Committee on Banking Supervision has finally adopted the expected shortfall risk measure in its fundamental review of the trading book. Expected shortfall is just the conditional VaR, i.e. a conditional expectation of the loss, given that losses exceed some level x.
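Here is a minimal sketch (simulated, illustrative loss data and confidence level) of how VaR and expected shortfall are estimated from a sample:

```python
# Value-at-Risk and expected shortfall estimated from a simulated loss sample.
# The fat-tailed Student-t losses and the 97.5 % level are illustrative choices.
import numpy as np

rng = np.random.default_rng(42)
losses = rng.standard_t(df=4, size=100_000) * 0.02    # daily losses (positive number = loss)

level = 0.975
var = np.quantile(losses, level)              # VaR: the loss quantile at the chosen level
es = losses[losses >= var].mean()             # expected shortfall: average loss beyond the VaR

print(f"VaR({level:.1%}) = {var:.4f}")
print(f"ES({level:.1%})  = {es:.4f}")          # always at least as large as the VaR
```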

Risk management tomorrow

As mentioned, subjective preferences should be incorporated in a risk measure. This aspect becomes only more important in the near future, when robots and artificial intelligence take over some of the tasks run today by humans.

In my view, the most promising framework for risk measures in this respect is the so-called family of spectral risk measures. Spectral risk measures generalise the previous risk measures and have some very useful properties. A spectral risk measure is of the form:

M_{\phi}=-\int_{0}^{1}\phi(p)Q(p)dp

where we integrate over probability levels p in [0,1]. The function \phi is the spectrum and Q is the quantile function/generalised inverse of the cumulative distribution function. One should note that the expected value is the trivial spectral risk measure obtained with a unit spectrum. The general idea behind such risk measures can be understood as a distortion of the objective probability measure. This distortion represents preferences, and the spectral risk measure then gives an expectation under some synthetic probability measure. We can call this the risk-neutral probability measure. The label comes from the fact that under the new measure P’, the decision maker acts as if she were maximizing the expected value of the random variable. This information can then be used to price financial assets, for example.
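As a rough numerical sketch (my own construction), here is a spectral risk measure with an exponential spectrum, evaluated by integrating the quantile function of a normal return distribution; the spectrum, the risk-aversion parameter k and the distribution are all illustrative choices.

```python
# Spectral risk measure M_phi = -integral_0^1 phi(p) Q(p) dp with an exponential spectrum that
# overweights the worst quantiles. The normal return distribution and k are illustrative choices.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

mu, sigma, k = 0.05, 0.20, 10.0            # return mean, volatility, risk-aversion parameter

def phi(p):
    # Non-negative, decreasing in p, integrates to one on [0, 1].
    return k * np.exp(-k * p) / (1.0 - np.exp(-k))

def Q(p):
    # Quantile function (generalised inverse CDF) of the return distribution.
    return norm.ppf(p, loc=mu, scale=sigma)

integral, _ = quad(lambda p: phi(p) * Q(p), 0.0, 1.0)
print("spectral risk measure:", -integral)
# As k -> 0 the spectrum becomes flat and the measure collapses to minus the expected return.
```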

It should be noted that in finance, risk measures are functionals on the space of random variables and they generally do not consider the risk preferences of the investor. There are a couple of interesting exceptions, both published in the Journal of Political Economy (Aumann & Serrano, Foster & Hart). These indices of riskiness take the preferences of the decision maker into account implicitly:

\mathbb{E}[e^{-\frac{X}{R(X)}}]=1

and

\mathbb{E}[\log{\left( 1 +\frac{X}{R(X)}\right)}]=0
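For a concrete feel (a toy binary gamble of my own choosing), the Aumann-Serrano riskiness R(X) defined by the first equation can be solved numerically:

```python
# Aumann-Serrano riskiness: the unique R > 0 solving E[exp(-X / R)] = 1 for a gamble X with
# positive mean and possible losses. The binary gamble below is an illustrative toy.
import numpy as np
from scipy.optimize import brentq

outcomes = np.array([120.0, -100.0])   # win 120 or lose 100
probs = np.array([0.5, 0.5])

def f(R):
    return np.sum(probs * np.exp(-outcomes / R)) - 1.0

# f is positive for small R and negative for very large R (positive-mean gamble), so a root exists.
R = brentq(f, 1.0, 1e6)
print("Aumann-Serrano riskiness R(X):", round(R, 2))
```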

The interested reader should also study the entropic risk measure and entropic Value at Risk. These risk measures have interesting properties as well.

Utility, choice theory and risk

In my opinion the concept of risk should always reflect both the objective probability measure and the decision maker's preferences. Therefore, distortion-based risk measures are the most interesting ones from this point of view. In classical choice theory under risk, we have the von Neumann-Morgenstern representation of preferences through expected utility, and therefore one should always choose in such a way that expected utility is maximised. The set-up assumes the usual weak order over the probability measures and, in addition, the independence axiom (convex sets) and the axiom of continuity (a proper topology). One basically then uses the separating hyperplane theorem to show that there exists a functional that preserves the preference order. This leads to the programme

\underset{P}{\text{max}}\int u(X)dP

This may seem counterintuitive, as the decision maker does not care about the variability of utility, but is “risk neutral in utilities”. As indicated by the Allais and Ellsberg paradoxes, it might indeed be the case that people do care about the variability of utility. One should note that the expected utility framework basically distorts the monetary outcomes, so with a concave Bernoulli utility function, for example logarithmic utility, small rewards are emphasised whereas large rewards are deprioritised. It might be the case that the class of utility functions is not rich enough to cater for this kind of behaviour.

Conclusion

Summa summarum: a good risk measure is coherent and takes individuals’ preferences into account. I think that in this regard the family of spectral risk measures is superior to the others. See for example: https://blogs.cfainstitute.org/investor/2014/02/26/are-spectral-risk-measures-respectable-enough/

 

 

Time to build a Grand Strategy for Finland

 


“The statesman must think in terms of the national interest, conceived as power among other powers. The popular mind, unaware of the fine distinctions of the statesman’s thinking, reasons more often than not in the simple moralistic and legalistic terms of absolute good and absolute evil.” -Hans Morgenthau

Security dilemma for Finland

Finland has always been in a difficult geographical position, a nation state between the East and the West. Since World War II, policymakers have had the thorny task of striking the right balance in matters of foreign and security policy. From the days of the Paasikivi doctrine, we have come a long way. Even though we pursued a policy of neutrality towards the superpowers during the Cold War era, since joining the EU in 1995 we are, in my opinion, irrevocably anchored in the Western sphere of influence. Even the policy of a fixed currency regime can be traced back to the national interest of leaning towards the West. Ex post, it seems that policymakers did a rather decent job. Finland is a sovereign country, with a strong political commitment to the Western values of justice, the rule of law, democracy and liberal humanism. This should not be taken for granted.

Recent international events show that the days of power politics have not gone anywhere. They are back with a vengeance. Since the inception of NATO and the Warsaw Pact, the international order has been based on the delicate balance of power between the United States and the Soviet Union, nowadays of course Russia. Since then many countries have acquired nuclear capabilities and thereby a strong deterrent, but in the big picture it seems the situation is once again more or less as it was pre-1989. History is alive and well. The international order has been shaken by world events, especially in Ukraine and in the Middle East. Finland, as a small country in the power periphery, should act accordingly.


What do we need to do?

First of all, I think we need to establish a Grand Strategy. The government of course prepares the Foreign and Security Policy Report (FSPR), the most recent one in June 2016, but we need a Grand Strategy in order to integrate all government policies towards our national interests of peace and prosperity. Therefore we would need to merge our EU policies, EMU policies and foreign and security policies into one solid Grand Strategy. This would ensure that all policy tools work coherently together.

Second, given the current state of international affairs, I think we should join NATO as soon as possible. The United Kingdom is exiting the European Union; militarily speaking, the EU is not as solid as it used to be. There is no binding collective security for us. We are on our own. Most EU members are in NATO anyway, and we already take part in various joint operations. Article 5 is the best security guarantee there is. Some commentators say that the NATO security guarantee is not credible. Yet failing to take joint action on the basis of Article 5 would destroy NATO, and this is not in the interests of the US.

Third, we need to modernise our defence forces and increase our defence spending. Today, military strategy is a far more complex discipline than it used to be. Generals always fight the last war, they say. Today we need speed, agility, readiness to deploy and adaptability. And we need to hold on to our conscription-based armed forces.

 

Bank resolution in the Banking Union

First, here is a piece of text from the preamble (67) of the Bank Recovery and Resolution Directive (BRRD):

“An effective resolution regime should minimise the costs of the resolution of a failing institution borne by the taxpayers. It should ensure that systemic institutions can be resolved without jeopardising financial stability. The bail-in tool achieves that objective by ensuring that shareholders and creditors of the failing institution suffer appropriate losses and bear an appropriate part of the costs arising from the failure of the institution. The bail-in tool will therefore give shareholders and creditors of institutions a stronger incentive to monitor the health of an institution during normal circumstances and meets the Financial Stability Board recommendation that statutory debt-write down and conversion powers be included in a framework for resolution, as an additional option in conjunction with other resolution tools.”

What’s at stake?

Given the recent market turbulence around some of the large banks in Europe, I thought it might be useful to go through the steps of how bank resolution works today in Europe, and more specifically in the Banking Union (BU). There is a rather thick pile of new European legislation and therefore, in order to avoid misunderstandings, some facts might be in order, so here we go.

The Banking Union institutions

First of all, Banking Union in Europe consists of the following three pillars:

-Single Supervisory Mechanism (SSM)

-Single Resolution Mechanism (SRM)

-Single Rule Book

The SSM, i.e. the ECB together with the national supervisors, takes care of the daily supervision of banks. This includes, but is not limited to, capital adequacy monitoring, liquidity adequacy monitoring, credit risk, market risk, operational risk and the like. The ECB is the central prudential supervisor in the BU and it directly supervises the large banks in the BU.

SRM consists of the Single Resolution Board and national resolution authorities. SRB prepares the resolution plans and decides on the resolution process, including on the use of the Single Resolution Fund (SRF) and on the use of the bail-in tool.

Single Rule Book includes most importantly the capital requirements directive (CRDIV), capital requirements regulation (CRR) and the bank recovery and resolution directive (BRRD). These rules are EU law, with regulations directly binding.

How is resolution triggered and carried out?

First of all, it should be noted that the SRB has the task of preparing resolution plans for the large banks in Europe. This means that once resolution commences, the SRB follows the pre-agreed resolution plan. The resolution plan includes the intended resolution actions and the tools that are planned to be used, including the bail-in tool.

Now let’s assume a large, internationally active bank based in the BU gets into trouble (for example, unexpectedly large losses). The regulation establishing the SRM provides the following procedure:

First, the conditions for resolution require that the entity is failing or is likely to fail (~its equity is non-positive) and that resolution action is needed. The ECB is responsible for making this assessment (there are exceptions). The Single Resolution Board will then take the decision to put the institution into resolution.

When resolution commences, the SRB will decide, for example, on the applicable resolution tools and on the use of the Single Resolution Fund (SRF). If the fund is to be used, the European Commission will have to assess whether the use of the fund is in line with the state aid rules.

If a bank is to be resolved, first an independent valuation of its assets and liabilities must be made. After the valuation, a conversion/write down of capital instruments takes place. Thereafter the resolution might involve an asset separation, sale of business, bridge bank establishment and the use of the bail-in tool.

The Single Resolution Fund

The Single Resolution Fund is owned by the SRB. The target size of the Fund is 55 billion euros. Banks in the BU contribute annually to the fund to raise the money needed. The fund can be used in extraordinary circumstances.

The Single Resolution Fund (banks will provide the funds by paying ex-ante contributions) can take part in the financing of the resolution action according to the following rules:

1. Within the resolution scheme, when applying the resolution tools to entities referred to in Article 2, the Board may use the Fund only to the extent necessary to ensure the effective application of the resolution tools for the following purposes:

(a) to guarantee the assets or the liabilities of the institution under resolution, its subsidiaries, a bridge institution or an asset management vehicle;

(b) to make loans to the institution under resolution, its subsidiaries, a bridge institution or an asset management vehicle;

(c) to purchase assets of the institution under resolution;

(d) to make contributions to a bridge institution and an asset management vehicle;

(e) to pay compensation to shareholders or creditors if, following an evaluation pursuant to Article 20(5) they have incurred greater losses than they would have incurred, following a valuation pursuant to Article 20(16), in a winding up under normal insolvency proceedings;

(f) to make a contribution to the institution under resolution in lieu of the write-down or conversion of liabilities of certain creditors, when the bail-in tool is applied and the decision is made to exclude certain creditors from the scope of bail-in in accordance with Article 27(5);

(g) to take any combination of the actions referred to in points (a) to (f).

No more bail-outs!

The process I’ve described above ideally means that banks are no longer bailed out by the taxpayer. First of all, the philosophy is that banks should hold a sufficient amount of bail-inable debt (MREL) on their balance sheets, so that this debt can be converted or written down if a bank suffers huge losses and its equity is wiped out. Even if the SRB were to decide to use the Fund to inject capital into a bank, the money is collected from the banking sector. One should also note that the Fund can only be used after a sufficient amount of debt has been written down/converted into equity (8 % of the balance sheet).

What is the role of the central banks?

Given that problems related to solvency are dealt with by the SRB, the only role for the ECB and the national central banks is to provide extraordinary emergency liquidity assistance (ELA) to solvent bridge institutions or to the newly resolved institutions.

Is Deutsche Bank in trouble?

Legally speaking, this is to be assessed by the ECB. Given the current market valuations of insurance contracts on the subordinated debt (5y sub CDS), the market is clearly pricing in an element of increased risk there (in terms of bail-in). Given the systemic role of DB, not least through its huge derivatives book, I can only hope that the ECB and the SRB do what the EU legislation requires of them.

 

What is good economics and what are its triumphs?

I read an interesting piece of critique yesterday by Paul Romer. The core of his critique is that modern macroeconomics (~Dynamic Stochastic General Equilibrium models) has drifted over the past 30 years or so into a form of post-real modelling, as in theoretical physics, where one has M-theories, string theories and so forth: in essence, mathematical consistency and structure with little contact with reality, I suppose. I have to say that, as a practising professional economist, I must agree with Dr. Romer. Modern macroeconomics most likely does more harm than good in terms of understanding how the economic machine works. I do understand the need for microfoundations, but given that most DSGE models do not cater for bank leverage, financial markets and credit/leverage cycles in general, I do not see much intellectual interest in these models currently. Moreover, modelling banks as mere intermediaries is plain wrong. Reading Irving Fisher, Friedrich Hayek, Knut Wicksell, Joseph Schumpeter and J.M. Keynes is far more useful (and rewarding).

Economics and the scientific method

Nevertheless, I do not accept the position of dismissing economics altogether. Economics is a set of ideas. Even though economics is not a natural science, it should, in my opinion, follow the scientific method, i.e. the method of doing economics should follow the steps of:

  1. Characterising the phenomenon
  2. Formulating hypotheses
  3. Making predictions
  4. Experimenting –falsification and verification
  5. Evaluation and making improvements

Most likely the main problem with DSGE models and modern macroeconomics lies in the first step. A model of an economy should not be based on a simple linear optimal control problem with stochastic shocks. One should model the economy more like artificial economies are modelled in computer games (SimCity, Civilization, Democracy). Physicists model complex phenomena using, for example, the Ising model; why not bring this paradigm into economics as well? Optimal control is a good tool for steering ICBMs, but not so much for complex, human economies. If the model does not work and is ontologically unrealistic, I don’t see the benefit of having microfoundations.
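To illustrate the kind of alternative I have in mind, here is a deliberately crude sketch of an Ising-style sentiment model: agents imitate their neighbours with some noise, and aggregate "confidence" swings endogenously. Everything here (the lattice, the parameters, the interpretation) is a made-up toy, not an established macro model.

```python
# A deliberately crude Ising-style toy: agents on a ring are optimistic (+1) or pessimistic (-1),
# imitate their neighbours with logistic (Glauber) switching, and aggregate sentiment then swings
# endogenously without external shocks. Everything here is a made-up illustration.
import numpy as np

rng = np.random.default_rng(1)
n, steps, beta = 200, 5000, 1.2          # agents, update steps, imitation strength
state = rng.choice([-1, 1], size=n)

sentiment = []
for _ in range(steps):
    i = rng.integers(n)                                     # pick a random agent
    neighbours = state[(i - 1) % n] + state[(i + 1) % n]
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * neighbours))   # probability of turning optimistic
    state[i] = 1 if rng.random() < p_up else -1
    sentiment.append(state.mean())

print("average sentiment:", round(float(np.mean(sentiment)), 3))
print("sentiment volatility (std):", round(float(np.std(sentiment)), 3))
```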

What is good economics?

There are a lot of good theories and models in economics. The following list is based on my experience of what is important.

  1. Economic efficiency

Given that resources are scarce, economics has quite nice formalisations for using resources efficiently. Usually we have the following vector program, where one needs to solve:

max (f_1(\vec{x}),f_2(\vec{x}),f_3(\vec{x}),f_4(\vec{x}),..)

s.t.  \vec{x}\in X

The objective vector could represent a set of utilities for individuals in a society, or profits for a firm, or a utility function for a central bank. What is important is that efficient use of resources comes through a (convex) optimisation program. Pareto-efficient solutions can be described for partially ordered sets. Of course, one of the triumphs of welfare economics is the set of theorems that codify the invisible hand of Adam Smith: a competitive market allocation is Pareto-optimal (conditions cannot be improved for some without negatively affecting others). Kenneth Arrow and Gerard Debreu were the pioneers on these issues.
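As a numerical sketch of the vector program above (toy log utilities and a single divisible good, my own choices), the standard weighted-sum scalarisation traces out Pareto-efficient allocations:

```python
# Tracing Pareto-efficient points of the vector program by weighted-sum scalarisation:
# maximise w*u1 + (1-w)*u2 over the feasible set for different weights w. Two log-utility agents
# share one unit of a divisible good; the utilities and the feasible set are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

def pareto_allocation(weight):
    obj = lambda x: -(weight * np.log(x) + (1 - weight) * np.log(1 - x))   # negated weighted sum
    res = minimize_scalar(obj, bounds=(1e-6, 1 - 1e-6), method="bounded")
    return res.x

for w in (0.2, 0.5, 0.8):
    x = pareto_allocation(w)
    print(f"weight {w:.1f}: agent 1 gets {x:.2f}, agent 2 gets {1 - x:.2f}")
# Each solution is Pareto-optimal: one agent's utility cannot rise without the other's falling.
```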

2. Public choice

Economic methodology married with political science. Assuming voters, bureaucrats and politicians to be selfish utility maximisers, one can deduce interesting things. My favourite is the median voter theorem, which basically says that under majority rule the outcome reflects the preferences of the median voter.
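A quick brute-force check of the median voter logic (toy voter positions of my own):

```python
# Brute-force check of the median voter logic: with single-peaked preferences (voters prefer the
# option closer to their ideal point), the median ideal point is not beaten in any pairwise
# majority vote. Voter positions are arbitrary toy data.
import numpy as np

ideal_points = np.array([0.1, 0.2, 0.35, 0.5, 0.55, 0.8, 0.9])   # odd number of voters
median = float(np.median(ideal_points))

def majority_prefers(a, b):
    closer_to_a = np.sum(np.abs(ideal_points - a) < np.abs(ideal_points - b))
    return closer_to_a > len(ideal_points) / 2

challengers = [c for c in np.linspace(0, 1, 101) if not np.isclose(c, median)]
print("median position:", median)
print("median beaten by some challenger?", any(majority_prefers(c, median) for c in challengers))
```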

3. Economics and social welfare -veil of ignorance

This is close to ethics and moral philosophy, especially Rawlsian moral theory, but perhaps surprisingly economics has something to offer to social welfare theory as well. One of the most interesting results is by John Harsanyi (1955), which says that under some mild assumptions, social welfare is a weighted sum of individuals’ expected utilities. Expected utility was, by the way, developed by the genius John von Neumann. The veil of ignorance concept was likewise originally developed by Harsanyi, well before John Rawls.

4. Cournot competition and game theory

Cournot competition (1838) is an equilibrium concept (Nash equilibrium) where two firms choose optimal levels of production in a strategic setting in which individual firms have market power. Firms have what are called “best response functions”, and the intersection of these curves defines the equilibrium. Very intuitive, and profound. Again, when John von Neumann was not busy with the foundations of quantum mechanics and the first computers, he developed game theory further as well. The minimax theorem and of course later John Nash’s equilibrium theorems are important.
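A short sketch with linear demand and constant marginal costs (illustrative parameters of my own) that finds the Cournot-Nash equilibrium by iterating the best-response functions:

```python
# Cournot duopoly with linear inverse demand P = a - b*(q1 + q2) and constant marginal cost c.
# Firm i's best response is q_i = (a - c - b*q_j) / (2b); iterating converges to the Nash equilibrium.
# Parameters are illustrative.
a, b, c = 100.0, 1.0, 20.0

def best_response(q_other):
    return max(0.0, (a - c - b * q_other) / (2.0 * b))

q1 = q2 = 0.0
for _ in range(200):
    q1, q2 = best_response(q2), best_response(q1)

print("Cournot equilibrium quantities:", round(q1, 3), round(q2, 3))
print("analytical benchmark (a - c)/(3b):", round((a - c) / (3 * b), 3))
```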

5. Portfolio theory

Choosing an optimal allocation for an investment portfolio. Harry Markowitz saw the light and formulated it as a quadratic program. One has the covariance matrix of stock returns C, the return vector R, risk parameter lambda and the weights w. The program is simply:

min \vec{w}^TC\vec{w}-\lambda \vec{w}\cdot \vec{R}

subject to \sum w_i=1
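A minimal numerical sketch of this quadratic program (toy covariance matrix, expected returns and risk parameter of my own), solved with SciPy under the full-investment constraint:

```python
# Mean-variance portfolio choice: min w'Cw - lambda * w.R subject to sum(w) = 1.
# The covariance matrix, expected returns and risk parameter are illustrative toy numbers.
import numpy as np
from scipy.optimize import minimize

C = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])
R = np.array([0.06, 0.09, 0.12])
lam = 0.5

objective = lambda w: w @ C @ w - lam * (w @ R)
full_investment = {"type": "eq", "fun": lambda w: np.sum(w) - 1.0}

res = minimize(objective, x0=np.ones(3) / 3, constraints=[full_investment])
w = res.x
print("weights:", np.round(w, 3))
print("portfolio return:", round(float(w @ R), 4), " variance:", round(float(w @ C @ w), 4))
```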

6. CAPM

The capital asset pricing model is kind of funny, because it seems trivial (a projection in a Hilbert space), but it can also be deduced from utility maximisation. What CAPM says is that there is a price of risk, which can be calculated from the covariance with the market risk.

E[R-R_0]=\beta E[R_M-R_0]
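A small illustration of the formula with simulated returns (made-up parameters): estimate beta as Cov(R, R_M)/Var(R_M) and compare the CAPM-implied expected excess return with the realised one.

```python
# CAPM mechanics on simulated data: beta = Cov(R, R_M) / Var(R_M), and the implied expected
# excess return is beta times the market's expected excess return. All parameters are made up.
import numpy as np

rng = np.random.default_rng(7)
n, true_beta = 600, 1.3
market_excess = rng.normal(0.005, 0.04, n)                           # market excess returns
asset_excess = true_beta * market_excess + rng.normal(0.0, 0.02, n)  # asset excess returns

beta_hat = np.cov(asset_excess, market_excess)[0, 1] / np.var(market_excess, ddof=1)
print("estimated beta:", round(beta_hat, 3))
print("CAPM-implied expected excess return:", round(beta_hat * market_excess.mean(), 5))
print("realised mean excess return:        ", round(asset_excess.mean(), 5))
```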

 

7. Options pricing and the risk neutral probability measure

Heat flow and financial derivatives benefit from the same underlying structure, namely continuous Brownian motion, the continuum-limit model of accumulated white noise. Black & Scholes (1973) realised that, using the Ito lemma and Fourier transforms, one could derive explicit pricing formulae for European options from the following parabolic PDE:

 

\frac{\partial V}{\partial t}+\frac{1}{2}\sigma^2 S^2\frac{\partial^2 V}{\partial S^2}+rS\frac{\partial V}{\partial S}-rV=0

 

The link between the heat equation and the BS equation comes from the fact that the generator of the stochastic process for stock prices, like that of a diffusion process, is a second-order partial differential operator.

The risk neutral probability measure is an imagined probability measure that equates the price of an asset with the expectation of its discounted payoff. In this way, all assets can be priced using only expectations.
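For European calls, the closed-form solution of the PDE above is the familiar Black-Scholes formula; here is a minimal implementation with illustrative inputs.

```python
# Black-Scholes price of a European call: the closed-form solution of the PDE above, i.e. the
# discounted expected payoff under the risk neutral measure. Inputs are illustrative.
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, sigma, T):
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print("European call price:", round(bs_call(S=100, K=105, r=0.01, sigma=0.2, T=1.0), 4))
```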

8. Covered interest rate parity

The determination of exchange rates in relation to interest differentials is of course interesting, at least for traders. So here we have it

 

\frac{F}{S}=\frac{1+i_i}{1+i_j}

 

In other words, the ratio of the forward FX rate to the spot FX rate equals the ratio of the (gross) interest rates in the respective currencies.
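As a one-line worked example (made-up rates): given a spot rate and one-year interest rates in the two currencies, the no-arbitrage forward follows directly.

```python
# Covered interest rate parity: F = S * (1 + i_i) / (1 + i_j). The spot rate and the one-year
# interest rates in currencies i and j are made-up numbers.
S, i_i, i_j = 1.10, 0.02, 0.005
F = S * (1 + i_i) / (1 + i_j)
print("no-arbitrage one-year forward rate:", round(F, 5))
```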

9. Comparative advantage (international trade)

‘Focus on what you’re good at’ -David Ricardo

10. The Trilemma (international finance)

It is impossible to have the following things at the same time: fixed exchange rates, free capital mobility and an independent monetary policy.

11. Tragedy of the commons

We have an incentive to defect when property is held in common, hence the case for private property.

This is the list. As you can see, no DSGE models.

Euro area monetary policy and its discontents

What is at stake?

The monetary situation in the Euro area is still rather fragile, and the main indicator (medium-term market-implied inflation expectations) points to the conclusion that additional monetary easing is definitely required. When the ECB Governing Council assembles tomorrow and on Thursday to decide on rates and on the monetary policy stance in general, the members of the council will face a thorny choice between conflicting options.

Scene setter

First of all, it should be noted that making policy decisions is very delicate in the current situation, given all the political and legal constraints inherent in the Euro area constellation. However, if we focus on the main objective, i.e. keeping inflation expectations firmly anchored (close to 2 per cent in the medium term), the data are rather disappointing. The market-implied expectation for medium-term inflation (the 5y5y inflation swap forward) has drifted downwards for some time now. The current number is well below 1,3 %. The figure below illustrates this very clearly. This number is surely one of the main market indicators that the ECB watches keenly.


Fig. 1. 5y5y inflation swap forward-implied inflation

Despite the current extended asset purchase programme (QE), it seems that the monetary policy transmission channel is still somewhat impaired, as the significant expansion of the Eurosystem consolidated balance sheet has not led to a substantial increase in bank lending in the Euro area. This impairment ultimately affects inflation negatively, at least if we assume that inflation depends somehow positively on the money in circulation (cash and bank deposits). Remember, MV=PY.


Fig. 2. Eurosystem balance sheet (ECB + national central banks)

Again, we must remember that QE does not automatically increase the amount of “real” money, because the stock of bank deposits (firms’ and people’s money) does not automatically change due to QE. QE does, however, increase excess reserves by crediting commercial banks’ central bank checking accounts, and it flattens the yield curve. This should encourage lending and borrowing. There is also the FX/currency channel, which boosts exports to some extent.


Fig. 3. Euro area money supply (M3) -annual growth rate, SA

One might say that in the Euro area we have an impaired bank lending channel, as new bank lending is still rather sluggish. To me this seems to be effectively a demand-side problem that should be tackled with fiscal measures. However, given that we do not live in a sovereign-money regime and/or a fiscal union, the ECB must act, and it should act sooner rather than later.

Policy instruments

So, what are the main options and obstacles? As the Eurosystem is buying a large amount of government bonds each month (almost 80 bn euros a month), the price level of bonds has increased to the extent that a large portion of sovereign bonds in the Euro area have negative yields. This need not be a problem, as long as the instruments yield on average more than -0,4 % (the central bank deposit rate). This guarantees a positive net interest income for the national central banks.

So, as long as the net interest income is positive (on average), negative yields are not of any major concern for the Eurosystem as a whole. For banks, the ultra-low interest rate environment is toxic, as most of the loan books are tied to Euribor rates while the deposit rates are not. So the price of lowering the central bank deposit rate is basically a worsening of bank profitability (and a punishment of savers in general). Given the low levels of capital in some parts of the banking system, low profitability might be a problem in terms of financial stability. However, it might be that a further 10-25 bps cut in the deposit rate is realistic, given that the other options are even more difficult.

It might nevertheless be argued that the negative rate does not really boost lending, because we may have reached the limits of the effect of interest rates on investment decisions, and banks will charge positive rates from loan customers anyway.

Given that the low yields are a de facto constraint, what else could be done in order to expand the QE programme? Well, one can always target longer maturities, whenever possible. Also, one can take on more credit risk. This is technically easy, but politically hard. Technically speaking, if one wanted to expand the QE programme to, say, 100 bn a month, which I think is needed, one could easily do so by buying solely, for example, Italian government bonds. However, this would mean deviating from the capital key pro rata approach and would thus be a (larger) step towards a fiscal union, at least that is how it would look in many countries. Politically not likely.

The final and most radical option would be to launch an all-out equity and/or junk bond purchase programme. The Eurosystem could start buying very risky assets with high yields. I think this is off the table, as is helicopter money, at least for now.

To sum up on the options

  1. Lower the deposit rate by for example 25 bps to -0,65 % (and the repo rate and the rate on marginal lending facility accordingly, in order to keep the interest rate corridor tight). This would lower the market rates further across the yield curve.
  2. Relax the issue limit and/or capital key. This would mean that the Eurosystem could acquire a bond issue in its entirety and target individual Member States. Politically delicate, not likely.
  3. Extend the QE programme to equities and/or junk bonds as the BoJ has done. This would most likely be rather controversial and would inflate the equity market further still.

Given the legal and political challenges, my bet would be that there will be a further cut in the interest rate corridor. This might be accompanied by some bond buying below the -0,4 % limit, as long as the overall profitability of the Eurosystem remains positive. Deviating from the capital key creates more political strain and tension than the benefits of additional QE are worth.

Has monetary policy reached its limits?

Nevertheless, I think that in the Euro area monetary policy has more or less reached its limits. Bank profitability and capital issues constrain further rate cuts, and deviating from the capital key could be politically toxic. So what is left is little. Structural reforms are necessary, but in this case we most likely need more support from the fiscal side. Given the pathological current account surplus of Germany, it seems clear that the external value of the Euro is too weak for Germany. The huge net international investment position accumulating due to persistent current account surpluses should be tackled by the European Commission and the Council, using the six-pack provisions. The remaining options for fiscal stimulus basically boil down to some boosted form of the Juncker plan and/or increased EIB lending volumes. One could envisage a role for the ESM as well, at least in theory.

 

 

Inflation, QE, money, banking and other stuff

Inflation might be always and everywhere a monetary phenomenon, but the devil is in the detail. Inflation as such is a social convention, which usually means the annual change in the consumer price index (CPI). One might as well speak about nominal GDP growth or some other nominal measure (NGDP would be a far better target for central banks, as the investment decisions that shape the future are tied to expectations of future nominal profits). Since the 1990s, central banks have been more or less independent, and the usual wisdom is to keep inflation stable, at around 2 %. The independence, I guess, is basically there to prevent excess monetary financing of government deficits. And I stress the word ‘excess’.

 

The recent ‘unconventional’ monetary policies in the US, the UK, Japan and the Eurozone have included measures such as buying ABSs, government bonds, covered bonds and corporate bonds. The interest rate corridor is ultra-low, with interbank rates around zero or even negative. The Eurosystem has applied a negative rate on central bank deposits (-0,4 % interest on banks’ checking accounts at the central bank). So the price of money is low, measured by the yield curve. The yield curve is therefore rather flat as well.

 

When QE was launched by the Fed in the US some years ago, some investors and economists claimed that QE would lead to hyperinflation, or at least undermine the external value of the dollar to a great extent. Both are theoretical possibilities, but I think that the common discussion around money, inflation and monetary policy is somewhat shallow and unrigorous. For sure, inflation and monetary economics are difficult subjects and much studied in the economics literature (especially in the form of modern DSGE models). However, I think that one of the main reasons for the misconceptions around inflation is a flawed ontology of money creation and purchasing power.

 

The usual claim is that central bank bond buying is somehow affecting inflation directly (CB money printing causes hyperinflation, therefore buy gold or stocks). There is an effect, but it is mostly indirect. Let me sketch a simple model economy.

 

We assume there is a central bank, a private bank (the banking sector is consolidated into one bank), some firms and households. We also assume there is a government with some stock of public debt D. Let us assume that the interest rate level is already at the so-called zero lower bound. Now, as the economy is in secular stagnation, the central bank records zero inflation from the statistics office and decides to launch an ambitious QE programme. We assume that the private bank holds the entire government bond stock D. The private bank has a checking account at the central bank with a balance of 0, for simplicity. The central bank then buys the whole bond stock D, worth 100, from the bank and credits the bank’s central bank account by 100. This is QE. The direct effect is that the balance sheet of the CB expands by 100 and that the return on equity of the bank goes down (the checking account at the CB yields less than the government bond, hopefully).

 

Hold on, where’s the (hyper)inflation? Well, as consumer price inflation basically depends on the nominal purchasing power of households, additional inflation requires additional nominal purchasing power for the households. For sure, the balance sheet of the central bank has increased by 100 and maybe the yield curve has flattened a bit (the portfolio effect, as the bank wants to swap the low-yielding CB deposit into some other fixed income instrument). This is the more or less direct effect of QE. But the fact that the private bank now holds 100 units of central bank money instead of government bonds does not add to the purchasing power of households. There is the small effect of monetising the government’s interest costs (the government owns the CB), but this is of minor importance.

 

The purchasing power of households will increase if they borrow money from banks (consumer credit) or if their salaries increase (or if they receive dividends/sell assets). Now, let us assume the ‘salaries increase’ scenario. This can take place through investment (hiring more people in the aggregate) and/or additional consumption in the economy (more revenue for the firm means a possibility to negotiate higher wages). This in turn requires that more purchasing power is injected into the economy.

 

This is where banks enter the equation. Let’s assume that a firm wants to invest 100 and borrows the money from a bank. Then the bank credits the firm’s checking account by 100 and recognises a new loan on its loan book. Nothing else is required. The fact that the bank had 100 units of central bank money on its balance sheet instead of government bonds, due to QE, did not directly affect the loan agreement in any way. This is because, in a closed economy, banks do not lend out their excess reserves (that is, by the way, why there has been no hyperinflation in the US in spite of the huge pile of excess reserves). The excess reserves can in this case only shrink relative to deposits through the extension of new loans, which creates new deposits and pushes down the reserve ratio (CB checking account balance / banks’ checking account balances). The same logic applies to consumer credit or mortgage loans: new deposits are created and therefore new money is created, out of thin air, not out of the CB deposits.
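The stylised story above can be written down as a toy balance-sheet simulation (my own schematic, not any official accounting): QE swaps the bank's bonds for reserves without touching deposits, while a new loan creates a matching new deposit.

```python
# Toy balance sheets for the stylised economy above: QE swaps the bank's bonds for central bank
# reserves without creating deposits, whereas a new loan creates a matching new deposit ("money").
# Account names and numbers are schematic, purely for illustration.
bank = {"gov_bonds": 100.0, "reserves": 0.0, "loans": 0.0, "deposits": 0.0}
central_bank = {"gov_bonds": 0.0, "reserves_issued": 0.0}

def money_supply():
    # In this toy world, "money" held by firms and households = bank deposits (cash ignored).
    return bank["deposits"]

# Step 1: QE -- the central bank buys the whole bond stock and credits the bank's reserve account.
central_bank["gov_bonds"] += 100.0
central_bank["reserves_issued"] += 100.0
bank["gov_bonds"] -= 100.0
bank["reserves"] += 100.0
print("money supply after QE:", money_supply())            # unchanged: 0.0

# Step 2: a firm borrows 100 -- the bank books a loan and credits the firm's deposit account.
bank["loans"] += 100.0
bank["deposits"] += 100.0
print("money supply after new lending:", money_supply())   # now 100.0: loans create deposits
```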

 

Therefore demand for loans is critical. Of course low interest rates help, but if the economy is excessively leveraged, the deleveraging dominates and there is no demand for new loans in net terms (too much private and public debt). Also, banks are constrained by the amount of regulatory capital, so little capital=little new lending.

 

Now, as money is commonly understood to be cash and bank deposits, it is quite clear that new money in this sense requires new lending from the banking system. QE does not directly increase the balance sheet of the banking system. It only flattens the yield curve and creates excess reserves (and makes the government debt interest-free, as the government receives the interest income from the central bank in the form of dividends).

 

To sum up, QE should create strong incentives for new lending and borrowing, especially now, when excess reserves are penalised. Despite low interest rates, new lending is still sluggish. There is not sufficient demand for loans and therefore nominal GDP growth is sluggish, which only creates more pessimism.

 

So what was the lesson learned?

 

  1. QE is not really money printing as such, because what we usually mean by money is people’s bank account balances and cash. The portfolio effect, however, flattens the yield curve. So QE does not print ordinary money. However, the interest on the government bonds is returned to the government, so this basically is debt relief for governments.

 

  2. In a closed economy, banks do not lend out their excess reserves. New lending is autonomous, i.e. new loans create new deposits and thus new money. Therefore the key to nominal growth is to create new bank deposits.

 

  3. In secular stagnation and a balance sheet recession, the most effective theoretical way to increase purchasing power directly is to finance government deficits outright by crediting governments’ central bank checking accounts and recognising a zero-coupon perpetuity loan from the government, at face value, on the balance sheet of the central bank. Most likely direct government investment is the most efficient way to increase nominal demand, although tax cuts might have a better allocative effect.