Knowledge is power, but limited

An aphorism says that great minds discuss ideas, average minds discuss events, and small minds discuss people. Why should ideas in particular be important? It would be even more important to talk about ideas that concern ideas. So let us talk about meta-ideas. The limits of human knowledge are familiar to all of us. In the simplest case we just do not know about things, because we have not read about them. Human knowledge, however, also has more fundamental limits.

As I see it, from the point of view of knowledge there are three utterly fundamental problems in the world.

The first problem concerns causality: what is the cause of things, and what is the effect. Science speaks of the laws of physics, quietly suggesting that, for example, there is a law of gravity out there in the world. Yet already more than a hundred years ago Albert Einstein had to bring Newton's 'law' up to date. It seems likely that the laws of physics, too, are merely good descriptions of phenomena, and only rarely deserve to be elevated to the status of universal laws. It should be noted, however, that physics undeniably contains a metaphysical and teleological aesthetic, which shows up for example in the variational formulations of classical field theories. Nature seeks a form that, for example, minimizes the total curvature of space. A teardrop and a soap bubble are each optimal shapes in their own way, and nature is phenomenal in its economy. Economy appears to be a universal virtue.

Yet we cannot know with certainty even that the laws of physics hold always and remain unchanged as such. What if one day the sun were to set in the morning? Or water were to boil when cooled to zero degrees at normal pressure? We have made countless measurements, and empirically the laws of nature appear reasonably stable over time. At bottom, however, this is a statistical regression line, and deviant data points cannot be logically ruled out. Admittedly, the laws of physics are beautiful in other respects as well, and it is intuitively plausible that nature operates precisely according to them.

The problem of causality is more mundane in human interaction, and especially in political questions. People regrettably often confuse correlation and causality. If phenomena are statistically associated, for example through a positive sample correlation, almost nothing can be inferred from that about their causal relationship. Ice cream sales and drowning deaths correlate over time, but behind both there is probably some common factor, in this case presumably the pleasant weather of summer. Epistemologically, statistics can only answer questions where a possible causal relationship can be excluded. And even this is not entirely true.
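The ice cream and drowning example can be illustrated with a small simulation (the numbers are my own toy assumptions: a hypothetical temperature variable drives both series):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical confounder: summer temperature drives both series.
temp = rng.normal(20, 5, n)
ice_cream = 2.0 * temp + rng.normal(0, 3, n)   # sales
drownings = 0.5 * temp + rng.normal(0, 3, n)   # deaths

# The raw correlation is clearly positive...
raw = np.corrcoef(ice_cream, drownings)[0, 1]

# ...but controlling for temperature (partial correlation via
# regression residuals) it essentially vanishes.
def residuals(y, x):
    beta = np.polyfit(x, y, 1)
    return y - np.polyval(beta, x)

partial = np.corrcoef(residuals(ice_cream, temp),
                      residuals(drownings, temp))[0, 1]

print(raw, partial)
```

The raw correlation comes out strongly positive, while the partial correlation hovers around zero, even though neither series causes the other by construction.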

Statistics has tried to sketch causality with, for example, so-called Granger causality, but unfortunately it too is subject to the "post hoc ergo propter hoc" critique. Temporally preceding phenomena do not necessarily cause temporally later ones, unless one is a particularly puritan determinist. Despite these problems with causality, causal arrows are drawn between things and phenomena, especially in political debate, without any sustainable epistemological self-criticism.

Causality is intimately connected to the nature of time itself. Time flows forward, and past events can in that sense be ordered on a number line. Things thus have a temporal order, but particular caution should be exercised in identifying causal effects: order does not imply causality. When looking at most statistical studies in, say, the social sciences, one must be especially critical, because correlation and causality are confused all too often. Something can nevertheless be sketched with the help of various probabilities. Entropy grows and heat death looms; human reason, in turn, creates order and information.

The epistemological problem of causality and determinism also contains a consolation: we do not know the causes and effects of things precisely, and so we can occasionally be a little more merciful to ourselves, since the present state is presumably a complicated, nonlinear sum of the past. Having mentioned nonlinearity, one cannot avoid a reference to the possible chaotic nature of the world. The world is probably a chaotic dynamical system. A chaotic dynamical system depends extremely sensitively on its initial state, and the flap of a butterfly's wings can in principle cause a tornado, say, in Finland.

Statistical research and statistical knowledge are not worthless, however, even when knowledge of causality is missing. The greatest currency in this world is the ability to anticipate and predict. With a robust forecasting toolkit one can predict the future better than by gut feeling alone. Some American hedge funds have delivered better returns than broad market indices for decades, based on sophisticated forecasting models. Coincidence? Forecasting skill is valuable expertise. In my view, the predictive power of markets should also be harnessed for broader, universally human purposes. Financial markets are an epistemologically fascinating field of phenomena. I will return to this below.

We cannot, however, know the true operating logic of the world, because in the end our empirical observations of the world produce, to put it crudely, only regression lines, without a deep understanding of cause and effect.

The second problem concerns the incompleteness of the logical world and the problem of self-simulation. This problem is also connected to forecasting in an intriguing way. The point is that, in the end, a human or a machine is physically part of the system being predicted. When we make forecasts, we ourselves affect the system being forecast (in some presumably causal way). When economic research institutes publish forecasts of economic development, for example, the forecast affects people's consumption behaviour, so the forecast is not independent of its object of study. This problem is especially acute if the world is a chaotic dynamical system in which everything affects everything else enormously, through complicated feedback mechanisms.

A forecast should therefore take into account that the forecast affects its object, which should take into account that the forecast affects its object, and so on. This spawns an infinite loop, unless there exists some special self-fulfilling fixed-point forecast: an algorithm whose recursion at some point returns itself. Who created God, and why? That is part of the same question.
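The fixed-point idea can be sketched numerically. In this toy model (entirely my own assumption) the realized outcome reacts to the published forecast through a contraction g, so iterating the forecast converges to the unique self-fulfilling fixed point, as in a Banach fixed-point argument:

```python
# Toy model (my own assumption): the realized outcome reacts to the
# published forecast via g(f) = 0.5 * f + 2. A self-consistent
# forecast is a fixed point f* with g(f*) = f*, here f* = 4.
def outcome(forecast):
    return 0.5 * forecast + 2.0

f = 0.0  # initial naive forecast
for _ in range(100):
    f = outcome(f)  # revise the forecast to match its own effect

print(f)  # converges to the self-fulfilling fixed point 4.0
```

The convergence hinges on the reaction function being a contraction (|g'| < 1); if the feedback is stronger than that, the loop need not settle at all.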

This perhaps hardest of philosophical problems has been approached most precisely by the mathematician and logician Kurt Gödel (1906-1978). In the 1930s Gödel proved his famous incompleteness theorems, which precisely describe the incompleteness of formal systems: there are truths that cannot be proved from the axioms of the system. Ludwig Wittgenstein also touched on this problem in the reflections he wrote in the trenches. "This sentence is not true." Some things exist, as it were, even though the system cannot reach them logically. The logical world is thus not closed with respect to meanings. If I were a believer, Gödel's results would also give me evidence that this world is not all there is.

The best forecasts are therefore those that fulfil themselves. In financial markets there is, by the way, the concept of the informational efficiency of markets; one also speaks of the disappearance of autocorrelation from return time series. Market efficiency is a curious paradox: markets are efficient only if nobody believes in their efficiency.

The third problem is related, in its own way, to the previous one. How can meaningful models of the world be formed? Which sentences are meaningful? As an example I would like to raise microeconomics and, more broadly, decision theory: the doctrine of rational decision-making. When we say that a person or a decision-maker acts optimally, rationally or sensibly, what do we actually mean? In the economic sense we mean that the agent chooses the action that maximizes her utility function subject to given constraints. The constraints are easy enough to understand, but the utility function is harder. What if we pose the question in reverse: given the constraints and the behaviour, do there exist preferences and a utility function such that this behaviour is utility-maximizing? This problem is known in microeconomics as the integrability problem. Intriguingly, it is also related to thermodynamics and exact differentials.

The question is thus one of falsifiability, at least in part. Meaningful sentences should, at least in principle, be demonstrable as true or false. "In my own opinion I am beautiful" is not a meaningful sentence; "It will rain tomorrow" is. Matters of taste are thus, in a sense, not meaningful propositions: there is no disputing about taste. Are morality and ethics matters of taste? Forecasts, however, can be tested.

Above I listed what I regard as the most central problems concerning human knowledge. I cannot, however, leave unmentioned some more practical problems, relating mainly to morality and the art of argumentation. In a civil and intelligent discussion one should never stoop to distorting the other party's words, nor appeal to the other party's personal characteristics. Things are the case, or are not, regardless of the properties of the discussants. If a political scientist makes claims about economic systems, those claims can be true regardless of the person's educational background. Likewise, statistical data is statistical data regardless of who presents it. These argumentation errors appear regrettably often in political debate. The truth value of claims should be settled objectively; truth is context-free. So let us avoid circular reasoning, appeals to authority, ad hominem attacks, straw men and other fallacies.

As a final point, I cannot resist mentioning one theoretical fact that constrains political life. It concerns the aggregate welfare of society. Since the liberals of the 19th century, a peculiar idea has prevailed: that the total welfare of society can somehow be computed by summing over individuals. This idea does not withstand critical scrutiny. Utilitarianism is not epistemologically sustainable, at least when interpreted strictly. People's subjective utilities cannot be compared, because their very subjectivity makes them incommensurable. The pleasure one person gets from good food may have a utility value of 10, while another's may be 10 to the power of 10, or saltpetre. We do not share a common measurement scale. That is why a central planner is almost certainly doomed to fail: maximizing the total welfare of society is not a meaningful proposition. Claims about efficiency, by contrast, are meaningful. We can ask categorically whether welfare moves up or down. We can possibly make improvements for some without reducing anyone's welfare. Such efficiency improvements are worth pursuing.
Politics is the continuation of war by other means. Utilities cannot be compared in a precise sense, but a stable political system nevertheless finds a fairly durable equilibrium year after year. That is actually rather a fine thing. A more progressive new year 2018.

The intellectual problems with Adam Smith’s invisible hand and the question of the omnipotence of free markets

’Equilibrium prevails if all plans and expectations of all economic subjects are fulfilled so that, at given data, no economic subject feels inclined to revise his plans or expectations.’ -Erik Lindahl

Let us talk about Walrasian economic equilibrium and about the dynamic process that governs the evolution of market prices, possibly towards an equilibrium. As the quote above by Erik Lindahl (1891-1960), a Swedish economist, suggests, economic equilibrium is a rather general and abstract concept, which is a state where nobody is willing to move or alter her policy.

The neoclassical theory of economic equilibrium is arguably the intellectual cornerstone of modern economic theory. The modern theory of economic equilibrium, including the welfare theorems, is the rigorous equivalent of the ’invisible hand’ of Adam Smith. Equilibrium concepts are common in any dynamical-systems context, be it a Walrasian equilibrium in neoclassical economic theory or a Nash equilibrium in game theory. The analytical concept of economic equilibrium originated in the physical sciences of the 19th century.

Although the father of modern economics is indeed usually taken to be Adam Smith, the first semi-rigorous economic theory of equilibrium was put forward by Leon Walras in the late 19th century. The next major breakthrough came in 1954, when the existence of competitive equilibrium was proven by Arrow and Debreu. In spite of the fame of Arrow and Debreu, the mathematical theory of general equilibrium was to a great extent put forward by Abraham Wald in 1936. Moreover, John von Neumann (1945) also contributed greatly to the theory of equilibrium.

The existence theorem for a competitive equilibrium is a substantial achievement as such, but it nevertheless does not tell us much about the workings of a real market economy. We know very little about the price adjustment process as such, and especially little about its dynamics. The canonical theory since the 1940s assumes a ’trial and error’ or ’tatonnement’ process, in which a Walrasian auctioneer adjusts the prices so that excess demand is driven to zero, according to the following dynamics

$\frac{dp}{dt}=\lambda Z(p)$

where $p$ is the price vector, $\lambda$ is a positive constant and $Z$ is the excess demand function of the economy.

Even though the equation above seems rather innocent, Scarf (1960), among others, has shown that it is not generally globally stable. In this context, global stability of an equilibrium (assuming one exists) means that, starting from any initial prices and excess demands, the dynamics always lead to an equilibrium.
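As a minimal sketch of the tatonnement dynamics, consider a toy two-good exchange economy with two Cobb-Douglas consumers (my own illustrative specification). Cobb-Douglas preferences satisfy gross substitutes, so here the process does converge; Scarf's counterexamples require preferences with complementarities:

```python
# A minimal tatonnement sketch: two goods, two Cobb-Douglas consumers,
# good 2 as the numeraire (p2 = 1). Euler-integrate dp/dt = lambda * Z(p).
def excess_demand_good1(p1):
    # Consumer 1: endowment (1, 0), spends expenditure share 0.3 on good 1.
    # Consumer 2: endowment (0, 1), spends expenditure share 0.7 on good 1.
    wealth1, wealth2 = p1 * 1.0, 1.0
    demand = 0.3 * wealth1 / p1 + 0.7 * wealth2 / p1
    return demand - 1.0  # total endowment of good 1 is 1

p1, lam, dt = 5.0, 1.0, 0.01
for _ in range(10000):
    p1 += lam * excess_demand_good1(p1) * dt  # tatonnement step

print(p1)  # approaches the equilibrium relative price p1* = 1
```

Starting from a badly mispriced p1 = 5, the auctioneer's adjustment drives excess demand to zero and the relative price to its equilibrium value of 1.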

The lack of a stable equilibrium is a major problem, as it implies that, in general, there is no market clearing. It is actually rather peculiar that economic theory contains such an essential pathology, given how lightly people usually assume that demand and supply will balance each other. We would therefore need to make progress on the dynamic adjustment process, because we actually know rather little about theoretical economies.

According to the Fields Medalist and mathematician Stephen Smale, this lack of knowledge in general equilibrium theory is a really severe problem, and he included it on his famous list of eighteen unsolved problems in mathematics.

In the 1970s Stephen Smale published extensively on the problems around general equilibrium theory. In particular, he held the view that the main unsolved problem in mathematical economics was the lack of understanding of the dynamics of general equilibrium.

’I feel equilibrium theory is far from satisfactory. For one thing the theory has not successfully confronted the question, ”How is equilibrium reached?” Dynamic considerations would seem necessary to resolve this problem.’ -Stephen Smale

There has been a consistent, though thin, line of research on non-tatonnement processes in recent decades, concerning Walrasian exchange and the adjustment of prices. Even though the price adjustment process is of fundamental importance, the body of research published on this particular issue remains small. This is worrying, as price adjustment towards equilibrium is a key issue, not least because market clearing does not always occur.

So, my leftist friends, study more mathematics, and you can challenge Mr. Smith!


Is QE equivalent to “printing money”?

Since the launch of quantitative easing (QE) by the major central banks, numerous economic commentators, and especially journalists, have been equating QE with “money printing”. This is of course interesting: money is generally very precious, yet not completely understood by the public, which is troublesome given the paramount role of ‘money’ in society.

I will try to explain here why it is misleading to equate QE with money printing, and also how QE does induce money creation.

For these purposes we need to build a very simple model of our monetary economy. Let us first suppose that we have a central bank (CB), a (commercial) bank and an institutional investor (II), say a pension fund. Moreover, we implicitly assume that there are households, firms and so on, who need capital markets for financing.

First of all, let us define a couple of important concepts (simplified):

-Central bank money = commercial banks’ deposits at the central bank + notes and coins in circulation. These are liabilities of the central bank.

-Public money = deposits of households, firms and the government at the commercial banks. These are liabilities of the commercial banks. Nowadays this is what we casually call ‘money’, given that the use of cash is diminishing.

We assume that initially the commercial bank (representing all banks in the economy) holds 100 euros worth of bonds issued by the government and firms. The pension fund holds, let’s say, 300 euros worth of the same bonds, so the total outstanding amount of bonds in the economy is 100 + 300 = 400.

The initial balance sheet of the commercial bank is assumed to consist of some assets, say loans worth 900 euros, with the loans and bonds financed by 950 euros of deposits and 50 euros of equity. The balance sheet of the commercial bank looks like this:

Assets                                 Liabilities

Loans 900                          Equity 50

Bonds 100                          Deposits 950

The central bank then proceeds with QE. It buys the whole bond portfolio from the commercial bank, crediting the bank’s account at the central bank with 100 euros. The balance sheet of the bank now looks like this:

Assets                                Liabilities

Loans 900                         Equity 50

CB deposits 100              Deposits 950

Now the commercial bank is not happy, as the account at the central bank is costly (-0.4% p.a. in the Euro area), so it wants to compensate for this negative carry by buying a lot of government bonds from the pension fund. This is the so-called “portfolio rebalancing effect”. Do note that the sovereign bonds are “risk free” in the sense that they do not require any regulatory capital. The pension fund holds a checking account at the commercial bank.

The bank now credits the pension fund’s account with 200 euros and receives in return 200 euros worth of sovereign bonds. Now the bank’s balance sheet looks like this:

Assets                                  Liabilities

CB deposits 100                Equity 50

Bonds 200                         Deposits 1150

Loans 900

The same economic effect can be achieved for the bank by extending new credit to households and firms. This improves net interest income, and the bank therefore has an incentive to expand its balance sheet, if sufficient capital is available. The effect is especially strong now, because of negative interest rates.

So we notice that the balance sheet of the commercial bank did not increase directly due to QE, and hence the amount of public money did not increase directly due to QE either. However, when the banking system is induced by the rebalancing effects to buy new bonds or to issue new loans, the amount of public money increases as a second-order phenomenon.
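The balance-sheet arithmetic above can be replayed in a few lines (numbers taken from the text):

```python
# Replaying the stylized balance sheets above (numbers from the text).
bank = {"loans": 900, "bonds": 100, "cb_deposits": 0,
        "deposits": 950, "equity": 50}

def balanced(b):
    return b["loans"] + b["bonds"] + b["cb_deposits"] == b["deposits"] + b["equity"]

# Step 1: QE. The CB buys the bank's bonds, crediting its CB account.
bank["cb_deposits"] += bank["bonds"]
bank["bonds"] = 0
qe_deposits = bank["deposits"]   # public money unchanged: still 950

# Step 2: portfolio rebalancing. The bank buys 200 of bonds from the
# pension fund, paying by crediting the fund's deposit account.
bank["bonds"] += 200
bank["deposits"] += 200          # new public money created here

print(bank["deposits"], balanced(bank))  # 1150 True
```

Note that the deposit stock, i.e. public money, stays at 950 through the QE leg itself and only grows in the rebalancing leg, which is exactly the second-order effect described above.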

In other words, QE is not “money printing” as such, because the first-order effect just increases the amount of central bank deposit money, which is not used by the public.

On the other hand, if the central bank wants to buy all, or large amounts, of the bonds held by the bank and the pension fund, the bank needs to buy these bonds from the pension fund first in order to sell them to the CB, which creates new public money (the bank credits the pension fund’s account). However, this money is in the hands of the pension fund and as such does not directly create any additional purchasing power, as the fund needs to hold a certain amount of assets to cover its pension-related liabilities.

Is the debt burden lower for central governments due to QE?

No, not really. Even though most central banks are owned by the central government, the debt is still there, and if the CB decides to taper, i.e. to start divesting and let its balance sheet shrink, the maturing government debt must be refinanced by selling new bonds to private investors. QE is therefore not about cancelling government debt. Some of the coupon interest does end up effectively cancelled, because the CB’s profits are distributed to the central government.

Problems with QE

Even though QE might work to fight deflation, it is not particularly efficient at it. Moreover, QE creates huge amounts of central bank money, which usually causes euphoria in the asset markets, as banks want to buy bonds and stocks with their excess reserves. Over-valued asset markets might cause problems in terms of financial stability. Moreover, as QE inflates asset prices, the distribution of wealth tends to shift in favour of the rich, who own more assets than the poor. This is not trivial in terms of social justice. Finally, at least in the Eurozone, QE has increased the level of TARGET2 imbalances and the general level of credit risk borne by the central banks. This might spell trouble for the budgetary sovereignty of national parliaments. Moreover, the apparent solvency of various entities might be zombie solvency, because debt sustainability might look very different with a “normal” yield curve. This is especially so as QE includes corporate bonds as well.

Therefore fiscal policies (lowering taxes, increasing public investment) would in general be a more efficient and more socially justifiable way of fighting deflation. Of course, in the Eurozone this is very difficult.

Conclusion:

QE in its purest form (no bond-buying by the banks) is not equal to money printing, as it only increases the amount of central bank deposit money. In practice, however, because of the portfolio rebalancing and because of the scale of QE, banks do create new money to fund their bond purchases from institutional investors.

The Maximum Entropy principle in modelling and estimating probabilities of default for banks

I am now finally proceeding with my PhD dissertation in systems analysis and operations research. What I originally found interesting was estimating probabilities of default for a group of banks using logistic regression; see my presentation at the University of Cambridge Judge Business School, Lindgren [2016].

When we consider a statistical model for the probability of default (PD) of a business entity or a bank, we need to argue why we assume a specific statistical model for the data-generating process. Once we have identified a statistical model, estimation and inference are usually rather straightforward, although possibly computationally burdensome. In this article, I explain why the logistic regression specification is a very natural one in terms of maximum entropy based statistical inference. An additional benefit is that we can use the machinery of statistical mechanics, as we will interpret the model through the Gibbs measure. This framework allows us to find expressions for various potentially useful concepts like enthalpy and free energy, usually based on the information codified in the partition function $Z$. Logistic regression is also a very simple model of a neural network, and this could ultimately be a very useful paradigm in finance as well. Markets could be seen as a huge, adaptive, nonlinear neural processing totality.

The principle of maximum entropy

I will follow in the steps of Jaynes [1957], who argued that the a priori distribution should be the one that maximizes entropy given some constraints. Entropy is a concept that originated in 19th-century thermal physics and statistical mechanics as a measure of disorder, but in a larger perspective it can be considered an expectation related to surprisal, in terms of information theory. We usually take information to be related to the logarithm of probability because of its algebraic properties. For a thorough discussion, see for example the famous work by Claude Shannon.

In a discrete probability space we define entropy as

$S(p_i)=-\sum_{i=1}^{n}p_i \log{p_i}$

where we define ‘surprisal’ to be $\log{\frac{1}{p_i}}$. Note that if an event is certain, its surprisal is zero, and if its probability is close to zero, its surprisal grows very fast towards infinity. Entropy is therefore the average surprisal when sampling. The idea now is to find the a priori distribution when we know nothing about it except some expectation taken over the distribution. If we are prudent, we should assume the distribution that maximizes entropy.
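A quick numerical illustration, assuming nothing beyond the definition above: entropy is maximal for the uniform distribution and small when one outcome is nearly certain:

```python
import numpy as np

def entropy(p):
    # Average surprisal -sum p_i log p_i (natural log).
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p))

uniform = entropy([0.25, 0.25, 0.25, 0.25])  # maximal: log 4
skewed  = entropy([0.97, 0.01, 0.01, 0.01])  # nearly certain: low

print(uniform, skewed)
```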

Consider now an expectation, call it energy

$\langle E \rangle = \sum_{i=1}^{n}E_ip_i$

If we now maximize entropy given a fixed constraint of average energy, we have the following Lagrangian

$L(p_i)=-\sum_{i=1}^{n}p_i \log{p_i}-\beta \left(\sum_{i=1}^{n}E_ip_i-\langle E \rangle \right)-a\left(\sum_{i=1}^{n}p_i-1\right)$

The last constraint is there to ensure that the probability measure is normalized to unity.

The maximization problem is straightforward, and the entropy-maximizing distribution is the Boltzmann distribution, also known as the Gibbs distribution

$p_i=\frac{e^{-\beta E_i}}{Z(\beta)}$

where $Z(\beta)$ is the partition function that ensures the distribution is normalised to 1.
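As a sanity check (my own numerical sketch, with arbitrary energy levels and an arbitrary $\beta$), one can maximize entropy numerically subject to the same mean-energy constraint and compare with the analytic Gibbs distribution:

```python
import numpy as np
from scipy.optimize import minimize

E = np.array([0.0, 1.0, 2.0, 3.0])   # arbitrary energy levels
beta = 1.2                            # arbitrary inverse temperature

# Analytic Gibbs distribution and its mean energy.
w = np.exp(-beta * E)
p_gibbs = w / w.sum()
mean_E = p_gibbs @ E

# Numerically maximize entropy subject to the same mean energy
# (minimize the negative entropy with two equality constraints).
def neg_entropy(p):
    return np.sum(p * np.log(p))

cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "eq", "fun": lambda p: p @ E - mean_E}]
res = minimize(neg_entropy, np.full(4, 0.25), bounds=[(1e-9, 1.0)] * 4,
               constraints=cons)

print(np.max(np.abs(res.x - p_gibbs)))  # ~0: the optimizer recovers Gibbs
```

The optimizer lands on the same distribution as the closed-form Gibbs measure, which is exactly what the Lagrangian derivation above asserts.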

Logistic model

We now consider a binary choice model for the problem of default. At any instant, the entity is either in default or not. We assign these two states the probabilities $p_1$ and $p_2 = 1 - p_1$, respectively, and energies $E_1$ and $E_2$. The partition function is therefore

$Z(\beta)=e^{-\beta E_1}+e^{-\beta E_2}$

If we now substitute this into the Gibbs distribution, we have

$p_1=\frac{e^{-\beta E_1}}{e^{-\beta E_1}+e^{-\beta E_2}}$

This simplifies to

$p_1=\frac{1}{1+e^{-\beta(E_2-E_1)}}$

This is the logistic curve, whose argument is the difference in energies. The Lagrange multiplier $\beta$, which in physics is the inverse temperature, can here be used to balance the units so as to give a dimensionless probability.
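The equivalence of the two expressions is easy to verify numerically (the energies below are arbitrary illustrative values):

```python
import math

# Hypothetical energies for the two states and an arbitrary beta.
E1, E2, beta = 1.5, 2.3, 1.0

# Probability of state 1 via the two-state partition function...
Z = math.exp(-beta * E1) + math.exp(-beta * E2)
p1_gibbs = math.exp(-beta * E1) / Z

# ...and via the logistic curve in the energy difference.
p1_logistic = 1.0 / (1.0 + math.exp(-beta * (E2 - E1)))

print(p1_gibbs, p1_logistic)  # identical by construction
```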

Let us now identify the energies. Given that the probability of default depends on the energy difference, we should somehow relate these two concepts to risk and capital. We could, for example, choose $E_2$ to represent total risk and $E_1$ to represent a capital-like variable. When risk is large compared to capital, the probability of default is close to unity.

So in other words we might choose

$E_2=\vec{w}\cdot \vec{x}$ and $E_1=\theta$

where risk is a weighted sum of incoming sources of risk and $\theta$ is a measure of capital. Given these specifications, we have the model

$p_1=\frac{1}{1+e^{-\beta(\vec{w}\cdot \vec{x}-\theta)}}$

We can use the logit transform to form a linear regression model

$\log{\frac{p_1}{1-p_1}}=\beta(\vec{w}\cdot \vec{x}-\theta) +\epsilon$

We can assume that the additive noise term is IID, normal and standardised, if we assume multiplicative IID lognormal noise in the original specification. This is feasible. The multicollinearity issues of the risk vector can be ignored, because I mainly care about forecasting systemic risk. In the era of machine learning, black-box modelling is OK!
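A minimal sketch of estimating such a model on synthetic data, with made-up parameter values of my own choosing; since the logistic log-likelihood is concave, plain gradient ascent suffices:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Synthetic risk factors and assumed true parameters (illustrative only);
# the coefficients play the role of beta*w, and the intercept of -beta*theta.
X = rng.normal(size=(n, 2))
w_true, intercept_true = np.array([1.0, -1.5]), 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

y = rng.random(n) < sigmoid(X @ w_true + intercept_true)  # default indicators

# Maximum-likelihood fit by plain gradient ascent (the log-likelihood
# of logistic regression is concave, so this converges).
Xb = np.hstack([X, np.ones((n, 1))])          # append intercept column
coef = np.zeros(3)
for _ in range(5000):
    p = sigmoid(Xb @ coef)
    coef += 0.5 * Xb.T @ (y - p) / n          # mean log-likelihood gradient

print(coef)  # close to [1.0, -1.5, 0.5]
```

Only the products beta*w and beta*theta are identified from data, which is consistent with reading beta as a unit-balancing multiplier above.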

What next?

This is the framework I feel intuitively is the logical foundation for my empirical studies of systemic risk. I need to consider if I could somehow make use of the statistical mechanics framework further.

References:

Jaynes, E. T. (1957), Information Theory and Statistical Mechanics, Phys. Rev. 106, 620.

The Future of Money – Central Bank Issued Digital Legal Tender?

Given the recent hype around digitalisation and artificial intelligence, I thought it might be worthwhile to consider what digitalisation might actually mean for money and monetary systems in general.

First of all, one should exercise restraint when considering digitalisation or AI. To me it seems that AI, even in the case of deep-learning neural networks, is basically just complex nonlinear regression analysis (essentially fitting a nonlinear curve to a data set). At least Noam Chomsky is of the view that AI has not advanced much from the 1960s in qualitative terms. Of course AI has advanced in quantitative terms, given Big Data, more powerful CPUs and so forth. But I think Kurzweil’s singularity is not really something that is just around the corner.

Digitalisation is a more interesting concept, although I do not see a giant leap for mankind here either. Digital means basically computers (1/0) and computers have been around since the inventions of the Turing machine and the ENIAC. Computers are of course a lot more sophisticated nowadays. And I guess programs are also in some sense “smart”.

However, in the field of monetary economics, the current rapid digitalisation can provide lots of new opportunities. Since the emergence of cryptocurrencies, Bitcoin and distributed ledger technology in general, people are more and more interested in these issues, because of potential financial benefit as well as intellectual curiosity.

One of the most interesting concepts for me is that of a digital central bank currency.

Think of money for a moment.

For most people, money is basically bank accounts plus the notes and coins issued by the central bank. Paper money is a liability of the central bank and legal tender (you can use it universally to buy stuff and pay your taxes). The other major part of the central bank’s liabilities consists of bank reserves, i.e. banks’ deposits at the central bank. These deposits have, by the way, expanded substantially due to quantitative easing, as central banks buy government bonds from the banking sector by crediting the banks’ accounts at the central bank.

Now, let’s consider the possibility of extending that set of liabilities.

In particular, let us assume that the central bank would establish a digital universal-access legal tender account for all legal entities, as well as for consumers and households. In other words, let us assume that one could use the balance of these accounts to settle day-to-day transactions at the local supermarket and so on. Furthermore, let us assume that this new system of digital legal tender would integrate current distributed ledger technology in order to facilitate peer-to-peer transactions and enhanced security. In a way, the balances at the central bank could be used for netting the balances within the peer-to-peer universe. What would this mean for the current monetary system?

First of all, let us remind ourselves that commercial banks create most of the money in a given society by creating new bank deposits as a by-product of granting loans to their customers. So we can assume that in the first stage, when money is created, the newly created money is in the form of bank deposits. However, if the deposit holder has the option to transfer her money into a central bank digital money account (which is legal tender), she would probably do so, because the counterparty credit risk at the central bank is smaller (practically zero). This would then mean that the banking system would need to refinance itself primarily by borrowing short term from the central bank, with bank funding shifting from customer deposits to central bank credit.

In essence, the central bank would finance the loans in the economy. This would of course be very risky for the central bank, and therefore the banking system would need a lot more capital to absorb losses than in the current situation. Currently, households and firms finance the loans, as they hold the majority of bank deposits.
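The funding shift can be sketched with stylized numbers (entirely my own, for illustration): deposits migrate into CBDC accounts and the bank replaces them with central bank credit:

```python
# Stylized sketch of the funding shift (illustrative numbers only):
# depositors move their 950 of bank deposits into CBDC accounts, and
# the bank replaces that funding with central bank credit.
bank = {"loans": 900, "bonds": 100, "deposits": 950, "equity": 50,
        "cb_funding": 0}
cb = {"loans_to_banks": 0, "cbdc": 0}

moved = bank["deposits"]        # all deposits migrate to the CBDC
bank["deposits"] = 0
bank["cb_funding"] += moved     # bank refinances at the central bank
cb["loans_to_banks"] += moved
cb["cbdc"] += moved             # households now hold CB liabilities

assets = bank["loans"] + bank["bonds"]
liabs = bank["deposits"] + bank["equity"] + bank["cb_funding"]
print(assets == liabs, cb["cbdc"])  # True 950
```

The bank's balance sheet still balances, but its creditor is now the central bank rather than the public, which is exactly why the central bank ends up bearing the credit risk discussed above.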

However, money in this new monetary system would be super-safe,

as money would be a direct liability of the central bank, which can run with negative equity if necessary. This would of course make our monetary system more stable. It would also incentivize the central bank to supervise the banking sector more efficiently, as the central bank would be the biggest creditor of the banking system!

Note that in this model the banks would still do most of their core business, which is credit screening and loan extension. So no moral hazard with politicians financing some projects 🙂

If the banking system were to fail, the central bank would take the hit, and given the current bail-in regime, the central bank would then become a partial owner of the banking system through debt-to-equity swaps.

Monetary policy in this model could be implemented easily

Monetary policy could be implemented by adjusting the interest rate at which the central bank lends to the banking system. Moreover, QE would become more efficient as well, because the central bank’s sphere of counterparties would contain all economic agents in the economy.

One could also see opportunities for increased security in countering terrorism and money laundering, if the distributed ledger platform were operated by the central bank or other public authorities. All transactions would be verified within the blockchain network, and criminal activities could be exposed easily. Cash could in principle be abolished. Of course, one would need to ensure sufficient privacy, and striking the right balance between security and privacy is paramount.

End of cash? QE4 people?

This digital legal tender system could replace the need for cash. Of course, in order to make the system more resilient, numerous back-up systems would be necessary, and maybe some spare cash in the government’s vaults in preparation for potential failures and crises.

Some circles have been advocating extending QE to all counterparties. The system described above would enable this, as the central bank could include all entities in society as its counterparties. However, in a traditional QE scheme the central bank buys securities, which households, for example, do not hold to any large extent. These aspects obviously need more elaboration and research.

Conclusion

I have presented here a model for a central bank digital legal tender. I think it provides a lot of opportunities if we are to achieve a more stable and secure financial system. Obviously this model also involves many threats, but I think it would be worthwhile to have this discussion in the public sphere. Money is, after all, our common interest.

Why do we need higher capital requirements for banks?

Capital requirements have been the main tool of banking regulation since the 1990s. International regulation has evolved quite a bit during the last 30 years or so, but many of the main issues remain unsolved. The global banking crisis that began in the US in 2007 induced the G20 leaders to decide on more stringent capital requirements for banks. The new regulatory regime, known as Basel III, is supposed to make bank failures less common: more capital means less leverage. However, the banking lobby has been fierce in protecting the status quo, i.e. low capital requirements and high leverage. In this article I will try to elaborate a little on the basic arguments around these issues.

Why do we need to regulate banks in the first place?

In general, public regulation can be justified theoretically by market failures arising from market power, externalities or asymmetric information between buyers and sellers. The classical Diamond–Dybvig framework [1] shows that when banks provide liquidity insurance, they actually create a bank run equilibrium. In general, bank runs have been quite frequent since the days of John Law and fractional reserve banking. Banks finance illiquid loans by transforming maturity, i.e. by issuing demand deposit instruments (= creating private money), and this can easily create a loss of confidence among the public. The ability to create private money (deposits are used to settle day-to-day transactions) is one peculiar feature of the banking business, see e.g. [2]. This essentially means that banks’ creditors are their customers. The bulk of banks’ debt is held by uninformed small agents (households and small firms). Another distinctive feature is the paramount role of banks in modern payment and settlement systems. Banks form a crucial part of modern society’s infrastructure.

Why, then, do bank managers not choose the optimal amount of risk and capital in order to mitigate the possibility of bank runs? This is due to a conflict of interest between the depositors and the equity holders; see for example the article by Jensen and Meckling [3]. Usually the agent representing the owners of the bank will prefer riskier investments than the depositors would, and therefore the depositors need to be protected. One can also think of this as a problem of limited liability: the maximum loss for the equity investor is bounded, whereas the potential upside is in principle unlimited. If the agent is not very risk-averse, it makes sense to maximize the expected return of investments (= prefer relatively risky projects), see e.g. Dewatripont and Tirole [4]. In general, executive pay schemes and managerial incentives matter, of course, and can have a substantial effect on banks’ risk-taking behaviour. The post-2008 regulatory regimes try to tackle these corporate governance issues as well.

Because bank runs are frequent and inherent, public authorities have created institutions like deposit insurance and central banks, which are meant to prevent bank runs in the first place. Central banks can provide (artificially) cheap refinancing to illiquid but more or less solvent banks (an idea originally due to Walter Bagehot), whereas deposit insurance is there to convince bank customers (bank creditors) that their money is safe and ultimately guaranteed by the central government, i.e. the taxpayer. Moreover, governments have stepped in to bail out failing banks by e.g. injecting equity into the troubled institutions or buying stressed assets from the banks at face value. All this can be costly to the taxpayer and to the politicians. Bank runs and bank failures often involve cross-border contagion, and as banks form the backbone of the payment system, the whole system cannot be allowed to fail at the same time. A global financial system meltdown would be a total catastrophe.

For these reasons, there is a strong case for bank regulation. In this post I will not consider radical alternatives to bank regulation such as the abolition of central banks, free banking, the Chicago Plan or the like.

Capital requirements as a tool for regulating banks

The most important regulatory instrument currently is the minimum requirement of regulatory capital. If a bank has a total amount of risk (RWA), calculated as a risk-weighted sum of bank assets, the minimum amount of capital (CET1 ≈ shareholders’ equity + retained earnings) must be at least 4.5% of RWA. Banks tend to hold somewhat more capital than the minimum requirement, as breaching the limit might lead to a bank resolution procedure or at least to a supervisory intervention, which would quickly cause trouble in the wholesale and interbank funding markets.
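As a quick illustration of the minimum requirement (with made-up numbers), the check itself is just a one-liner:

```python
# Basel-style minimum capital check; all figures are hypothetical.
def meets_minimum(cet1: float, rwa: float, min_ratio: float = 0.045) -> bool:
    """True if CET1 capital is at least min_ratio of risk-weighted assets."""
    return cet1 >= min_ratio * rwa

rwa = 500.0                                  # risk-weighted assets
print(meets_minimum(cet1=30.0, rwa=rwa))     # 6.0% of RWA -> True
print(meets_minimum(cet1=20.0, rwa=rwa))     # 4.0% of RWA -> False
```

In practice banks target a buffer well above the 4.5% floor, precisely because dipping below it triggers supervisory consequences.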

The amount of capital will, in principle, make bank failure less likely over a given time horizon, as the distance to default gets longer. Notice that minimum capital is there to absorb unexpected losses (some small tail quantile of the loss distribution); expected losses are in principle covered by loan margins and required asset returns. In nominal terms, banks nowadays typically hold some 4% of equity as a percentage of the balance sheet total. This means that if the value of the bank’s assets goes down by more than 4%, the equity is wiped out and the bank fails. This is not a large number; it is more like statistical volatility or noise in terms of loss variation. Basel III has raised the capital requirements, but only to a small extent. So why is the level of leverage in banks so high compared to, say, non-financial listed companies?

Banks’ core business is to make loans by issuing demand deposits. This is very useful and can also be called financial intermediation or maturity transformation, but the point is to collect net interest income from extending loans. Proper pricing of risk is of the essence. This of course involves managing and evaluating credit risk, but from the point of view of this analysis, it is important to realise that the core business of banking naturally implies high leverage, as demand deposits are liabilities of the bank. Before the financial crisis, the level of equity could be as low as around 2 per cent of the balance sheet total. We have all seen the consequences.

The implicit government guarantees, be it in the form of bail-outs, central bank refinancing or deposit insurance, make high leverage possible. I do not think that any bank could operate with only 5 per cent equity without these public subsidies; especially important is cheap central bank liquidity and emergency support. So high leverage is supported by implicit government subsidies.

Why do banks not want higher capital requirements?

Bank managers in general want to maximize the return on equity (ROE), as this supposedly reflects the future dividends for shareholders and therefore tends to maximize the value of shares, which usually then maximizes executive pay. As explained above, it might also be in the interest of the shareholder to take too much risk, because of limited liability. This would be appropriate in a world without public support mechanisms (= a free and fair market), and if banks did not have such a paramount role in society in terms of payment systems and money/credit creation.

The bank lobby therefore usually tends to dislike capital regulation. This has to do with several things. First of all, other things being equal, a higher capital level erodes the actual return on equity and supposedly increases the weighted average cost of capital. The first claim is technically correct. However, there is a difference between the actual return on equity and the required return on equity.

What matters is the return on equity required by the equity investors (RROE). As additional risk must be compensated with additional return, it is clear that the shareholders must earn more than the depositors or the bond holders. However, lower leverage at the same time lowers the risk of bank failure (the distance to default gets longer) and therefore lowers the risk premia demanded by both the debt holders and the equity holders. This offsets the increase in costs due to the higher equity/debt ratio. So the required return on equity goes down, while the cost of debt goes down as well.

Basically, what I am arguing above is that the weighted average cost of capital (WACC) is independent of the debt–equity mix. This is of course all familiar from corporate finance 101: it is the famous Modigliani–Miller theorem, see e.g. [5]. On top of the tax deductibility of interest costs, there is one important reason why WACC might be somewhat dependent on the debt–equity mix: the elasticity of the risk premia of the debt investors and equity holders may be close to zero because of the public support schemes. That is, the creditors of the bank may not benefit from additional loss-absorbing capital, because they already enjoy full protection under some public scheme.
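The offsetting mechanism can be checked numerically. The sketch below uses Modigliani–Miller proposition II with hypothetical rates: once the required return on equity adjusts to leverage, WACC stays put whether the bank runs on 4% or 10% equity.

```python
# Toy Modigliani-Miller check (no taxes; all rates are made up).
def required_roe(r_assets, r_debt, debt, equity):
    # MM proposition II: r_E = r_A + (D/E) * (r_A - r_D)
    return r_assets + (debt / equity) * (r_assets - r_debt)

def wacc(equity, debt, r_equity, r_debt):
    return (equity * r_equity + debt * r_debt) / (equity + debt)

R_A, R_D = 0.0252, 0.02   # return on assets, cost of debt (hypothetical)

for equity, debt in [(4.0, 96.0), (10.0, 90.0)]:   # 4% vs 10% equity
    r_e = required_roe(R_A, R_D, debt, equity)
    # Required ROE falls as leverage falls, but WACC is unchanged.
    assert abs(wacc(equity, debt, r_e, R_D) - R_A) < 1e-12
```

With these numbers the required ROE drops from 15% at 4% equity to 7.2% at 10% equity, yet the funding cost of the whole balance sheet is 2.52% in both cases. The caveat in the text applies: if public guarantees make debt holders insensitive to leverage, this neutrality breaks down.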

What would the shareholders want?

According to empirical studies, most people are risk-averse, including investors. Therefore the typical shareholder of a bank should care about risk-adjusted ROE and not just vanilla ROE. Risk-adjusted ROE can be defined as RaROE = ROE/Risk. For a bank, we could measure the risk simply through the ratio Risk = RWA/CET1, so we would have RaROE = ROE*(CET1/RWA) = Return/RWA. This simple argument shows that a bank’s management should focus on maximizing basically a Sharpe ratio instead of vanilla ROE. So we do have some principal–agent asymmetry here as well.
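The algebra above can be sanity-checked with toy figures (all hypothetical): two banks with the same earnings and the same RWA have very different vanilla ROEs, but identical risk-adjusted ROEs.

```python
# Risk-adjusted ROE as defined above: RaROE = ROE / (RWA/CET1) = Return/RWA.
# All figures are hypothetical.
def raroe(net_return, cet1, rwa):
    roe = net_return / cet1
    risk = rwa / cet1
    return roe / risk                    # algebraically equals net_return / rwa

thin = raroe(net_return=10.0, cet1=40.0, rwa=500.0)    # vanilla ROE = 25%
thick = raroe(net_return=10.0, cet1=80.0, rwa=500.0)   # vanilla ROE = 12.5%
print(thin, thick)   # identical: leverage alone cannot improve RaROE
```

Halving the leverage halves the vanilla ROE but leaves RaROE untouched at Return/RWA, which is the point of the principal–agent argument: a risk-averse shareholder should be indifferent to the leverage-driven part of ROE.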

The current regulatory regime is technically too complicated. It is not as complex as the Einstein field equations above, but I think we would benefit from a simpler capital regime. Small banks at least could benefit from a more proportionate regulatory regime. There are at least a couple of problems with the current risk-adjusted regime.

The most important problem is model risk. In the current environment the banks can basically determine their own regulatory risk, i.e. the total risk RWA. This in turn possibly entails regulatory capture and asymmetric information, because the big banks can afford to hire the best technical people to justify that their models are correct. Moreover, the current regime allows for example a zero risk weight for sovereign exposures. Model risk also involves uncertainty in terms of risk metrics and distributional assumptions; I guess the industry standard is a normal distribution plus VaR.

I think we could abolish the Basel regime altogether for small banks and introduce a mere leverage ratio requirement. Big banks could still use RWA, but the leverage ratio requirement should be a lot higher for large banks, perhaps around 10 per cent of total assets. One should also note that under a leverage ratio the shareholders would take care of the allocation problem: even though a flat leverage ratio of, say, 10 per cent would incentivise bank managers to take on more risk, maximizing shareholders should care about Return/RWA, so that the portfolio should always be optimal according to the shareholders’ preferences. I stress this point because lobbyists always argue that a leverage ratio would distort banks towards excessive risk. I do not think this is the case, because the shareholders care about risk-adjusted returns, i.e. the bank’s RaROE.

Do higher capital requirements hurt the economy?

One of the usual arguments against capital requirements is that they hurt the economy. Lobbyists argue either that higher capital requirements would lead to higher margins, or that lending would be severely constrained. The first argument is not valid: in any competitive market there is no reason why banks would not raise margins even without higher capital requirements, if the loan customers were willing to pay. Moreover, if there is competition, lower margins attract customers rather than repel them. Second, a large chunk of new bank lending goes into financing real estate rather than into non-financial companies financing investment, so it is the price level in the housing market that would suffer, if anything. Finally, higher capital requirements can naturally be satisfied by retaining a larger portion of earnings: fewer dividends and fewer share buybacks. Empirical data does not support the lobbyists’ argumentation either; we have seen high growth rates with low bank leverage. Of course, a lower ROE would be bad in terms of comparisons across sectors, as banks typically enjoy a superior ROE compared to Main Street.

Of course any monetary economy is dependent on bank lending, as bank lending creates purchasing power, and purchasing power creates aggregate demand and therefore investment and employment. My argument is that we could find a new and better general equilibrium via higher levels of bank capital.

Conclusion

My central thesis here (again) is that modern finance is inherently fragile and prone to endogenous crises. The root cause is centered on the banking system, which creates an artificial asymmetry in the economy by issuing safe, liquid and short-term liabilities to fund risky, illiquid and long-term assets. This balance sheet asymmetry means almost tautologically that the equity must be economically really risky, at least for the taxpayers. It also means that there should be less leverage and more capital, at least until we have a system where junior and senior debt can really absorb substantial losses.

Recent developments like the introduction of TLAC and other resolution instruments are all very well, but given the recent experiences in Europe, bail-in seems very difficult to implement in practical and political terms, and therefore I think what we need is good old equity.

Moreover, the lobbyists’ argumentation is not supported by data, and the ROE fallacy is based on uneconomical thinking.

PS: The NPL-problem in Europe could be solved by writing off the non-performing assets and recapitalising the banking sector. This is yet another piece of evidence that there is too little capital in the banking system.

References:

[1] Diamond, D. W., and P. Dybvig. 1983. Bank runs, deposit insurance, and liquidity. Journal of Political Economy 91 (3): 401-419

[2] Jakab, Z., and M. Kumhof. 2015. Banks are not intermediaries of loanable funds – and why this matters. Bank of England Working Paper No. 529. http://www.bankofengland.co.uk/research/Pages/workingpapers/2015/wp529.aspx

[3] Jensen, M., and W. R. Meckling. 1976. Theory of the firm: Managerial behavior, agency costs and ownership structure. Journal of Financial Economics 3: 305-360.

[4] Dewatripont, M., and J. Tirole. 1993. Efficient governance structure: Implications for banking regulation. In  Capital markets and financial intermediation, ed C. Mayer and X. Vives. Cambridge: CUP.

[5] Copeland, T. E., and J. F. Weston. 1988. Financial Theory and Corporate Policy, 3rd edition. Prentice Hall, New Jersey.

Inflation – a midsummer night’s dream?

The current economic situation in the euro area is rather interesting in terms of monetary policy going forward. It is interesting for one because the ECB Governing Council needs to balance between anchoring inflation expectations and ensuring financial stability in the euro area. Ultra-low bond yields and refinancing rates are boosting solvency figures in terms of long-term debt sustainability, especially in the weak sovereign sector. However, inflation seems to be slowly picking up as well.

QE is supposed to reach its peak, in terms of the size of the consolidated balance sheet of the Eurosystem, by next year. I guess the maturing bonds will be reinvested for a long time before the Eurosystem actually starts to deleverage its balance sheet. Debt sustainability should be ensured from the financial stability perspective, and therefore I do not expect the ECB to start outright selling any time soon. If inflation picks up unexpectedly, I would presume the safer option would be to first raise the interest rate corridor. This would flatten the yield curve.

Since late 2014, inflation expectations have stabilised and the spectre of deflation in the euro area has disappeared for now. QE and the ultra-low interest rate corridor have substantially increased bank lending, and money is being created in the economy at a decent pace. Investment and consumption are growing, and unemployment is falling, currently standing at around 9.3%. This is all good. However, if inflation expectations were to increase substantially, the monetary policy stance would need to change sooner and the yield curve would shift upwards faster than anticipated. This could be a problem for countries with a large stock of public debt, a weak external balance (current account deficit and negative NIIP) and low nominal GDP growth. Housing price developments in countries like Germany could also contribute to a tighter monetary policy stance, although something could be done with macroprudential tools as well. Cyclically raising capital requirements to shrink bank lending would be a continuation of monetary policy by other means.

Let us therefore spend some time analysing the current monetary situation in the euro area. First, headline inflation (HICP) is actually close to the ECB’s target range, currently around 1.4%. Core inflation, nevertheless, is stuck around 0.9%. Moreover, market-based inflation expectation indicators like the 5y5y forward inflation swap are showing some deceleration tendencies and thus lower future inflation. Annual M3 growth is around a solid 5%, which tells us that things are indeed more stable than they used to be. Even in the US, M3 growth is reassuring. The Chinese credit bubble might cause some problems here as well, although in China crisis management could be more straightforward. The current account is in surplus for the euro area as a whole, while the NIIP is still largely negative. The external balance would therefore indicate that the euro is more or less correctly priced in the forex markets, so import-induced inflation should stay more or less stable.

The oil price is around $45 and the trend is downwards, but I would assume OPEC will cut production further in order to stabilise developments there. One can say that the oil price is not forcing the central bank to raise rates. Political uncertainty in the US, North Korea, Syria and Russia might trigger a surge in risk premia, but for the time being nothing acute seems to be happening. Of course, Brexit and the looming Italian elections add to the political uncertainty as well.

Eurozone growth seems to be broad and solid, especially in Germany and France, and the recent PMI figures suggest that this is indeed the case, although there was a small correction from the recent peak figures. For the euro area as a whole, 2 per cent growth for this year seems entirely realistic. Unemployment is trending downwards as well, heading towards the NAIRU, whatever that is in the euro area.

So, in terms of monetary policy, my guess is that the ECB will stop net buying during 2018 and will start raising rates in 2019. Due to financial stability concerns, raising the interest rate corridor would be my choice of weapon. Happy Juhannus (midsummer night) and stay safe.

Our economic machine is lubricated and fuelled by debt — and therefore credit, for lack of a better word, is good.

The recent Great Recession, the Eurozone crisis and the Great Depression have been devastating in terms of human welfare, employment and economic growth. Although there have been a number of explanations for these miseries, I will try to explain here what happened and why, in a general context, according to my humble understanding. My perspective will be non-canonical, as I do not think that current macroeconomic theory (DSGE models) can give any meaningful insight into the roots and reasons behind these hazardous business cycles. Moreover, I will not delve into the structural flaws of the Eurozone; rather, I will try to explain the general features of any real monetary economy.

First of all, what needs to be understood from the very beginning is that, in general, any economy is a non-equilibrium system. This means that even though an economy has a general equilibrium configuration, the system might never reach it; demand and supply might never balance each other. This has been well known since the 1960s, with Herbert Scarf as a pioneer in the field. So the intellectual foundations of neoclassical economic theory are rather shaky, in the sense that we know virtually nothing about the adjustment process towards equilibrium, at least in analytical terms. The system might be in chaotic motion without ever reaching a stable state; in the jargon, we say that the dynamical system is not stable.

As the current DSGE literature is based on the idea of a stable equilibrium, we cannot really be sure that the DSGE paradigm is a very good model of reality, at least when it comes to understanding how the economic machine actually works. DSGE models basically ignore banks, money creation and nominal issues to such an extent that I am really dubious whether they are of any use. They might even be harmful, as they might prevent us from seeing when a credit cycle is about to peak.

I think that macroeconomics should partly go back to the 1930s: back to Keynes, Schumpeter, Wicksell and especially Irving Fisher. Fisher’s theory of debt deflation is possibly the most ontologically correct description of credit cycles, and I will explain its main features in just a minute. More recently, the stock-flow consistent modeling of Wynne Godley is also highly relevant. Hyman Minsky has provided good insight, and Ray Dalio of the hedge fund Bridgewater has smart thinking as well.

The blueprint of the machine

The modern economic machine works like any advanced machine: it has a lot of interdependent moving parts. The lubricant and fuel of the machine is debt/money. The definition of money is a matter of taste, but for simplicity, we can understand money as commercial bank deposits. This is because we need bank account balances to settle our day-to-day purchases, be it a pint of beer, a factory machine for a company, a car, an apartment, or the daily groceries. Also, if somebody saves money in pension or investment funds, the savings go into somebody’s bank account, so talk of “idle bank accounts” is not really intellectually coherent. We could also extend the definition of money to various overdraft facilities and credit cards, but for simplicity I will consider only bank deposits, i.e. roughly the liability base of commercial banks.

Given this definition of money, we should note that money can be created and annihilated on an on-going basis. Money is created by commercial banks when they grant loans and extend credit. So when a bank buys a bond from a primary issue, extends a loan to a company or grants a mortgage, it finances this loan by crediting the customer’s bank account. It therefore creates money out of thin air, ex nihilo. Of course the bank might need to refinance its deposits by borrowing from the interbank market, attracting deposits from payment system transactions, issuing bonds or borrowing from the central bank, but the initial money circulating in the payment system is created by the banks themselves. In the beginning of time, God did not create a stock of deposits to be lent over and over again, so the money multiplier story in economics 101 textbooks is best forgotten. Likewise, when somebody repays a bank loan, the balance sheet of the bank shrinks and money is destroyed (the bank account is debited).
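The crediting and debiting mechanics described above fit in a few lines of bookkeeping; this is only a stylized sketch with hypothetical amounts.

```python
# Money creation and destruction as double-entry bookkeeping (stylized).
class Bank:
    def __init__(self):
        self.loans = 0.0       # asset side
        self.deposits = 0.0    # liability side = money, in our definition

    def grant_loan(self, amount):
        # The loan is financed by crediting the borrower's account:
        # a deposit appears ex nihilo, alongside the new loan asset.
        self.loans += amount
        self.deposits += amount

    def repay(self, amount):
        # Repayment debits the account: the deposit is annihilated
        # and the balance sheet shrinks.
        self.loans -= amount
        self.deposits -= amount

bank = Bank()
bank.grant_loan(250_000)   # e.g. a mortgage is granted
bank.repay(50_000)         # partial repayment shrinks the balance sheet
print(bank.loans, bank.deposits)   # -> 200000.0 200000.0
```

Note what is absent: no pre-existing pot of savings is lent out, which is exactly why the textbook money multiplier story is misleading.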

According to Irving Fisher and others alike, the credit crash begins from a state of over-indebtedness: economic units have too much debt. Remember that we are talking about debt in gross terms, as someone’s debt is always somebody else’s asset. Too much debt then logically implies too many assets, and balance sheets that are too large. Possibly too much leverage. We need to understand why there is too much debt and how it accumulates.

As long as the music is playing, you’ve got to get up and dance. We’re still dancing.

The credit cycle begins when firms and households are in a good mood. Firms have positive expectations of future profits for some reason, and the investment analysts’ calculations show a large positive net present value for projects. The firms will then start investing. They borrow money from the banking system, directly or indirectly, and this will cause the balance sheet of the banking sector to expand: lots of new assets and lots of new liabilities. Credit standards are easy and lax, as the banks themselves also see the future as bright, and the incurred losses on current loan portfolios are small and manageable. The value of collateral goes up as well and induces more lending. Let’s do some volume business!

The newly created money sits first in the bank accounts of the firms. Then the firms hire lots of new people to build new factories and so forth. Remember that even capital goods are partly the result of labor input. This hiring binge creates a positive atmosphere in the labor market, and the labor force gets new money from the firms. This creates even more economic activity, as the labor force will be inclined to buy new cars, new apartments and new leisure commodities. All these purchases can be partly financed by fresh new money: new mortgages, new car loans and credit card loans. The balance sheet of the banking sector expands further, as the original investment loans of the companies have not yet been paid back.

As increased spending creates even more employment in construction, household goods and the car industry, the people working for those companies get better opportunities to borrow new money for their consumption. There is a virtuous, self-reinforcing cycle that boosts economic growth, employment, revenues and prices. More new money is created than old debts are paid back. The balance sheets of various sectors expand. Inflation picks up.

Banks will not have any trouble extending new loans, as the level of loan losses stays low and financial assets gain in value, boosting the capital base of the banking system. Net interest income stays positive and stable as well. Remember that basically the only constraints on bank lending are the availability of demand, sufficient collateral and, most importantly, the banks’ capital base.

After some point, the credit hubris goes too far. Banks ease lending standards as the data supports lowered credit risk, and banks accept even speculative financing needs, i.e. debtors pay only their annual interest expenses without amortising the principal. The leverage of the banking sector becomes too large and there is too little capital for unexpected losses. Notice that the households are over-leveraged too, as are the firms (banks’ loan assets are households’ and firms’ liabilities, if we assume away the external and public sectors for simplicity). Banks might know that their lending is reckless, but alas, business requires that you’ve got to dance while the music is still playing.

The excess lending and money creation has of course inflated the economy as a whole: inflation is fast, as are the valuations of financial assets (some of the new money goes into purchasing financial assets in the secondary markets). Because CPI indices do not fully and comprehensively take asset reflation and bubbles into account, monetary policy acts too late, and there is ample time to develop a good old-fashioned unsustainable credit binge. This is the end of the first part of the cycle.

When the music stops, in terms of liquidity, things will be complicated.

The second part, which some people call the “Minsky moment”, starts when some shock hits the system and makes it collapse like a house of cards. For example, markets might just realize that the banks have lent too much money to people who cannot bear the cost of higher interest rates, and this might spark a sell-off in bank shares, bonds and ABS markets. As the loan books start to rot and the funding costs of the banking sector go up, the banks will stop extending new loans and start selling off assets in panic, like everybody else in the financial markets. Shrinking the balance sheet by selling e.g. bonds in the trading book improves the capital and liquidity position. Revaluation of the assets (fair-value accounting) and eventual loan losses cause bank capital to fall further, and people really start speculating on the solvency of the banking system. The interbank and repo markets dry up. Banks stop new lending altogether and seek help from central banks and governments. A wide sell-off continues; stocks go down, as does everything else except the safest government bonds.

The credit freeze is symmetric to the credit binge in the sense that the process amplifies itself in a self-reinforcing loop. As lending stops, the inability to refinance maturing debt and interest expenses causes further defaults and erodes bank capital further. Insolvency procedures kick in, igniting further sell-offs. As there is no new net lending, consumption and aggregate demand fall, which wipes out the revenues of firms as a whole. As revenues go down, investment stops and firms start laying people off. Unemployment goes up rapidly.

As a result, we have a large contraction in national product, very high unemployment, low gross investment and bank failures. The system is on the brink of financial self-destruction. This is what happened in 2008–2009.

The public sector suffers as well, albeit with a delay. As GDP collapses, tax income collapses too, while public expenditure goes up as automatic stabilizers kick in. Unemployment benefits and other social costs, together with lowered tax income, cause large public deficits to accumulate and government debt to increase rapidly. At the same time, however, this fiscal effect stabilizes the situation, as increased government spending stimulates aggregate demand.

As the banking sector is the heart of the financial system, bank failures and closures are not a realistic general option when it comes to systemically important banks (SIBs). Even though there are legal attempts to push the system towards “pro-market” rather than “pro-business”, the empirical record tells us that governments usually bail out banks by injecting capital into them and by guaranteeing bank liabilities (deposit insurance, guarantees on external financing). These operations usually cause the gross stock of government debt to increase further. This is the end of the second stage of the credit cycle.

After the collapse comes the relapse. This takes time, however, as most balance sheets in the economy are still too large to bear. The economy, ceteris paribus, now faces a long period of low growth and deflation. Producing economic units, i.e. households, governments and firms, invest and spend only what is essential, as they want to use their income to pay off their debt and nobody is willing to lend them more for extra spending/investing. What we have is a fairly long deleveraging period, what Richard Koo calls a “balance sheet recession”.

Ultimately, as time goes by and people and firms bring their debts down to acceptable levels, the system reaches its low in the credit cycle and the relapse phase begins. Getting there, however, usually means a long journey of suffering, as deleveraging is a deflationary process (paying off one's debt destroys money in the system), and the deflationary process tends to press down profits and employment.

Play it again, Sam?

Now we have come through the whole credit cycle. The cycle begins with a spark in investment, which creates more money in the system. More money means more profits, more jobs and more lending, until the system reaches its Minsky moment. Then the system turns, selling starts, and deleveraging and debt deflation follow. Without government intervention this can take fairly long. Ultimately, as system-wide leverage goes sufficiently low, the system is ready for another build-up of credit. What exactly kicks off the relapse is difficult to say; it has something to do with a sufficiently low level of leverage. One can bear more risk if there is sufficient capital, and risk-taking at the level of the system will again induce the self-reinforcing virtuous credit binge.

In this way, the economy reminds me of a non-linear oscillator, where pathological behavior takes place during the cycle.  It is evident that the economic system is far from equilibrium. Maybe the economy is oscillating chaotically around some steady state.
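The oscillator analogy can be made concrete with a deliberately crude toy model, entirely my own construction with made-up parameters, not anything from the literature: leverage builds up multiplicatively until it crosses a fragility threshold (the Minsky moment), then deleverages until the system is ready for a new binge.

```python
def simulate(steps=200, growth=0.08, threshold=2.0, crash=0.25, floor=0.5):
    """Toy credit-cycle oscillator; all parameters are invented.

    During the binge, leverage L compounds at `growth` per period;
    crossing `threshold` flips the regime to deleveraging, where L
    shrinks by `crash` per period until it falls below `floor`.
    """
    L = floor
    binge = True
    path = []
    for _ in range(steps):
        if binge:
            L *= 1 + growth            # self-reinforcing credit binge
            if L > threshold:          # Minsky moment: regime flips
                binge = False
        else:
            L *= 1 - crash             # debt deflation / deleveraging
            if L < floor:              # leverage low enough: relapse
                binge = True
        path.append(L)

    return path

path = simulate()                      # oscillates between binge and bust
```

Even this caricature reproduces the qualitative point: the system never settles at an equilibrium but cycles endlessly around one.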

The real question is then, what can we do about it?

Surely we want to avoid economic swings like 2008. To me it seems that the keys to stabilization are monetary policy, microprudential policy and macroprudential policy. Fiscal policy is too political and too slow, although making public investments during the debt deflation could alleviate the suffering.

It seems clear that the credit cycle has to do with reckless lending and borrowing. Therefore we would need a better rules-based system when it comes to interest rate policy. Inflation targeting does not seem to suffice; maybe one could improve on it by considering nominal GDP targeting. Bank regulation and macroprudential policies are essential as well. First of all, banks need to hold a sufficient amount of capital to cover unexpected losses, and excess leverage should be prohibited. Basel III, the new trading book rules, RWA floors and the leverage ratio are steps in the right direction. In my opinion, however, they are not sufficient. Banks have an incentive to maximize their return on equity, and they tend to argue that they hold too much capital. The required return on equity should, however, somehow reflect the riskiness of the bank, i.e. the risks carried per unit of equity. One should also consider risk-adjusted returns on equity, not to mention the negative externalities of banking crises.

On the macroprudential side, we would need to lean against the wind when the credit binge is building up. Supervisors and central banks should closely monitor especially loan-to-value and loan-to-income ratios and demand extra capital if the rate of credit expansion is too high. We do have countercyclical capital buffers, but their implementation remains to be seen.

Loan loss provisioning and impairment rules are of utmost importance as well. During the last crisis, too little was provisioned, and too late. Maybe IFRS 9 will fix all this, but I have my doubts: during good times the expected lifetime losses will look small.

What about fiscal policy? My take is that during the balance sheet recession the government should use fiscal policy proactively. As everybody else is deleveraging, the public sector should lean against the wind and invest in repairs and infrastructure. After all, we are living in a consumption-based economy: somebody needs to spend in order to keep the music playing. In the Eurozone this is more difficult than, for example, in the US.

Various tax incentives could be introduced to reduce the bias towards debt financing. Cost of equity should be made tax deductible.

However, the most important role is reserved for the central monetary authority. The most volatile variable throughout the credit cycle is the amount of credit extended per unit of time. Therefore what the central banks should target is a steady expansion of the amount of credit: too little credit is bad, too much credit is bad. If one takes a look at the data on various monetary aggregates, it is clear that central banks ran too loose a monetary policy before the crash of 2008, at least in the US, whereas in the Eurozone things are more complicated. Controlling the credit supply is of course difficult, and because of the Keynesian liquidity trap it might take really unconventional measures, like outright monetary financing of households or something of that sort. In more normal times, adjusting the main refinancing rates should be sufficient.

My hypothesis is that a too narrow focus on the consumer price index and the belief in the “Great Moderation” were the main causes of the last catastrophe. There was too much extension of credit. Of course it was partly supported by the excess savings in Asia, but ultimately the financial system collapsed because banks had lent recklessly. The credit risk then contaminated the whole system through the ABS market and the interbank market.

When it comes to regulators, the problem is that policy makers educated in various economics departments lacked a sufficient understanding of macroeconomic processes, which led to a situation where nobody halted the game. Therefore, given how much suffering credit crashes cause to people, we should invest in macroeconomic research. And not just the usual empirical macro: we should really try to model a real working economy with all its money and credit flows. I recently read about stock-flow consistent modeling of macroeconomics and found it really appealing. Rather that than stock-flow inconsistent modeling. Computer simulations might be useful as well.

It might nevertheless be the case that we are doomed to live in this credit cycle forever. The economy is a highly non-linear system, and controlling it might prove too difficult altogether. Not to mention that stopping the music while people are dancing is not politically popular. We are now at the very beginning of a new credit cycle, so cheers!

How to exit from unconventional monetary policy? (Sad.)

Once upon a time there was a systemically important central bank with lots of US treasuries and mortgage bonds, car, student and credit card debt and other ABS stuff on its balance sheet (some 4,500 billion dollars), and the federal funds rate almost at the so-called zero lower bound (ZLB). With US GDP at its potential level, unemployment probably below the NAIRU and inflation expectations rising, the central bank decided to think about how to normalise the monetary policy stance again. What will this imply for the yield curve in general?

Given the state of the US economy and the intentions of the Trump administration to induce more spending, inflation expectations will probably go even higher, and the FED will have to counterbalance this by shifting the yield curve vertically upwards. Let’s consider how this might happen.

Now, it is straightforward that the central bank sets the short-end interest rate corridor, for example by setting a bid/ask spread for borrowing and lending vis-à-vis the banking sector. Remember that the central bank has unlimited amounts of central bank money at its disposal. Buying stuff (i.e. bonds) is easy: just credit the banks’ accounts at the central bank. Selling bonds is easy as well: debit the banks’ accounts. Buying bonds from the banks does not, however, increase what we call “money”, as the central bank just credits the banks’ checking accounts at the central bank. But the yield curve has certainly changed since 2007, and this of course encourages bank lending.
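The crediting/debiting mechanics can be shown with toy bookkeeping, a sketch of my own with made-up numbers rather than any official accounting scheme: when the central bank buys a bond from a commercial bank, it credits that bank's reserve account; the bank merely swaps a bond for reserves, and no deposit money is created.

```python
# Stylized balance sheets (illustrative dictionaries, invented figures).
central_bank = {"bonds_held": 0.0, "reserves_issued": 0.0}
commercial_bank = {"bonds_held": 100.0, "reserves": 0.0}

def cb_buys_bonds(amount: float) -> None:
    """Central bank buys bonds from the commercial bank by
    crediting the bank's reserve account at the central bank."""
    central_bank["bonds_held"] += amount
    central_bank["reserves_issued"] += amount   # CB liability grows
    commercial_bank["bonds_held"] -= amount
    commercial_bank["reserves"] += amount       # asset swap: bond -> reserves

cb_buys_bonds(40.0)
# The commercial bank now holds 60 in bonds and 40 in reserves; its total
# assets are unchanged, so broad "money" held by the public did not grow.
```

Selling bonds back to the banks is the mirror image: debit the reserve account, return the bonds.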

How are long-term interest rates determined in the ‘market’? For the last 8 years one could say it's trivial: they have been determined by the central banks, whose huge demand drives up bond prices and therefore lowers yields across the board. In theory, the yield curve is easiest to grasp starting from the short end. The canonical theory says that the pure expectations hypothesis holds: forward rates are just the expected future short spot rates. So if one has a bond with some coupon rate and a bullet-type maturity, one can replicate its cash flow as a series of zero-coupon instruments, which tells us that we can price (credit-risk-free, like US treasuries) bonds using the forward rates.

So, for example, imagine a cash flow received after two years, and let the two-year spot rate be $s_{0,2}$. Compounding at the two-year spot rate must then equal compounding at the current one-year spot rate $s_{0,1}$ followed by the forward rate:

$(1+s_{0,2})^2=(1+s_{0,1})(1+f_{1,2})$

The key question is now, what determines the forward rate? According to the pure expectations hypothesis we assume

$f_{1,2}=\mathbb{E}_0(s_{1,2})$

So the forward rates are supposed to reflect expectations of future short spot rates.
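As a numerical illustration (the spot rates below are made up), the implied forward can be backed out directly from the two spot rates:

```python
def implied_forward(s1: float, s2: float) -> float:
    """One-year forward rate one year ahead, implied by the 1y spot s1
    and the 2y spot s2 under annual compounding:
    (1 + s2)**2 = (1 + s1) * (1 + f12)."""
    return (1 + s2) ** 2 / (1 + s1) - 1

# Made-up example: 1y spot at 1%, 2y spot at 2% imply a forward of ~3%.
f12 = implied_forward(0.01, 0.02)
```

Under the pure expectations hypothesis, that roughly 3% would then be read as the market's expected one-year rate a year from now.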

In practice, however, the FED will most likely:

1. First raise its deposit rate (the interest on bank reserves), so that the truly risk-free rate acts as a floor for money market and interbank rates. This will feed into the longer end of the yield curve as well, through forward rates. It is a real floor, because nobody should be willing to lend to a risky counterparty at a lower rate. Of course the discount window rate is to be raised as well, in order to keep the corridor stable. These operations will cause the federal funds rate (the key official interest rate) to settle somewhere in the middle of the corridor.
2. Second, stop reinvesting maturing debt altogether. This will diminish both the size of the FED's balance sheet and the general demand for long-term bonds: prices will go down, yields will go up. Of course, the FED could also sell the bonds it holds outright, which would have the same, albeit much stronger, effect.
3. Altogether, the effect is that first the yield curve flattens a bit, and ultimately the long end will go up as well. This will stabilise inflation and hopefully maximize employment at the same time.
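The transmission in step 1, from expected future short rates into the long end, can be sketched under the pure expectations hypothesis; the rate paths below are invented for illustration:

```python
def long_yield(expected_shorts: list[float]) -> float:
    """n-year zero yield under the pure expectations hypothesis:
    (1 + y_n)**n equals the product of (1 + expected 1y rate)
    over the n years."""
    acc = 1.0
    for s in expected_shorts:
        acc *= 1 + s
    return acc ** (1 / len(expected_shorts)) - 1

flat = long_yield([0.01] * 5)                          # no hikes expected
hiking = long_yield([0.01, 0.015, 0.02, 0.025, 0.03])  # hiking path priced in
# Once the market expects a hiking path, the 5-year yield sits above
# the flat-path yield: the short-end policy shifts the whole curve.
```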

Other things being equal, the vertical shift of the US yield curve will then shift demand into higher-yielding US dollar instruments, which will tend to depreciate the euro against the dollar. This will cause import-related inflation to go up in the Euro area, and ultimately interest rates will have to go up here as well, in order to anchor inflation expectations in the medium term.

Summa summarum: no hyperinflation, and the FED can deleverage its balance sheet in an orderly fashion without causing too much turbulence in the money and bond markets.

What is “long-term debt sustainability” or “kestävyysvaje” all about?

What’s at stake?

In Finland, when the media and various special interest groups debate the current economic and fiscal policy stance of the government, one cannot avoid the term sustainability deficit, or “kestävyysvaje”. In economic policy circles it is also known as the S2 indicator. Given the heated political discussion on the fiscal stance, I thought it might be a good idea to try to explain in layman's terms what the indicator is all about. So, dear media and politicians, let’s have a go.

The idea of fiscal policy sustainability is mostly defined around the following abstraction: we say that a policy stance is sustainable if the present value (the sum of discounted cash flows) of future primary balances (the budgetary surplus/deficit excluding interest expenses) is equal to the current level of debt. Sustainability is thus an intertemporal budget constraint for the government. We can then define S2, or “kestävyysvaje”, in the following way: S2 is the immediate and permanent one-off fiscal adjustment that would ensure that the intertemporal constraint is met.
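In symbols (my notation: $D_0$ for the current debt stock, $PB_i$ for the primary balance in year $i$, and a constant discount rate $r$), the constraint reads

$D_0=\sum_{i=1}^{\infty}\frac{PB_i}{(1+r)^i}$

i.e. the current debt must be covered by the discounted stream of future primary balances.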

Debt dynamics (accounting identity)

By definition, the debt stock evolves annually according to

$D_{t+1}=(1+r)D_t-PB_{t+1}$

The equation tells us that the annual change in government debt is the sum of the interest expenses on the current debt stock and the negative of the primary balance. This is an accounting identity, true by definition.
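A minimal sketch of iterating this identity (the function name and the numbers are my own, purely illustrative):

```python
def debt_path(d0: float, r: float, primary_balances: list[float]) -> list[float]:
    """Iterate the accounting identity D_{t+1} = (1 + r) * D_t - PB_{t+1}."""
    path = [d0]
    for pb in primary_balances:
        path.append((1 + r) * path[-1] - pb)
    return path

# With 2% interest on a debt of 100, a constant primary surplus of 2
# exactly covers the interest expenses, so the debt stock stays flat.
path = debt_path(100.0, 0.02, [2.0] * 3)
```

Any smaller surplus would let the debt compound away; that is the intuition the S2 indicator formalizes.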

..and S2 derived from it

After some lengthy calculations, and assuming that the interest rate is constant, we get the following expression for the S2 indicator:

$S2=rD_0-PB_0 +r\sum_{i=1}^{\infty}\frac{\Delta A_i}{(1+r)^i}$

where the third term represents the ageing-related costs, according to the decomposition

$\Delta PB_i=PB_i-PB_0=S2-\Delta A_i$

The previous equations tell us that the “kestävyysvaje” is the sum of three terms: 1) the interest rate times the current debt, 2) the primary deficit, and 3) the interest rate times the present value of age-related costs.

All this tells us that S2 depends crucially on the initial debt, the initial deficit and the projected age-related costs. Of course the assumed rate of interest matters a great deal as well.
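The S2 formula above can be translated into a short sketch, truncating the infinite sum at a finite horizon; all numbers below are invented for illustration:

```python
def s2_indicator(r: float, d0: float, pb0: float, delta_a: list[float]) -> float:
    """S2 = r*D0 - PB0 + r * sum_i dA_i / (1+r)^i,
    with the infinite sum truncated at len(delta_a) years."""
    pv_ageing = sum(da / (1 + r) ** i for i, da in enumerate(delta_a, start=1))
    return r * d0 - pb0 + r * pv_ageing

# Invented example: 3% interest, debt at 60% of GDP, a primary deficit of
# 1% of GDP, and ageing costs drifting up by 0.05 %-points per year for
# 50 years. Each term is in percent of GDP.
gap = s2_indicator(0.03, 60.0, -1.0, [0.05 * i for i in range(1, 51)])
```

Playing with the inputs shows the sensitivity mentioned above: a higher starting debt, a larger deficit or steeper ageing costs all push the one-off adjustment up.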

Where are we now?

The European Commission currently puts the “kestävyysvaje” for Finland at some 3.2% of GDP at market prices, i.e. some 6–7 billion euros. This would mean drastic budgetary cuts if implemented as a one-off. Currently, however, it is perceived that structural reforms can bring the figure down at least to some extent.

Structural reforms lower the present value of ageing costs

As can be seen from the formulas above, to bring down the kestävyysvaje one could implement structural reforms that lower the present value of ageing costs.

Critique

To my mind, S2 is an indicator of where we stand; it should not be a strict and rigid policy rule to be followed religiously. Given that there is more or less a consensus on the figures across research institutions, COM, BoF and MoF, we should see it as a strong indication that the current level of spending and debt is unsustainable with respect to the projected ageing-related costs. Of course there is a lot of uncertainty, and sensitivity analysis is important.

What about net debt and the definition of general government figures?

Setting aside the previous considerations about the figures, I think we should be more worried about the fact that “TyEL” is included in the general government deficit figures. In practice I find it difficult to imagine that the government would confiscate the private pension system's assets and liabilities. So if there is too much pessimism and uncertainty in the determination of the kestävyysvaje, I think there is too much optimism in the assumption that the pension system would provide budgetary revenues to the annual government budget.

On net versus gross debt: one should also consider whether the government could shrink its balance sheet by selling financial assets. This would bring down the kestävyysvaje, as S2 is determined on a gross debt basis. One could bring the debt down from the current 102 billion to some 75 billion or so by selling the liquid financial assets. On the other hand, leveraged government investment activity might be rational if and only if the risk-adjusted returns on financial assets are large enough to at least cover the interest expenses on government debt. This is risky business, however, and it is therefore difficult to say what the optimal amount of leverage is.

Why all the fuss?

“According to the system of natural liberty, the sovereign has only three things to attend to… first, the duty of protecting the society from the violence and invasion of other independent societies… secondly, the duty of protecting … every member of the society from the injustice or oppression of every other member… and thirdly, the duty of erecting and maintaining certain public works and certain public institutions, which it can never be in the interest of any individual to erect and maintain”