Economic Rationality, what is it?

Economics as a discipline is under fierce criticism these days. Most recently, Paul Romer provoked a serious discussion about the usefulness of contemporary macroeconomics. I think the critique is justified: DSGE models need to be updated, and banks should be included in the models as creators of money. For reference, I recommend for example the following sources:

http://www.imf.org/external/pubs/ft/fandd/2016/03/kumhof.htm

http://www.bankofengland.co.uk/research/pages/workingpapers/2015/wp529.aspx

Ideally, economic business cycles should be modelled endogenously, meaning that the system itself would produce oscillations, perhaps around some steady state. To my knowledge, contemporary DSGE models are instead driven by exogenous stochastic shocks. Maybe altogether better models can be developed, but the point is that we need macroeconomic models that are ontologically sound and at the same time have microfoundations. This is no doubt very difficult.

However, today I am blogging about economic rationality and the theory of decision making in general. It seems to me that when people criticise economics, one quite often hears allegations of the type “man is not rational”, “markets are not efficient or rational”, or something of this sort. I would like to bring some structure into this debate here.

Choice theory, utility functions and convex programming

First of all, economic rationality is basically a set of axioms that we assume in order to establish a tractable modelling framework. This mostly means that if one wants, for example, a meaningful optimization scheme for consumer choice, one needs a differentiable and therefore well-behaved utility function, so that one can look for a constrained maximum and thus an optimal choice. Microeconomics is basically convex non-linear programming.

Technically, these axioms state that the binary (preference) relation on some set of alternatives is a total preorder. They are:

  • reflexivity
  • totality
  • transitivity

Basically, we assume that all pairs of alternatives can be compared and that there are no loops. This notion of rational preferences says nothing about moral good or what is to be pursued on ethical grounds. It is merely a reasonable set of assumptions about a decision maker, so economic rationality does not rule out anything like genocide or crime or what have you.
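
To make the axioms concrete, here is a minimal Python sketch (my own toy example, with a hypothetical set of three alternatives) that checks totality and transitivity of a weak preference relation:

# 'weakly_prefers(a, b)' means "a is at least as good as b"; the relation
# below is a made-up example: apple >= banana >= cherry.

alternatives = ["apple", "banana", "cherry"]

weak_pref = {("apple", "apple"), ("banana", "banana"), ("cherry", "cherry"),
             ("apple", "banana"), ("banana", "cherry"), ("apple", "cherry")}

def weakly_prefers(a, b):
    return (a, b) in weak_pref

# Totality: every pair is comparable (this also gives reflexivity).
total = all(weakly_prefers(a, b) or weakly_prefers(b, a)
            for a in alternatives for b in alternatives)

# Transitivity: no "loops" such as a >= b >= c but not a >= c.
transitive = all(not (weakly_prefers(a, b) and weakly_prefers(b, c))
                 or weakly_prefers(a, c)
                 for a in alternatives
                 for b in alternatives
                 for c in alternatives)

print("total preorder:", total and transitive)  # True for this example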

Armed with these axioms and some topological considerations, one can represent the decision maker's preferences with a nice utility function. One can then proceed with the optimization programme, given material constraints.

Choice under uncertainty

When one assumes that the outcomes are uncertain, things get more difficult and interesting. First of all, we are now making choices over some collection of probability spaces, so we need to pick the right, or most suitable, probability measure. Economics textbooks usually talk about lotteries, but I think it is best to talk about random variables or probability measures.

So we assume the previous total preorder over the set of random variables/probability measures, and additionally we assume an axiom of independence and an axiom of continuity (topology again). Enter John von Neumann. What we get is the expected utility paradigm: the representation theorem says that if we assume these axioms over random variables, the preferences can be ranked according to

\int u(X)dP

which is just the expected utility given a utility function u(.). So this is how a rational decision maker picks among alternatives with uncertain payoffs. This is quite useful. Only it is not really empirically accurate: according to the data, people tend to deviate from this kind of EU-behaviour. We tend to overestimate the significance of events with small probability, so we expect too much and too little. For details, read e.g. the book “Thinking, Fast and Slow” by Dr. Daniel Kahneman, a Nobel laureate.
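
As a toy illustration (the lottery and the logarithmic utility below are my own inputs, not from any data), the expected utility of a discrete lottery is just a probability-weighted sum of utilities:

import math

# A hypothetical lottery: wealth outcomes and their probabilities.
outcomes = [50.0, 100.0, 200.0]
probs = [0.2, 0.5, 0.3]

def u(x):
    # A concave Bernoulli utility function (logarithmic).
    return math.log(x)

# Discrete version of \int u(X) dP: sum of u(outcome) * probability.
expected_utility = sum(p * u(x) for p, x in zip(probs, outcomes))
expected_value = sum(p * x for p, x in zip(probs, outcomes))

print("expected utility:", round(expected_utility, 4))
print("utility of expected value:", round(u(expected_value), 4))
# For a concave u, expected utility <= u(expected value) (Jensen's inequality),
# which is exactly what risk aversion means in this framework.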

Maybe one should consider alternatives to EU-behaviour. This could be straightforward, as the representation can be established quite easily using some basic tools of linear vector spaces, like hyperplane separation/the Hahn-Banach theorem and Riesz's representation theorem in the case of a Hilbert space. I am aware of prospect theory, rank-dependent utility and Yaari's dual theory of choice. Maybe one could still improve on these?
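
As a rough sketch of what such an alternative can look like, the snippet below computes a rank-dependent utility with a made-up power weighting function w(p) = p**0.7 (my own choice); with w(p) = p it collapses back to plain expected utility:

import math

outcomes = [50.0, 100.0, 200.0]   # sorted from worst to best (hypothetical)
probs = [0.2, 0.5, 0.3]

def u(x):
    return math.log(x)

def w(p):
    return p ** 0.7  # probability weighting (distortion) function

def rank_dependent_utility(outcomes, probs):
    total = 0.0
    for i, x in enumerate(outcomes):
        # Decumulative probabilities: chance of getting x or anything better.
        tail = sum(probs[i:])
        tail_strict = sum(probs[i + 1:])
        decision_weight = w(tail) - w(tail_strict)
        total += decision_weight * u(x)
    return total

print("RDU:", round(rank_dependent_utility(outcomes, probs), 4))
# The distorted decision weights overweight the small-probability best outcome,
# which plain expected utility cannot do.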

Rational expectations, efficient markets and all that

What about rational expectations and the efficient market hypothesis? Rational expectations is the assumption that expectations are unbiased; in other words, we do not make systematic errors in forecasting. This seems plausible to me. The EMH (efficient market hypothesis) in turn assumes that all information is already priced into financial assets. Plausible?

The EMH does not preclude large deviations. The efficient market hypothesis is really about the price process being a martingale: the expected price of tomorrow should be the same as today's price. It is actually rather hard to assume otherwise, because of arbitrage opportunities. If the expected price were higher than today's, one should buy a lot, which would drive up the price today, and vice versa. So the EMH is actually a rather plausible assumption.

In other words, as long as the expectation exists, we can have the EMH even with extreme events, for example infinite variance as in the case of alpha-stable distributions.

Once again: extreme variation in prices does not imply that the EMH is wrong.
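
A small simulation sketch illustrates the point (entirely my own toy example, with Student-t shocks standing in for heavy-tailed increments): the price process below has occasional extreme moves, yet tomorrow's expected price is still today's price. Requires numpy.

import numpy as np

rng = np.random.default_rng(0)

p0 = 100.0
n_paths, n_steps = 100_000, 50

# Zero-mean Student-t shocks with 3 degrees of freedom: finite mean, fat tails,
# so large price swings do occur along individual paths.
shocks = rng.standard_t(df=3, size=(n_paths, n_steps))
prices = p0 + shocks.cumsum(axis=1)

print("largest one-step move:", round(float(np.abs(shocks).max()), 1))
print("average terminal price:", round(float(prices[:, -1].mean()), 2), "vs today:", p0)
# The average terminal price stays close to today's price: the process is a
# martingale even though some individual moves are extreme.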

Rationality is not a moral statement

In economics, anyway. Rationality in economics is about being coherent and consistent. In economics, human behaviour is modelled through an optimizing entity. Think of dynamic programming and optimal control: the Bellman optimality criterion says basically that if one travels from A to C through B and the route is optimal, then the route from B to C is also optimal. In game theory this is essentially subgame perfection; in other instances it is backward induction. Economics is about optimization, as we are maximising profit or welfare or utility.
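
A minimal backward-induction sketch of the A-to-C-through-B idea (the stage graph and travel times are made up for illustration):

# Hypothetical travel times between nodes, by stage.
costs = {
    ("A", "B1"): 2, ("A", "B2"): 4,
    ("B1", "C"): 5, ("B2", "C"): 1,
}
stages = [["A"], ["B1", "B2"], ["C"]]

# Value function: cost-to-go from each node, computed backwards from C.
value = {"C": 0}
policy = {}
for stage in reversed(stages[:-1]):
    next_stage = stages[stages.index(stage) + 1]
    for node in stage:
        best_next, best_cost = min(
            ((nxt, costs[(node, nxt)] + value[nxt]) for nxt in next_stage),
            key=lambda t: t[1],
        )
        value[node] = best_cost
        policy[node] = best_next

print("optimal cost from A:", value["A"])                       # 4 + 1 = 5 via B2
print("optimal route:", "A ->", policy["A"], "->", policy[policy["A"]])
# The continuation from B2 onwards is itself optimal: that is the Bellman
# principle, and the backward pass is exactly backward induction.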

Are we always selfish according to economics?

In short: no. Economics usually only assumes a total preorder on the set of alternatives. This does not in any way preclude choices where one gets utility from being unselfish. Again, economic rationality is about consistency.

There is, however, one interesting problem at the very foundations of economics, called the integrability problem. Basically the question is: given some observed action/choice, can we construct a utility function for the decision maker that always makes the action rational? I guess this might be interesting for some scholars of the philosophy of science.

What is risk?

Introduction

In the fields of management, insurance and finance, systems engineering and even world politics, decision makers face difficult choices that may involve risk. We usually think of risk as something that includes the possibility of a loss or gain in terms of some abstract value. Finance is likely the field most exposed to the concept of risk. I will introduce here the main issues around risk and uncertainty, in order to facilitate further analytical discussion of (financial) risk and uncertainty.

Risk is inherent in this world of ours, as we lack full information and the means to analyse it. Risk is about uncertainty from a subjective viewpoint. In economics and finance, risk is usually introduced through the concavity of the utility function. More specifically, we have what is called the Arrow-Pratt coefficient of absolute risk aversion:

A=-\frac{u''(w)}{u'(w)}

This means that if the decision maker is risk averse and has some wealth w, losing x units leads to a larger loss in utility than the gain in utility from winning x units (decreasing marginal utility). So a risk-averse decision maker should always reject a fair gamble of this type. By specifying the Arrow-Pratt risk aversion measure, we can generate a family of corresponding utility functions.
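
For concreteness, a small sketch (my own toy numbers) of the Arrow-Pratt coefficient for two textbook utility functions, logarithmic and exponential (CARA):

import math

def A_log(w):
    # u(w) = log(w): u'(w) = 1/w, u''(w) = -1/w**2, so A(w) = 1/w
    # (decreasing absolute risk aversion: richer agents tolerate larger gambles).
    return 1.0 / w

def A_cara(w, a=0.01):
    # u(w) = -exp(-a*w): u'(w) = a*exp(-a*w), u''(w) = -a**2*exp(-a*w),
    # so A(w) = a, a constant (constant absolute risk aversion).
    return a

for w in (50.0, 100.0, 200.0):
    print(f"wealth {w:6.1f}: A_log = {A_log(w):.4f}, A_CARA = {A_cara(w):.4f}")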

Another important concept is the so-called certainty equivalent: the certain monetary outcome C that leaves the decision maker indifferent between accepting the gamble and receiving C for sure. In the context of expected utility theory, we can define the certainty equivalent implicitly through

u(C)=\mathbb{E}[u(X)]

The certainty equivalent demonstrates clearly how riskiness depends both on the objective probabilities and on risk preferences. Therefore a good risk measure should incorporate both aspects.
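
A minimal numerical sketch (with a made-up 50/50 gamble and log utility) of how the certainty equivalent is computed:

import math

# Hypothetical gamble: 50 or 150 with equal probability.
outcomes = [50.0, 150.0]
probs = [0.5, 0.5]

# Solve u(C) = E[u(X)] for C, i.e. C = u^{-1}(E[u(X)]) with u = log.
expected_utility = sum(p * math.log(x) for p, x in zip(probs, outcomes))
certainty_equivalent = math.exp(expected_utility)   # inverse of log utility
expected_value = sum(p * x for p, x in zip(probs, outcomes))

print("expected value:", expected_value)                        # 100.0
print("certainty equivalent:", round(certainty_equivalent, 2))  # about 86.6
# The gap between the two (the risk premium) reflects both the probabilities
# and the curvature of u, i.e. the decision maker's risk preferences.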

Quantitative risk management

In practical life, risk is usually quantified using concepts from probability theory and mathematical statistics. The simplest measure of risk is the standard deviation, which is the risk measure used in the portfolio optimization model of Harry Markowitz. One obvious problem with the standard deviation is that it penalises upside risk, i.e. exceptionally good outcomes increase measured risk. On the other hand, the standard deviation is a simple risk measure and for simple distributions it works well.

One can improve on the standard deviation and introduce, for example, various quantile-based risk measures. One of the most familiar risk measures is Value at Risk (VaR). VaR gives the loss level that is exceeded only with some given (small) probability. So it is in a way better than the standard deviation, as good outcomes are not punished. Of course, when the random variable has finite support, one could also take the infimum or the minimum value of the random variable in question.
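
A minimal sketch (with simulated, hypothetical loss data) of the standard deviation and VaR in practice; requires numpy:

import numpy as np

rng = np.random.default_rng(42)
losses = rng.normal(loc=0.0, scale=1.0, size=100_000)  # toy portfolio losses

std_dev = losses.std()
var_99 = np.quantile(losses, 0.99)   # 99% VaR: loss exceeded only 1% of the time

print("standard deviation:", round(float(std_dev), 3))
print("99% VaR:", round(float(var_99), 3))   # roughly 2.33 for a standard normal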

Artzner et al. introduced the concept of coherent risk measures, a set of plausible axioms that a risk measure should satisfy. Most importantly, a coherent risk measure is sub-additive and positively homogeneous, so that diversification pays off. Now it so happens that, some 17 years after the original article was published, the Basel Committee on Banking Supervision has finally adopted the expected shortfall risk measure in its fundamental review of the trading book. Expected shortfall is just the conditional VaR: the conditional expectation of the loss, given that losses exceed the VaR level.
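
Continuing the same toy loss sample as above, expected shortfall is just the average loss beyond the VaR level:

import numpy as np

rng = np.random.default_rng(42)
losses = rng.normal(loc=0.0, scale=1.0, size=100_000)

var_99 = np.quantile(losses, 0.99)
es_99 = losses[losses > var_99].mean()   # average loss conditional on exceeding VaR

print("99% VaR:", round(float(var_99), 3))
print("99% ES :", round(float(es_99), 3))   # about 2.67 for a standard normal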

Risk management tomorrow

As mentioned, subjective preferences should be incorporated into a risk measure. This aspect will only become more important in the near future, when robots and artificial intelligence take over some of the tasks performed today by humans.

In my view, the most promising framework for risk measures in this respect is the so-called family of spectral risk measures. Spectral risk measures generalise the previous risk measures and have some very useful properties. A spectral risk measure is of the form:

M_{\phi}=-\int_{0}^{1}\phi(p)Q(p)dp

where we integrate over the probability levels p in [0,1]. The function \phi is the spectrum and Q is the quantile function/generalised inverse of the cumulative distribution function. One should note that (the negative of) the expected value is the trivial spectral risk measure obtained with a unit spectrum. The general idea behind such risk measures is a distortion of the objective probabilities: the distortion represents preferences, and the spectral risk is then an expectation under a synthetic probability measure. We can call this the risk-neutral probability measure; the label comes from the fact that under the new measure P’, the decision maker acts as if she were maximizing the expected value of the random variable. This information can then be used, for example, to price financial assets.
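
A numerical sketch of such a spectral risk measure (the exponential spectrum and the toy return distribution below are my own choices, and the integral is approximated by a simple sum); requires numpy:

import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(loc=0.01, scale=0.05, size=200_000)  # hypothetical P&L

k = 20.0  # risk-aversion parameter of the exponential spectrum

# Discretise the probability level p on (0, 1). The spectrum phi is positive,
# decreasing and integrates (approximately) to one, so the lowest quantiles
# (the worst outcomes) get the largest weights.
p = np.linspace(0.0005, 0.9995, 1000)
dp = p[1] - p[0]
phi = k * np.exp(-k * p) / (1.0 - np.exp(-k))
Q = np.quantile(returns, p)  # empirical quantile function of the P&L

spectral_risk = -np.sum(phi * Q) * dp

print("spectral risk:", round(float(spectral_risk), 4))
# With a flat spectrum phi = 1 the same sum collapses to (minus) the sample
# mean, i.e. the trivial spectral risk measure mentioned above.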

It should be noted that in finance, risk measures are functionals on a space of random variables and generally do not consider the risk preferences of the investor. There are a couple of interesting exceptions, both published in the Journal of Political Economy (Aumann & Serrano, Foster & Hart). These indices of riskiness R(X) take the preferences of the decision maker into account implicitly:

\mathbb{E}[e^{-\frac{X}{R(X)}}]=1

and

\mathbb{E}[\log{\left( 1 +\frac{X}{R(X)}\right)}]=0
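
A numerical sketch (with a made-up gamble) of how these two indices can be computed by root finding; requires numpy and scipy:

import numpy as np
from scipy.optimize import brentq

# Hypothetical gamble: win 120 or lose 100 with equal probability.
outcomes = np.array([120.0, -100.0])
probs = np.array([0.5, 0.5])

def aumann_serrano(R):
    # E[exp(-X/R)] - 1 = 0 defines the Aumann-Serrano index of riskiness.
    return np.sum(probs * np.exp(-outcomes / R)) - 1.0

def foster_hart(R):
    # E[log(1 + X/R)] = 0 defines the Foster-Hart measure of riskiness.
    return np.sum(probs * np.log(1.0 + outcomes / R))

max_loss = -outcomes.min()  # Foster-Hart requires R larger than the maximal loss

R_as = brentq(aumann_serrano, 1.0, 1e6)
R_fh = brentq(foster_hart, max_loss + 1e-6, 1e6)

print("Aumann-Serrano riskiness:", round(R_as, 1))
print("Foster-Hart riskiness   :", round(R_fh, 1))   # about 600 for this gamble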

The interested reader should also study the entropic risk measure and entropic Value at Risk. These risk measures have interesting properties as well.

Utility, choice theory and risk

In my opinion, the concept of risk should always reflect both the objective probability measure and the decision maker's preferences. Distortion-based risk measures are therefore the most interesting ones from this point of view. In classical choice theory under risk, we have the von Neumann-Morgenstern representation of preferences through expected utility, and therefore one should always choose in such a way that expected utility is maximised. The set-up assumes the usual weak order over the probability measures, and in addition there is the independence axiom (convex sets) and the axiom of continuity (a proper topology). One then basically uses the separating hyperplane theorem to show that there exists a functional that preserves the preference order. This leads to the programme

\underset{P}{\text{max}}\int u(X)dP

This may seem counterintuitive, as the decision maker does not care about the variability of utility but is “risk neutral in utilities”. As the Allais and Ellsberg paradoxes indicate, it might indeed be the case that people do care about the variability of utility. One should note that the expected utility framework basically distorts the monetary outcomes: with a concave Bernoulli utility function, for example logarithmic utility, small rewards are emphasised whereas large rewards are deprioritised. It might be that this class of utility functions is not rich enough to cater for this kind of behaviour.
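
A small numerical check (with my own toy lotteries) of the "risk neutral in utilities" point: expected utility is linear in probabilities, so the EU of a 50/50 mixture of two lotteries is exactly the average of their EUs.

import math

def expected_utility(lottery, u=math.log):
    # lottery is a list of (outcome, probability) pairs.
    return sum(p * u(x) for x, p in lottery)

lottery_a = [(100.0, 1.0)]                   # 100 for sure
lottery_b = [(50.0, 0.5), (300.0, 0.5)]      # risky lottery

# 50/50 compound lottery over A and B, reduced to a single lottery.
mixture = [(x, 0.5 * p) for x, p in lottery_a] + [(x, 0.5 * p) for x, p in lottery_b]

lhs = expected_utility(mixture)
rhs = 0.5 * expected_utility(lottery_a) + 0.5 * expected_utility(lottery_b)

print("EU of mixture :", round(lhs, 6))
print("mixture of EUs:", round(rhs, 6))   # identical: EU is linear in P
# The curvature of u distorts monetary outcomes, but nothing in the functional
# penalises the spread of utility itself, which is what Allais-type behaviour
# seems to react to.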

Conclusion

Summa summarum: a good risk measure is coherent and takes individuals' preferences into account. I think that in this regard the family of spectral risk measures is superior to the others. See for example: https://blogs.cfainstitute.org/investor/2014/02/26/are-spectral-risk-measures-respectable-enough/