Seeking Alpha

# procyon's Instablog

procyon
An avid reader, writer, social thinker, and manufacturing leader with working experience in India and Europe. After a four-year stint in Switzerland, I have now returned to India to head a large unit, which is expanding.
My blog:
Procyon Mukherjee
• ##### The Curious Case Of Probability Weighting Function: Why Central Banks Hesitate To Make Any Change

Left tail and Right tail distributions

Decision making under uncertainty has been the core subject of Kahneman and Tversky; their thirty years of work formulating Prospect Theory, and later Kahneman's book "Thinking, Fast and Slow", have brought new insights into how the mind chooses when information and data point to various levels of uncertainty and outcomes come in a probabilistic distribution. The distributions themselves are rarely anywhere close to normal, as life's events do not arrive in the ordered form a normal distribution would require; almost all real distributions are skewed, with a long tail on one side or the other.

Take Taleb's example of incomes in a population: the distribution is naturally skewed with a long right tail, since those earning a thousand times the average are very few, while the bulk of the population sits well to the left of the average; the average itself is almost meaningless, because hardly anyone earns anything close to it. But how do economists deal with such data, or marketers, or government forecasters and data analysts? Do they work with the full distribution of incomes, or do they pick a 'value of convenience' like the mean, median or mode to keep matters simple? Can a policy targeted at the average income be of any consequence to the majority of the people, or only to a minuscule minority? Can a product aimed at the average earner find thousands of buyers if the average earner barely exists (in a town where Bill Gates lives, for example, the theoretical average income is something nobody there actually earns)?

Policies are made with a target population in mind, and products are launched with average earners in mind; but when the distribution of a statistic is heavily skewed, how do we frame the correct response for the different segments within it? Take the framing of an insurance policy, where a risk event is plotted against its frequency in a population: say we look at the distribution of age at death and use it to price the future. A multiplicity of factors operates on that population, and on the past period from which the data are drawn. For a place like Palestine the distribution would carry substantial mass at younger ages, while for a place like Japan deaths would be concentrated at old age, leaving a long left tail; periods in history can likewise have a strong bearing on the outcome. But can we use a prior event and its distribution to predict a future one? We can, using Bayesian conditional probability, where prior probabilities are used to frame posterior probabilities.

The question that is becoming increasingly important concerns the framework Kahneman and Tversky used in Prospect Theory, which rests on four pillars: 1) reference dependence, 2) loss aversion, 3) diminishing sensitivity, and 4) probability weighting. Of these four, the human predilection for avoiding losses, which is more potent than the desire for gains, is the one most often quoted, while probability weighting, the human tendency to overweight low tail probabilities and underweight high tail probabilities, remains the major imponderable in decision making under uncertainty. Kahneman and Tversky's research also showed that "the simplification of prospects can lead the individual to discard events of extremely low probability and to treat events of extremely high probability as if they were certain" ("Prospect Theory: An Analysis of Decision under Risk", Econometrica, 1979, pp. 263-291).

Let me elaborate on this in detail.

Probability weighting

Let us take the classic Russian Roulette game with six players. The probability of the first player surviving is 5/6. The probability of the second player surviving is the probability of the first player dying OR both of them surviving, which equals [1/6 + (5/6)(4/5)] = 5/6 again. The probability of the third player surviving is the probability of the first player dying, plus the probability of the first surviving and the second dying, plus the probability of all three surviving: 1/6 + (5/6)(1/5) + (5/6)(4/5)(3/4) = 1/6 + 1/6 + 3/6 = 5/6. One can add these terms because the events are mutually exclusive. So the choice of being the first player or the second, judged purely by the probability of survival, should make no difference. But would anyone like to be the first player? In a risk event like Russian Roulette one is tempted to think differently about the odds of dying than the probability estimates would suggest, creating a probability distortion in our minds. Even an equally likely event can be felt to carry a different probability depending on what precedes and what follows it, and the reason is the probability weighting function, which Kahneman and Tversky traced to the human mind's intrinsic tendency toward overweighting small probabilities, underweighting large probabilities, and subcertainty (i.e., the sum of the weights for complementary probabilities is less than one).
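The arithmetic above can be checked directly; here is a quick sketch in Python, using exact fractions for the decomposition and a Monte Carlo run as a cross-check:

```python
import random
from fractions import Fraction

F = Fraction

# Exact decomposition, as in the text (one bullet, six chambers, no re-spin):
p1 = F(5, 6)                                      # player 1 survives
p2 = F(1, 6) + F(5, 6) * F(4, 5)                  # player 1 dies OR both survive
p3 = F(1, 6) + F(5, 6) * F(1, 5) + F(5, 6) * F(4, 5) * F(3, 4)
assert p1 == p2 == p3 == F(5, 6)                  # position makes no difference

# Monte Carlo cross-check: place the bullet uniformly at random.
random.seed(0)
trials = 100_000
survived = [0] * 6
for _ in range(trials):
    bullet = random.randrange(6)                  # chamber holding the bullet
    for position in range(6):
        if position != bullet:
            survived[position] += 1
# every position survives about 5/6 of the time (roughly 83.3%)
```

The distortion the text describes is precisely the gap between these identical 5/6 figures and how differently the first and last seats *feel*.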

Kahneman and Tversky also noted that the probability weighting function may not be 'well behaved' near the endpoints 0 and 1. This is most pronounced where gains and losses are not evenly distributed, or where the mind perceives the aversion to losses as more important than the making of gains.
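One common parametric form of the weighting function, from Tversky and Kahneman's 1992 cumulative prospect theory paper, makes the over- and underweighting concrete (a sketch only; γ = 0.61 is their median estimate for gains):

```python
def w(p, gamma=0.61):
    """Tversky-Kahneman (1992) one-parameter probability weighting function.
    For gamma < 1 it overweights small p and underweights large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

assert w(0.01) > 0.01          # small probabilities are overweighted
assert w(0.99) < 0.99          # large probabilities are underweighted
# subcertainty: complementary weights sum to less than one
assert all(w(p) + w(1 - p) < 1 for p in (0.1, 0.25, 0.5, 0.75, 0.9))
```

With γ = 0.61, w(0.01) is roughly 0.055 and w(0.99) roughly 0.91: a one-in-a-hundred chance of disaster is felt five times over, while near-certainty is discounted, which is exactly the asymmetry the banking examples below turn on.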

Let us examine a few real-life cases. Just before Lehman Brothers collapsed, the flow of credit was at its highest level; if we denote it by w, varying from 0 to 1, then w was very close to 1 just before the crisis struck.

Before Lehman Brothers collapsed: w ∈ (0,1) → 1

When Lehman Brothers collapsed and the financial world went through a shock, this same w for the flow of credit moved to the opposite extreme, tending towards 0:

After Lehman Brothers collapsed: w ∈ (0,1) → 0

How did such a dramatic change happen? How did the market swing from one extreme to the other in so short a span? Mainly through probability weights: the weights moved into a non-linear distribution, with every bank manager raising his estimate of default from near zero to close to one. Covering the potential loss was the first knee-jerk reaction, and it drained an enormous amount of liquidity from the market, stalling all inter-bank lending.

Thus a low tail-risk event of default tempted a banker as solid as Goldman Sachs to approach Warren Buffett for a \$5 billion bail-out; the way Buffett estimated the probability weights was just the opposite of how the Goldman Sachs executives had estimated them. The same low tail-risk event was given different probability weights by two sets of people, and Buffett made a huge bonanza from the investment. Nor is it any longer a matter of conjecture that when inter-bank lending did restart it continued to bask in the paleness of the 'crisis' syndrome; it has never reverted to its long-term mean position. This is one of the fundamental reasons why, in spite of the Federal Reserve's unprecedented monetary loosening, almost a trillion dollars is parked as commercial banks' excess reserves on the liability side of the Fed balance sheet, making no meaningful contribution to goods and services.

Let me take the case of a high tail-risk event: a change in interest rates and its impact on the equity markets versus the bond markets. A positive delta in interest rates works almost inversely on bond prices, while for equities the result is a function of what the general economy would be doing in such a situation, and the impact of residual consumption expenditure on company performance has to be evaluated. Here we are in the conditional-probability paradigm, where a posterior event must be assessed with the aid of a prior one. The complication is that as bond prices fall when rates harden, the relative attractiveness of equity versus bonds becomes a dominant factor as well; the two work in tandem to determine the long-run outlook for equity prices. The real problem, however, is that in a dampened economy with the money flow constant, when bonds and equities start to attract a higher proportion of the total, what is left for consumers to actually spend on consumption goods shrinks. So we have an unsustainable situation: the bond and equity markets cannot forever stay disjointed from the rest of the economy and the consumer's spending habits, and a positive change in interest rates would not move those spending habits in the positive direction either. What does this leave us? A constant dosage of money supply is needed to make this dysfunctional equation work, and that can only be temporary, never a permanent affair.
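The inverse relation between rates and bond prices mentioned above is just discounting arithmetic; a minimal sketch with invented numbers (a 10-year bond paying a 5% annual coupon):

```python
def bond_price(face, coupon_rate, ytm, years):
    """Present value of annual coupons plus the face value at maturity."""
    coupons = sum(face * coupon_rate / (1 + ytm) ** t
                  for t in range(1, years + 1))
    principal = face / (1 + ytm) ** years
    return coupons + principal

p_low = bond_price(1000, 0.05, 0.05, 10)   # yield equals coupon: priced at par
p_high = bond_price(1000, 0.05, 0.06, 10)  # yields rise one point...
assert p_high < p_low                      # ...and the price falls (to ~926)
```

A one-point rise in the yield knocks roughly 7% off this bond's price, which is why the bond market's reaction to a rate change is mechanical while the equity market's, as argued above, is conditional.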

But how does probability weighting work here? Bond holders assess the interest-rate shift and its timing with certain probability weights, which may or may not coincide with those of equity holders, while general consumers, given the state of the economy, make their own judgment on the timing and quantum of any change. The Federal Reserve, through its forward guidance, creates some room for these judgments to be influenced. The resultant is what we observe as the 'reaction', metamorphosed into 'jobs created', 'income levels', or at the aggregate level the GDP itself; for each individual sector the weights could be quite different.

The weighting across individual sectors, consumers and corporates, and the imponderables that determine shifts in strategy, make for a complex matter; to put all the factors at play into a model would above all require non-linear mathematics. It suffices to say that when Central Banks try to simplify the impact of any change, what they actually intend is to make no change at all; it is the illusion of change that gyrates in prices as bets and counter-bets challenge the paradigm.

Procyon Mukherjee, 13th August, 2013

• ##### Unwise By One Third: The Fallacy Of Self-Correction

Bandwagon behavior, and the 'free-rider' problem coupled to it, has wide ramifications, from equity and derivatives trading to public accountability in capital raising and deployment, and in many other areas, including the simple experiment of finding the best candidate for a job among a large number of applicants. The behavior on most bourses, from opening bell to close, shows similarities that can be attributed partly to information asymmetry, and sometimes to the sheer difficulty of finding a mathematical solution to a complex problem where the unknowns are many or a simplistic linear model is not the right fit to reality. Relying on the availability heuristic, or copying the behavior of others, is the more 'sensible' response, though it may not be the more rational one. The mechanisms through which these actions are guided or influenced are a more recent field of study, in which market participants may actually be incentivized to act on the signals of others rather than actively seek information through personal inquiry. That timely and perfect information is a rarity in many areas compounds the problem further, plaguing financial markets in particular.

As David Brooks would put it, in "this buffered world of private choices", decisions are taken more sequentially than in an integrated manner, although it may seem otherwise. From planning decisions, where in the absence of perfect, up-to-date information the availability heuristic is the basis for taking a view, to the more venerable long-term corporate planning, sequential processes lead to decision making in which the decisions taken by others, and the prior probabilities of those decisions being correct, have a bearing on the posterior probabilities and on the decisions that follow.

In a world that is more oriented towards marketing a product, an idea or a value proposition, the same sequential decision processes drive results. The launch of a successful product in a test market is a pre-condition for its subsequent success; the idea behind any political move must test positively in a carefully chosen segment of the population to be of any significance going forward.

Drawing the first customers to a product or service is the fundamental driver of a successful launch, and a positive response can only catch on if it is carefully nurtured with the right interventions. No person's decision is cocooned and sequestered any more in a world far more networked than before, and as people sequentially exercise their choices on products, ideas and mandates, we get what is summarized as 'bandwagon effects', or the effect of 'information cascades'. From one man bent on his own instincts (he is likely to over-estimate his positivism when he is keen to buy, and to under-estimate his negativism when he wants to reject) to a herd relying on the instincts of others, we have a market phenomenon that can no longer be called a self-correcting mechanism. Thus when bubbles start to inflate it goes mostly unnoticed, and when credit is extended the inflection point is never fully understood, as there is no self-correcting force in play. Similarly, when too much credit is withdrawn from the market, it is never known when too much has already happened, as every bank watches the others in deciding its next course.

Abhijit Banerjee, in his seminal 1992 paper "A Simple Model of Herd Behavior", introduced the phenomenon of 'everyone doing what everyone else is doing although private information suggests doing something quite different'. His simple model brought to the fore the disastrous consequence of such an eventuality: "In equilibrium we find the reduction of informativeness to be so severe that in an ex ante welfare sense society may actually be better off by constraining some of the people to use only their own information." The Nash equilibrium that produces the most efficient solution is itself built on the sequential acceptance of others' choices, which are themselves based on choices exercised before theirs, and which may or may not be rational. Lack of informativeness in the final outcome is the unhappy denouement of the bandwagon effect.

Taking this case forward, a few more inter-connected issues surface. The signals that influence a decision are not free, nor can they always be priced correctly or traded without externalities. This makes modeling more difficult, and most experiments have simplified the problem away.

Asch's experiments between 1951 and 1956 (he used a number of confederates and a single real participant, who was the last to respond, after the confederates' responses were known) gave rise to the Asch paradigm; his later variations led to the same result, that "participants conformed to the majority group (confederate) in about one-third of all critical trials", regardless of the participant's own perceptions or preferences.

In his March 2008 article in The New York Times, 'How a Bubble Stayed Under the Radar', Robert Shiller took examples from the work of Bikhchandani, S., Hirshleifer, D., and Welch, I., where the interplay of prior and posterior probabilities explains how sequential decision making works.

David Easley shows how this works in a brilliant example from 'Networks, Crowds, and Markets: Reasoning About a Highly Connected World' (written with Jon Kleinberg):

"A person's signal telling them to accept is denoted as "H" (a high signal, where high signifies he should accept), and a signal telling them not to accept is "L" (a low signal). The model assumes that when the correct decision is to accept, individuals will be more likely to see an "H", and conversely, when the correct decision is to reject, individuals are more likely to see an "L" signal. This is essentially a conditional probabaility - the probability of "H" when the correct action is to accept, or P[H|A]. Similarly P[L|R] is the probability that an agent gets an "L" signal when the correct action is reject. If these likelihoods are represented by q, then q > 0.5. This is summarized in the table below.

| Agent Signal | True State: Reject | True State: Accept |
|--------------|--------------------|--------------------|
| L            | q                  | 1-q                |
| H            | 1-q                | q                  |

The first agent decides whether to accept solely on the basis of his own signal. As the model assumes that all agents act rationally, the agent takes whichever action (accept or reject) he believes more likely to be correct. This decision can be explained using Bayes' rule:

P[A|H] = P[A] · P[H|A] / P[H] = pq / (pq + (1-p)(1-q))

If the agent receives an "H" signal, the likelihood of accepting is obtained by calculating P[A|H]. The equation says that, by virtue of the fact that q > 0.5, the first agent, acting only on his private signal, will always increase his estimate of p on an "H" signal. Similarly, it can be shown that an agent will always decrease his estimate of p when he receives a low signal. Recall that if the value "V" of accepting equals the value of rejecting, an agent will accept if he believes p > 0.5, and reject otherwise. Because this agent started out assuming that accepting and rejecting were equally viable options (p = 0.5), the observation of an "H" signal allows him to conclude that accepting is the rational choice.

The second agent then considers both the first agent's decision and his own signal, again in a rational fashion. In general, the nth agent considers the decisions of the previous n-1 agents along with his own signal, and makes a decision based on Bayesian reasoning to determine the most rational choice."
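The single-agent update in the passage above can be verified in a couple of lines, a sketch using the passage's own numbers (flat prior p = 0.5, signal accuracy q = 0.6):

```python
def posterior_accept(p, q):
    """P[A|H]: Bayes update of the prior p after one 'H' signal
    that is correct with probability q (q > 0.5 raises the estimate)."""
    return p * q / (p * q + (1 - p) * (1 - q))

post = posterior_accept(0.5, 0.6)
assert post > 0.5   # a single H signal lifts p from 0.5 to 0.6
```

The update is symmetric: an "L" signal would shrink p to 0.4 by the same arithmetic, which is why the first agent simply follows his own signal.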

This is summarized by Robert Shiller as: "In other words, more than one-third of the time, rational individuals, each given information that is 60 percent accurate, will reach the wrong collective conclusion."

The market's collective conclusion rests on sequential reasoning in which informativeness is itself scarce and on shaky ground; it follows that when the market as a whole is taking an irrational decision, the chances of that being deciphered and acted on by an individual participant are remote. When the market itself is one-third unwise, as individual participants respond to the responses of others as in the Asch experiment, the self-correcting principle of the market falls flat; calling it a fallacy is no longer a stretch.
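The "one-third unwise" figure can be approximated by simulating the sequential model. This is only a sketch under stated assumptions: a cascade locks in once the net count of inferred signals reaches two, and indifferent agents follow their own signal (one common tie-break convention; the exact number depends on that choice). With q = 0.6, roughly three runs in ten end in a wrong cascade, the same order as Shiller's one-third:

```python
import random

def run_cascade(q, n_agents, rng):
    """One run of a sequential-choice model; the true state is Accept.
    Each agent sees all earlier actions plus a private signal that is
    correct with probability q; ties are broken by the agent's own signal.
    Returns True if the run ends locked in a wrong (reject) cascade."""
    lead = 0  # net count of H signals revealed by earlier actions
    for _ in range(n_agents):
        if abs(lead) >= 2:
            break                      # cascade: later agents just copy
        signal = 1 if rng.random() < q else -1   # H = +1, L = -1
        total = lead + signal
        action = signal if total == 0 else (1 if total > 0 else -1)
        lead += action                 # pre-cascade, actions reveal signals
    return lead <= -2

rng = random.Random(42)
q, trials = 0.6, 20_000
wrong = sum(run_cascade(q, 100, rng) for _ in range(trials)) / trials
# gambler's-ruin closed form under this tie-break: (1-q)^2 / (q^2 + (1-q)^2)
analytic = (1 - q) ** 2 / (q ** 2 + (1 - q) ** 2)   # = 4/13, about 0.31
assert abs(wrong - analytic) < 0.02
```

The striking part is that the wrong-cascade probability does not shrink as more agents arrive: once two net reject signals accumulate, every subsequent rational agent ignores his own information, which is exactly the loss of informativeness Banerjee's model warns about.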

This leads us to the conclusion that bubbles burst only when the crisis is full-blown, when the collective conclusion reaches the denouement where the Nash equilibrium shifts; the collapse of a paradigm needs only one small nudge against a mountain of wisdom that is more unwise than wise.

Procyon Mukherjee 25th July 2013

• ##### Gyrations In Circulating Capital: The Special Case Of Savers

I am sorry that I am caught between the urge to understand and to respond as I go through the motions of reflecting on what I gather as one of the least cultivated subjects of our time, while being the most visible one in the blogosphere.

I have a nagging doubt, at times tilting towards disbelief, about the fractional reserve system and its purpose as it stands today; or for that matter about money itself, as it mistakenly replaces capital, and in its over-abundance we find a scarcity of ideas for building the meaningful foundation of an economy, one that does not falter from one day to the next following cues, or an information over-drive that masks the true underlying facts.

Money must move between savers and borrowers, and the exchange must serve an economic need; the fractional reserve system, together with Central bank engineering initiatives, cannot replace a basic structure of lending without leaving one section completely dis-incentivized at the cost of the other. Is it possible to have only borrowers (going by the incentives on offer) and no incentives for savers in an economy? Can an economy be premised entirely on making people spend and consume?

Keynes first raised the issue in his famous 'paradox of thrift', where he posited that individuals' desire to save more and consume less makes the societal outcome severe, in a recession in particular, as aggregate demand suffers from the lack of aggregate spending. Here we must be careful to examine what exactly Keynes meant by 'savings' in this example: money taken out of circulation, a negative change in circulating capital. This is a significant distinction from the usual connotation of savings as investment in fixed-income assets, possibly low-yielding and riskless; it would otherwise be naïve to imagine that savings are stashed under the carpet for rainy days, and that assumption would alter the nature of the debate.

If savers are forced into saving for want of adequate future or present income, they may indeed take money out of circulation, just as Keynes argued; spending, which is also the other man's income, is hit, and as a cascade effect fewer goods and services are eventually produced and consumed across the economy. This cascade effect creates an output gap that prolongs the recession.

F.A. Hayek, on the other hand, in his famous essay "The Paradox of Saving", examined the illustrious work of Foster and Catchings in their essay "The Dilemma of Thrift" and their books "Business Without a Buyer" and "Profits". He argued that the dilemma, or paradox, could actually be resolved by moving savings out of their stupor of idleness, transforming their unproductive stasis through the locomotion of investment, so that others in want of capital, with the means of using it productively, could put it to better use 'producing' what would be 'consumed' through a market-clearing mechanism.

This was the first ideation of the usefulness of savings, contrary to the long, arduous treatment economists had meted out to the subject, and somewhat at variance with the generally accepted hypothesis that saving takes spending out of an economy and is at the root of recessionary posturing.

Circulating Capital and what happens when it does not move into goods and services

Here we are forced to distinguish between circulating capital, capital proper (investable surplus, as we define it), and the monetary supply created through the fractional reserve system, denominated progressively as M0, M1 or M2 (the narrowest of which, M0, is the monetary base).

The Quantity Theory of Money suggests, through the equation of exchange, that M·V = P·Q.

As an example, M might represent currency plus deposits in checking and savings accounts held by the public, Q real output (which equals real expenditure in macroeconomic equilibrium), P the corresponding price level, and P·Q the nominal (money) value of output. In one empirical formulation, velocity is taken to be "the ratio of net national product in current prices to the money stock". Money could be replaced by circulating capital in this example.
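As a toy illustration of the identity (all numbers invented for the sketch):

```python
# Equation of exchange, M * V = P * Q, with invented numbers:
M = 10_000        # money stock: currency plus checking/savings deposits
P = 1.25          # price level
Q = 40_000        # real output
V = (P * Q) / M   # velocity implied by the identity
assert M * V == P * Q   # the identity holds by construction
```

The substantive question the following paragraphs raise is what happens to V, and to P·Q, when part of M circulates only through paper assets rather than through goods and services.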

Here we must turn to the concept of circulating capital, first proposed by Marx and later elaborated by Keynes: the engine through which capital is created, which in its circulating form makes transactions possible and, by the repeatability of transactions (combined with interchangeability from hand to hand), enhances the prospects of generating a surplus. In lay terms, the monetary release moves through money in circulation and deposits to make meaningful exchanges of goods and services possible, moving the engine of the economy from a state of inaction to a state of surplus creation. If the creation of surplus in any form takes money out of circulation, negatively impacting circulating capital, we have a situation of stasis returning, with a deleterious impact on the creation of further surplus.

Here again we see that circulating capital (whether advanced by the Central bank or otherwise) has the sole purpose of creating goods and services to be exchanged through a market-clearing mechanism, and the whole quantity theory of money rests on the assumption that the velocity of money (the speed with which goods and services change hands) induces higher output. The point to note is that if circulating capital does not move into goods and services, but serves instead as a medium for buying and selling equities or other paper assets, its impact on total output may not equal the alternative in which it buys and sells goods (which effectively creates direct income streams). This partial response of output when circulating capital is used for speculative transactions rather than production has been a great remiss of our times, and the least mentioned subject in the recent debates.

So when the monetary base is increased (say M2) and the big delta is used to buy equities or fund other speculative ventures, there is only a partial response in output. In the recent example, the rise in Central Bank balance sheets coincided almost exactly with the rise in liabilities as commercial banks' excess reserves soared, another indication that very little of the change in circulating capital moved into real goods and services. This fits the hypothesis that a portion of the circulating capital that once financed the production of goods was systemically withdrawn during the recession, and that with the 'easy money' situation it has since moved further into non-goods, non-service areas (equities and other speculative assets), with only a partial impact on the revival of economic output.

What does it leave for the savers?

In an environment where an output gap exists and circulating capital has been taken out of goods and services to fuel speculative ventures, savers (corporates included) have every reason to think twice before locking their hard-earned savings into investments tied to production or to building infrastructure.

This is simply because there is already such an excess stock of savings (the corporate world is a harbinger of savings) that the avenues for investment have long been tested beyond the threshold at which any further stock of investment makes sense (unless someone wants to build over-capacity on top of the current stock).

Corporate savings therefore cannot move any further into investment channels for creating capacity; with interest rates at record lows, fixed-income streams of investment make no sense, and the bond markets are even more turbulent with yield curves shifting upwards as positions unwind; the odds of seeking alpha are moving towards stiff uncertainty.

Individual small savers have no incentive to take money out of consumption expenditure and put it into assets that are either risky or yield fixed-income streams that would be worthless in the event of an inflation surge; parking in safe havens, even as un-invested 'carry', makes sense.

Banks are already in "carry" trades as never before, and it makes perfect sense for them. But carry works with an upward-sloping yield curve and loses money if the curve inverts. Many investment banks have failed because they borrowed cheap short-term money to fund higher-interest long-term positions; when the long-term positions default, or the short-term rate rises too high (or there are simply no lenders), the bank cannot meet its short-term liabilities and goes under.
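The mechanics can be sketched in two lines (illustrative rates only; this ignores default risk, rollover and mark-to-market, which are exactly what sinks the failed banks described above):

```python
def carry_pnl(short_rate, long_rate, notional):
    """Annual net interest from borrowing short to lend long."""
    return notional * (long_rate - short_rate)

# upward-sloping curve: borrow at 1%, lend at 4% -> positive carry
assert carry_pnl(0.01, 0.04, 1_000_000) > 0
# inverted curve: the same position bleeds money
assert carry_pnl(0.05, 0.04, 1_000_000) < 0
```

The trade is profitable only while the funding leg stays cheap, which is why the hint of a curve inversion, discussed next, changes the whole calculus.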

The current carry investment worked wonders with constant doses of money, which kept the yield curve upward-sloping; at the slightest hint of unwinding, lenders seek long-term debt contracts more aggressively than short-term ones, and the yield curve "inverts", with interest rates (yields) lower for the longer periods of repayment.

Dollar carry and yen carry could be unwinding fast, as new hints fill the air. In the current climate predictability has reached a nadir: small savers have no incentive to save, while the borrowers who gain must be those lucky few who can make the most of volatility rather than of any solidity in trading positions.

Going back to Hayek and the usefulness of savings: we have a new normal in which, given the limits on fixed-asset investment imposed by the stock of capacity already in existence, and with no new innovation in products or services, we are saddled with a situation where savings cannot be channeled into gainful use by those who could convert them to productive ends. The rising capital share of income and the falling labor share work further against the cause of saving, as net purchasing power has not moved in tandem to support increased consumption, which remains the most potent missing element in bringing parity between savers and borrowers.

For the engine to work smoothly, circulating capital must produce goods that are cleared by the market, for which consumption holds the key through the potency of purchasing power.

We are back to the basics.

Procyon Mukherjee 24th June 2013.
