
Sunday 18 June 2023

Economics Essay 76: Rational Actor

Discuss the view that individual economic agents will always act as rational decision makers so as to maximise their utility.

To properly discuss the view that individual economic agents will always act as rational decision-makers to maximize their utility, it's important to define and explain the key terms involved.

  1. Rational Decision-Making: Rational decision-making refers to the process of making choices that are consistent with one's preferences and objectives, based on a careful evaluation of available information and the expected outcomes of different options. Rational decision-makers aim to optimize their choices to maximize their expected utility.

  2. Utility: In economics, utility represents the satisfaction or value that individuals derive from consuming goods or services. It is a subjective measure of individual preferences and can vary from person to person. Utility can be expressed in different ways, such as happiness, well-being, or satisfaction.

Now, let's discuss the view that individual economic agents will always act as rational decision-makers to maximize their utility.

Supporters of this view argue that individuals possess rationality and have a clear understanding of their own preferences. They believe that individuals carefully assess the available choices, evaluate the costs and benefits associated with each option, and select the one that maximizes their utility. The rational decision-making model assumes that individuals have perfect information, are able to process information accurately, and act in their self-interest.

However, critics of this view highlight several limitations and challenges to the assumption of universal rationality:

  1. Bounded Rationality: Human beings have cognitive limitations, and their ability to process information and make decisions is bounded. Limited time, cognitive biases, and imperfect information can lead to decision-making that deviates from the rational model.

  2. Emotion and Psychology: Emotional factors and psychological biases can influence decision-making. People may make choices based on non-economic factors, social norms, or irrational beliefs, even if they are not in their best economic interest.

  3. External Influences: The decisions of individuals are influenced by external factors such as social pressure, cultural norms, and advertising. These influences may divert individuals from making strictly rational choices.

  4. Risk and Uncertainty: Rational decision-making assumes that individuals can accurately assess the risks and uncertainties associated with different options. However, people often face situations of uncertainty where the outcomes and probabilities are unknown, leading to decision-making based on imperfect information.

In reality, individuals exhibit a combination of rational and non-rational behavior, and their decision-making is influenced by a range of factors. While economic theory often assumes rationality, behavioral economics has highlighted the importance of understanding human behavior in a more realistic and nuanced way.

In conclusion, while the view that individuals always act as rational decision-makers to maximize their utility provides a useful framework for analyzing economic behavior, it is important to recognize the limitations and deviations from rationality that exist in real-world decision-making. Understanding the complexities of human behavior can provide valuable insights into economic outcomes and policy interventions.

Friday 16 June 2023

Fallacies of Capitalism 5: The Self-Regulating Market Fallacy

How does the "self-regulating markets" fallacy fail to account for the need for government intervention to address market failures and ensure fair competition? 


The "self-regulating markets" fallacy is the belief that markets can regulate themselves without the need for government intervention. This idea suggests that if left to their own devices, markets will naturally correct any imbalances and ensure fair competition. However, this fallacy overlooks the need for government intervention to address market failures and promote a level playing field. Let's explore this concept with simple examples:

  1. Market failures: Markets can experience various failures that prevent them from functioning optimally. For instance, externalities like pollution or the depletion of natural resources are costs or benefits that affect third parties not directly involved in transactions. Without government intervention, these external costs or benefits are not taken into account, leading to inefficient outcomes. For example, if factories are allowed to pollute freely, it may harm public health and damage the environment, but the market alone may not correct this issue. Government intervention, through regulations or taxes, can internalize these externalities and ensure a more efficient allocation of resources.

  2. Monopolies and market power: Unregulated markets can result in the concentration of market power and the emergence of monopolies. Monopolies can abuse their power by setting high prices, reducing quality, and stifling competition. This restricts consumer choice and hampers innovation. Government intervention, such as antitrust laws and regulations, helps prevent and address monopolistic behavior, promoting fair competition and benefiting consumers. For example, if a single company dominates the internet search engine market, it may unfairly prioritize its own services over competitors' offerings, leading to biased search results. Government intervention can help maintain a competitive market where multiple players have an equal opportunity to compete.

  3. Information asymmetry: In many transactions, there is an imbalance of information between buyers and sellers. This information asymmetry can lead to market failures. For instance, in the market for used cars, sellers may have more information about the condition of the vehicle than buyers. This can result in "lemons" being sold at higher prices, as buyers are unable to make informed decisions. Government intervention, such as consumer protection laws and regulations, can require sellers to disclose relevant information and ensure transparency, enabling fair transactions and reducing information asymmetry.

  4. Ensuring fair competition: Self-regulating markets may not always guarantee fair competition. Unfair business practices, such as price fixing, collusion, or deceptive advertising, can harm consumers and undermine competition. Government intervention through competition policies and regulatory bodies ensures that businesses compete on a level playing field, preventing anti-competitive behavior and promoting fair markets. For example, if two competing companies agree to fix prices, it harms consumers who are deprived of the benefits of competitive pricing. Government intervention can enforce regulations that prohibit such anti-competitive practices.

In summary, the "self-regulating markets" fallacy fails to account for the need for government intervention to address market failures, prevent monopolies, mitigate information asymmetry, and ensure fair competition. Without appropriate regulations and interventions, markets can result in inefficient outcomes, reduced consumer welfare, and unequal distribution of resources. Government intervention plays a crucial role in maintaining a well-functioning and fair economic system.

Fallacies of Capitalism 3: The Invisible Hand Fallacy

 Explain the fallacy of the Invisible Hand.


The "invisible hand" fallacy is a misunderstanding of the concept coined by economist Adam Smith. It suggests that if individuals pursue their own self-interest in a free market, an "invisible hand" will guide their actions to benefit society as a whole. However, this fallacy overlooks the limitations and shortcomings of relying solely on market forces. Let's explore it with simple examples:

  1. Externalities: The invisible hand fallacy fails to account for externalities, which are the unintended effects of economic activities on third parties. For instance, imagine a factory that pollutes the environment while producing goods. The pursuit of self-interest by the factory owner may lead to increased profits, but it ignores the negative impact on the health and well-being of nearby communities. The invisible hand does not automatically correct or internalize these external costs, resulting in a market failure that harms society.

  2. Monopolies and market power: In some cases, the pursuit of self-interest can lead to the concentration of market power and the emergence of monopolies. Monopolies can manipulate prices, restrict competition, and exploit consumers, leading to inefficient outcomes and reduced overall welfare. For example, a dominant technology company may abuse its market power by setting high prices or stifling innovation, which is detrimental to consumers and smaller businesses. The invisible hand does not necessarily prevent the abuse of market power.

  3. Information asymmetry: The invisible hand fallacy assumes that all participants in the market have perfect information and are capable of making rational decisions. However, in reality, there is often a disparity in knowledge between buyers and sellers. For example, imagine a used car market where sellers are aware of hidden defects, but buyers are not. As a result, buyers may make suboptimal decisions and end up with lemons (defective cars). The invisible hand does not automatically address information asymmetry, leading to inefficient outcomes.

  4. Unequal bargaining power: The invisible hand fallacy assumes that all market participants have equal bargaining power. However, in practice, there can be significant disparities in bargaining power between buyers and sellers or between employers and employees. For instance, workers with limited job opportunities may accept low wages and poor working conditions due to the lack of alternatives. The invisible hand does not necessarily ensure fair and equitable outcomes in such situations.

In summary, the "invisible hand" fallacy suggests that individual pursuit of self-interest in a free market will automatically lead to societal benefits. However, this fallacy neglects the presence of externalities, market power, information asymmetry, and unequal bargaining power, which can result in inefficient and unfair outcomes. Recognizing these limitations is crucial for implementing regulations, policies, and institutions that can correct market failures and promote a more equitable and efficient economy.

Monday 27 June 2022

Don’t date anybody if you only want positive results! Life is poker not chess

Abridged and adapted from Thinking in Bets by Annie Duke





Suppose someone says, “I flipped a coin and it landed heads four times in a row. How likely is that to occur?”


It feels like that should be a pretty easy question to answer. Once we do the maths on the probability of heads on four consecutive 50-50 flips, we can determine that would happen 6.25% of the time (0.5 x 0.5 x 0.5 x 0.5).
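(A quick aside: a minimal Python check of that arithmetic. The 0.5 probability per flip comes from the example above; the simulation size is an arbitrary choice.)

```python
# Probability of four heads in a row on a fair coin, as computed above.
p_four_heads = 0.5 ** 4
print(p_four_heads)  # 0.0625, i.e. 6.25%

# The same probability estimated by simulation (sample size is arbitrary).
import random

trials = 100_000
hits = sum(all(random.random() < 0.5 for _ in range(4)) for _ in range(trials))
print(hits / trials)  # roughly 0.0625
```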


The problem is that we came to this answer without knowing anything about the coin or the person flipping it. Is it a two-sided coin or three-sided or four? If it is two-sided, is it a two-headed coin? Even if the coin is two-sided, is the coin weighted to land on heads more often than tails? Is the coin flipper a magician who is capable of influencing how the coin lands? This information is all incomplete, yet we answered the question as if we had examined the coin and knew everything about it.


Now if that person flipped the coin 10,000 times, giving us a sufficiently large sample size, we could figure out, with some certainty, whether the coin is fair. Four flips simply isn’t enough to determine much about the coin.


We make this same mistake when we look for lessons in life’s results. Our lives are too short to collect enough data from our own experience to make it easy to dig down into decision quality from the small set of results we experience. If we buy a house, fix it up a little, and sell it three years later for 50% more than we paid, does that mean we are smart at buying and selling property, or at fixing up houses? It could, but it could also mean there was a big upward trend in the market and buying almost any piece of property would have made just as much money. Bitcoin buyers may now wonder about the wisdom of their decisions.


The hazards of resulting


Take a moment to imagine your best decision or your worst decision. I’m willing to bet that your best decision preceded a good result and the worst decision preceded a bad result. This is a safe bet for me because we deduce an overly tight relationship between our decisions and the consequent results. 


There is an imperfect relationship between results and decision quality. I never seem to come across anyone who identifies a bad decision when they got lucky with the result, or a well-reasoned decision that didn’t work out. We are uncomfortable with the idea that luck plays a significant role in our lives. We assume causation when there is only a correlation and tend to cherry-pick data to confirm the narrative we prefer.


Poker and decisions


Poker is a game that mimics human decision making. Every poker hand requires making at least one decision (to fold or to stay) and some hands can require up to twenty decisions. During a poker game players get in about thirty hands per hour. This means a poker player makes hundreds of decisions at breakneck speed with every hand having immediate financial consequences. 


It is a game of decision making with incomplete information. Valuable information remains hidden. There is also an element of luck in any outcome. You could make the best possible decision at every point and still lose the hand, because you don’t know what new cards will be dealt and revealed.


In addition, once the game is over, poker players must learn from that jumbled mass of decisions and outcomes, separating the luck from the skill, and guarding against using results to justify or criticise the decisions made.


The quality of our lives is the sum of decision quality plus luck. Poker is a mirror to life: it helps us recognise the mistakes we never spot because we win the hand anyway, and the times we do everything right, still lose, and then treat the losing result as proof that we made a mistake.


Decisions are bets on the future


Decisions aren’t ‘right’ or ‘wrong’ based on whether they turn out well on any particular iteration. An unwanted result doesn’t make our decision wrong if we had thought about the alternatives and probabilities in advance and made our decisions accordingly. 


Our world is structured to give us lots of opportunities to feel bad about being wrong if we want to measure ourselves by outcomes. Don’t fall in love or even date anybody if you want only positive results.





Friday 25 February 2022

Deception and destruction can still blind the enemy

From The Economist

There are four ways for those who would hide to fight back against those trying to find them: destruction, deafening, disappearance and deception. Technological approaches to all of those options will be used to counter the advantages that bringing more sensors to the battlespace offers. As with the sensors, what those technologies achieve will depend on the tactics used.

Destruction is straightforward: blow up the sensor. Missiles which home in on the emissions from radars are central to establishing air superiority; one of the benefits of stealth, be it that of an F-35 or a Harop drone, lies in getting close enough to do so reliably.

Radar has to reveal itself to work, though. Passive systems can be both trickier to sniff out and cheaper to replace. Theatre-level air-defence systems are not designed to spot small drones carrying high-resolution smartphone cameras, and would be an extraordinarily expensive way of blowing them up.

But the ease with which American drones wandered the skies above Iraq, Afghanistan and other post-9/11 war zones has left a mistaken impression about the survivability of UAVs. Most Western armies have not had to worry about things attacking them from the sky since the Korean war ended in 1953. Now that they do, they are investing in short-range air defences. Azerbaijan’s success in Nagorno-Karabakh was in part down to the Armenians not being up to snuff in this regard. Armed forces without many drones—which is still most of them—will find their stocks quickly depleted if used against a seasoned, well-equipped force.

Stocks will surely increase if it becomes possible to field more drones for the same price. And low-tech drones which can be used as flying IEDs will make things harder when fighting irregular forces. But anti-drone options should get better too. Stephen Biddle of Columbia University argues that the trends making drones more capable will make anti-drone systems better, too. Such systems actually have an innate advantage, he suggests; they look up into the sky, in which it is hard to hide, while drones look down at the ground, where shelter and camouflage are more easily come by. And small motors cannot lift much by way of armour.

Moving from cheap sensors to the most expensive, satellites are both particularly valuable in terms of surveillance and communication and very vulnerable. America, China, India and Russia, all of which would rely on satellites during a war, have all tested ground-launched anti-satellite missiles in the past two decades; some probably also have the ability to kill one satellite with another. The degree to which they are ready to gouge out each other’s eyes in the sky will be a crucial indicator of escalation should any of those countries start fighting each other. Destroying satellites used to detect missile launches could presage a pre-emptive nuclear strike—and for that very reason could bring one about.

Everybody has a plan until they get punched in the face

Satellites are also vulnerable to sensory overload, as are all sensors. Laser weapons which blind humans are outlawed by international agreement but those that blind cameras are not; nor are microwave beams which fry electronics. America says that Russia tries to dazzle its orbiting surveillance systems with lasers on a regular basis.

The ability to jam, overload or otherwise deafen the other side’s radar and radios is the province of electronic warfare (EW). It is a regular part of military life to probe your adversaries’ EW capabilities when you get a chance. The deployment of American and Russian forces close to each other in northern Syria provided just such an opportunity. “They are testing us every day,” General Raymond Thomas, then head of American special forces, complained in 2018, “knocking our communications down” and going so far as “disabling” America’s own EC-130 electronic-warfare planes.

In Green Dagger, an exercise held in California last October, an American Marine Corps regiment was tasked with seizing a town and two villages defended by an opposing force cobbled together from other American marines, British and Dutch commandos and Emirati special forces. It struggled to do so. When small teams of British commandos attacked the regiment’s rear areas, paralysing its advance, the marines were hard put to target them before they moved, says Jack Watling of the Royal United Services Institute, a think-tank in London. One reason was the commandos’ effective EW attacks on the marines’ command posts.

Just as what sees can be blinded and what hears, deafened, what tries to understand can be confused. Britain’s national cyber-strategy, published in December, explicitly says that one task of the country’s new National Cyber Force, a body staffed by spooks and soldiers, is to “disrupt online and communications systems”. Armies that once manoeuvred under air cover will now need to do so under “cyber-deception cover”, says Ed Stringer, a retired air marshal who led recent reforms in British military thinking. “There’s a point at which the screens of the opposition need to go a bit funny,” says Mr Stringer, “not so much that they immediately spot what you’re doing but enough to distract and confuse.” In time the lines between EW, cyber-offence and psychological operations seem set to blur.

The ability to degrade the other side’s sensors, interrupt its communications and mess with its head does not replace old-fashioned camouflage and newfangled stealth; they remain the bread and butter of a modern military. Tanks are covered in foliage; snipers wear ghillie suits. Warplanes use radiation-absorbent material and angled surfaces so as not to reflect radio waves back to the radar that sent them. Russia has platoons dedicated to spraying the air with aerosols designed to block ultraviolet, infrared and radar waves. During their recent border stand-off, India and China both employed camouflage designed to confuse sensors with a broader spectral range than the human eye.

According to Mr Biddle, over the past 30 years “cover and concealment”, along with other tactics, have routinely allowed forces facing American precision weapons to avoid major casualties. He points to the examples of al-Qaeda at the Battle of Tora Bora in eastern Afghanistan in 2001 and Saddam Hussein’s Republican Guard in 2003, both of whom were overrun in close combat rather than through long-range strikes. Weapons get more lethal, he says, but their targets adapt.

Hiding is made easier by the fact that the seekers’ new capabilities, impressive as they may be, are constrained by the realities of budgets and logistics. Not everything armies want can be afforded; not everything they procure can be put into the field in a timely manner. In real operations, as opposed to PowerPoint presentations, sensor coverage is never unlimited.

“There is no way that we're going to be able to see everything, all of the time, everywhere,” says a British general. “It's just physically impossible. And therefore there will always be something that can happen without us seeing it.” In the Green Dagger exercise the attacking marine regiment lacked thermal-imaging equipment and did not have prompt access to satellite pictures. It was a handicap, but a realistic one. Rounding up commandos was not the regiment’s “main effort”, in military parlance. It might well not have been kitted out for it.

When hiding is hard, it helps to increase the number of things the enemy has to look at. “With modern sensors…it is really, really difficult to avoid being detected,” says Petter Bedoire, the chief technology officer for Saab, a Swedish arms company. “So instead you need to saturate your adversaries’ sensors and their situational awareness.” A system looking at more things will make more mistakes. Stretch it far enough and it could even collapse, as poorly configured servers do when hackers mount “denial of service” attacks designed to overwhelm them with internet traffic.

Dividing your forces is a good way to increase the cognitive load. A lot of small groups are harder to track and target than a few big ones, as the commandos in Green Dagger knew. What is more, if you take shots at one group you reveal some of your shooters to the rest. The less valuable each individual target is, the bigger an issue that becomes.

Decoys up the ante. During the first Gulf war Saddam Hussein unleashed his arsenal of Scud missiles on Bahrain, Israel and Saudi Arabia. The coalition Scud hunters responsible for finding the small (on the scale of a vast desert) mobile missile launchers he was using seemed to have all the technology they might wish for: satellites that could spot the thermal-infrared signature of a rocket launch, aircraft bristling with radar and special forces spread over tens of thousands of square kilometres acting as spotters. Nevertheless an official study published two years later concluded that there was no “indisputable” proof that America had struck any launchers at all “as opposed to high-fidelity decoys”.

One of the advantages data fusion offers seekers is that it demands more of decoys; in surveillance aircraft electronic emissions, radar returns and optical images can now be displayed on a single screen, highlighting any discrepancies between an object’s visual appearance and its electronic signature. But decoy-making has not stood still. Iraq’s fake Scuds looked like the real thing to UN observers just 25 metres away; verisimilitude has improved “immensely” since then, particularly in the past decade, says Steen Bisgaard, the founder of GaardTech, an Australian company which builds replica vehicles to serve as both practice targets and decoys.

Mr Bisgaard says he can sell you a very convincing mobile simulacrum of a British Challenger II tank, one with a turret and guns that move, the heat signature of a massive diesel engine and a radio transmitter that works at military wavelengths, all for less than a 20th of the £5m a real tank would set you back. Shipped in a flat pack it can be assembled in an hour or so.

Seeing a tank suddenly appear somewhere, rather than driving there, would be something of a giveaway. But manoeuvre can become part of the mimicry. Rémy Hemez, a French army officer, imagines a future where armies deploy large “robotic decoy formations using AI to move along and create a diversion”. Simulating a build-up like the one which Russia has emplaced on Ukraine’s border is still beyond anyone’s capabilities. But decoys and deception—in which Russia’s warriors are well versed—can be used to confuse.

Disappearance and deception often have synergy. Stealth technologies do not need to make an aircraft completely invisible. Just making its radar cross-section small enough that a cheap little decoy can mimic it is a real advantage. The same applies, mutatis mutandis, to submarines. If you build lots of intercontinental-ballistic-missile silos but put ICBMs into only a few—a tactic China may be exploring—an enemy will have to use hundreds of its missiles to be sure of getting a dozen or so of yours.

Shooting at decoys is not just a waste of material. It also reveals where your shooters are. Silent Impact, a 155mm artillery shell produced by SRC, an American firm, can transmit electronic signals as if it were a radar or a weapons platform as it flies through the sky and settles to the ground under a parachute. Any enemy who takes the bait reveals the position of their guns.

The advent of AI should offer new ways of telling the real from the fake; but it could also offer new opportunities for deception. The things that make an AI say “Tank!” may be quite different to what humans think of as tankiness, thus unmasking decoys that fool humans. At the same time the AI may ignore features which humans consider blindingly obvious. Benjamin Jensen of American University tells the story of marines training against a high-end sentry camera equipped with object-recognition software. The first marines, who tried to sneak up by crawling low, were quickly detected. Then one of them grabbed a piece of tree bark, placed it in front of his face and walked right up to the camera unmolested. The system saw nothing out of the ordinary about an ambulatory plant.

The problem is that AIs, and their masters, learn. In time they will rumble such hacks. Basing a subsequent all-out assault on Birnam Wood tactics would be to risk massacre. “You can always beat the algorithm once by radical improvisation,” says Mr Jensen. “But it's hard to know when that will happen.”

The advantages of staying put

Similar uncertainties will apply more widely. Everyone knows that sensors and autonomous platforms can get cheaper and cheaper, that computing at the edge can reduce strain on the capacity of data systems, and that all this can make kill chains shorter. But the rate of progress—both your progress, and your adversaries’—is hard to gauge. Who has the advantage will often not be known until the forces contest the battlespace.

The unpredictability extends beyond who will win particular fights. It spreads out to the way in which fighting will best be done. Over the past century military thinking has contrasted attrition, which wears down the opponent’s resources in a frontal slugfest, and manoeuvre, which seeks to use fast-moving forces to disrupt an enemy’s decision-making, logistics and cohesion. Manoeuvre offers the possibility of victory without the wholesale destruction of the enemy’s forces, and in the West it has come to hold the upper hand, with attrition often seen as a throwback to a more primitive age.

That is a mistake, argues Franz-Stefan Gady of the International Institute for Strategic Studies, a think-tank. Surviving in an increasingly transparent battlespace may well be possible. But it will take effort. Both attackers who want to take ground and defenders who wish to hold it will need to build “complex multiple defensive layers” around their positions, including air defences, electronic countermeasures and sensors of their own. Movement will still be necessary—but it will be dispersed. Consolidated manoeuvres big and sweeping enough to generate “shock and awe” will be slowed down by unwieldy aerial electromagnetic umbrellas and advertise themselves in advance, thereby producing juicy targets.

The message of Azerbaijan’s victory is not that blitzkrieg has been reborn and “the drone will always get through”. It is that preparation and appropriate tactics matter as much as ever, and you need to know what to prepare against. The new technologies of hide and seek will sometimes—if Mr Gady is right, often—favour the defence. A revolution in sensors, data and decision-making built to make targeting easier and kill chains quicker may yet result in a form of warfare that is slower, harder and messier.

Thursday 28 October 2021

Information Asymmetry

From the Economist Schools Brief


In 2007 the state of Washington introduced a new rule aimed at making the labour market fairer: firms were banned from checking job applicants’ credit scores. Campaigners celebrated the new law as a step towards equality—an applicant with a low credit score is much more likely to be poor, black or young. Since then, ten other states have followed suit. But when Robert Clifford and Daniel Shoag, two economists, recently studied the bans, they found that the laws left blacks and the young with fewer jobs, not more.

Before 1970, economists would not have found much in their discipline to help them mull this puzzle. Indeed, they did not think very hard about the role of information at all. In the labour market, for example, the textbooks mostly assumed that employers know the productivity of their workers—or potential workers—and, thanks to competition, pay them for exactly the value of what they produce.

You might think that research upending that conclusion would immediately be celebrated as an important breakthrough. Yet when, in the late 1960s, George Akerlof wrote “The Market for Lemons”, which did just that, and later won its author a Nobel prize, the paper was rejected by three leading journals. At the time, Mr Akerlof was an assistant professor at the University of California, Berkeley; he had only completed his PhD, at MIT, in 1966. Perhaps as a result, the American Economic Review thought his paper’s insights trivial. The Review of Economic Studies agreed. The Journal of Political Economy had almost the opposite concern: it could not stomach the paper’s implications. Mr Akerlof, now an emeritus professor at Berkeley and married to Janet Yellen, the chairman of the Federal Reserve, recalls the editor’s complaint: “If this is correct, economics would be different.”

In a way, the editors were all right. Mr Akerlof’s idea, eventually published in the Quarterly Journal of Economics in 1970, was at once simple and revolutionary. Suppose buyers in the used-car market value good cars—“peaches”—at $1,000, and sellers at slightly less. A malfunctioning used car—a “lemon”—is worth only $500 to buyers (and, again, slightly less to sellers). If buyers can tell lemons and peaches apart, trade in both will flourish. In reality, buyers might struggle to tell the difference: scratches can be touched up, engine problems left undisclosed, even odometers tampered with.

To account for the risk that a car is a lemon, buyers cut their offers. They might be willing to pay, say, $750 for a car they perceive as having an even chance of being a lemon or a peach. But dealers who know for sure they have a peach will reject such an offer. As a result, the buyers face “adverse selection”: the only sellers who will be prepared to accept $750 will be those who know they are offloading a lemon.

Smart buyers can foresee this problem. Knowing they will only ever be sold a lemon, they offer only $500. Sellers of lemons end up with the same price as they would have done were there no ambiguity. But peaches stay in the garage. This is a tragedy: there are buyers who would happily pay the asking-price for a peach, if only they could be sure of the car’s quality. This “information asymmetry” between buyers and sellers kills the market.
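(A minimal sketch, in Python, of the unravelling described above. The $1,000, $500 and $750 figures come from the example; the sellers’ “slightly less” reservation values are assumed purely for illustration.)

```python
# Buyers value peaches at $1,000 and lemons at $500; sellers value each slightly less.
PEACH_VALUE, LEMON_VALUE = 1000, 500
SELLER_DISCOUNT = 50  # assumed stand-in for "slightly less"

def buyer_offer(prob_lemon):
    """A risk-neutral buyer offers the expected value of a randomly chosen car."""
    return prob_lemon * LEMON_VALUE + (1 - prob_lemon) * PEACH_VALUE

# Step 1: with an even chance of a lemon, buyers offer $750.
offer = buyer_offer(0.5)
print(offer)  # 750.0

# Step 2: peach owners (reservation ~$950) reject $750; lemon owners (~$450) accept.
print(offer >= PEACH_VALUE - SELLER_DISCOUNT)  # False - peaches stay in the garage
print(offer >= LEMON_VALUE - SELLER_DISCOUNT)  # True  - only lemons are for sale

# Step 3: buyers anticipate this, treat every car on offer as a lemon,
# and the price falls to $500 - the market for peaches disappears.
print(buyer_offer(1.0))  # 500.0
```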

Is it really true that you can win a Nobel prize just for observing that some people in markets know more than others? That was the question one journalist asked of Michael Spence, who, along with Mr Akerlof and Joseph Stiglitz, was a joint recipient of the 2001 Nobel award for their work on information asymmetry. His incredulity was understandable. The lemons paper was not even an accurate description of the used-car market: clearly not every used car sold is a dud. And insurers had long recognised that their customers might be the best judges of what risks they faced, and that those keenest to buy insurance were probably the riskiest bets.

Yet the idea was new to mainstream economists, who quickly realised that it made many of their models redundant. Further breakthroughs soon followed, as researchers examined how the asymmetry problem could be solved. Mr Spence’s flagship contribution was a 1973 paper called “Job Market Signalling” that looked at the labour market. Employers may struggle to tell which job candidates are best. Mr Spence showed that top workers might signal their talents to firms by collecting gongs, like college degrees. Crucially, this only works if the signal is credible: if low-productivity workers found it easy to get a degree, then they could masquerade as clever types.

This idea turns conventional wisdom on its head. Education is usually thought to benefit society by making workers more productive. If it is merely a signal of talent, the returns to investment in education flow to the students, who earn a higher wage at the expense of the less able, and perhaps to universities, but not to society at large. One disciple of the idea, Bryan Caplan of George Mason University, is currently penning a book entitled “The Case Against Education”. (Mr Spence himself regrets that others took his theory as a literal description of the world.)

Signalling helps explain what happened when Washington and those other states stopped firms from obtaining job-applicants’ credit scores. Credit history is a credible signal: it is hard to fake, and, presumably, those with good credit scores are more likely to make good employees than those who default on their debts. Messrs Clifford and Shoag found that when firms could no longer access credit scores, they put more weight on other signals, like education and experience. Because these are rarer among disadvantaged groups, it became harder, not easier, for them to convince employers of their worth.

Signalling explains all kinds of behaviour. Firms pay dividends to their shareholders, who must pay income tax on the payouts. Surely it would be better if they retained their earnings, boosting their share prices, and thus delivering their shareholders lightly taxed capital gains? Signalling solves the mystery: paying a dividend is a sign of strength, showing that a firm feels no need to hoard cash. By the same token, why might a restaurant deliberately locate in an area with high rents? It signals to potential customers that it believes its good food will bring it success.

Signalling is not the only way to overcome the lemons problem. In a 1976 paper Mr Stiglitz and Michael Rothschild, another economist, showed how insurers might “screen” their customers. The essence of screening is to offer deals which would only ever attract one type of punter.

Suppose a car insurer faces two different types of customer, high-risk and low-risk. They cannot tell these groups apart; only the customer knows whether he is a safe driver. Messrs Rothschild and Stiglitz showed that, in a competitive market, insurers cannot profitably offer the same deal to both groups. If they did, the premiums of safe drivers would subsidise payouts to reckless ones. A rival could offer a deal with slightly lower premiums, and slightly less coverage, which would peel away only safe drivers because risky ones prefer to stay fully insured. The firm, left only with bad risks, would make a loss. (Some worried a related problem would afflict Obamacare, which forbids American health insurers from discriminating against customers who are already unwell: if the resulting high premiums were to deter healthy, young customers from signing up, firms might have to raise premiums further, driving more healthy customers away in a so-called “death spiral”.)

The car insurer must offer two deals, making sure that each attracts only the customers it is designed for. The trick is to offer one pricey full-insurance deal, and an alternative cheap option with a sizeable deductible. Risky drivers will balk at the deductible, knowing that there is a good chance they will end up paying it when they claim. They will fork out for expensive coverage instead. Safe drivers will tolerate the high deductible and pay a lower price for what coverage they do get.
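(A minimal sketch of that self-selection logic in Python. The premiums, deductible and accident probabilities below are invented for illustration; only the structure of the menu, full cover versus cheap cover with a sizeable deductible, comes from the text.)

```python
# Two deals on the menu: pricey full insurance, or cheap cover with a big deductible.
menu = {
    "full_cover":  {"premium": 2_000, "deductible": 0},
    "cheap_cover": {"premium":   600, "deductible": 5_000},
}

# Assumed accident probabilities: only the driver knows which type he is.
drivers = {"risky": 0.30, "safe": 0.05}

def expected_outlay(deal, p_accident):
    """Premium plus the deductible the driver expects to pay if he claims."""
    return deal["premium"] + p_accident * deal["deductible"]

for name, p in drivers.items():
    choice = min(menu, key=lambda d: expected_outlay(menu[d], p))
    print(name, "picks", choice)

# risky picks full_cover  (cheap deal costs 600 + 0.30 * 5,000 = 2,100, more than the 2,000 premium)
# safe  picks cheap_cover (600 + 0.05 * 5,000 = 850, well below 2,000)
```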

This is not a particularly happy resolution of the problem. Good drivers are stuck with high deductibles—just as in Spence’s model of education, highly productive workers must fork out for an education in order to prove their worth. Yet screening is in play almost every time a firm offers its customers a menu of options.

Airlines, for instance, want to milk rich customers with higher prices, without driving away poorer ones. If they knew the depth of each customer’s pockets in advance, they could offer only first-class tickets to the wealthy, and better-value tickets to everyone else. But because they must offer everyone the same options, they must nudge those who can afford it towards the pricier ticket. That means deliberately making the standard cabin uncomfortable, to ensure that the only people who slum it are those with slimmer wallets.

Hazard undercuts Eden

Adverse selection has a cousin. Insurers have long known that people who buy insurance are more likely to take risks. Someone with home insurance will check their smoke alarms less often; health insurance encourages unhealthy eating and drinking. Economists first cottoned on to this phenomenon of “moral hazard” when Kenneth Arrow wrote about it in 1963.

Moral hazard occurs when incentives go haywire. The old economics, noted Mr Stiglitz in his Nobel-prize lecture, paid considerable lip-service to incentives, but had remarkably little to say about them. In a completely transparent world, you need not worry about incentivising someone, because you can use a contract to specify their behaviour precisely. It is when information is asymmetric and you cannot observe what they are doing (is your tradesman using cheap parts? Is your employee slacking?) that you must worry about ensuring that interests are aligned.

Such scenarios pose what are known as “principal-agent” problems. How can a principal (like a manager) get an agent (like an employee) to behave how he wants, when he cannot monitor them all the time? The simplest way to make sure that an employee works hard is to give him some or all of the profit. Hairdressers, for instance, will often rent a spot in a salon and keep their takings for themselves.

But hard work does not always guarantee success: a star analyst at a consulting firm, for example, might do stellar work pitching for a project that nonetheless goes to a rival. So, another option is to pay “efficiency wages”. Mr Stiglitz and Carl Shapiro, another economist, showed that firms might pay premium wages to make employees value their jobs more highly. This, in turn, would make them less likely to shirk their responsibilities, because they would lose more if they were caught and got fired. That insight helps to explain a fundamental puzzle in economics: when workers are unemployed but want jobs, why don’t wages fall until someone is willing to hire them? An answer is that above-market wages act as a carrot, the resulting unemployment, a stick.

And this reveals an even deeper point. Before Mr Akerlof and the other pioneers of information economics came along, the discipline assumed that in competitive markets, prices reflect marginal costs: charge above cost, and a competitor will undercut you. But in a world of information asymmetry, “good behaviour is driven by earning a surplus over what one could get elsewhere,” according to Mr Stiglitz. The wage must be higher than what a worker can get in another job, for them to want to avoid the sack; and firms must find it painful to lose customers when their product is shoddy, if they are to invest in quality. In markets with imperfect information, price cannot equal marginal cost.

The concept of information asymmetry, then, truly changed the discipline. Nearly 50 years after the lemons paper was rejected three times, its insights remain of crucial relevance to economists, and to economic policy. Just ask any young, black Washingtonian with a good credit score who wants to find a job.


Saturday 30 January 2021

The GameStop affair is like tulip mania on steroids

It’s eerily similar to the 17th-century Dutch bubble, but with the self-organising potential of the internet added to the mix writes Dan Davies in The Guardian



Towards the end of 1636, there was an outbreak of bubonic plague in the Netherlands. The concept of a lockdown was not really established at the time, but merchant trade slowed to a trickle. Idle young men in the town of Haarlem gathered in taverns, and looked for amusement in one of the few commodities still trading – contracts for the delivery of flower bulbs the following spring. What ensued is often regarded as the first financial bubble in recorded history – the “tulip mania”.

Nearly 400 years later, something similar has happened in the US stock market. This week, the share price of a company called GameStop – an unexceptional retailer that appears to have been surprised and confused by the whole episode – became the battleground between some of the biggest names in finance and a few hundred bored (mostly) bros exchanging messages on the WallStreetBets forum, part of the sprawling discussion site Reddit. 

The rubble is still bouncing in this particular episode, but the broad shape of what’s happened is not unfamiliar. Reasoning that a business model based on selling video game DVDs through shopping malls might not have very bright prospects, several of New York’s finest hedge funds bet against GameStop’s share price. The Reddit crowd appears to have decided that this was unfair and that they should fight back on behalf of gamers. They took the opposite side of the trade and pushed the price up, using derivatives and brokerage credit in surprisingly sophisticated ways to maximise their firepower.

To everyone’s surprise, the crowd won; the hedge funds’ risk management processes kicked in, and they were forced to buy back their negative positions, pushing the price even higher. But the stock exchanges have always frowned on this sort of concerted action, and on the use of leverage to manipulate the market. The sheer volume of orders had also grown well beyond the capacity of the small, fee-free brokerages favoured by the WallStreetBets crowd. Credit lines were pulled, accounts were frozen and the retail crowd were forced to sell; yesterday the price gave back a large proportion of its gains.

To people who know a lot about stock exchange regulation and securities settlement, this outcome was quite inevitable – it’s part of the reason why things like this don’t happen every day. To a lot of American Redditors, though, it was a surprising introduction to the complexity of financial markets, taking place in circumstances almost perfectly designed to convince them that the system is rigged for the benefit of big money.

Corners, bear raids and squeezes, in the industry jargon, have been around for as long as stock markets – in fact, as British hedge fund legend Paul Marshall points out in his book Ten and a Half Lessons From Experience, something very similar happened last year at the start of the coronavirus lockdown, centred on a suddenly unemployed sports bookmaker called Dave Portnoy. But the GameStop affair exhibits some surprising new features.

Most importantly, it was a largely self-organising phenomenon. For most of stock market history, orchestrating a pool of people to manipulate markets has been something only the most skilful could achieve. Some of the finest buildings in New York were erected on the proceeds of this rare talent, before it was made illegal. The idea that such a pool could coalesce so quickly and without any obvious sign of a single controlling mind is brand new and ought to worry us a bit. 

And although some of the claims made by contributors to WallStreetBets that they represent the masses aren’t very convincing – although small by hedge fund standards, many of them appear to have five-figure sums to invest – it’s unfamiliar to say the least to see a pool motivated by rage or other emotions as opposed to the straightforward desire to make money. Just as air traffic regulation is based on the assumption that the planes are trying not to crash into one another, financial regulation is based on the assumption that people are trying to make money for themselves, not to destroy it for other people.

When I think about market regulation, I’m always reminded of a saying of Édouard Herriot, the former mayor of Lyon. He said that local government was like an andouillette sausage; it had to stink a little bit of shit, but not too much. Financial markets aren’t video games, they aren’t democratic and small investors aren’t the backbone of capitalism. They’re nasty places with extremely complicated rules, which only work to the extent that the people involved in them trust one another. Speculation is genuinely necessary on a stock market – without it, you could be waiting days for someone to take up your offer when you wanted to buy or sell shares. But it’s a necessary evil, and it needs to be limited. It’s a shame that the Redditors found this out the hard way.