
Friday, 21 July 2023

A Level Economics 67: Causes of Government Intervention Failure

Government interventions to correct market failures can sometimes lead to government failure, where the intended policy objectives are not achieved or result in unintended negative consequences. Here are some common causes of government failure when intervening in markets:

  1. Information Asymmetry: Government policymakers may lack complete information about the complexities of the market or fail to accurately predict the future consequences of their interventions. This information asymmetry can lead to poorly designed policies that do not effectively address the market failure.

Example: If the government implements a subsidy program to encourage the adoption of a new renewable energy technology without fully understanding the long-term costs and benefits, it could result in inefficient allocation of resources and unintended financial burdens.

  2. Regulatory Capture: Sometimes, the regulatory agencies responsible for overseeing market interventions may become subject to regulatory capture, where they develop a close relationship with the industries they are supposed to regulate. This can lead to policies that favor the interests of powerful industry players rather than promoting the public good.

Example: In the financial sector, regulatory capture may occur if regulators develop cozy relationships with banks and financial institutions, leading to weak oversight and inadequate regulation of risky financial practices.

  3. Political Interests and Lobbying: Government interventions can be influenced by political interests and lobbying efforts from various stakeholders. This can result in policies that cater to the interests of specific groups rather than addressing the market failure in a fair and equitable manner.

Example: If a powerful agricultural lobby influences the government's agricultural subsidy policies, the subsidies may disproportionately benefit large agribusinesses rather than smaller family farms.

  4. Unintended Consequences: Government interventions can have unintended consequences that undermine the original objectives. Policies that may appear beneficial in theory can lead to negative outcomes in practice.

Example: Rent control laws intended to make housing more affordable may reduce the incentive for landlords to maintain their properties, leading to a decline in the quality and availability of rental housing.

  5. Administrative Inefficiencies: Government programs can suffer from administrative inefficiencies, including bureaucratic red tape and delays in implementation. This can hinder the effectiveness of the intervention and result in resource misallocation.

Example: If a government program aimed at providing financial assistance to small businesses involves complex application procedures and lengthy approval processes, it may fail to reach those in need of assistance promptly.

  6. Budget Constraints: Government interventions often require substantial funding. If resources are limited or misallocated, the effectiveness of the intervention may be compromised.

Example: A government-sponsored job training program may have limited success if the budget is insufficient to cover the costs of adequate training and support services for participants.

Conclusion:

Government interventions to correct market failures are essential, but they can lead to government failure if not carefully designed and implemented. Policymakers need to consider the potential causes of government failure, assess the risks, and continually evaluate the effectiveness of their interventions. Transparency, accountability, and evidence-based decision-making are critical to minimizing the risks of government failure and ensuring that interventions achieve their intended objectives without creating unintended negative consequences.

A Level Economics 66: Government Intervention and Market Distortions

Government intervention in markets, while often implemented with good intentions, can lead to unintended consequences and create distortions. Here are some examples of how government intervention can cause distortions in agriculture, housing, and labor markets:

1. Agriculture Market:

Price Floors: Government-imposed price floors in agriculture, such as guaranteed minimum prices, can create surpluses of agricultural products. If the minimum price set by the government is above the market equilibrium price, farmers may produce more than the market demands. This surplus can lead to overproduction and the accumulation of unsold goods.

Example: In the case of wheat, if the government sets a minimum price above the equilibrium price, farmers may produce more wheat than consumers need, resulting in a surplus that requires storage or export at subsidized prices.

2. Housing Market:

Rent Controls: Government-imposed rent controls limit the amount landlords can charge for rental properties. While this measure aims to protect tenants from excessive rent increases, it can create shortages of rental housing and reduce landlords' incentives to maintain and invest in their properties.

Example: In a city with rent controls, landlords may choose to convert their rental properties into condominiums for sale, reducing the supply of available rental units and potentially leading to higher overall housing costs for residents.

3. Labor Market:

Minimum Wage: While minimum wage laws aim to improve workers' earnings, they can create distortions in the labor market. Setting a minimum wage above the equilibrium wage can result in higher unemployment, as employers may be unable or unwilling to hire additional workers at the mandated wage rate. (A numerical sketch after the example below works through this arithmetic for all three markets.)

Example: If the government raises the minimum wage significantly, some small businesses may reduce hiring or cut back on employee hours to manage increased labor costs.
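All three cases above are the same mechanism viewed in different markets: a price control set away from equilibrium opens a gap between quantity supplied and quantity demanded. Below is a minimal numerical sketch of that logic, assuming simple hypothetical linear demand and supply curves (every figure is invented for illustration, not drawn from any real market):

# Illustrative sketch of price controls with hypothetical linear curves.
# Demand: Qd = a - b*P    Supply: Qs = c + d*P    (all numbers invented)

def quantity_demanded(price, a=100.0, b=2.0):
    return max(a - b * price, 0.0)

def quantity_supplied(price, c=10.0, d=1.0):
    return max(c + d * price, 0.0)

# Equilibrium where Qd = Qs:  a - b*P = c + d*P  =>  P* = (a - c) / (b + d)
p_star = (100.0 - 10.0) / (2.0 + 1.0)   # = 30
q_star = quantity_demanded(p_star)      # = 40

# Price floor above equilibrium (a guaranteed wheat price, or a minimum wage)
floor = 40.0
surplus = quantity_supplied(floor) - quantity_demanded(floor)       # 50 - 20 = 30

# Price ceiling below equilibrium (a controlled rent)
ceiling = 20.0
shortage = quantity_demanded(ceiling) - quantity_supplied(ceiling)  # 60 - 30 = 30

print(f"equilibrium: price {p_star:.0f}, quantity {q_star:.0f}")
print(f"floor at {floor:.0f} leaves a surplus of {surplus:.0f} units")
print(f"ceiling at {ceiling:.0f} leaves a shortage of {shortage:.0f} units")

Read the floor case as the wheat or labor market (the surplus is unsold grain or unemployed workers) and the ceiling case as the rental market (the shortage is tenants who cannot find flats at the controlled rent).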

Conclusion:

While government intervention can be necessary to correct market failures and protect vulnerable populations, it is essential to consider the potential distortions that such interventions may create. Policymakers need to carefully assess the impact of their actions on markets and be aware of unintended consequences that could arise. Striking a balance between intervention and market efficiency is crucial for achieving policy objectives without causing unnecessary distortions. It requires thoughtful analysis, ongoing evaluation, and flexibility in adapting policies to changing market conditions.

Thursday, 3 May 2018

Big Tech is sorry. Why Silicon Valley can’t fix itself

Tech insiders have finally started admitting their mistakes – but the solutions they are offering could just help the big players get even more powerful. By Ben Tarnoff and Moira Weigel in The Guardian 


Big Tech is sorry. After decades of rarely apologising for anything, Silicon Valley suddenly seems to be apologising for everything. They are sorry about the trolls. They are sorry about the bots. They are sorry about the fake news and the Russians, and the cartoons that are terrifying your kids on YouTube. But they are especially sorry about our brains.

Sean Parker, the former president of Facebook – who was played by Justin Timberlake in The Social Network – has publicly lamented the “unintended consequences” of the platform he helped create: “God only knows what it’s doing to our children’s brains.” Justin Rosenstein, an engineer who helped build Facebook’s “like” button and Gchat, regrets having contributed to technology that he now considers psychologically damaging, too. “Everyone is distracted,” Rosenstein says. “All of the time.” 

Ever since the internet became widely used by the public in the 1990s, users have heard warnings that it is bad for us. In the early years, many commentators described cyberspace as a parallel universe that could swallow enthusiasts whole. The media fretted about kids talking to strangers and finding porn. A prominent 1998 study from Carnegie Mellon University claimed that spending time online made you lonely, depressed and antisocial.

In the mid-2000s, as the internet moved on to mobile devices, physical and virtual life began to merge. Bullish pundits celebrated the “cognitive surplus” unlocked by crowdsourcing and the tech-savvy campaigns of Barack Obama, the “internet president”. But, alongside these optimistic voices, darker warnings persisted. Nicholas Carr’s The Shallows (2010) argued that search engines were making people stupid, while Eli Pariser’s The Filter Bubble (2011) claimed algorithms made us insular by showing us only what we wanted to see. In Alone Together (2011) and Reclaiming Conversation (2015), Sherry Turkle warned that constant connectivity was making meaningful interaction impossible.

Still, inside the industry, techno-utopianism prevailed. Silicon Valley seemed to assume that the tools they were building were always forces for good – and that anyone who questioned them was a crank or a luddite. In the face of an anti-tech backlash that has surged since the 2016 election, however, this faith appears to be faltering. Prominent people in the industry are beginning to acknowledge that their products may have harmful effects.

Internet anxiety isn’t new. But never before have so many notable figures within the industry seemed so anxious about the world they have made. Parker, Rosenstein and the other insiders now talking about the harms of smartphones and social media belong to an informal yet influential current of tech critics emerging within Silicon Valley. You could call them the “tech humanists”. Amid rising public concern about the power of the industry, they argue that the primary problem with its products is that they threaten our health and our humanity.

It is clear that these products are designed to be maximally addictive, in order to harvest as much of our attention as they can. Tech humanists say this business model is both unhealthy and inhumane – that it damages our psychological well-being and conditions us to behave in ways that diminish our humanity. The main solution that they propose is better design. By redesigning technology to be less addictive and less manipulative, they believe we can make it healthier – we can realign technology with our humanity and build products that don’t “hijack” our minds.

The hub of the new tech humanism is the Center for Humane Technology in San Francisco. Founded earlier this year, the nonprofit has assembled an impressive roster of advisers, including investor Roger McNamee, Lyft president John Zimmer, and Rosenstein. But its most prominent spokesman is executive director Tristan Harris, a former “design ethicist” at Google who has been hailed by the Atlantic magazine as “the closest thing Silicon Valley has to a conscience”. Harris has spent years trying to persuade the industry of the dangers of tech addiction. In February, Pierre Omidyar, the billionaire founder of eBay, launched a related initiative: the Tech and Society Solutions Lab, which aims to “maximise the tech industry’s contributions to a healthy society”.

As suspicion of Silicon Valley grows, the tech humanists are making a bid to become tech’s loyal opposition. They are using their insider credentials to promote a particular diagnosis of where tech went wrong and of how to get it back on track. For this, they have been getting a lot of attention. As the backlash against tech has grown, so too has the appeal of techies repenting for their sins. The Center for Humane Technology has been profiled, and praised, by the New York Times, the Atlantic, Wired and others.

But tech humanism’s influence cannot be measured solely by the positive media coverage it has received. The real reason tech humanism matters is because some of the most powerful people in the industry are starting to speak its idiom. Snap CEO Evan Spiegel has warned about social media’s role in encouraging “mindless scrambles for friends or unworthy distractions”, and Twitter boss Jack Dorsey recently claimed he wants to improve the platform’s “conversational health”. 

Even Mark Zuckerberg, famous for encouraging his engineers to “move fast and break things”, seems to be taking a tech humanist turn. In January, he announced that Facebook had a new priority: maximising “time well spent” on the platform, rather than total time spent. By “time well spent”, Zuckerberg means time spent interacting with “friends” rather than businesses, brands or media sources. He said the News Feed algorithm was already prioritising these “more meaningful” activities.

Zuckerberg’s choice of words is significant: Time Well Spent is the name of the advocacy group that Harris led before co-founding the Center for Humane Technology. In April, Zuckerberg brought the phrase to Capitol Hill. When a photographer snapped a picture of the notes Zuckerberg used while testifying before the Senate, they included a discussion of Facebook’s new emphasis on “time well spent”, under the heading “wellbeing”.

This new concern for “wellbeing” may strike some observers as a welcome development. After years of ignoring their critics, industry leaders are finally acknowledging that problems exist. Tech humanists deserve credit for drawing attention to one of those problems – the manipulative design decisions made by Silicon Valley.

But these decisions are only symptoms of a larger issue: the fact that the digital infrastructures that increasingly shape our personal, social and civic lives are owned and controlled by a few billionaires. Because it ignores the question of power, the tech-humanist diagnosis is incomplete – and could even help the industry evade meaningful reform. Taken up by leaders such as Zuckerberg, tech humanism is likely to result in only superficial changes. These changes may soothe some of the popular anger directed towards the tech industry, but they will not address the origin of that anger. If anything, they will make Silicon Valley even more powerful.

The Center for Humane Technology argues that technology must be “aligned” with humanity – and that the best way to accomplish this is through better design. Their website features a section entitled The Way Forward. A familiar evolutionary image shows the silhouettes of several simians, rising from their crouches to become a man, who then turns back to contemplate his history.

“In the future, we will look back at today as a turning point towards humane design,” the header reads. As the answer to the litany of problems caused by “technology that extracts attention and erodes society”, the text asserts that “humane design is the solution”. Drawing on the rhetoric of the “design thinking” philosophy that has long suffused Silicon Valley, the website explains that humane design “starts by understanding our most vulnerable human instincts so we can design compassionately”.

There is a good reason why the language of tech humanism is penetrating the upper echelons of the tech industry so easily: this language is not foreign to Silicon Valley. On the contrary, “humanising” technology has long been its central ambition and the source of its power. It was precisely by developing a “humanised” form of computing that entrepreneurs such as Steve Jobs brought computing into millions of users’ everyday lives. Their success turned the Bay Area tech industry into a global powerhouse – and produced the digitised world that today’s tech humanists now lament.

The story begins in the 1960s, when Silicon Valley was still a handful of electronics firms clustered among fruit orchards. Computers came in the form of mainframes then. These machines were big, expensive and difficult to use. Only corporations, universities and government agencies could afford them, and they were reserved for specialised tasks, such as calculating missile trajectories or credit scores.

Computing was industrial, in other words, not personal, and Silicon Valley remained dependent on a small number of big institutional clients. The practical danger that this dependency posed became clear in the early 1960s, when the US Department of Defense, by far the single biggest buyer of digital components, began cutting back on its purchases. But the fall in military procurement wasn’t the only mid-century crisis around computing.

Computers also had an image problem. The inaccessibility of mainframes made them easy to demonise. In these whirring hulks of digital machinery, many observers saw something inhuman, even evil. To antiwar activists, computers were weapons of the war machine that was killing thousands in Vietnam. To highbrow commentators such as the social critic Lewis Mumford, computers were instruments of a creeping technocracy that threatened to extinguish personal freedom.

But during the course of the 1960s and 70s, a series of experiments in northern California helped solve both problems. These experiments yielded breakthrough innovations like the graphical user interface, the mouse and the microprocessor. Computers became smaller, more usable and more interactive, reducing Silicon Valley’s reliance on a few large customers while giving digital technology a friendlier face.

The pioneers who led this transformation believed they were making computing more human. They drew deeply from the counterculture of the period, and its fixation on developing “human” modes of living. They wanted their machines to be “extensions of man”, in the words of Marshall McLuhan, and to unlock “human potential” rather than repress it. At the centre of this ecosystem of hobbyists, hackers, hippies and professional engineers was Stewart Brand, famed entrepreneur of the counterculture and founder of the Whole Earth Catalog. In a famous 1972 article for Rolling Stone, Brand called for a new model of computing that “served human interest, not machine”.

Brand’s disciples answered this call by developing the technical innovations that transformed computers into the form we recognise today. They also promoted a new way of thinking about computers – not as impersonal slabs of machinery, but as tools for unleashing “human potential”.

No single figure contributed more to this transformation of computing than Steve Jobs, who was a fan of Brand and a reader of the Whole Earth Catalog. Jobs fulfilled Brand’s vision on a global scale, launching the mass personal computing era with the Macintosh in the mid-80s, and the mass smartphone era with the iPhone two decades later. Brand later acknowledged that Jobs embodied the Whole Earth Catalog ethos. “He got the notion of tools for human use,” Brand told Jobs’ biographer, Walter Isaacson.

Building those “tools for human use” turned out to be great for business. The impulse to humanise computing enabled Silicon Valley to enter every crevice of our lives. From phones to tablets to laptops, we are surrounded by devices that have fulfilled the demands of the counterculture for digital connectivity, interactivity and self-expression. Your iPhone responds to the slightest touch; you can look at photos of anyone you have ever known, and broadcast anything you want to all of them, at any moment.

In short, the effort to humanise computing produced the very situation that the tech humanists now consider dehumanising: a wilderness of screens where digital devices chase every last instant of our attention. To guide us out of that wilderness, tech humanists say we need more humanising. They believe we can use better design to make technology serve human nature rather than exploit and corrupt it. But this idea is drawn from the same tradition that created the world that tech humanists believe is distracting and damaging us.

Tech humanists say they want to align humanity and technology. But this project is based on a deep misunderstanding of the relationship between humanity and technology: namely, the fantasy that these two entities could ever exist in separation.

It is difficult to imagine human beings without technology. The story of our species began when we began to make tools. Homo habilis, the first members of our genus, left sharpened stones scattered across Africa. Their successors hit rocks against each other to make sparks, and thus fire. With fire you could cook meat and clear land for planting; with ash you could fertilise the soil; with smoke you could make signals. In flickering light, our ancestors painted animals on cave walls. The ancient tragedian Aeschylus recalled this era mythically: Prometheus, in stealing fire from the gods, “founded all the arts of men.”

All of which is to say: humanity and technology are not only entangled, they constantly change together. This is not just a metaphor. Recent research suggests that the human hand evolved to manipulate the stone tools that our ancestors used. The evolutionary scientist Mary Marzke shows that we developed “a unique pattern of muscle architecture and joint surface form and functions” for this purpose.

The ways our bodies and brains change in conjunction with the tools we make have long inspired anxieties that “we” are losing some essential qualities. For millennia, people have feared that new media were eroding the very powers that they promised to extend. In The Phaedrus, Socrates warned that writing on wax tablets would make people forgetful. If you could jot something down, you wouldn’t have to remember it. In the late middle ages, as a culture of copying manuscripts gave way to printed books, teachers warned that pupils would become careless, since they no longer had to transcribe what their teachers said.

Yet as we lose certain capacities, we gain new ones. People who used to navigate the seas by following stars can now program computers to steer container ships from afar. Your grandmother probably has better handwriting than you do – but you probably type faster.

The nature of human nature is that it changes. It cannot, therefore, serve as a stable basis for evaluating the impact of technology. Yet the assumption that it doesn’t change serves a useful purpose. Treating human nature as something static, pure and essential elevates the speaker into a position of power. Claiming to tell us who we are, they tell us how we should be.

Intentionally or not, this is what tech humanists are doing when they talk about technology as threatening human nature – as if human nature had stayed the same from the paleolithic era until the rollout of the iPhone. Holding humanity and technology separate clears the way for a small group of humans to determine the proper alignment between them. And while the tech humanists may believe they are acting in the common good, they themselves acknowledge they are doing so from above, as elites. “We have a moral responsibility to steer people’s thoughts ethically,” Tristan Harris has declared.

Harris and his fellow tech humanists also frequently invoke the language of public health. The Center for Humane Technology’s Roger McNamee has gone so far as to call public health “the root of the whole thing”, and Harris has compared using Snapchat to smoking cigarettes. The public-health framing casts the tech humanists in a paternalistic role. Resolving a public health crisis requires public health expertise. It also precludes the possibility of democratic debate. You don’t put the question of how to treat a disease up for a vote – you call a doctor.

This paternalism produces a central irony of tech humanism: the language that they use to describe users is often dehumanising. “Facebook appeals to your lizard brain – primarily fear and anger,” says McNamee. Harris echoes this sentiment: “Imagine you had an input cable,” he has said. “You’re trying to jack it into a human being. Do you want to jack it into their reptilian brain, or do you want to jack it into their more reflective self?”

The Center for Humane Technology’s website offers tips on how to build a more reflective and less reptilian relationship to your smartphone: “going greyscale” by setting your screen to black-and-white, turning off app notifications and charging your device outside your bedroom. It has also announced two major initiatives: a national campaign to raise awareness about technology’s harmful effects on young people’s “digital health and well-being”; and a “Ledger of Harms” – a website that will compile information about the health effects of different technologies in order to guide engineers in building “healthier” products.

These initiatives may help some people reduce their smartphone use – a reasonable personal goal. But there are some humans who may not share this goal, and there need not be anything unhealthy about that. Many people rely on the internet for solace and solidarity, especially those who feel marginalised. The kid with autism may stare at his screen when surrounded by people, because it lets him tolerate being surrounded by people. For him, constant use of technology may not be destructive at all, but in fact life-saving.

Pathologising certain potentially beneficial behaviours as “sick” isn’t the only problem with the Center for Humane Technology’s proposals. They also remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit. This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.

This may be why their approach is so appealing to the tech industry. There is no reason to doubt the good intentions of tech humanists, who may genuinely want to address the problems fuelling the tech backlash. But they are handing the firms that caused those problems a valuable weapon. Far from challenging Silicon Valley, tech humanism offers Silicon Valley a useful way to pacify public concerns without surrendering any of its enormous wealth and power. By channelling popular anger at Big Tech into concerns about health and humanity, tech humanism gives corporate giants such as Facebook a way to avoid real democratic control. In a moment of danger, it may even help them protect their profits.

One can easily imagine a version of Facebook that embraces the principles of tech humanism while remaining a profitable and powerful monopoly. In fact, these principles could make Facebook even more profitable and powerful, by opening up new business opportunities. That seems to be exactly what Facebook has planned.

Zuckerberg’s announcement that Facebook would prioritise “time well spent” over total time spent came a couple of weeks before the company released its 2017 Q4 earnings. These reported that total time spent on the platform had dropped by around 5%, or about 50m hours per day. But, Zuckerberg said, this was by design: in particular, it was in response to tweaks to the News Feed that prioritised “meaningful” interactions with “friends” rather than consuming “public content” like video and news. This would ensure that “Facebook isn’t just fun, but also good for people’s well-being”.

Zuckerberg said he expected those changes would continue to decrease total time spent – but “the time you do spend on Facebook will be more valuable”. This may describe what users find valuable – but it also refers to what Facebook finds valuable. In a recent interview, he said: “Over the long term, even if time spent goes down, if people are spending more time on Facebook actually building relationships with people they care about, then that’s going to build a stronger community and build a stronger business, regardless of what Wall Street thinks about it in the near term.”

Sheryl Sandberg has also stressed that the shift will create “more monetisation opportunities”. How? Everyone knows data is the lifeblood of Facebook – but not all data is created equal. One of the most valuable sources of data to Facebook is used to inform a metric called “coefficient”. This measures the strength of a connection between two users – Zuckerberg once called it “an index for each relationship”. Facebook records every interaction you have with another user – from liking a friend’s post or viewing their profile, to sending them a message. These activities provide Facebook with a sense of how close you are to another person, and different activities are weighted differently. Messaging, for instance, is considered the strongest signal. It’s reasonable to assume that you’re closer to somebody you exchange messages with than somebody whose post you once liked.

Why is coefficient so valuable? Because Facebook uses it to create a Facebook they think you will like: it guides algorithmic decisions about what content you see and the order in which you see it. It also helps improve ad targeting, by showing you ads for things liked by friends with whom you often interact. Advertisers can target the closest friends of the users who already like a product, on the assumption that close friends tend to like the same things.
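Facebook’s actual model is proprietary, but the description above, a weighted sum over recorded interactions in which strong signals such as messaging count for more than weak ones such as a single like, can be sketched roughly as follows. The weights, interaction names and function here are invented for illustration, not Facebook’s real values:

# Invented sketch of a "coefficient"-style closeness score: each recorded
# interaction adds a weight, with stronger signals (messaging) weighted
# more heavily than weak ones (a single like). None of these weights or
# names are Facebook's actual values.

from collections import Counter

WEIGHTS = {
    "message_sent": 5.0,   # strongest hypothetical signal
    "comment": 3.0,
    "profile_view": 1.5,
    "like": 1.0,           # weakest hypothetical signal
}

def coefficient(interactions):
    """Score the strength of a tie from a list of recorded interaction types."""
    counts = Counter(interactions)
    return sum(WEIGHTS.get(kind, 0.0) * n for kind, n in counts.items())

# A friend you message often outranks someone whose post you once liked.
close_friend = ["message_sent"] * 10 + ["comment"] * 3 + ["like"] * 5
acquaintance = ["like", "profile_view"]

print(coefficient(close_friend))   # 64.0
print(coefficient(acquaintance))   # 2.5

A score of this kind could then drive both uses described above: ordering what appears in a user’s feed, and selecting ad audiences from their closest ties.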

 
Facebook CEO Mark Zuckerberg testifies before the US Senate last month. Photograph: Jim Watson/AFP/Getty Images

So when Zuckerberg talks about wanting to increase “meaningful” interactions and building relationships, he is not succumbing to pressure to take better care of his users. Rather, emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform. Rather than spending a lot of time doing things that Facebook doesn’t find valuable – such as watching viral videos – you can spend a bit less time, but spend it doing things that Facebook does find valuable.

In other words, “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics. Shifting to this model not only sidesteps concerns about tech addiction – it also acknowledges certain basic limits to Facebook’s current growth model. There are only so many hours in the day. Facebook can’t keep prioritising total time spent – it has to extract more value from less time.

In many ways, this process recalls an earlier stage in the evolution of capitalism. In the 19th century, factory owners in England discovered they could only make so much money by extending the length of the working day. At some point, workers would die of exhaustion, or they would revolt, or they would push parliament to pass laws that limited their working hours. So industrialists had to find ways to make the time of the worker more valuable – to extract more money from each moment rather than adding more moments. They did this by making industrial production more efficient: developing new technologies and techniques that squeezed more value out of the worker and stretched that value further than ever before.

A similar situation confronts Facebook today. They have to make the attention of the user more valuable – and the language and concepts of tech humanism can help them do it. So far, it seems to be working. Despite the reported drop in total time spent, Facebook recently announced huge 2018 Q1 revenues of $11.97bn (£8.7bn), smashing Wall Street estimates by nearly $600m.

Today’s tech humanists come from a tradition with deep roots in Silicon Valley. Like their predecessors, they believe that technology and humanity are distinct, but can be harmonised. This belief guided the generations who built the “humanised” machines that became the basis for the industry’s enormous power. Today it may provide Silicon Valley with a way to protect that power from a growing public backlash – and even deepen it by uncovering new opportunities for profit-making.

Fortunately, there is another way of thinking about how to live with technology – one that is both truer to the history of our species and useful for building a more democratic future. This tradition does not address “humanity” in the abstract, but as distinct human beings, whose capacities are shaped by the tools they use. It sees us as hybrids of animal and machine – as “cyborgs”, to quote the biologist and philosopher of science Donna Haraway.

To say that we’re all cyborgs is not to say that all technologies are good for us, or that we should embrace every new invention. But it does suggest that living well with technology can’t be a matter of making technology more “human”. This goal isn’t just impossible – it’s also dangerous, because it puts us at the mercy of experts who tell us how to be human. It cedes control of our technological future to those who believe they know what’s best for us because they understand the essential truths about our species.

The cyborg way of thinking, by contrast, tells us that our species is essentially technological. We change as we change our tools, and our tools change us. But even though our continuous co-evolution with our machines is inevitable, the way it unfolds is not. Rather, it is determined by who owns and runs those machines. It is a question of power.

Today, that power is wielded by corporations, which own our technology and run it for profit. The various scandals that have stoked the tech backlash all share a single source. Surveillance, fake news and the miserable working conditions in Amazon’s warehouses are profitable. If they were not, they would not exist. They are symptoms of a profound democratic deficit inflicted by a system that prioritises the wealth of the few over the needs and desires of the many.

There is an alternative. If being technological is a feature of being human, then the power to shape how we live with technology should be a fundamental human right. The decisions that most affect our technological lives are far too important to be left to Mark Zuckerberg, rich investors or a handful of “humane designers”. They should be made by everyone, together.

Rather than trying to humanise technology, then, we should be trying to democratise it. We should be demanding that society as a whole gets to decide how we live with technology – rather than the small group of people who have captured society’s wealth.

What does this mean in practice? First, it requires limiting and eroding Silicon Valley’s power. Antitrust laws and tax policy offer useful ways to claw back the fortunes Big Tech has built on common resources. After all, Silicon Valley wouldn’t exist without billions of dollars of public funding, not to mention the vast quantities of information that we all provide for free. Facebook’s market capitalisation is $500bn with 2.2 billion users – do the math (roughly $227 of market value per user) to estimate how much the time you spend on Facebook is worth. You could apply the same logic to Google. There is no escape: whether or not you have an account, both platforms track you around the internet.

In addition to taxing and shrinking tech firms, democratic governments should be making rules about how those firms are allowed to behave – rules that restrict how they can collect and use our personal data, for instance, like the General Data Protection Regulation coming into effect in the European Union later this month. But more robust regulation of Silicon Valley isn’t enough. We also need to pry the ownership of our digital infrastructure away from private firms. 

This means developing publicly and co-operatively owned alternatives that empower workers, users and citizens to determine how they are run. These democratic digital structures can focus on serving personal and social needs rather than piling up profits for investors. One inspiring example is municipal broadband: a successful experiment in Chattanooga, Tennessee, has shown that publicly owned internet service providers can supply better service at lower cost than private firms. Other models of digital democracy might include a worker-owned Uber, a user-owned Facebook or a socially owned “smart city” of the kind being developed in Barcelona. Alternatively, we might demand that tech firms pay for the privilege of extracting our data, so that we can collectively benefit from a resource we collectively create.

More experimentation is needed, but democracy should be our guiding principle. The stakes are high. Never before have so many people been thinking about the problems produced by the tech industry and how to solve them. The tech backlash is an enormous opportunity – and one that may not come again for a long time.

The old techno-utopianism is crumbling. What will replace it? Silicon Valley says it wants to make the world a better place. Fulfilling this promise may require a new kind of disruption.

Saturday, 17 September 2016

The Intellectual Yet Idiot

by Nassim Nicholas Taleb

What we have been seeing worldwide, from India to the UK to the US, is the rebellion against the inner circle of no-skin-in-the-game policymaking “clerks” and journalists-insiders, that class of paternalistic semi-intellectual experts with some Ivy League, Oxford-Cambridge, or similar label-driven education who are telling the rest of us 1) what to do, 2) what to eat, 3) how to speak, 4) how to think… and 5) who to vote for.

But the problem is the one-eyed following the blind: these self-described members of the “intelligenzia” can’t find a coconut in Coconut Island, meaning they aren’t intelligent enough to define intelligence hence fall into circularities — but their main skill is capacity to pass exams written by people like them. With psychology papers replicating less than 40%, dietary advice reversing after 30 years of fatphobia, macroeconomic analysis working worse than astrology, the appointment of Bernanke who was less than clueless of the risks, and pharmaceutical trials replicating at best only 1/3 of the time, people are perfectly entitled to rely on their own ancestral instinct and listen to their grandmothers (or Montaigne and such filtered classical knowledge) with a better track record than these policymaking goons.


Indeed one can see that these academico-bureaucrats who feel entitled to run our lives aren’t even rigorous, whether in medical statistics or policymaking. They can’t tell science from scientism — in fact in their eyes scientism looks more scientific than real science. (For instance it is trivial to show the following: much of what the Cass Sunstein and Richard Thaler types — those who want to “nudge” us into some behavior — much of what they call “rational” or “irrational” comes from their misunderstanding of probability theory and cosmetic use of first-order models.) They are also prone to mistake the ensemble for the linear aggregation of its components, as we saw in the chapter extending the minority rule.

The Intellectual Yet Idiot (IYI) is a production of modernity hence has been accelerating since the mid twentieth century, to reach its local supremum today, along with the broad category of people without skin-in-the-game who have been invading many walks of life. Why? Simply, in most countries, the government’s role is between five and ten times what it was a century ago (expressed in percentage of GDP). The IYI seems ubiquitous in our lives but is still a small minority and is rarely seen outside specialized outlets, think tanks, the media, and universities — most people have proper jobs and there are not many openings for the IYI.

Beware the semi-erudite who thinks he is an erudite. He fails to naturally detect sophistry.

The IYI pathologizes others for doing things he doesn’t understand without ever realizing it is his understanding that may be limited. He thinks people should act according to their best interests and he knows their interests, particularly if they are “red necks” or English non-crisp-vowel class who voted for Brexit. When Plebeians do something that makes sense to them, but not to him, the IYI uses the term “uneducated”. What we generally call participation in the political process, he calls by two distinct designations: “democracy” when it fits the IYI, and “populism” when the plebeians dare voting in a way that contradicts his preferences. While rich people believe in one tax dollar one vote, more humanistic ones in one man one vote, Monsanto in one lobbyist one vote, the IYI believes in one Ivy League degree one-vote, with some equivalence for foreign elite schools, and PhDs as these are needed in the club.




More socially, the IYI subscribes to The New Yorker. He never curses on twitter. He speaks of “equality of races” and “economic equality” but never went out drinking with a minority cab driver. Those in the U.K. have been taken for a ride by Tony Blair. The modern IYI has attended more than one TEDx talk in person or watched more than two TED talks on YouTube. Not only will he vote for Hillary Monsanto-Malmaison because she seems electable and some other such circular reasoning, but holds that anyone who doesn’t do so is mentally ill.

The IYI has a copy of the first hardback edition of The Black Swan on his shelves, but mistakes absence of evidence for evidence of absence. He believes that GMOs are “science”, that the “technology” is not different from conventional breeding as a result of his readiness to confuse science with scientism.

Typically, the IYI gets first-order logic right, but not second-order (or higher) effects, making him totally incompetent in complex domains.
In the comfort of his suburban home with 2-car garage, he advocated the “removal” of Gadhafi because he was “a dictator”, not realizing that removals have consequences (recall that he has no skin in the game and doesn’t pay for results).

The IYI is member of a club to get traveling privileges; if social scientist he uses statistics without knowing how they are derived (like Steven Pinker and psycholophasters in general); when in the UK, he goes to literary festivals; he drinks red wine with steak (never white); he used to believe that fat was harmful and has now completely reversed; he takes statins because his doctor told him to do so; he fails to understand ergodicity and when explained to him, he forgets about it soon after; he doesn’t use Yiddish words even when talking business; he studies grammar before speaking a language; he has a cousin who worked with someone who knows the Queen; he has never read Frederic Dard, Libanius Antiochus, Michael Oakeshott, John Gray, Ammianus Marcellinus, Ibn Battuta, Saadiah Gaon, or Joseph De Maistre; he has never gotten drunk with Russians; he never drank to the point when one starts breaking glasses (or, preferably, chairs); he doesn’t know the difference between Hecate and Hecuba; he doesn’t know that there is no difference between “pseudointellectual” and “intellectual” in the absence of skin in the game; has mentioned quantum mechanics at least twice in the past five years in conversations that had nothing to do with physics.

He knows at any point in time what his words or actions are doing to his reputation.

But a much easier marker: he doesn’t deadlift.

Monday, 28 April 2014

The Law of Unintended Consequences - How well-intentioned laws, courts cripple growth in India

S A Aiyar in Times of India

A key reason why India’s economic growth has halved from 9% to 4.5% per year is that, in search of inclusive growth, the courts and legislatures have increasingly made legitimate business difficult. It now takes 12 years to open a new coal mine. This is not inclusive growth but paralysis and stagnation. 
The new land acquisition law aims at quick, fair acquisition. But the secretary of the Department of Industrial Policy and Promotion says the Act has made it “virtually impossible” to acquire land for roads, ports or other infrastructure. Higher compensation provided in the new law is welcome, but it also mandates a social impact assessment for each project, followed by expert group clearance, followed by an 80% vote of affected persons. Legal challenges are possible at each stage. Instead of quick, fair acquisition, we have dither and delay.
India has become a major global player in clinical trials for new drugs. But complaints have arisen against malpractices by some companies — not informing patients of the risks, not giving insurance cover or compensation, negligence leading to deaths. The obvious answer is to prosecute and jail the guilty, deterring further misdeeds. 
But in India the courts take forever to conclude cases, so misdeeds are not deterred. Instead of focusing on quick justice, the Supreme Court has decreed lengthy new procedures for clinical trials, causing huge delays and costs for legitimate activity. 
The Serum Institute of India, a top global vaccines producer, has suffered delays of over a year in clearance for Phase 3 trials of a rotavirus vaccine. So, it is shifting clinical trials to other Asian countries for this, and for a dengue vaccine too.
Lupin Pharmaceuticals, a top drug company, has a research park in Pune. But delays in clearances have forced it to shift clinical trials to Europe and Japan, despite much higher costs there. If Lupin’s procedures are good enough for Europe and Japan, they should be good enough for India. But our courts are under the illusion that good practices are created by a jungle of rules. Sorry, they are actually created by swift punishment that deters the guilty. That’s why clinical trials suffer from fewer malpractices in Europe or Japan.
The Supreme Court should focus on speedy convictions, not ever more regulations. 
Despite having the world’s third biggest reserves of iron ore and coal, India has begun importing both. The courts have banned iron mining in some states, and court inquiries into corrupt coal block allocations have frozen fresh mining. Now, illegal mining surely should be stopped. But the right way is to nail the guilty, not stop all legitimate activity. No illegal miners have been convicted beyond appeals, but many legitimate miners have suffered huge losses. 
Illegal sand mining is rampant. Sand is essential for making concrete for construction. But the courts have passed increasingly stringent rules, curbing mining from river beds on environmental grounds. This has created a huge shortage of sand, which in some states sells at Rs 1,800/tonne, more than the price of coal some years ago. Cowed by court strictures and threats of prosecution, many Collectors are playing safe by simply not issuing new sand licences or renewing old ones that expire. Faced with public outrage over illegal mining, the Green Tribunal has mandated environmental clearance (and hence delays) for even the smallest patches of sand. Will this check illegal activity? No, but it will reduce legal mining, making India even more dependent on the sand mafia for supplies. 
These examples are just the tip of the iceberg. Our courts are not designed for making policy: they are designed to judge whether actions are in accordance with the law. They are not experts in the essentially political function of balancing the needs of production and social protection.
Politicians are accountable to voters for bad policies, like those on land acquisition. But the courts are accountable to nobody for causing administrative paralysis, bankrupting honest companies, or increasing poverty by checking economic growth.
That’s why court activism should be limited to extreme cases where governments are so corrupt that intervention is essential. There’s an old judicial saying that it’s better to let many crooks go free than jail an innocent man. Yet much judicial activism penalizes innocent entrepreneurs and bureaucrats.
Misgovernance in India is not just the result of crooked politicians and businessmen. It is also the result of well-intentioned but badly designed laws. Above all, it is the result of a dysfunctional police-judicial system. Unending legal delays encourage law-breakers in every walk of life. The solution is not policy takeover by the courts, but quick justice.