
Wednesday 19 July 2023

A Level Economics 30: Profit

Difference between Normal and Abnormal Profits:

Normal Profits: Normal profits, also known as zero economic profits, refer to the minimum level of profits necessary to keep a business operating in the long run.
Normal profits are the amount of profit that covers all costs, including both explicit costs (such as wages, rent, and materials) and implicit costs (opportunity costs of using the resources).
When a firm earns normal profits, it means it is earning a return that is just sufficient to keep the owners or shareholders satisfied and willing to continue investing in the business.
In this case, the firm is neither making above-average profits nor incurring losses. It is essentially covering all costs and earning a reasonable return on investment.

Abnormal Profits: Abnormal profits, also known as economic profits or supernormal profits, occur when a firm earns more than the normal level of profits.
Abnormal profits represent a situation where a business is earning revenue that exceeds both explicit and implicit costs.
In other words, abnormal profits are above and beyond what is required to cover all costs and provide a normal return on investment.
Abnormal profits indicate that the firm has a competitive advantage, such as unique products, innovative processes, or significant market power, allowing it to generate higher revenues and outperform its competitors.

Example: Let's consider a hypothetical bakery. In a competitive market, several bakeries are operating, and each bakery is earning normal profits. They are covering their explicit costs (wages, ingredients, rent) and implicit costs (such as the opportunity cost of the owner's time and capital invested) while earning a reasonable return.

However, one particular bakery introduces a new and highly popular line of pastries that quickly becomes a favorite among customers. Due to the high demand for these pastries, this bakery starts generating significantly higher revenue compared to its competitors. As a result, it earns abnormal profits.

The bakery's abnormal profits indicate that it is earning more than the minimum necessary to cover all costs and provide a normal return. This exceptional performance could be attributed to its unique product offering or its ability to capture a significant market share. The abnormal profits act as an incentive for the bakery to continue investing in its business and potentially expand operations.

Difference between Accounting Profit and Economic Profit:

Accounting Profit: Accounting profit refers to the profit calculated using traditional accounting methods. It is the revenue generated minus explicit costs, such as wages, rent, materials, and other operating expenses.
Accounting profit does not consider implicit costs, which are the opportunity costs associated with using the resources, including the owner's time and capital invested.
Accounting profit provides a financial measure of a firm's performance according to the accepted accounting principles and is primarily used for financial reporting and tax purposes.

Economic Profit: Economic profit is a broader measure of profit that considers both explicit and implicit costs, providing a more comprehensive view of a firm's profitability.
Economic profit subtracts both explicit and implicit costs from total revenue to calculate the true economic benefit or return on investment.
Implicit costs include the opportunity costs of resources, such as the foregone earnings from alternative uses of capital or the owner's time.
Economic profit represents the net benefit of using resources in a particular business venture compared to their next best alternative use.

Example: Let's say an entrepreneur starts a business and calculates an accounting profit of $100,000 per year. This profit is derived by subtracting explicit costs, such as $300,000 in operating expenses (wages, rent, materials), from total revenue of $400,000.

However, when considering economic profit, the entrepreneur realizes that the implicit costs of the business are significant. They estimate that if they were not running their own business, they could earn an annual salary of $80,000 in a similar industry. This opportunity cost of their time and potential earnings is an implicit cost that must be factored in.

Therefore, the economic profit would be calculated as total revenue ($400,000) minus both explicit costs ($300,000) and implicit costs ($80,000), resulting in an economic profit of $20,000.

In this example, the accounting profit is $100,000, reflecting the revenue left after deducting explicit costs. However, when considering the implicit costs or the opportunity cost of the entrepreneur's time, the economic profit becomes $20,000, indicating the true net benefit of running the business compared to the next best alternative use of resources.
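To make the arithmetic concrete, here is a minimal Python sketch of the two measures, using the figures from the example above (total revenue of $400,000, explicit costs of $300,000, and the $80,000 foregone salary as the implicit cost). The function names are illustrative, not standard terminology.

```python
def accounting_profit(total_revenue: float, explicit_costs: float) -> float:
    """Accounting profit: revenue minus explicit (out-of-pocket) costs."""
    return total_revenue - explicit_costs


def economic_profit(total_revenue: float, explicit_costs: float,
                    implicit_costs: float) -> float:
    """Economic profit: revenue minus both explicit and implicit
    (opportunity) costs. An economic profit of zero is a normal profit."""
    return total_revenue - explicit_costs - implicit_costs


revenue, explicit, implicit = 400_000, 300_000, 80_000
print(accounting_profit(revenue, explicit))           # 100000
print(economic_profit(revenue, explicit, implicit))   # 20000
```

A positive economic profit ($20,000 here) is the abnormal profit described earlier; an economic profit of exactly zero corresponds to normal profit, where the owner earns just enough to keep the resources in the business.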

Saturday 15 July 2023

A Level Economics 16: The Supply Curve

 Why do supply curves normally slope upward from left to right?


Supply curves typically slope upward from left to right due to the law of supply, which states that producers are willing to supply more of a good at higher prices and less at lower prices. Several factors contribute to this upward-sloping pattern:

  1. Production Costs: As the price of a good increases, producers have a greater incentive to supply more of it because higher prices often result in higher profits. However, producing additional units may require additional resources and incur higher production costs. For instance, suppliers may need to invest in additional labor, raw materials, or machinery, which can increase their costs. To cover these increased costs and earn higher profits, producers are willing to supply more at higher prices.

  2. Opportunity Costs: Opportunity cost refers to the value of the next best alternative forgone when making a choice. When the price of a good rises, suppliers face an opportunity cost of producing alternative goods they could have produced instead. As a result, suppliers allocate more resources and production efforts to the higher-priced good, which leads to an increase in supply.

  3. Increasing Marginal Costs: The concept of increasing marginal costs also contributes to the upward slope of the supply curve. As production increases, producers may encounter diminishing returns or face constraints that make it increasingly expensive to produce additional units. This results in higher marginal costs of production, which necessitates higher prices to justify supplying additional units of the good.

  4. Technological Constraints: Technological limitations can also influence the upward slope of the supply curve. Suppliers may face constraints in terms of production capacity, available technology, or access to resources. As the quantity supplied increases, producers may need to invest in more advanced technology or incur additional costs to expand production capacity, which can lead to higher prices.

  5. Supplier Behavior: Suppliers' expectations and behavior can influence the upward slope of the supply curve. If producers anticipate that prices will rise in the future, they may reduce current supply to take advantage of the expected higher prices. Conversely, if producers anticipate falling prices, they may increase current supply to avoid potential losses. Such behavior aligns with the upward-sloping supply curve.

Overall, the upward slope of the supply curve reflects the positive relationship between price and quantity supplied. Higher prices incentivize producers to allocate more resources, incur higher production costs, and overcome technological constraints to supply larger quantities of a good. This relationship captures the fundamental dynamics of supply in response to price changes.
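As a hedged illustration of points 1 to 3, the sketch below assumes a linear marginal cost curve, MC = a + b·q, so a price-taking producer supplies up to the quantity where price equals marginal cost, q = (P − a)/b. The numbers are invented; the point is only that quantity supplied rises with price, tracing out an upward-sloping supply curve.

```python
def quantity_supplied(price: float, mc_intercept: float = 2.0,
                      mc_slope: float = 0.5) -> float:
    """Quantity at which price equals marginal cost, assuming
    MC = mc_intercept + mc_slope * q. Below the intercept, supply is zero."""
    return max(0.0, (price - mc_intercept) / mc_slope)


# Higher prices justify the rising marginal cost of extra units,
# so quantity supplied increases with price.
for price in [1, 3, 5, 7, 9]:
    print(f"price {price}: quantity supplied {quantity_supplied(price):.1f}")
```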

Friday 16 June 2023

Fallacies of Capitalism 12: The Lump of Labour Fallacy

The Lump of Labour Fallacy

The lump of labor fallacy is a mistaken belief that there is only a fixed amount of work or jobs available in an economy. It suggests that if someone gains employment or works fewer hours, it must mean that someone else loses a job or remains unemployed. However, this idea is flawed.

Here's a simple explanation:

  1. Fixed Pie Fallacy: Imagine a pie that represents all the available work in the economy. The lump of labor fallacy assumes that the pie is fixed, and if one person takes a larger slice (more work), there will be less left for others. This assumption overlooks the potential for economic growth and the creation of new opportunities.

Example: "Assuming that there is only a fixed amount of work available is like believing that the pie will never grow bigger, even when more bakers join the kitchen."

  2. Technological Advancements: Technological progress often leads to increased productivity and efficiency. While it may replace certain jobs, it also creates new ones. The lump of labor fallacy fails to account for the dynamic nature of the job market and how innovation can generate fresh employment opportunities.

Example: "When ATMs were introduced, people worried that bank tellers would become jobless. However, the technology not only made banking more convenient but also led to the emergence of new roles in customer service and technology maintenance."

  3. Changing Demand and Specialization: Economic shifts and changes in consumer preferences continually reshape the job market. As demand for certain products or services diminishes, it opens up avenues for new industries and occupations to thrive. The lump of labor fallacy overlooks this adaptive nature of economies.

Example: "When the demand for typewriters declined, many feared that typists would become unemployed. However, the rise of computers and the internet created a surge in demand for IT specialists and web developers."

In summary, the lump of labor fallacy wrongly assumes that there is a limited amount of work available, failing to consider factors like economic growth, technological advancements, and changing market demands. By understanding the dynamic nature of economies, we can see that job opportunities can expand and transform rather than being fixed or limited.

Fallacies of Capitalism 1: Inevitability of Inequality

How does the 'inevitability of inequality' fallacy ignore the role of social and institutional factors in perpetuating the unequal distribution of wealth and opportunities in a capitalist system?


The "inevitability of inequality" fallacy suggests that inequality is a natural and unavoidable outcome of a capitalist system, implying that it is inherently fair and just. However, this fallacy ignores the significant role of social and institutional factors that contribute to the unequal distribution of wealth and opportunities. Let me break it down with some simple examples:

  1. Unequal starting points: In a capitalist system, individuals have different starting points due to factors like family wealth, education, and social connections. These disparities make it harder for those with fewer resources to compete on an equal footing. For instance, imagine two children who want to become doctors. One child comes from a wealthy family with access to the best schools and tutors, while the other child comes from a low-income family and attends underfunded schools. The unequal starting points put the second child at a significant disadvantage, limiting their opportunities for success.

  2. Discrimination and bias: Social factors such as discrimination based on race, gender, or socioeconomic status can perpetuate inequality. Discrimination may lead to unequal treatment in hiring practices, education, or access to resources. For example, imagine a qualified job applicant who is denied a position because of their gender or ethnicity, while a less qualified candidate from a privileged background is chosen. Discrimination hinders individuals' ability to succeed and reinforces inequality in society.

  3. Power imbalances: Capitalist systems often concentrate power and wealth in the hands of a few individuals or corporations. These powerful entities can influence policies, regulations, and institutions to their advantage, further perpetuating inequality. For instance, consider a large corporation that has significant political influence. They may lobby for policies that favour their interests, such as tax breaks or deregulation, while undermining measures that could reduce inequality, such as progressive taxation or workers' rights.

  4. Lack of social mobility: Inequality can persist if social and institutional factors make it difficult for individuals to move up the social ladder. For example, imagine a society where access to quality education is primarily determined by wealth. If children from low-income families are unable to receive a good education, it becomes challenging for them to break the cycle of poverty and improve their economic prospects. This lack of social mobility reinforces existing inequalities over generations.

These examples demonstrate that the "inevitability of inequality" fallacy overlooks the social and institutional factors that contribute to the unequal distribution of wealth and opportunities in a capitalist system. By recognising these factors and working towards creating a more equitable society, we can address and reduce the systemic barriers that perpetuate inequality.

Thursday 15 June 2023

What elite American universities can learn from Oxbridge

Simon Kuper in The FT  

Both the US and UK preselect their adult elites early, by admitting a few 18-year-olds into brand-name universities. Everyone else in each age cohort is essentially told, “Sorry kid, probably not in this lifetime.”  

The happy few come disproportionately from rich families. Many Ivy League colleges take more students from the top 1 per cent of household incomes than the bottom 60 per cent. Both countries have long agonised about how to diversify the student intake. Lots of American liberals worry that ancestral privilege will be further cemented at some point this month, when the Supreme Court is expected to outlaw race-conscious affirmative action in university admissions. 

Whatever the court decides, US colleges have ways to make themselves more meritocratic. They could learn from Britain’s elite universities, which, in just the past few years, have become much more diverse in class and ethnicity. It’s doable, but only if you want to do it — which the US probably doesn’t. 

Pressure from the government helped embarrass Oxford and Cambridge into overhauling admissions. (And yes, we have to fixate on Oxbridge because it’s the main gateway to the adult elite.) On recent visits to both universities, I was awestruck by the range of accents, and the scale of change. Oxbridge colleges now aim for “contextual admissions”, including the use of algorithms to gauge how much disadvantage candidates have surmounted to reach their academic level. For instance: was your school private or state? What proportion of pupils got free school meals? Did your parents go to university?  

Admissions tutors compare candidates’ performance in GCSEs — British exams taken aged 16 — to that of their schoolmates. Getting seven As at a school where the average is four counts for more than getting seven at a school that averages 10. The brightest kid at an underprivileged school is probably smarter than the 50th-best Etonian. 
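The article does not describe the universities' actual algorithms, but the GCSE comparison it reports can be sketched in miniature. Everything below (the function, the inputs, the interpretation) is a hypothetical illustration of contextual admissions, not Oxbridge's real scoring.

```python
def contextual_uplift(candidate_top_grades: int, school_average: float) -> float:
    """Hypothetical contextual measure: how far a candidate's count of top
    GCSE grades sits above (or below) their school's average."""
    return candidate_top_grades - school_average


# Seven top grades where the school average is four signals more surmounted
# disadvantage than seven where the average is ten.
print(contextual_uplift(7, 4))    #  3.0
print(contextual_uplift(7, 10))   # -3.0
```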

Oxbridge has made admissions interviews less terrifying for underprivileged students, who often suffer from imposter syndrome. If a bright working-class kid freezes at interview, one Oxford tutor told me he thinks: “I will not let you talk yourself out of a place here.” And to counter the interview coaching that private-school pupils receive, Oxford increasingly hands candidates texts they haven’t seen before. 

Oxbridge hosts endless summer schools and open days for underprivileged children. The head of one Oxford college says that it had at least one school visit every day of term. The pupils are shown around by students from similar backgrounds. The message to the kids is: “You belong here.” 

It’s working. State schools last year provided a record 72.5 per cent of Cambridge’s British undergraduate admissions. From 2018 to 2022, more than one in seven UK-domiciled Oxford undergraduates came from “socio-economically disadvantaged areas”. Twenty-eight per cent of Oxford students identified as “black and minority ethnic”; slightly more undergraduates now are women than men. Academics told me that less privileged students are more likely to experience social or mental-health problems, but usually get good degrees. These universities haven’t relaxed their standards. On the contrary, by widening the talent pool, they are finding more talent. 

Elite US colleges could do that even without affirmative action. First, they would have to abolish affirmative action for white applicants. A study led by Peter Arcidiacono of Duke University found that more than 43 per cent of white undergraduates admitted to Harvard from 2009 to 2014 were recruited athletes, children of alumni, “on the dean’s interest list” (typically relatives of donors) or “children of faculty and staff”. Three-quarters wouldn’t have got in otherwise. This form of corruption doesn’t exist in Britain. One long-time Oxford admissions tutor told me that someone in his job could go decades without even being offered a donation as bait for admitting a student. Nor do British alumni expect preferential treatment for their children. 

The solutions to many American societal problems are obvious if politically unfeasible: ban guns, negotiate drug prices with pharmaceutical companies. Similarly, elite US universities could become less oligarchical simply by agreeing to live with more modest donations — albeit still the world’s biggest. Harvard’s endowment of $50.9bn is more than six times that of the most elite British universities. 

But US colleges probably won’t change, says Martin Carnoy of Stanford’s School of Education. Their business model depends on funding from rich people, who expect something in return. He adds: “It’s the same with the electoral system. Once you let private money into a public good, it becomes unfair.” 

Both countries have long been fake meritocracies. The US intends to remain one.

Tuesday 28 June 2022

Every Decision is a Bet: Life is poker not chess - 2

Abridged and adapted from Thinking in Bets by Annie Duke

 



Merriam-Webster’s Online Dictionary defines ‘bet’ as ‘a choice made by thinking about what will probably happen’, ‘to risk losing (something) when you try to do or achieve something’, and ‘to make decisions that are based on the belief that something will happen or is true’.


These definitions capture the often-overlooked broader aspects of betting: choice, probability, risk, decision, belief. By these definitions, betting doesn’t have to take place only in a casino or against somebody else.


We routinely decide among alternatives, put resources at risk, assess the likelihood of different outcomes and consider what it is that we value. Every decision commits us to some course of action that, by definition, eliminates acting on other alternatives. All such decisions are bets. Not placing a bet on something is, itself, a bet.


Choosing to go to the movies means that we are choosing not to do all other things with our time. If we accept a job offer, we are also choosing to foreclose all other alternatives. There is always an opportunity cost in choosing one path over others. This is betting in action.


The betting elements of decisions - choice, probability, risk etc. - are more obvious in some situations than others. Investments are clearly bets. A decision about a stock (buy, don’t buy, sell, hold...) involves a choice about the best use of our financial resources.


We don’t think of our parenting choices as bets but they are. We want our children to be happy, productive adults when we send them out into the world. Whenever we make a parenting choice (about discipline, nutrition, parenting philosophy, where to live etc.), we are betting that our choice will achieve the future we want for our children.


Job and relocation decisions are bets. Sales negotiations and contracts are bets. Buying a house is a bet. Ordering the chicken instead of vegetables is a bet. Everything is a bet.


Most bets are bets against ourselves


In most of our decisions, we are not betting against another person. We are betting against all the future versions of ourselves that we are not choosing. Whenever we make a choice we are betting on a potential future. We are betting that the future version of us that results from the decisions we make will be better off. At stake in a decision is that the return to us (measured in money, time, happiness, health or whatever we value) will be greater than what we are giving up by betting against the other alternative future versions of us.


But, how can we be sure that we are choosing the alternative that is best for us? What if another alternative would bring us more happiness, satisfaction or money? The answer, of course, is we can’t be sure. Things outside our control (luck) can influence the result. The futures we imagine are merely possible. They haven’t happened yet. We can only make our best guess, given what we know and don’t know, at what the future will look like. When we decide, we are betting whatever we value on one set of possible and uncertain futures. That is where the risk is.
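Duke's framing of a decision as a bet on uncertain futures is, in effect, an expected-value comparison. A minimal sketch follows, with invented probabilities and payoffs (the figures and the job-offer scenario are mine for illustration, not the book's):

```python
def expected_value(outcomes: list[tuple[float, float]]) -> float:
    """Probability-weighted average payoff of one alternative.
    Each outcome is a (probability, payoff) pair; probabilities sum to 1."""
    return sum(prob * payoff for prob, payoff in outcomes)


# Invented example: accept a job offer vs stay put, with payoffs in
# whatever units you value (money, time, happiness, health).
accept_offer = [(0.6, 150.0), (0.4, 40.0)]   # could thrive, could regret it
stay_put = [(0.9, 80.0), (0.1, 60.0)]        # safer, narrower range

print(expected_value(accept_offer))  # 106.0
print(expected_value(stay_put))      # 78.0
```

The comparison never removes the uncertainty; it only makes explicit, as poker players do, what is being wagered on each possible future.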


Poker players live in a world where that risk is made explicit. They can get comfortable with uncertainty because they put it up front in their decisions. Ignoring the risk and uncertainty in every decision might make us feel better in the short run, but the cost to the quality of our decision making can be immense. If we can find ways to be more comfortable with uncertainty, we can see the world more accurately and be better for it. 


Monday 14 February 2022

English football: why are there so few black people in senior positions?

Simon Kuper in The FT







Possibly the only English football club run mostly by black staff is Queens Park Rangers, in the Championship, the English game’s second tier. 

QPR’s director of football, Les Ferdinand, and technical director, Chris Ramsey, have spent their entire careers in the sport watching hiring discrimination persist almost everywhere else. Teams have knelt in protest against racism, but Ferdinand says, “I didn’t want to see people taking the knee. I just wanted to see action. I’m tired of all these gestures.”  

Now a newly founded group, the Black Footballers Partnership (BFP), argues that it is time to adopt compulsory hiring quotas for minorities. Voluntary measures have not worked, says its executive director, Delroy Corinaldi. 

The BFP has commissioned a report from Stefan Szymanski (economics professor at the University of Michigan, and my co-author on the book Soccernomics) to document apparent discrimination in coaching, executive and scouting jobs. 

It is a dogma of football that these roles must be filled by ex-players — but only, it seems, by white ones. Last year 43 per cent of players in the Premier League were black, yet black people held “only 4.4 per cent of managerial positions, usually taken by former players” and 1.6 per cent of “executive, leadership and ownership positions”, writes Szymanski. 

Today 14 per cent of holders of the highest coaching badge in England, the Uefa Pro Licence, are black, but they too confront prejudice. Looking ahead, the paucity of black scouts and junior coaches is keeping the pipeline for bigger jobs overwhelmingly white. Corinaldi hopes that current black footballers will follow England’s forward Raheem Sterling in calling for more off-field representation. 

There have been 28 black managers in the English game since the Football League was founded in 1888, calculates Corinaldi. As for the Premier League, which has had 11 black managers in 30 years, he says: “Sam Allardyce [an ex-England manager] has had nearly as many roles as the whole black population.” The situation is similar in women’s football, says former England international Anita Asante. 

Ramsey, who entered coaching in the late 1980s, when he says “there were literally no black coaches”, reflects: “There’s always a dream that you’re going to make the highest level, so naively you coach believing that your talent will get you there, but very early on I realised that wasn’t going to happen.”  

Reluctant to hire 

He says discrimination in hiring is always unspoken: “People hide behind politically correct language. They will take a knee, and say, ‘I’m all for it’. You’re just never really seen as able to do the job. And then people sometimes employ people less qualified than you. Plenty of white managers have failed, and I just want to have the opportunity to be as bad as them, and to be given an opportunity again. You don’t want to have to be better just because you’re black.” 

When Ferdinand’s glittering playing career ended, he worried that studying for his coaching badges might “waste five years of my life”, given that the white men running clubs were reluctant to hire even famous black ex-players such as John Barnes and Paul Ince. In Ferdinand’s first seven years on the market, he was offered one managerial job. “People tend to employ what looks, sounds and acts like them,” he shrugs. Yet he says he isn’t angry: “Anger’s not the right word, because that’s unfortunately how they see a lot of young black men, as angry.” 

He suspects QPR hired him in part because its then co-chair, the Malaysian Tony Fernandes, is a person of colour. After the two men met and began talking, recalls Ferdinand, “he said, ‘Why are you not doing this job [management] in football?’ I said, ‘Because I’ve not been given the opportunity.’ The conversations went from there. Had he not been a person of colour, I perhaps wouldn’t have had the opportunity to talk to him in the way that I did.” 

Szymanski can identify only two black owners in English football, both at small clubs: Ben Robinson of Burton Albion, and Ryan Giggs, co-owner of Salford City. 

Szymanski believes discrimination persists for managerial jobs in part because football managers have little impact on team performance — much less than is commonly thought. He calculates that over 10 seasons, the average correlation between a club’s wage bill for players and its league position exceeds 90 per cent. If the quality of players determines results almost by itself, then managers are relatively insignificant, and so clubs can continue to hire the stereotype manager — a white male ex-player aged between 35 and 55 — without harming their on-field performance. 
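Szymanski's figure describes a statistical relationship across seasons. As a rough sketch of how such a correlation might be computed (the data below are invented, not his), one can correlate clubs' wage-bill ranks against their final league positions:

```python
def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)


# Invented data: wage-bill rank (1 = biggest spender) vs league position.
wage_rank = [1, 2, 3, 4, 5, 6]
league_position = [1, 3, 2, 4, 6, 5]

print(round(pearson(wage_rank, league_position), 2))  # 0.89
```

A coefficient near 1 means the wage bill alone largely predicts the final table, leaving little unexplained variation for the manager to account for.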

For about 20 years, English football has launched various fruitless attempts to address discrimination. Ramsey recalls the Football Association — the national governing body — inviting black ex-players to “observe” training sessions. He marvels: “You’re talking about qualified people with full badges standing and watching people train. And most of them have been in the game longer than the people they’re watching.” 

Modest though that initiative was, Ferdinand recalls warning FA officials: “A certain amount of people at St George’s Park [the FA’s National Football Centre], when you tell them this is the initiative, their eyes will be rolling and thinking, ‘Here we go, we’re doing something for them again, we’re trying to give them another opportunity.’ What those people don’t realise is: we don’t get opportunities.”  

Rooney Rule 

After the NFL of American gridiron football introduced the Rooney Rule in 2003, requiring teams to interview minority candidates for job openings, the English ex-player Ricky Hill presented the idea to the League Managers Association. Ramsey recalls, “Everyone said, ‘God, this is brilliant’.” Yet only in the 2016/2017 season did 10 smaller English clubs even pilot the Rooney Rule. Ramsey says: “We are expected to accept as minority coaches that these things take a long time. I have seen this train move along so slowly that it’s ridiculous.” He mourns the black managerial careers lost in the wait. 

In 2019 the Rooney Rule was made mandatory in the three lower tiers of English professional football, though not in the Premier League or anywhere else in Europe. Clubs had to interview at least one black, Asian or minority ethnic (Bame) candidate (if any applied) for all first team managerial, coaching and youth development roles. Why didn’t the rule noticeably increase minority hiring? Ferdinand replies, “Because there’s nobody being held accountable to it. What is the Rooney Rule? You give someone the opportunity to come through the door and talk.” Moreover, English football’s version of the rule has a significant loophole: clubs are exempt if they interview only one candidate, typically someone found through the white old boys’ network. 

Nor has the Rooney Rule made much difference in the NFL. In 2020, 57.5 per cent of the league’s players were black, but today only two out of 32 head coaches are, while one other identifies as multiracial. This month, the former Miami Dolphins coach Brian Flores filed a lawsuit against the NFL and three clubs, accusing them of racist and discriminatory practices. He and other black coaches report being called for sham interviews for jobs that have already been filled, as teams tick the Rooney Rule’s boxes. 

Voluntary diversity targets 

In 2020 England’s FA adopted a voluntary “Football Leadership Diversity Code”. Only about half of English professional clubs signed it. They committed to achieving percentage targets for Bame people among new hires: 15 per cent for senior leadership and team operations positions, and 25 per cent for men’s coaching — “a discrepancy in goals that itself reflects the problem”, comments Szymanski. Clubs were further allowed to water down these targets “based on local demographics”. 

The FA said: “The FA is deeply committed to ensuring the diversity of those playing and coaching within English football is truly reflective of our modern society. 

“We’re focused on increasing the number of, and ongoing support for, coaches who have been historically under-represented in the game. This includes a bursary programme for the Uefa qualifications required to coach in academy and senior professional football.” 

A report last November showed mixed results. Many clubs had missed the code’s targets, with several Premier League clubs reporting zero diversity hires. On the other hand, more than 20 per cent of new hires in men’s football were of Bame origin, which was at least well above historical hiring rates. 

Do clubs take the code seriously? Ferdinand smiles ironically: “From day one I didn’t take it seriously. Because it’s a voluntary code. What’s the repercussions if you don’t follow the voluntary code? No one will say anything, no one will do anything about it.”  

The BFP and the League Managers Association have called for the code’s targets to be made compulsory. Ferdinand cites the example of countries that set mandatory quotas for women on corporate boards of listed companies. 

Asante says it takes minorities in positions of power to understand the problems of minorities. “If you are a majority in any group, when are you ever thinking about the needs of others?” Corinaldi adds: “When you have a monoculture in any boardroom, you only know what you know, and it tends to be the same stories you heard growing up.” He predicts that once football has more black directors and senior executives, they will hire more diversely. 

The BFP’s model for English football is the National Basketball Association in the US, a 30-team league with 14 African-American head coaches. For now, that feels like a distant utopia. Ramsey warns: “If there is no revolutionary action, we’ll be having this same conversation in 10 years’ time.” And he remembers saying exactly those words 10 years ago.

Monday 17 May 2021

How to avoid the return of office cliques

Some managers are wary of telling staff that going into a workplace has networking benefits, writes Emma Jacobs in The FT

After weighing up the pros and cons of future working patterns, Dropbox decided against the hybrid model — when the working week is split between the office and home. “It has some pretty significant drawbacks,” says Melanie Collins, chief people officer. Uppermost is that it “could lead to issues with inclusion, or disparities with respect to performance or career trajectory”. In the end, the cloud storage and collaboration platform opted for a virtual-first policy, which prioritises remote work over the office. 

As offices open, there are fears that if hybrid is mismanaged, organisational power will revert to the workplace with executives forming in-office cliques and those employees who seek promotion and networking opportunities switching back to face time with senior staff as a way to advance their careers.

The office pecking order 

Status-conscious workers may be itching to return to the office, says Tomas Chamorro-Premuzic, professor of business psychology at Columbia University and UCL. “Humans are hierarchical by nature, and the office always conveyed status and hierarchy — car parking spots, cars, corner office, size, windows. The risk now is that, in a fully hybrid and flexible world, status ends up positively correlated with the number of days at the office.” 

This could create a two-tier workforce: those who want flexibility to work from home — notably those with caring responsibilities — and those who gravitate towards the office. Rosie Campbell, professor of politics and director of the Global Institute for Women’s Leadership at King’s College London, says that past research has shown that “part-time or remote workers tend not to get promoted”. This has been described as the “flexibility stigma”, defined as the “discrimination and negative perception towards workers who work flexibly, and [consequent] negative career outcomes”.

Research by Heejung Chung, reader in sociology and social policy at Kent University, carried out before the pandemic, found that “women, especially mothers (of children below 12) [were] likely to have experienced some sort of negative career consequence due to flexible working”. Lockdowns disproportionately increased caring responsibilities for women, through home-schooling and closure of childcare facilities. 

Missing out on career development 

Some companies are creating regional hubs or leasing local co-working spaces so that workers can go to offices closer to home, reducing commute times and the costs of expensive office space. Lloyds Banking Group is among a number of banks, for example, that have said they will use surplus space in their branches for meetings. The risk, Campbell says, is workers using local offices miss out on exposure to senior leaders and larger networks that might advance their careers. “People might say it’s easier to be at home or use suburban hubs but it might actually be better to go into the office. Regional or suburban hubs are giving you a place to work that isn’t at home but isn’t giving you any of the face time.” 

Employers and team leaders may need to be explicit about the purpose of the office: not only is it a good place for collaborating with teams and serendipitous conversations but also for networking.  
 
Mark Mortensen, associate professor of organisational behaviour at Insead, points out it is difficult — and paternalistic — as a manager to suggest an employee spends more time in the office to boost their career. A recent opinion article by Cathy Merrill, chief executive of Washingtonian Media, in the Washington Post, sparked a huge backlash on social media and, more importantly, among her employees, for arguing that those who do not return to the office might find themselves out of a job. “The hardest people to let go are the ones you know,” she wrote.

Her staff felt their remote work had been unappreciated and were angry that they had not been consulted over future work plans — so they went on strike. 

Mortensen does not advise presenting staff with job loss threats, but puts forward a case for frank and open conversations about the value of time in the office. “Informal networks aren’t just nice to have, they are important. We need to tell people the risk is if you are working remotely you will be missing out on something that might prove beneficial in your career. It’s tough. People will say they sell things on their skills but you have to be honest and say that relationships are important. Weak ties can be the most critical in shaping people’s career paths.” 

The problem is that after dealing with a pandemic and lockdowns, workers may not be in the best place to know what they want out of future work patterns. Chamorro-Premuzic says he fears that even people who are enjoying it right now may not realise “they are burnt out. It’s like the introvert who likes working from home, they’re playing to their strength — staying in their own comfort zone.”

Examine workplace culture 

As employers try to configure ways of working they need to scrutinise workplace culture and find out why employees might prefer to be at home. Some will have always felt excluded from networks and sponsorship in the office — and being away from it means that they do not have to think about it. 

Future Forum, Slack’s future of work think-tank, found that black knowledge workers were more likely to prefer a hybrid or remote work model because the office was a frequent reminder “of their outsider status in both subtle (microaggressions) and not-so-subtle (overt discrimination) ways”. It said the solution was not to give “black employees the ability to work from home, while white executives return to old habits [but] about fundamentally changing your own ways of working and holding people accountable for driving inclusivity in your workplace”. 

Some experts believe that the pandemic has fundamentally altered workplace behaviour. Tsedal Neeley, professor of business administration at Harvard Business School and author of Remote Work Revolution, is optimistic. “Individuals are worried about their career trajectory because the paranoia is, ‘If we don’t go to the office will we get the same opportunities and career mobility if we’re not physically in the office?’ These would be very legitimate worries 13 months ago but less of a concern now.” 

Chung co-authored a report by Birmingham University that found more fathers taking on caring responsibilities and an increase in the “number of couples who indicate that they have shared housework [and] care activities during lockdown”. This might shift couples’ attitudes to splitting work and home duties and alter employers’ stigmatisation of flexible working. 

Prevent an in-crowd 

There are some measures that employers can take to try to prevent office cliques forming. Some workplaces will require teams to come in on the same days so employees get access to their manager, rather than leaving it to individuals to arrange their own office schedules. However, this would mean team members might not get access to senior leaders or form ties with other teams, as they might have done when the office was the default.

Lauren Pasquarella Daley, senior director of women and the future of work at Catalyst, a non-profit that advocates for women at work, says senior executives need to be “intentional about sponsorship and mentoring” rather than letting these relationships form by chance. 

They must also be role models for flexible working. “If employees don’t feel it’s OK to take advantage of remote work then they won’t do so.” This means ensuring meetings are documented. If, for example, one person is working outside the office then everyone needs to act as if they are remote, too. 

Chamorro-Premuzic says managers should work on the assumption that in-office cliques will form. This means organisations need to put in place better measures of objectives, performance measures independent of where people are, as well as measuring and monitoring bias (for example, if you know how often people come to work, you can test whether there is a correlation between being at work and getting a positive performance review, which would suggest bias or adverse impact), and training leaders and managers on how to be inclusive. 

“We may not have tonnes of data on the disparate impact of hybrid policies on underprivileged groups, but it is naive to assume it won’t happen. The big question is how to mitigate it,” he says.

Saturday 6 February 2021

The parable of John Rawls

Janan Ganesh in The FT


In the latest Pixar film, Soul, every human life starts out as a blank slate in a cosmic holding pen. Not until clerks ascribe personalities and vocations does the corporeal world open. As all souls are at their mercy, there is fairness of a kind. There is also chilling caprice. And so Pixar cuts the stakes by ensuring that each endowment is benign. No one ends up with dire impairments or unmarketable talents in the “Great Before”. 

Kind as he was (a wry Isaiah Berlin, it is said, likened him to Christ), John Rawls would have deplored the cop-out. This year is the 50th anniversary of the most important tract of political thought in the last century or so. To tweak the old line about Plato, much subsequent work in the field amounts to footnotes to A Theory of Justice. Only some of this has to do with its conclusions. The method that yielded them was nearly as vivid. 

Rawls asked us to picture the world we should like to enter if we had no warning of our talents. Nor, either, of our looks, sex, parents or even tastes. Don this “veil of ignorance”, he said, and we would maximise the lot of the worst-off, lest that turned out to be us. As we brave our birth into the unknown, it is not the average outcome that troubles us. 

From there, he drew principles. A person’s liberties, which should go as far as is consistent with those of others, can’t be infringed. This is true even if the general welfare demands it. As for material things, inequality is only allowed insofar as it lifts the absolute level of the poorest. Some extra reward for the hyper-productive: yes. Flash-trading or Lionel Messi’s leaked contract: a vast no. Each of these rules puts a floor — civic and economic — under all humans. 

True, the phrase-making helped (“the perspective of eternity”). So did the timing: 1971 was the Keynesian Eden, before Opec grew less obliging. But it was the depth and novelty of Rawls’s thought that brought him reluctant stardom. 

Even those who denied that he had “won” allowed that he dominated. Utilitarians, once-ascendant in their stress on the general, said he made a God of the individual. The right, sure that they would act differently under the veil, asked if this shy scholar had ever met a gambler. But he was their reference point. And others’ too. A Theory might be the densest book to have sold an alleged 300,000 copies in the US alone. It triumphed. 

And it failed. Soon after it was published, the course of the west turned right. The position of the worst-off receded as a test of the good society. Robert Nozick, Rawls’s libertarian Harvard peer, seemed the more relevant theorist. It was a neoliberal world that saw both men out in 2002.

An un-public intellectual, Rawls never let on whether he cared. Revisions to his theory, and their forewords, suggest a man under siege, but from academic quibbles not earthly events. For a reader, the joy of the book is in tracking a first-class mind as it husbands a thought from conception to expression. Presumably that, not averting Reaganism, was the author’s aim too. 

And still the arc of his life captures a familiar theme. It is the ubiquity of disappointment — even, or especially, among the highest achievers. Precisely because they are capable of so much, some measure of frustration is their destiny. I think of Tony Blair, thrice-elected and still, post-Brexit, somehow defeated. (Sunset Boulevard, so good on faded actors, should be about ex-politicians.) Or of friends who have made fortunes but sense, and mind, that no one esteems or much cares about business. 

The writer Blake Bailey tells an arresting story about Gore Vidal. The Sage of Amalfi was successful across all literary forms save poetry. He was rich enough to command one of the grandest residential views on Earth. If he hadn’t convinced Americans to ditch their empire or elect him to office, these were hardly disgraces. On that Tyrrhenian terrace, though, when a friend asked what more he could want, he said he wanted “200 million people” to “change their minds”. At some level, however mild his soul, so must have Rawls.

Thursday 31 December 2020

Hope for Britain after Brexit

Those who predict economic Armageddon ignore the reality. The status quo wasn’t working – now there’s an opportunity for change, writes Larry Elliott in The Guardian

‘The mass exodus of banks and other financial institutions from the City of London, predicted since June 2016, has not materialised.’ View over the Thames to the City. Photograph: Niklas Halle’n/AFP/Getty Images

So this is it. Forty-eight years after Britain joined what was then the European Economic Community, the fasten seatbelt signs are switched on and the cabin lights have been dimmed. It is time for departure.

Many in the UK, especially on the left, are in despair that this moment has arrived. For them, this can never be the journey to somewhere better: instead it is the equivalent of the last helicopter leaving the roof of the US embassy in Saigon in 1975.

The lefties who voted for Brexit see it differently. For them (us, actually, because I am one of them), the vote to leave was historically progressive. It marked the rejection, by those who demanded their voice be heard, of a status quo that was delivering only for the better off. Far from being a reactionary spasm, Brexit was democracy in action.

Now the UK has a choice. It can continue to mourn or it can take advantage of the opportunities that Brexit has provided. For a number of reasons, it makes sense to adopt the latter course.

For a start, it is clear that the UK has deep, structural economic problems despite – and in some cases because of – almost half a century of EU membership. Since 1973, the manufacturing base has shrivelled, the trade balance has been in permanent deficit, and the north-south divide has widened. Free movement of labour has helped entrench Britain’s reputation as a low-investment, low-productivity economy. Brexit means that those farmers who want their fruit harvested will now have to do things that the left ought to want: pay higher wages or invest in new machinery.

The part of the economy that has done best out of EU membership has been the bit that needed least help: the City of London. Each country in the EU has tended to specialise: the Germans do the high-quality manufactured goods; France does the food and drink; the UK does the money. Yet the mass exodus of banks and other financial institutions that has been predicted since June 2016 has not materialised, because London is a global as well as a European financial centre. The City will continue to thrive.

If there are problems with the UK economy, it is equally obvious there are big problems with the EU as well: slow growth, high levels of unemployment, a rapidly ageing population. The single currency – which Britain fortunately never joined – has failed to deliver the promised benefits. Instead of convergence between member states there has been divergence; instead of closing the gap in living standards with the US, the eurozone nations have fallen further behind.

In their heads, those predicting Armageddon for the UK imagine the EU to still be Germany’s miracle economy – the Wirtschaftswunder – of the 1960s. The reality is somewhat different. It is Italy, where living standards are no higher than they were when the single currency was introduced two decades ago. It is Greece, forced to accept ideologically motivated austerity in return for financial support. The four freedoms of the single market – no barriers to the movement of goods, services, people and capital – are actually the four pillars of neoliberalism.

The Covid-19 crisis has demonstrated the importance of nation states and the limitations of the EU. Britain’s economic response to the pandemic was speedy and coordinated: the Bank of England cut interest rates and boosted the money supply while the Treasury pumped billions into the NHS and the furlough scheme. It has taken months and months of wrangling for the eurozone to come up with the same sort of joined-up approach.

Earlier in the year, there was criticism of the government when it decided to opt out of the EU vaccine procurement programme, but this now looks to have been a smart move. Brussels has been slow to place orders for drugs that are effective, in part because it has bowed to internal political pressure to spread the budget around member states – and its regulator has been slower to give approval for treatments. Big does not always mean better.

Leaving the EU means UK governments no longer have anywhere to hide. They have economic levers they can pull – procurement, tax, ownership, regulation, investment in infrastructure, subsidies for new industries, trade policy – and they will come under pressure to use them.

Many on the remainer left accept the EU has its faults, but they fear that Brexit will be the start of something worse: slash and burn deregulation that will make Britain a nastier place to live.

This, though, assumes that Britain will have rightwing governments in perpetuity. It used to be the left who welcomed change and the right that wanted things to remain the same. The inability to envisage what a progressive government could do with Brexit represents a political role reversal and a colossal loss of nerve.

Thursday 6 June 2019

‘Socialism for the rich’: the evils of bad economics

The economic arguments adopted by Britain and the US in the 1980s led to vastly increased inequality – and gave the false impression that this outcome was not only inevitable, but good, writes Jonathan Aldred in The Guardian


In most rich countries, inequality is rising, and has been rising for some time. Many people believe this is a problem, but, equally, many think there’s not much we can do about it. After all, the argument goes, globalisation and new technology have created an economy in which those with highly valued skills or talents can earn huge rewards. Inequality inevitably rises. Attempting to reduce inequality via redistributive taxation is likely to fail because the global elite can easily hide their money in tax havens. Insofar as increased taxation does hit the rich, it will deter wealth creation, so we all end up poorer. 

One strange thing about these arguments, whatever their merits, is how they stand in stark contrast to the economic orthodoxy that existed from roughly 1945 until 1980, which held that rising inequality was not inevitable, and that various government policies could reduce it. What’s more, these policies appear to have been successful. Inequality fell in most countries from the 1940s to the 1970s. The inequality we see today is largely due to changes since 1980.

In both the US and the UK, from 1980 to 2016, the share of total income going to the top 1% has more than doubled. After allowing for inflation, the earnings of the bottom 90% in the US and UK have barely risen at all over the past 25 years. More generally, 50 years ago, a US CEO earned on average about 20 times as much as the typical worker. Today, the CEO earns 354 times as much.

Any argument that rising inequality is largely inevitable in our globalised economy faces a crucial objection. Since 1980 some countries have experienced a big increase in inequality (the US and the UK); some have seen a much smaller increase (Canada, Japan, Italy), while inequality has been stable or falling in others (France, Belgium and Hungary). So rising inequality cannot be inevitable. And the extent of inequality within a country cannot be solely determined by long-run global economic forces, because, although most richer countries have been subject to broadly similar forces, the experiences of inequality have differed.

The familiar political explanation for this rising inequality is the huge shift in mainstream economic and political thinking, in favour of free markets, triggered by the elections of Ronald Reagan and Margaret Thatcher. Its fit with the facts is undeniable. Across developed economies, the biggest rise in inequality since 1945 occurred in the US and UK from 1980 onwards.

The power of a grand political transformation seems persuasive. But it cannot be the whole explanation. It is too top-down: it is all about what politicians and other elites do to us. The idea that rising inequality is inevitable begins to look like a convenient myth, one that allows us to avoid thinking about another possibility: that through our electoral choices and decisions in daily life we have supported rising inequality, or at least acquiesced in it. Admittedly, that assumes we know about it. Surveys in the UK and US consistently suggest that we underestimate both the level of current inequality and how much it has recently increased. But ignorance cannot be a complete excuse, because surveys also reveal a change in attitudes: rising inequality has become more acceptable – or at least, less unacceptable – especially if you are not on the wrong end of it.

Inequality is unlikely to fall much in the future unless our attitudes turn unequivocally against it. Among other things, we will need to accept that how much people earn in the market is often not what they deserve, and that the tax they pay is not taking from what is rightfully theirs.

One crucial reason why we have done so little to reduce inequality in recent years is that we downplay the role of luck in achieving success. Parents teach their children that almost all goals are attainable if you try hard enough. This is a lie, but there is a good excuse for it: unless you try your best, many goals will definitely remain unreachable.

Ignoring the good luck behind my success helps me feel good about myself, and makes it much easier to feel I deserve the rewards associated with success. High earners may truly believe that they deserve their income because they are vividly aware of how hard they have worked and the obstacles they have had to overcome to be successful.

But this is not true everywhere. Support for the idea that you deserve what you get varies from country to country. And in fact, support for such beliefs is stronger in countries where there seems to be stronger evidence that contradicts them. What explains this?

Attitude surveys have consistently shown that, compared to US residents, Europeans are roughly twice as likely to believe that luck is the main determinant of income and that the poor are trapped in poverty. Similarly, people in the US are about twice as likely as Europeans to believe that the poor are lazy and that hard work leads to higher quality of life in the long run.

 
Ronald Reagan and Margaret Thatcher in 1988. Photograph: Reuters

Yet in fact, the poor (the bottom 20%) work roughly the same total annual hours in the US and Europe. And economic opportunity and intergenerational mobility is more limited in the US than in Europe. The US intergenerational mobility statistics bear a striking resemblance to those for height: US children born to poor parents are as likely to be poor as those born to tall parents are likely to be tall. And research has repeatedly shown that many people in the US don’t know this: perceptions of social mobility are consistently over-optimistic.

European countries have, on average, more redistributive tax systems and more welfare benefits for the poor than the US, and therefore less inequality, after taxes and benefits. Many people see this outcome as a reflection of the different values that shape US and European societies. But cause-and-effect may run the other way: you-deserve-what-you-get beliefs are strengthened by inequality.

Psychologists have shown that people have motivated beliefs: beliefs that they have chosen to hold because those beliefs meet a psychological need. Now, being poor in the US is extremely tough, given the meagre welfare benefits and high levels of post-tax inequality. So Americans have a greater need than Europeans to believe that you deserve what you get and you get what you deserve. These beliefs play a powerful role in motivating yourself and your children to work as hard as possible to avoid poverty. And these beliefs can help alleviate the guilt involved in ignoring a homeless person begging on your street.

This is not just a US issue. Britain is an outlier within Europe, with relatively high inequality and low economic and social mobility. Its recent history fits the cause-and-effect relationship here. Following the election of Margaret Thatcher in 1979, inequality rose significantly. After inequality rose, British attitudes changed. More people became convinced that generous welfare benefits make poor people lazy and that high salaries are essential to motivate talented people. However, intergenerational mobility fell: your income in Britain today is closely correlated with your parents’ income.

If the American Dream and other narratives about everyone having a chance to be rich were true, we would expect the opposite relationship: high inequality would be accepted as fair because intergenerational mobility is also high. Instead, we see a very different narrative: people cope with high inequality by convincing themselves it is fair after all. We adopt narratives to justify inequality because society is highly unequal, not the other way round. So inequality may be self-perpetuating in a surprising way. Rather than resist and revolt, we just cope with it. Less Communist Manifesto, more self-help manual.

Inequality begets further inequality. As the top 1% grow richer, they have more incentive and more ability to enrich themselves further. They exert more and more influence on politics, from election-campaign funding to lobbying over particular rules and regulations. The result is a stream of policies that help them but are inefficient and wasteful. Leftwing critics have called it “socialism for the rich”. Even the billionaire investor Warren Buffett seems to agree: “There’s been class warfare going on for the last 20 years and my class has won,” he once said.

This process has been most devastating when it comes to tax. High earners have most to gain from income tax cuts, and more spare cash to lobby politicians for these cuts. Once tax cuts are secured, high earners have an even stronger incentive to seek pay rises, because they keep a greater proportion of after-tax pay. And so on.

Although there have been cuts in the top rate of income tax across almost all developed economies since 1979, it was the UK and the US that were first, and that went furthest. In 1979, Thatcher cut the UK’s top rate from 83% to 60%, with a further reduction to 40% in 1988. Reagan cut the top US rate from 70% in 1981 to 28% in 1986. Although top rates today are slightly higher – 37% in the US and 45% in the UK – the numbers are worth mentioning because they are strikingly lower than in the post-second-world-war period, when top tax rates averaged 75% in the US and were even higher in the UK.

Some elements of the Reagan-Thatcher revolution in economic policy, such as Milton Friedman’s monetarist macroeconomics, have subsequently been abandoned. But the key policy idea to come out of microeconomics has become so widely accepted today that it has acquired the status of common sense: that tax discourages economic activity and, in particular, income tax discourages work.

This doctrine seemingly transformed public debate about taxation from an endless argument over who gets what, to the promise of a bright and prosperous future for all. The “for all” bit was crucial: no more winners and losers. Just winners. And the basic ideas were simple enough to fit on the back of a napkin.

One evening in December 1974, a group of ambitious young conservatives met for dinner at the Two Continents restaurant in Washington DC. The group included the University of Chicago economist Arthur Laffer, Donald Rumsfeld (then chief of staff to President Gerald Ford), and Dick Cheney (then Rumsfeld’s deputy, and a former Yale classmate of Laffer’s).

While discussing Ford’s recent tax increases, Laffer pointed out that, like a 0% income tax rate, a 100% rate would raise no revenue because no one would bother working. Logically, there must be some tax rate between these two extremes that would maximise tax revenue. Although Laffer does not remember doing so, he apparently grabbed a napkin and drew a curve on it, representing the relationship between tax rates and revenues. The Laffer curve was born and, with it, the idea of trickle-down economics.
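To see the logic in miniature, here is a small numerical sketch (my own illustration, not Laffer’s: it simply assumes taxable income shrinks with a constant, hypothetical elasticity as the tax rate rises):

    # A minimal, illustrative Laffer-curve sketch, not a claim about any real
    # economy. It assumes (purely for illustration) that taxable income shrinks
    # as the tax rate t rises, with a constant elasticity e with respect to the
    # net-of-tax share (1 - t). All numbers are hypothetical.

    def revenue(t, base=100.0, e=0.25):
        """Tax revenue at rate t: the rate times the behaviourally shrunken base."""
        return t * base * (1 - t) ** e

    rates = [i / 100 for i in range(101)]           # 0%, 1%, ..., 100%
    t_star = max(rates, key=revenue)                # grid search for the peak

    print(f"revenue at 0%:   {revenue(0.0):.1f}")   # 0.0 -- no tax collected
    print(f"revenue at 100%: {revenue(1.0):.1f}")   # 0.0 -- no one bothers working
    print(f"revenue-maximising rate: {t_star:.0%}") # 80% here, i.e. 1/(1+e)

Note that the sketch only locates the peak for the elasticity it assumes; a different assumption moves the peak, which is where the real argument lies.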

The key implication that impressed Rumsfeld and Cheney was that, just as a tax rate somewhere below 100% must raise more revenue than a 100% rate, cuts in income tax rates more generally could raise revenue. In other words, there could be winners, and no losers, from tax cuts. But could does not mean will. No empirical evidence was produced in support of the mere logical possibility that tax cuts could raise revenue, and even the economists employed by the incoming Reagan administration six years later struggled to find any evidence in support of the idea.

George Osborne, who lowered the UK’s top rate of tax from 50% to 45% in 2013. Photograph: Matt Cardy/PA

Yet it proved irresistible to Reagan, the perennial optimist, who essentially overruled his expert advisers, convinced that the “entrepreneurial spirit unleashed by the new tax cuts would surely bring in more revenue than his experts imagined”, as the historian Daniel T Rodgers put it. (If this potent brew of populist optimism and impatience with economic experts seems familiar today, that might be explained in part by the fact that Laffer was also a campaign adviser to Donald Trump.)

For income tax cuts to raise tax revenue, the prospect of higher after-tax pay must motivate people to work more. The resulting increase in GDP and income may be enough to generate higher tax revenues, even though the tax rate itself has fallen. Although the effects of the big Reagan tax cuts are still disputed (mainly because of disagreement over how the US economy would have performed without the cuts), even those sympathetic to trickle-down economics conceded that the cuts had negligible impact on GDP – and certainly not enough to outweigh the negative effect of the cuts on tax revenues.
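A back-of-the-envelope calculation (hypothetical numbers, mine rather than anyone’s estimate) shows how demanding this condition is. Suppose the top rate is cut from 60% to 40% on a tax base of £100bn. Revenue before the cut is 0.6 × £100bn = £60bn. For revenue merely to stand still after the cut, the base must grow to £60bn ÷ 0.4 = £150bn, a 50% rise in taxable incomes. Anything less and the cut loses money, whatever it does for GDP.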

But the Laffer curve did remind economists that a revenue-maximising top tax rate somewhere between 0% and 100% must exist. Finding the magic number is another matter: the search continues today. It is worth a brief dig into this research, not least because it is regularly used to veto attempts to reduce inequality by raising tax on the rich. In 2013, for example, the UK chancellor of the exchequer George Osborne reduced the top rate of income tax from 50% to 45%, arguing Laffer-style that the tax cut would lead to little, if any, loss of revenue. Osborne’s argument relied on economic analysis suggesting that the revenue-maximising top tax rate for the UK is about 40%.

Yet the assumptions behind this number are shaky, as most economists involved in producing such figures acknowledge. Let’s begin with the underlying idea: if lower tax rates raise your after-tax pay, you are motivated to work more. It seems plausible enough but, in practice, the effects are likely to be minimal. If income tax falls, many of us cannot work more, even if we wanted to. There is little opportunity to get paid overtime, or otherwise increase our paid working hours, and working harder during current working hours does not lead to higher pay. Even for those who have these opportunities, it is far from clear that they will work more or harder. They may even decide to work less: since after-tax pay has risen, they can choose to work fewer hours and still maintain their previous income level. So the popular presumption that income tax cuts must lead to more work and productive economic activity turns out to have little basis in either common sense or economic theory.

There are deeper difficulties with Osborne’s argument, difficulties not widely known even among economists. It is often assumed that if the top 1% is incentivised by income tax cuts to earn more, those higher earnings reflect an increase in productive economic activity. In other words, the pie gets bigger. But some economists, including the influential Thomas Piketty, have shown this was not true for CEOs and other top corporate managers following the tax cuts in the 1980s. Instead, they essentially funded their own pay rises by paying shareholders less, which led in turn to lower dividend tax revenue for the government. In fact, Piketty and colleagues have argued that the revenue-maximising top income tax rate may be as high as 83%.

The income tax cuts for the rich of the past 40 years were originally justified by economic arguments: Laffer’s rhetoric was seized upon by politicians. But to economists, his ideas were both familiar and trivial. Modern economics provides neither theory nor evidence proving the merit of these tax cuts. Both are ambiguous. Although politicians can ignore this truth for a while, it suggests that widespread opposition to higher taxes on the rich is ultimately based on reasons beyond economics.

When the top UK income tax rate was raised to 50% in 2009 (until Osborne cut it to 45% four years later) the composer Andrew Lloyd Webber, one of Britain’s wealthiest people, responded bluntly: “The last thing we need is a Somali pirate-style raid on the few wealth creators who still dare to navigate Britain’s gale-force waters.” In the US, Stephen Schwarzman, CEO of private equity firm Blackstone, likened proposals to remove a specialised tax exemption to the German invasion of Poland.

While we may scoff at these moans from the super-rich, most people unthinkingly accept the fundamental idea behind them: that income tax is a kind of theft, taking income which is rightfully owned by the person who earned it. It follows that tax is, at best, a necessary evil, and so should be minimised as far as possible. On these grounds, the 83% top tax rate discussed by Piketty is seen as unacceptable.

There is an entire cultural ecosystem that has evolved around the idea of tax-as-theft, recognisable today in politicians’ talk about “spending taxpayers’ money”, or campaigners celebrating “tax freedom day”. This language exists outside the world of politics, too. Tax economists, accountants and lawyers refer to the so-called “tax burden”.

But the idea that you somehow own your pre-tax income, while seemingly obvious, is false. To begin with, you could never have ownership rights prior to, or independent from, taxation. Ownership is a legal right. Laws require various institutions, including police and a legal system, to function. These institutions are financed through taxation. The tax and the ownership rights are effectively created simultaneously. We cannot have one without the other.


‘There’s been class warfare going on for the last 20 years, and my class has won’ … US billionaire Warren Buffett. Photograph: Kevin Lamarque/Reuters

However, if the only function of the state is to support private ownership rights (maintaining a legal system, police, and so on), it seems that taxation could be very low – and any further taxation on top could still be seen as a form of theft. Implicit in this view is the idea of incomes earned, and so ownership rights created, in an entirely private market economy, with the state entering only later, to ensure these rights are maintained. Many economics textbooks picture the state in this way, as an add-on to the market. Yet this, too, is a fantasy.

In the modern world, all economic activity reflects the influence of government. Markets are inevitably defined and shaped by government. There is no such thing as income earned before government comes along. My earnings partly reflect my education. Earlier still, the circumstances of my birth and my subsequent health reflect the healthcare available. Even if that healthcare is entirely “private”, it depends on the education of doctors and nurses, and the drugs and other technologies available. Like all other goods and services, these in turn depend on the economic and social infrastructure, including transport networks, communications systems, energy supplies and extensive legal arrangements covering complex matters such as intellectual property, formal markets such as stock exchanges, and jurisdiction across national borders. Lord Lloyd-Webber’s wealth depends on government decisions about the length of copyright on the music he wrote. In sum, it is impossible to isolate what is “yours” from what is made possible, or influenced, by the role of government.

Talk of taxation as theft turns out to be a variation on the egotistical tendency to see one’s success in splendid isolation, ignoring the contribution of past generations, current colleagues and government. Undervaluing the role of government leads to the belief that if you are smart and hard-working, the high taxes you endure, paying for often wasteful government, are not a good deal. You would be better off in a minimal-state, low-tax society.

One reply to this challenge points to the evidence on the rich leaving their home country to move to a lower tax jurisdiction: in fact, very few of them do. But here is a more ambitious reply from Warren Buffett: “Imagine there are two identical twins in the womb … And the genie says to them: ‘One of you is going to be born in the United States, and one of you is going to be born in Bangladesh. And if you wind up in Bangladesh, you will pay no taxes. What percentage of your income would you bid to be born in the United States?’ … The people who say: ‘I did it all myself’ … believe me, they’d bid more to be in the United States than in Bangladesh.” 

Much of the inequality we see today in richer countries is more down to decisions made by governments than to irreversible market forces. These decisions can be changed. However, we have to want to control inequality: we must make inequality reduction a central aim of government policy and wider society. The most entrenched, self-deluding and self-perpetuating justifications for inequality are about morality, not economy. The great economist John Kenneth Galbraith nicely summarised the problem: “One of man’s oldest exercises in moral philosophy … is the search for a superior moral justification for selfishness. It is an exercise which always involves a certain number of internal contradictions and even a few absurdities. The conspicuously wealthy turn up urging the character-building value of privation for the poor.”