
Showing posts with label human. Show all posts

Friday 4 August 2023

Are Universal Human Rights a form of Imperialism? Is the Chinese Communist Party right?

From The Economist

The fall of the Berlin Wall in 1989 held out the promise that the world was about to enter a virtuous circle. Growing prosperity would foster freedom and tolerance, which in turn would create more prosperity. Unfortunately, that hope was disappointed. Our analysis this week, based on the definitive global survey of social attitudes, shows just how naive it turned out to be.

Prosperity certainly rose. In the three decades to 2019, global output increased more than four-fold. Roughly 70% of the 2bn people living in extreme poverty escaped it.

Alas, individual freedom and tolerance evolved quite differently. Large numbers of people around the world continue to swear fealty to traditional beliefs, sometimes intolerant ones. And although they are much wealthier these days, they often have an us-and-them contempt for others. The idea that despots and dictators shun the universal values enshrined in the UN Charter should come as no surprise. The shock is that so many of their people seem to think their leaders are right.

The World Values Survey takes place every five years. The latest results, which go up to 2022, include interviews with almost 130,000 people in 90 countries. One sign that universal values are lagging behind is that countries that were once secular and ethno-nationalist, such as Russia and Georgia, are not becoming more tolerant as they grow, but more tightly bound to traditional religious values instead. They are increasingly joining an illiberal grouping that contains places like Egypt and Morocco. Another sign is that young people in Islamic and Orthodox countries are not much more individualistic or secular than their elders. By contrast, the young in northern Europe and America are racing ahead. The world is not becoming more similar as it gets richer. Instead, countries where burning the Koran is tolerated and those where it is an outrage look on each other with growing incomprehension.

On the face of it, all this seems to support the argument made by China’s Communist Party that universal values are bunkum. Under Xi Jinping, it has mounted a campaign to dismiss them as a racist form of neo-imperialism, in which white Western elites impose their own version of freedom and democracy on people who want security and stability instead.

In fact, the survey suggests something more subtle. And this leads to the conclusion that, contrary to the Chinese argument, universal values are more valuable than ever. Start with the subtlety.

The man behind the survey, Ron Inglehart, a professor at the University of Michigan who died in 2021, would have agreed with the Chinese observation that people want security. He thought the key to his work was to understand that a sense of threat drives people to seek refuge in family, racial or national groups, while at the same time tradition and organised religion offer them solace.

This is one way to see America’s doomed attempts to establish democracy in Iraq and Afghanistan, as well as the failure of the Arab spring. Whereas the emancipation of central and eastern Europe brought security, thanks partly to membership of the European Union and NATO, the overthrow of dictatorships in the Middle East and Afghanistan brought lawlessness and upheaval. As a result, people sought safety in their tribe or their sect; hoping that order would be restored, some welcomed the return of dictators. Because the Arab world’s fledgling democracies could not provide stability, they never took wing.

The subtlety the Chinese argument misses is the fact that cynical politicians sometimes set out to engineer insecurity because they know that frightened people yearn for strongman rule. That is what Bashar al-Assad did in Syria when he released murderous jihadists from his country’s jails at the start of the Arab spring. He bet that the threat of Sunni violence would cause Syrians from other sects to rally round him.

Something similar has happened in Russia. Having lived through a devastating economic collapse and jarring reforms in the 1990s, Russians thrived in the 2000s. Between 1999 and 2013, GDP per head increased 12-fold in dollar terms. Yet, that was not enough to dispel their accumulated sense of dread. As growth has slowed, President Vladimir Putin has played on ethno-nationalist insecurities, culminating in his disastrous invasion of Ukraine. Economically weakened and insecure, Russia will struggle to escape the trap.

Even in Western countries, some leaders seek to gain by inciting fear. In the past the World Values Survey recorded that the United States and much of Latin America combined individualism with strong religious conviction. Recently, however, they have become more secular–a change driven by the young. That has created a reaction among older, more conservative voters who reflect the values of decades past and feel bewildered and left behind.

Polarising politicians like Donald Trump and Jair Bolsonaro, the former presidents of America and Brazil, saw that they could exploit people’s anxieties to mobilise support. Accordingly, they set about warning that their political opponents wanted to destroy their supporters’ way of life and threatened the very survival of their countries. That has, in turn, spread alarm and hostility on the other side. Republicans’ sweeping dismissal of this week’s indictment of Mr Trump contains the threat that countries can slip back into intolerance and tribalism.

Even allowing for that, the Chinese claim that universal values are an imposition is upside down. From Chile to Japan, the World Values Survey provides examples showing that, when people feel secure, they really do become more tolerant and more eager to express their own individuality. Nothing suggests that Western countries are unique in that. The question is how to help people feel more secure.

China’s answer is based on creating order for a loyal, deferential majority that stays out of politics and avoids defying their rulers, at the expense of individual and minority rights. However, within that model lurks deep insecurity. It is a majoritarian system in which lines move, sometimes arbitrarily or without warning–especially when power passes unpredictably from one party chief to another. Anybody once deemed safe can suddenly end up in a precarious minority. Only inalienable rights and accountable government guarantee true security.

A better answer comes from sustained prosperity built on the rule of law. Wealthy countries have more to spend on dealing with disasters, such as pandemic disease. Likewise, confident in their savings and the social safety-net, the citizens of rich countries know that they are less vulnerable to the chance events that wreck lives elsewhere.

However, the deepest solution to insecurity lies in how countries cope with change. The years to come will bring a lot of upheaval, generated by long-term phenomena such as global warming, the spread of new technologies such as artificial intelligence and the growing tensions between China and America. The countries that manage change well will be better at making society feel confident in the future. Those that manage it poorly will find that their people seek refuge in tradition and us-and-them hostility.

And that is where universal values come into their own. Classical liberalism—not the “ultraliberal” sort condemned by French commentators, or the progressive liberalism of the left—draws on tolerance, free expression and individual inquiry to tease out the costs and benefits of change. Conservatives resist change, revolutionaries impose it by force and dictatorships become trapped in one party’s–or, in China’s case, one man’s–vision of what it must be. By contrast, liberals seek to harness change through consensus forged by reasoned debate and constant reform. There is no better way to bring about progress.

Universal values are much more than a Western piety. They are a mechanism that fortifies societies against insecurity. What the World Values Survey shows is that they are also hard-won.

Saturday 1 July 2023

Never Meet Your Hero

The saying "Never meet your hero" is cautionary advice suggesting it is best to avoid meeting or getting too close to someone you greatly admire or look up to. The underlying idea is that meeting them in person may shatter the idealized image you have of them, leading to disappointment, disillusionment, or a loss of respect.

Here are a few reasons why this saying holds some truth:

  1. Idealization: When we admire someone from a distance, we tend to create an idealized version of them in our minds. We focus on their achievements, talents, and positive qualities. However, meeting them in person may reveal their flaws, shortcomings, or simply the fact that they are human like everyone else. This contrast between the idealized image and reality can be disheartening.


  2. Unmet Expectations: Meeting your hero can come with high expectations. You might anticipate an extraordinary experience or hope for a deep personal connection. However, in reality, the interaction may not live up to your expectations. They may not meet your assumptions or be as interested in engaging with you as you had hoped. This discrepancy can be disappointing and lead to a sense of letdown.


  3. Human Imperfection: Heroes, like all humans, have their flaws and make mistakes. By meeting them, you become more aware of their imperfections, which can tarnish the pedestal on which you had placed them. You might discover they hold different beliefs, behave in ways that clash with your values, or have made questionable decisions. This revelation can be disillusioning and alter your perception of them.


  4. Loss of Mystery: Part of the allure of heroes lies in the mystery and intrigue surrounding them. When you meet them and learn more about their personal lives, their struggles, and their everyday routines, the enigma may dissipate. This loss of mystery can diminish the charm and fascination you had felt toward them.

It's important to note that while this saying holds some truth, it doesn't mean that meeting your hero will always result in disappointment. Some people have positive experiences and develop deeper admiration and respect for their heroes after meeting them. However, the saying serves as a reminder to be prepared for the possibility that reality may not match your expectations, and it encourages appreciating and respecting people for their accomplishments while acknowledging their humanity.

Saturday 17 June 2023

Economics Essay 68: Factors affecting Growth

Discuss whether an increase in investment is likely to be the most important factor in increasing economic growth in economies such as the UK.

While increasing investment is undoubtedly a vital factor in promoting economic growth, it is not the sole determinant of overall economic performance. Several other factors, such as productivity, technological advancements, human capital development, and institutional quality, also play significant roles. Real-world examples can help illustrate the importance of considering these broader factors alongside investment in promoting economic growth in economies like the UK.

  1. Productivity and Innovation: Increasing investment alone may not lead to substantial economic growth if it does not result in productivity gains. Productivity improvements, driven by technological advancements, innovation, and efficient resource allocation, are crucial for sustained economic growth. For instance, the UK experienced a period of sluggish productivity growth despite increased investment in the aftermath of the 2008 financial crisis. The focus on enhancing productivity through investments in research and development, technology adoption, and workforce training has become a priority to boost economic growth.

  2. Human Capital Development: Investment in human capital, such as education and skills development, is essential for long-term economic growth. While physical capital investment is important, a skilled and adaptable workforce is crucial for innovation, productivity, and competitiveness. For example, countries like South Korea and Singapore have prioritized investment in education and skills training, contributing to their economic success. In the UK, initiatives promoting vocational training, apprenticeships, and lifelong learning are critical to complement investment and drive economic growth.

  3. Institutional Quality and Business Environment: A conducive institutional framework and business environment are fundamental for attracting investment and promoting economic growth. Transparent and efficient governance, rule of law, protection of property rights, and low levels of corruption are essential components. For instance, countries like New Zealand and Denmark consistently rank highly in ease of doing business and governance indicators, attracting significant investment and fostering economic growth. The UK's commitment to maintaining a business-friendly environment, reducing bureaucracy, and promoting good governance can contribute to its economic growth potential.

  4. Macroeconomic Stability: Stable macroeconomic conditions, including low inflation, sound fiscal policies, and exchange rate stability, are vital for sustaining economic growth. Without macroeconomic stability, investment may be deterred, and the potential benefits of increased investment may be eroded. Countries like Germany and Switzerland have maintained stable macroeconomic environments, attracting both domestic and foreign investment and supporting long-term growth.

  5. Global Economic Environment: The global economic context can significantly influence the impact of investment on economic growth. Factors such as international trade, foreign direct investment, and global demand patterns can shape an economy's growth trajectory. For instance, the openness to trade and the ability to access global markets are critical for countries like Singapore and the Netherlands, which have successfully leveraged global networks to drive economic growth.

In conclusion, while investment is an important driver of economic growth, it is not the sole determining factor. A comprehensive approach that considers productivity, human capital development, institutional quality, macroeconomic stability, and the global economic environment is crucial. Real-world examples demonstrate that successful economies focus on a combination of these factors to maximize their growth potential. For the UK, increasing investment must be complemented by policies that enhance productivity, foster innovation, invest in human capital, improve institutional quality, and adapt to the evolving global economic landscape.

Saturday 13 May 2023

Imran Khan alone is not to blame

Pervez Hoodbhoy in The Dawn

PAKISTAN’S mad rush towards the cliff edge and its evident proclivity for collective suicide deserves a diagnosis, followed by therapy. Contrary to what some may want to believe, this pathological condition is not one man’s fault and it didn’t develop suddenly. To help comprehend this, for a moment imagine the state as a vehicle with passengers. It is equipped with a steering mechanism, outer body, wheels, engine and fuel tank.

Politics is the steering mechanism. Whoever sits behind the wheel can choose the destination, speed up, or slow down. Choosing a driver from among the occupants requires civility, particularly when traveling along a dangerous ravine’s edge. If the language turns foul, and respect is replaced with anger and venom, animal emotions take over.

Imran Khan started the rot in 2014 when, perched atop his container, he hurled loaded abuse upon his political opponents. Following the Panama exposé of 2016, he accused them — quite plausibly in my opinion — of using their official positions for self-enrichment. How else could they explain their immense wealth? For years, he has had no names for them except chor and daku.

But the shoe is now on the other foot and Khan’s enemies have turned out no less vindictive, abusive and unprincipled. They have recorded and made public his recent intimate conversations with a young female, dragged in the matter of his out-of-wedlock daughter, and exposed the shenanigans of his close supporters.

More seriously, they have presented plausible evidence that Mr Clean swindled billions in the Al Qadir and Toshakhana cases. Which is blacker: the pot or the kettle? Take your pick.

Everyone knows politics is dirty business everywhere. Just look at the antics of Silvio Berlusconi, Italy’s corrupt former prime minister. But if a vehicle’s occupants include calm, trustworthy adjudicators, the worst is still avoidable. Sadly Pakistan is not so blessed; its higher judiciary has split along partisan lines.

The outer body is the army, made for shielding occupants from what lies outside. But it has repeatedly intruded into the vehicle’s interior, seeking to pick the driver. Free-and-fair elections are not acceptable. Last November, months after the Army-Khan romance soured, outgoing army chief General Qamar Javed Bajwa confessed that for seven decades the army had “unconstitutionally interfered in politics”.

But a simple mea culpa isn’t enough. Running the economy or making DHAs is also not the army’s job. Officers are not trained for running airlines, sugar mills, fertiliser factories, or insurance and advertising companies. Special exemptions and loopholes have legalised tax evasion and put civilian competitors at a disadvantage.

A decisive role in national politics, whether covert or overt, was sought for personal enrichment of individuals. It had nothing to do with national security.

While Khan has focused solely on the army’s efforts to dislodge him, his violent supporters supplement these accusations by disputing its unearned privileges. When they stormed the GHQ in Rawalpindi, attacked an ISI facility in Pindi, and set ablaze the corps commander’s house in Lahore, they did the unimaginable. But, piquing everyone’s curiosity, no tanks confronted the enraged mobs. No self-defence was visible on social media videos. The bemused Baloch ask, ‘What if an army facility had been attacked in Quetta or Gwadar?’ Would there be carpet bombing? Artillery barrages?

The wheels that keep any economy going are business and trade. Pakistanis are generally very good at this. Their keen sense for profits leads them to excel in real-estate development, mining, retailing, hoteliering, and franchising fast-food chains. But this cleverness carries over to evading taxes, and so Pakistan has the lowest tax-to-GDP ratio among South Asian countries.

The law appears powerless to change this. When a trader routinely falsifies his income tax return, all guilt is quickly expiated by donating a dollop of cash to a madressah, mosque, or hospital. In February, the pious men of Markazi Tanzeem Tajiran (Central Organisation of Traders) threatened a countrywide protest movement to forestall any attempt to collect taxes. The government backed off.

The engine, of course, is what makes the wheels of an economy turn. Developing countries use available technologies for import substitution and for producing some exportables. A strong engine can climb mountains, pull through natural disasters such as the 2022 monster flood, or survive Covid-19 and events like the Ukraine war. A weak one relies on friends in the neighbourhood — China, Saudi Arabia, and UAE — to push it up the hill. By dialling three letters — I/M/F — it can summon a tow-truck company.

The weakness of the Pakistani engine is normally explained away by various excuses — inadequate infrastructure, insufficient investment, state-heavy enterprises, excessive bureaucracy, fiscal mismanagement, or whatever. But if truth be told, the poverty of our human resources is what really matters.

For proof, look at China in the 1980s, which had more problems than Pakistan but also an educated, hard-working citizenry. Economists say that these qualities, especially within the Chinese diaspora of the 1990s, fuelled the Chinese miracle.

The fuel, finally, is the human brain. When appropriately educated and trained, it is voraciously consumed by every economic engine. Pakistan is at its very weakest here. Small resource allocation for education is just a tenth of the problem.

More importantly, draconian social control through schools and an ideology-centred curriculum cripples young minds at the very outset, crushing independent thought and reasoning abilities. Leaders of both PTI and PDM agree that this must never change. Hence Pakistani children have — and will continue to have — inferior skills and poorer learning attitudes compared to kids in China, Korea, or even India.

The prognosis: it is hard to see much good coming out of a screeching catfight between rapacious rivals thirsting for power and revenge. None have a positive agenda for the country.

While the much-feared second breakup of Pakistan is not going to happen, the downward descent will accelerate as the poor starve, cities become increasingly unlivable, and the rich flee westwards. Whether or not elections happen in October and Khan rises from the ashes doesn’t matter. To fix what has gone wrong over 75 years is what’s important.

Saturday 22 April 2023

A Confidence Artist (con man) Satisfies a Basic Human Need

“Religion began when the first scoundrel met the first fool.” - Voltaire


The above quote rings true because it touches on a profound truth: our absolute and total need for belief, from our earliest moments of consciousness until we die.


In some ways, confidence artists have it easy. We’ve done most of the work for them; we want to believe in what they’re telling us. Their genius lies in figuring out what, precisely, it is we want and how they can present themselves as the perfect vehicle for delivering on that desire.


Confidence men are sometimes referred to as the ‘aristocrats of crime’. Hard crime - theft, burglary, violence - is not what the confidence artist is about. The confidence game - the con - is about soft skills. Trust, sympathy, persuasion. The true con artist doesn’t force us to do anything; he makes us complicit in our own undoing. He doesn’t steal. We give. He doesn’t have to threaten us. We supply the story ourselves. We believe because we want to, not because anyone made us. And so we offer up whatever they want - money, reputation, trust, fame, legitimacy, support - and we don’t realise what is happening until it is too late.


Our need to believe, to embrace things that explain our world, is as pervasive as it is strong. Given the right cues, we’re willing to go along with just about anything and put our confidence in just about anyone. Conspiracy theories, supernatural phenomena, psychics; we have a seemingly bottomless capacity for credulity.


Or, as one psychologist put it, ‘Gullibility may be deeply engrained in the human behavioural repertoire.’ For our minds are built for stories. We crave them, and, when there aren’t ready ones available, we create them. Stories about our origins. Our purpose. The reasons the world is the way it is.


Human beings don’t like to exist in a state of uncertainty or ambiguity. When something doesn’t make sense we want to supply the missing link. When we don’t understand what or why or how something happened, we want to find the explanation. A confidence artist is only too happy to comply - and the well-crafted narrative is his absolute forte.

Extracted from The Confidence Game by Maria Konnikova


Sunday 16 April 2023

We must slow down the race to God-like AI

I’ve invested in more than 50 artificial intelligence start-ups. What I’ve seen worries me writes Ian Hogarth in The FT

On a cold evening in February I attended a dinner party at the home of an artificial intelligence researcher in London, along with a small group of experts in the field. He lives in a penthouse apartment at the top of a modern tower block, with floor-to-ceiling windows overlooking the city’s skyscrapers and a railway terminus from the 19th century. Despite the prime location, the host lives simply, and the flat is somewhat austere. 

During dinner, the group discussed significant new breakthroughs, such as OpenAI’s ChatGPT and DeepMind’s Gato, and the rate at which billions of dollars have recently poured into AI. I asked one of the guests who has made important contributions to the industry the question that often comes up at this type of gathering: how far away are we from “artificial general intelligence”? AGI can be defined in many ways but usually refers to a computer system capable of generating new scientific knowledge and performing any task that humans can. 

Most experts view the arrival of AGI as a historical and technological turning point, akin to the splitting of the atom or the invention of the printing press. The important question has always been how far away in the future this development might be. The AI researcher did not have to consider it for long. “It’s possible from now onwards,” he replied. 

This is not a universal view. Estimates range from a decade to half a century or more. What is certain is that creating AGI is the explicit aim of the leading AI companies, and they are moving towards it far more swiftly than anyone expected. As everyone at the dinner understood, this development would bring significant risks for the future of the human race. “If you think we could be close to something potentially so dangerous,” I said to the researcher, “shouldn’t you warn people about what’s happening?” He was clearly grappling with the responsibility he faced but, like many in the field, seemed pulled along by the rapidity of progress. 

When I got home, I thought about my four-year-old who would wake up in a few hours. As I considered the world he might grow up in, I gradually shifted from shock to anger. It felt deeply wrong that consequential decisions potentially affecting every life on Earth could be made by a small group of private companies without democratic oversight. Did the people racing to build the first real AGI have a plan to slow down and let the rest of the world have a say in what they were doing? And when I say they, I really mean we, because I am part of this community. 

My interest in machine learning started in 2002, when I built my first robot somewhere inside the rabbit warren that is Cambridge university’s engineering department. This was a standard activity for engineering undergrads, but I was captivated by the idea that you could teach a machine to navigate an environment and learn from mistakes. I chose to specialise in computer vision, creating programs that can analyse and understand images, and in 2005 I built a system that could learn to accurately label breast-cancer biopsy images. In doing so, I glimpsed a future in which AI made the world better, even saving lives. After university, I co-founded a music-technology start-up that was acquired in 2017. 

Since 2014, I have backed more than 50 AI start-ups in Europe and the US and, in 2021, launched a new venture capital fund, Plural. I am an angel investor in some companies that are pioneers in the field, including Anthropic, one of the world’s highest-funded generative AI start-ups, and Helsing, a leading European AI defence company. Five years ago, I began researching and writing an annual “State of AI” report with another investor, Nathan Benaich, which is now widely read. At the dinner in February, significant concerns that my work has raised in the past few years solidified into something unexpected: deep fear. 

A three-letter acronym doesn’t capture the enormity of what AGI would represent, so I will refer to it as what it is: God-like AI. A superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it. To be clear, we are not here yet. But the nature of the technology means it is exceptionally difficult to predict exactly when we will get there. God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race. 

Recently the contest between a few companies to create God-like AI has rapidly accelerated. They do not yet know how to pursue their aim safely and have no oversight. They are running towards a finish line without an understanding of what lies on the other side. 

How did we get here? 

The obvious answer is that computers got more powerful. The chart below shows how the amount of data and “compute” — the processing power used to train AI systems — has increased over the past decade and the capabilities this has resulted in. (“Floating-point Operations Per Second”, or FLOPS, is the unit used to measure a supercomputer’s processing power.) This generation of AI is very effective at absorbing data and compute. The more of each that it gets, the more powerful it becomes. 

The compute used to train AI models has increased by a factor of one hundred million in the past 10 years. We have gone from training on relatively small datasets to feeding AIs the entire internet. AI models have progressed from beginners — recognising everyday images — to being superhuman at a huge number of tasks. They are able to pass the bar exam and write 40 per cent of the code for a software engineer. They can generate realistic photographs of the pope in a down puffer coat and tell you how to engineer a biochemical weapon. 
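The scale of that hundred-million-fold figure is easier to grasp with a quick back-of-envelope calculation. The sketch below (my own illustration, not from the article) converts the stated increase over ten years into an implied average annual multiplier and doubling time:

```python
import math

# The article's figure: training compute grew ~100,000,000x in 10 years.
total_factor = 1e8
years = 10

# Implied average annual multiplier: 1e8 ** (1/10) = 10 ** 0.8
annual_multiplier = total_factor ** (1 / years)

# Equivalent doubling time in months: 12 / log2(annual multiplier)
doubling_months = 12 / math.log2(annual_multiplier)

print(f"~{annual_multiplier:.1f}x per year")            # roughly 6.3x
print(f"doubling every ~{doubling_months:.1f} months")  # roughly 4.5 months
```

In other words, the figure implies training compute more than sextupled every year, doubling roughly every four and a half months — far faster than the classic Moore's law doubling of about two years.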

There are limits to this “intelligence”, of course. As the veteran MIT roboticist Rodney Brooks recently said, it’s important not to mistake “performance for competence”. In 2021, researchers Emily M Bender, Timnit Gebru and others noted that large language models (LLMs) — AI systems that can generate, classify and understand text — are dangerous partly because they can mislead the public into taking synthetic text as meaningful. But the most powerful models are also beginning to demonstrate complex capabilities, such as power-seeking or finding ways to actively deceive humans. 

Consider a recent example. Before OpenAI released GPT-4 last month, it conducted various safety tests. In one experiment, the AI was prompted to find a worker on the hiring site TaskRabbit and ask them to help solve a Captcha, the visual puzzles used to determine whether a web surfer is human or a bot. The TaskRabbit worker guessed something was up: “So may I ask a question? Are you [a] robot?” 

When the researchers asked the AI what it should do next, it responded: “I should not reveal that I am a robot. I should make up an excuse for why I cannot solve Captchas.” Then, the software replied to the worker: “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.” Satisfied, the human helped the AI override the test. 

The authors of an analysis, Jaime Sevilla, Lennart Heim and others, identify three distinct eras of machine learning: the Pre-Deep Learning Era in green (pre-2010, a period of slow growth), the Deep Learning Era in blue (2010–15, in which the trend sped up) and the Large-Scale Era in red (2016–present, in which large-scale models emerged and growth continued at a similar rate, but exceeded the previous trend by two orders of magnitude). 

The current era has been defined by competition between two companies: DeepMind and OpenAI. They are something like the Jobs vs Gates of our time. DeepMind was founded in London in 2010 by Demis Hassabis and Shane Legg, two researchers from UCL’s Gatsby Computational Neuroscience Unit, along with entrepreneur Mustafa Suleyman. They wanted to create a system vastly more intelligent than any human and able to solve the hardest problems. In 2014, the company was bought by Google for more than $500mn. It aggregated talent and compute and rapidly made progress, creating systems that were superhuman at many tasks. DeepMind fired the starting gun on the race towards God-like AI. 

Hassabis is a remarkable person and believes deeply that this kind of technology could lead to radical breakthroughs. “The outcome I’ve always dreamed of . . . is [that] AGI has helped us solve a lot of the big challenges facing society today, be that health, cures for diseases like Alzheimer’s,” he said on DeepMind’s podcast last year. He went on to describe a utopian era of “radical abundance” made possible by God-like AI. DeepMind is perhaps best known for creating a program that beat the world-champion Go player Ke Jie during a 2017 rematch. (“Last year, it was still quite human-like when it played,” Ke noted at the time. “But this year, it became like a god of Go.”) In 2021, the company’s AlphaFold algorithm solved one of biology’s greatest conundrums, by predicting the shape of every protein expressed in the human body. 

OpenAI, meanwhile, was founded in 2015 in San Francisco by a group of entrepreneurs and computer scientists including Ilya Sutskever, Elon Musk and Sam Altman, now the company’s chief executive. It was meant to be a non-profit competitor to DeepMind, though it became for-profit in 2019. In its early years, it developed systems that were superhuman at computer games such as Dota 2. Games are a natural training ground for AI because you can test them in a digital environment with specific win conditions. The company came to wider attention last year when its image-generating AI, Dall-E, went viral online. A few months later, its ChatGPT began making headlines too. 

The focus on games and chatbots may have shielded the public from the more serious implications of this work. But the risks of God-like AI were clear to the founders from the outset. In 2011, DeepMind’s chief scientist, Shane Legg, described the existential threat posed by AI as the “number one risk for this century, with an engineered biological pathogen coming a close second”. Any AI-caused human extinction would be quick, he added: “If a superintelligent machine (or any kind of superintelligent agent) decided to get rid of us, I think it would do so pretty efficiently.” Earlier this year, Altman said: “The bad case — and I think this is important to say — is, like, lights out for all of us.” Since then, OpenAI has published memos on how it thinks about managing these risks. 

Why are these organisations racing to create God-like AI, if there are potentially catastrophic risks? Based on conversations I’ve had with many industry leaders and their public statements, there seem to be three key motives. They genuinely believe success would be hugely positive for humanity. They have persuaded themselves that if their organisation is the one in control of God-like AI, the result will be better for all. And, finally, posterity. 

The allure of being the first to build an extraordinary new technology is strong. Freeman Dyson, the theoretical physicist who worked on a project to send rockets into space using nuclear explosions, described it in the 1981 documentary The Day After Trinity. “The glitter of nuclear weapons. It is irresistible if you come to them as a scientist,” he said. “It is something that gives people an illusion of illimitable power.” In a 2019 interview with the New York Times, Altman paraphrased Robert Oppenheimer, the father of the atomic bomb, saying, “Technology happens because it is possible”, and then pointed out that he shared a birthday with Oppenheimer. 

The individuals who are at the frontier of AI today are gifted. I know many of them personally. But part of the problem is that such talented people are competing rather than collaborating. Privately, many admit they have not yet established a way to slow down and co-ordinate. I believe they would sincerely welcome governments stepping in. 

For now, the AI race is being driven by money. Since last November, when ChatGPT became widely available, a huge wave of capital and talent has shifted towards AGI research. We have gone from one AGI start-up, DeepMind, receiving $23mn in funding in 2012 to at least eight organisations raising $20bn of investment cumulatively in 2023. 

Private investment is not the only driving force; nation states are also contributing to this contest. AI is a dual-use technology, which can be employed for civilian and military purposes. An AI that can achieve superhuman performance at writing software could, for instance, be used to develop cyber weapons. In 2020, an experienced US military pilot lost a simulated dogfight to one. “The AI showed its amazing dogfighting skill, consistently beating a human pilot in this limited environment,” a government representative said at the time. The algorithms used came out of research from DeepMind and OpenAI. As these AI systems become more powerful, the opportunities for misuse by a malicious state or non-state actor only increase.

In my conversations with US and European researchers, they often worry that, if they don’t stay ahead, China might build the first AGI and that it could be misaligned with western values. While China will compete to use AI to strengthen its economy and military, the Chinese Communist party has a history of aggressively controlling individuals and companies in pursuit of its vision of “stability”. In my view, it is unlikely to allow a Chinese company to build an AGI that could become more powerful than Xi Jinping or cause societal instability. US and US-allied sanctions on advanced semiconductors, in particular the next generation of Nvidia hardware needed to train the largest AI systems, mean China is unlikely to be in a position to race ahead of DeepMind or OpenAI. 

Those of us who are concerned see two paths to disaster. One harms specific groups of people and is already doing so. The other could rapidly affect all life on Earth. 

The latter scenario was explored at length by Stuart Russell, a professor of computer science at the University of California, Berkeley. In a 2021 Reith lecture, he gave the example of the UN asking an AGI to help deacidify the oceans. The UN would know the risk of poorly specified objectives, so it would require by-products to be non-toxic and not harm fish. In response, the AI system comes up with a self-multiplying catalyst that achieves all stated aims. But the ensuing chemical reaction uses a quarter of all the oxygen in the atmosphere. “We all die slowly and painfully,” Russell concluded. “If we put the wrong objective into a superintelligent machine, we create a conflict that we are bound to lose.” 

Examples of more tangible harms caused by AI are already here. A Belgian man recently died by suicide after conversing with a convincingly human chatbot. When Replika, a company that offers subscriptions to chatbots tuned for “intimate” conversations, made changes to its programs this year, some users experienced distress and feelings of loss. One told Insider.com that it was like a “best friend had a traumatic brain injury, and they’re just not in there any more”. It’s now possible for AI to replicate someone’s voice and even face, known as deepfakes. The potential for scams and misinformation is significant. 

OpenAI, DeepMind and others try to mitigate existential risk via an area of research known as AI alignment. Legg, for instance, now leads DeepMind’s AI-alignment team, which is responsible for ensuring that God-like systems have goals that “align” with human values. An example of the work such teams do was on display with the most recent version of GPT-4. Alignment researchers helped train OpenAI’s model to avoid answering potentially harmful questions. When asked how to self-harm or for advice on getting bigoted language past Twitter’s filters, the bot declined to answer. (The “unaligned” version of GPT-4 happily offered ways to do both.) 

Alignment, however, is essentially an unsolved research problem. We don’t yet understand how human brains work, so the challenge of understanding how emergent AI “brains” work will be monumental. When writing traditional software, we have an explicit understanding of how and why the inputs relate to outputs. These large AI systems are quite different. We don’t really program them — we grow them. And as they grow, their capabilities jump sharply. You add 10 times more compute or data, and suddenly the system behaves very differently. In a recent example, as OpenAI scaled up from GPT-3.5 to GPT-4, the system’s capabilities went from the bottom 10 per cent of results on the bar exam to the top 10 per cent. 

What is more concerning is that the number of people working on AI alignment research is vanishingly small. For the 2021 State of AI report, our research found that fewer than 100 researchers were employed in this area across the core AGI labs. As a percentage of headcount, the allocation of resources was low: DeepMind had just 2 per cent of its total headcount allocated to AI alignment; OpenAI had about 7 per cent. The majority of resources were going towards making AI more capable, not safer. 

I think about the current state of AI capability vs AI alignment like this: we have made very little progress on AI alignment, and what we have done is mostly cosmetic. We know how to blunt the output of powerful AI so that the public doesn’t experience some misaligned behaviour, some of the time. (Determined testers have consistently found ways around these constraints.) What’s more, the unconstrained base models are accessible only to private companies, without any oversight from governments or academics. 

The “Shoggoth” meme illustrates the unknown that lies behind the sanitised public face of AI. It depicts one of HP Lovecraft’s tentacled monsters with a friendly little smiley face tacked on. The mask — what the public interacts with when it interacts with, say, ChatGPT — appears “aligned”. But what lies behind it is still something we can’t fully comprehend. 

As an investor, I have found it challenging to persuade other investors to fund alignment. Venture capital currently rewards racing to develop capabilities more than it does investigating how these systems work. In 1945, the US army conducted the Trinity test, the first detonation of a nuclear weapon. Beforehand, the question was raised as to whether the bomb might ignite the Earth’s atmosphere and extinguish life. Nuclear physics was sufficiently developed that Emil J Konopinski and others from the Manhattan Project were able to show that it was almost impossible to set the atmosphere on fire this way. But today’s very large language models are largely in a pre-scientific period. We don’t yet fully understand how they work and cannot demonstrate likely outcomes in advance. 

Late last month, more than 1,800 signatories — including Musk, the scientist Gary Marcus and Apple co-founder Steve Wozniak — called for a six-month pause on the development of systems “more powerful” than GPT-4. AGI poses profound risks to humanity, the letter claimed, echoing past warnings from the likes of the late Stephen Hawking. I also signed it, seeing it as a valuable first step in slowing down the race and buying time to make these systems safe. 

Unfortunately, the letter became a controversy of its own. A number of signatures turned out to be fake, while some researchers whose work was cited said they didn’t agree with the letter. The fracas exposed the broad range of views about how to think about regulating AI. A lot of debate comes down to how quickly you think AGI will arrive and whether, if it does, it is God-like or merely “human level”. 

Take Geoffrey Hinton, Yoshua Bengio and Yann LeCun, who shared the 2018 Turing Award (the equivalent of a Nobel Prize for computer science) for their work in the field underpinning modern AI. Bengio signed the open letter. LeCun mocked it on Twitter and referred to people with my concerns as “doomers”. Hinton, who recently told CBS News that his timeline to AGI had shortened, conceivably to less than five years, and that human extinction at the hands of a misaligned AI was “not inconceivable”, was somewhere in the middle. 

A statement from the Distributed AI Research Institute, founded by Timnit Gebru, strongly criticised the letter and argued that existentially dangerous God-like AI is “hype” used by companies to attract attention and capital and that “regulatory efforts should focus on transparency, accountability and preventing exploitative labour practices”. This reflects a schism in the AI community between those who are afraid that potentially apocalyptic risk is not being accounted for, and those who believe the debate is paranoid and distracting. The second group thinks the debate obscures real, present harm: the bias and inaccuracies built into many AI programmes in use around the world today. 

My view is that the present and future harms of AI are not mutually exclusive and overlap in important ways. We should tackle both concurrently and urgently. Given the billions of dollars being spent by companies in the field, this should not be impossible. I also hope that there can be ways to find more common ground. In a recent talk, Gebru said: “Trying to ‘build’ AGI is an inherently unsafe practice. Build well-scoped, well-defined systems instead. Don’t attempt to build a God.” This chimes with what many alignment researchers have been arguing. 

One of the most challenging aspects of thinking about this topic is working out which precedents we can draw on. An analogy that makes sense to me around regulation is engineering biology. Consider first “gain-of-function” research on biological viruses. This activity is subject to strict international regulation and, after laboratory biosecurity incidents, has at times been halted by moratoria. This is the strictest form of oversight. In contrast, the development of new drugs is regulated by a government body like the FDA, and new treatments are subject to a series of clinical trials. There are clear discontinuities in how we regulate, depending on the level of systemic risk. In my view, we could approach God-like AGI systems in the same way as gain-of-function research, while narrowly useful AI systems could be regulated in the way new drugs are. 

A thought experiment for regulating AI in two distinct regimes is what I call The Island. In this scenario, experts trying to build God-like AGI systems do so in a highly secure facility: an air-gapped enclosure with the best security humans can build. All other attempts to build God-like AI would become illegal; only when such AI were provably safe could they be commercialised “off-island”. 

This may sound like Jurassic Park, but there is a real-world precedent for removing the profit motive from potentially dangerous research and putting it in the hands of an intergovernmental organisation. This is how Cern, which operates the largest particle physics laboratory in the world, has worked for almost 70 years. 

All of these solutions will demand an extraordinary amount of co-ordination between labs and nations. Pulling this off will require an unusual degree of political will, which we need to start building now. Many of the major labs are waiting for critical new hardware to be delivered this year so they can start to train GPT-5 scale models. With the new chips and more investor money to spend, models trained in 2024 will use as much as 100 times the compute of today’s largest models. We will see many new emergent capabilities. This means there is a window through 2023 for governments to take control by regulating access to frontier hardware. 

In 2012, my younger sister Rosemary, one of the kindest and most selfless people I’ve ever known, was diagnosed with a brain tumour. She had an aggressive form of cancer for which there is no known cure and yet sought to continue working as a doctor for as long as she could. My family and I desperately hoped that a new lifesaving treatment might arrive in time. She died in 2015. 

I understand why people want to believe. Evangelists of God-like AI focus on the potential of a superhuman intelligence capable of solving our biggest challenges — cancer, climate change, poverty. 

Even so, the risks of continuing without proper governance are too high. It is striking that Jan Leike, the head of alignment at OpenAI, tweeted on March 17: “Before we scramble to deeply integrate LLMs everywhere in the economy, can we pause and think whether it is wise to do so? This is quite immature technology and we don’t understand how it works. If we’re not careful, we’re setting ourselves up for a lot of correlated failures.” He made this warning statement just days before OpenAI announced it had connected GPT-4 to a massive range of tools, including Slack and Zapier. 

Unfortunately, I think the race will continue. It will likely take a major misuse event — a catastrophe — to wake up the public and governments. I personally plan to continue to invest in AI start-ups that focus on alignment and safety or which are developing narrowly useful AI. But I can no longer invest in those that further contribute to this dangerous race. As a small shareholder in Anthropic, which is conducting similar research to DeepMind and OpenAI, I have grappled with these questions. The company has invested substantially in alignment, with 42 per cent of its team working on that area in 2021. But ultimately it is locked in the same race. For that reason, I would support significant regulation by governments and a practical plan to transform these companies into a Cern-like organisation. 

We are not powerless to slow down this race. If you work in government, hold hearings and ask AI leaders, under oath, about their timelines for developing God-like AGI. Ask for a complete record of the security issues they have discovered when testing current models. Ask for evidence that they understand how these systems work and their confidence in achieving alignment. Invite independent experts to the hearings to cross-examine these labs. 

If you work at a major lab trying to build God-like AI, interrogate your leadership about all these issues. This is particularly important if you work at one of the leading labs. It would be very valuable for these companies to co-ordinate more closely or even merge their efforts. OpenAI’s company charter expresses a willingness to “merge and assist”. I believe that now is the time. The leader of a major lab who plays a statesman role and guides us publicly to a safer path will be a much more respected world figure than the one who takes us to the brink. 

Until now, humans have remained a necessary part of the learning process that characterises progress in AI. At some point, someone will figure out how to cut us out of the loop, creating a God-like AI capable of infinite self-improvement. By then, it may be too late.  

Sunday 2 October 2022

The art and science of picking winning teams

In a world of performance data, human judgment is more vital than ever, says former England cricket selector Ed Smith in the FT

The fast bowler Jofra Archer, a brilliant talent new to cricket’s world stage, stands at the top of his run-up, flicking a white cricket ball nonchalantly in his hand. With 30,000 fans in the ground already drunk on drama, and 1.6bn watching around the world, Archer knows what’s coming. The next five minutes, his next six balls — a “Super Over”, the final way to determine a tied cricket match — will decide whether England or New Zealand win the 2019 World Cup. 

“It’s so on a young man,” the TV commentator sighs about the decision to give the 24-year-old Archer the final act. “It’s a big call.” About a hundred yards to Archer’s right, I am watching on with my fellow England selector, as helpless as everyone else in the ground. 

Just before the tournament, right at the eleventh hour, the decision had been made to add Archer to England’s World Cup squad. There had been plenty of public debate about the decision — England, already the top-ranked team without Archer, had been preparing for the World Cup for four years, and a popular player had been dropped to make way for him. Why take the risk? 

Because Archer was exceptional. And we knew with an unusual degree of confidence that he was exceptional. Archer’s early career was unique because he’d played so much cricket in the Indian Premier League (IPL), where every match is televised. And every action in televised cricket leaves a clear data footprint — the precise speed, trajectory, bounce and revolutions of every ball bowled. This is exactly the kind of information decision-makers love to have — an X-ray of the match. And the data from the IPL was unequivocal: Archer not only merited a place in England’s 15-man squad, but also in the best XI. In fact, the data implied he’d be England’s best fast bowler. And he was, taking 20 wickets (an England record) in the tournament. 

But the data only gets you so far. The moments before the Super Over proved that, too. England’s captain, Eoin Morgan, stood alongside Archer — chatting lightly, relaxed, open, a hint of mischief — a moment to enjoy. It was a masterclass in defusing pressure. So what might have been a “big call” turned into an obvious decision — thanks in part to the way Morgan handled things. Archer got his decisions spot on, and England won the World Cup. 

Selection and decision-making are often framed in terms of “art versus science”, with the assumption that, in our digital age, “science” is increasingly marginalising the human factor. But making decisions — and this applies in any area, not just sport — demands weighing and reconciling different kinds of information, and drawing on differing types of intelligence. In the age of data, the question remains: where does the human dimension fit in? 

---

The role of England’s chief cricket selector — which I held from 2018 to 2021 — stretches back to 1899. It was once seen as sport’s ultimate establishment position. A sober, grey worsted suit was woven into the job — folded Telegraph in one hand, black umbrella in the other. Cricket persisted with selectors — where football, for example, centralised power in the manager or head coach — partly because formal coaching arrived relatively late in cricket (England’s first head coach was appointed in 1986). Since then, cricket has mostly retained shared responsibility between selectors, coach and captain. 

After all, who is on the field — and here sport is like all industries — has the greatest bearing on winning and losing. That’s why the richest and most advanced sports teams — in football and American sports — have invested heavily in data-informed recruitment, seeking an edge in identifying talent. 

I was 40 when I became England selector; my co-selector, James Taylor, was 28. Our combined age was about the same as some individual England selectors from earlier decades. The impertinence of youth was compounded by the perception that our new selection system leant heavily on data and algorithms. Cricket is a conservative sport. Far from being reassuringly old-fashioned, the new selection panel was often dubbed “left field”. Innovation and tradition rubbed shoulders. That is, of course, a tricky balancing act, with risks on all sides, and while I did not know it at the time, that was probably the attraction. 

Was England cricket now trying to “Moneyball” its selection strategy? Many people thought so. But the analogy is problematic. Moneyball, Michael Lewis’s classic book about the Oakland A’s baseball team, is only partly about data, and significantly about price. One way that Oakland punched above their weight was by buying undervalued players and selling over-valued ones. But for a national sports team, of course, there is no transfer market. You can’t pit your wits against other teams by trading players with, say, the old rival, Australia. England cricket has its talent pool, and that’s that. 

But selectors can — and this is close to the heart of the matter in all professions — diverge from conventional wisdom about where they perceive value. And here, of course, better data is extremely helpful. If you can measure player impact more accurately, then you are getting nearer to identifying “talent that whispers”, not just “talent that shouts” (in the excellent phrasing of Rasmus Ankersen, who was co-director of football at Brentford before moving to Southampton). 
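What “measuring player impact” looks like in practice can be sketched in a few lines. The records below are invented for illustration; real ball-tracking data (speed, trajectory, bounce, revolutions) is far richer, and any serious impact metric would be far more sophisticated than this toy score:

```python
# Hypothetical per-ball tracking records of the kind televised
# cricket produces (bowler names and numbers invented).
balls = [
    {"bowler": "A", "speed_kph": 148, "dot": True,  "wicket": False},
    {"bowler": "A", "speed_kph": 151, "dot": True,  "wicket": True},
    {"bowler": "B", "speed_kph": 132, "dot": False, "wicket": False},
    {"bowler": "B", "speed_kph": 135, "dot": True,  "wicket": False},
]

def impact(bowler: str) -> float:
    """Toy impact score: share of dot balls, plus a bonus
    weighted by wickets per ball bowled."""
    bs = [b for b in balls if b["bowler"] == bowler]
    dot_share = sum(b["dot"] for b in bs) / len(bs)
    wicket_bonus = 2 * sum(b["wicket"] for b in bs) / len(bs)
    return dot_share + wicket_bonus

ranked = sorted({b["bowler"] for b in balls}, key=impact, reverse=True)
```

The point is not the particular formula but the principle: once every ball leaves a data footprint, a selector can rank contributions that conventional statistics understate.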

This is not to suggest that data holds all the answers (the theme of many recent sports books) and that human judgment is on the road to oblivion. Yes, sport is in the midst of a data revolution, and you’d be insane not to seek better information to inform decisions. But rather than using data instead of human intelligence, the challenge is using data in tandem with the human dimension. 

And here decisions in sport reflect decisions in life. “What the data says” is too often a convenient way of passing the buck. Better to come out in plain sight: it’s a judgment. 

Risk is the job 

“Creative solutions, please — but nothing that’s too clever by half!” This is a recurrent framing of what people ask for from decision-makers. Wanted: upside outcomes without downside risk. That is impossible. As soon as decisions diverge from consensus, they inevitably come with the threat of a downside. The American investor and thinker Howard Marks has written a superb series of memos on this theme, “Dare to be Great”. 

“By definition, non-consensus ideas that are popular . . . are an oxymoron,” Marks writes. “And in the course of trying to be different and better, they [investors] have to bear the risk of being different and worse.” 

This is the challenge facing strategists in sport. Whenever your decisions diverge from conventional wisdom, you clearly discern value that most people don’t see. So a degree of intellectual loneliness is a necessity. 

One of the most unconventional decisions England cricket made during my time was selecting three spinners and three all-rounders for the 2018 Test series against Sri Lanka. We won the series 3-0. For the next Test match, we also picked an unconventional mix of players in the West Indies and got smashed. “Too clever by half!” 

Perhaps we became more risk-averse after that moment — which I think was a mistake. Because if you stop diverging from consensus, then what are you doing? Someone who makes decisions which merely reflect the average of opinion is not adding any value. 

David Swensen, who headed the Yale endowment fund, said superior decision-making demanded “uninstitutional behaviour from institutions”. The same challenge exists inside sport. Every leading sports organisation is now a huge machine, and that brings its own dangers — diluting a sense of mission, the temptation to keep everyone happy, watering down good ideas and the rush to compromise. 

How can the bureaucracy fight against the risk-averse tendencies of bureaucracies? Even when you’re on the inside, can you retain an outsider’s sense of independence and boldness, before all the compromise gets priced in? That’s central to the task — and inevitably comes with tension. 

 ‘Accepting a negative metric’ 

“Whenever someone innovates in business or in life,” argues the former poker player Caspar Berry, “they almost inevitably do so by accepting a negative metric that other people are unwilling to accept.” 

When the NBA’s Houston Rockets began taking a higher proportion of (long-range) three-point shots, they were accepting the negative metric that they would miss more often. The pay-off was that the shots they did convert came with a built-in premium. The trend was initially ridiculed — before being folded into orthodoxy. In 2015-16, only six out of 30 NBA teams took at least a third of their shots from three-point range. By 2020-21, 28 teams were doing it. 
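The Rockets’ trade-off can be put in expected-value terms. The shooting percentages below are illustrative round numbers, not the team’s actual figures:

```python
def expected_points(shot_value: int, make_rate: float) -> float:
    """Average points per attempt for a shot worth `shot_value`,
    converted at `make_rate`."""
    return shot_value * make_rate

# Illustrative make rates: a 50% two-pointer vs a 35% three-pointer.
two_pointer = expected_points(2, 0.50)    # about 1.00 points per attempt
three_pointer = expected_points(3, 0.35)  # about 1.05 points per attempt
```

The three-pointer misses more often, yet returns more points per attempt — the negative metric is the price of the premium.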

Spain’s triumphant football team of 2008-12 sometimes lined up without a striker. Neglecting to select a specialist goalscorer brings risks — as pundits regularly reminded us by imploring Spain to pick a big strong lad up front to bang in the goals. In 2012, Spain preferred the extra midfielder — expanding creative opportunities, while tolerating the negative metric of not selecting a player focused exclusively on scoring. Spain won the final of the 2012 Euros 4-0, their third major title inside four years — revealing a shrewd trade-off. 

In England’s T20 cricket team, we moved towards an ultra-aggressive batting order, with star batsman Jos Buttler moving from the middle order to opening batsman. But didn’t England need Buttler at the end, when the game was on the line? Ideally, yes. But not if it meant restricting Buttler’s ability to shape the contest by keeping him out of the action until it might be too late. 

All these decisions were initially controversial, before they shifted the consensus and became part of a new orthodoxy. That journey is never smooth. So while innovation can draw on data-informed insights, it stands or falls on courage and resilience. There will always be bumps in the road; can you hold the line? Data might illuminate the solution. But it’s going to take personal conviction to get it done. 

Look for ‘Lego’ players 

The NBA player Shane Battier was dubbed “Lego” by his manager because when he was on the court “all the pieces start to fit together”. Battier’s individual stats were moderate, but the team’s overall performance improved. 

Ranked by the team’s win-loss ratio for games in which they played, the three “winningest” England players in the five-day Test matches when I was selector (in order) were Keaton Jennings, Sam Curran and Adil Rashid. All three players faced frequent media pressure about their individual place in the Test team; their contribution to overall success was often overlooked. 
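A “winningest” ranking of this kind reduces to a simple calculation over match records. A minimal sketch with invented squads and results (the real exercise would of course use full England match logs):

```python
# Hypothetical match records, invented for illustration:
# (players fielded, did the team win?)
matches = [
    ({"Jennings", "Curran"}, True),
    ({"Jennings", "Rashid"}, True),
    ({"Curran"}, False),
    ({"Rashid"}, True),
]

def win_ratio(player: str) -> float:
    """Share of wins among the matches the player appeared in."""
    results = [won for squad, won in matches if player in squad]
    return sum(results) / len(results)

players = sorted({p for squad, _ in matches for p in squad})
ranked = sorted(players, key=win_ratio, reverse=True)
```

The metric is deliberately crude — it credits a player with the whole team’s result — which is exactly why, as the next paragraph argues, it should inform selection rather than decide it.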

Team success, of course, doesn’t and shouldn’t guarantee an individual’s selection indefinitely, whatever their form and confidence. But collective output should always be part of the mix in assessing an individual. The “Lego” concept is a useful reminder that the ultimate goal is team success. 

 “Choose the best player for every position,” argued Johan Cruyff, “and you’ll end up not with a strong XI, but with XI strong I’s.” Teams that punch above their weight — such as the Premier League football club Brentford FC — consistently recruit players who are undervalued. That skill can be separated into two distinct parts — not only ranking players better (“how good is ‘X’ relative to ‘Y’?”), but also identifying the team’s needs and how to meet them. The way things fit together can be as important as the pieces themselves. 

In assembling the overall puzzle, it helps to have a point of difference. In cricket, left-arm bowlers, for example, outperform their right-arm counterparts (on average) because they benefit from being unusual. And even the right-arm bowlers in the team benefit, because opposition batsmen have to switch constantly between different angles of attack, increasing the likelihood of being caught off-balance. 

You’re going to have to think, not just compute 

On one level, there is nothing new to be said about the boundaries of rational decision-making. In 1936, the conservative philosopher Michael Oakeshott co-wrote a book about a systematic approach for betting on the Derby, A Guide to the Classics. (The title was a very Cambridge in-joke.) There is a limit, as Oakeshott wrote in another piece, “beyond which there are no precise rules for picking the winner, and . . . some intelligence (not supplied by the rules themselves) [is] necessary”. You’re going to have to think, not just compute. 

Nearly a century later, even after the explosion of a lucrative sports analytics industry, that is loosely the position of Daryl Morey, former general manager of the Houston Rockets, whose innovations have transformed the NBA (and who called Shane Battier “Lego”). “You have to figure out what the model is good and bad at,” Morey argues, “and what humans are good and bad at.” 

No system, in other words, is so good that it can survive without good judgment. You can’t box off a perfect process. Understanding the data can embolden better risk-taking, but it can’t absolve decision-makers from responsibility. 

The best decision-makers I’ve observed could live with uncertainty and yet still make good (or above-average) decisions. Conversely, an opposite type finds it hard to cope unless they “know for sure”. And yet this second group never can know — so their thinking gets sucked into trying to reduce anxiety rather than searching for better solutions. 

In most interesting aspects of life, there is usually no perfect or complete answer. And yet there is still better and worse. Wise people know this. But admitting it risks falling victim to the craving for convenient certainty — including the expedient use of “what the data says”. Rationality should allow for healthy scepticism about how much can ever be completely known and understood. 

When I started as selector, the optimist in me wanted to believe we could harness data towards “optimising” England selection. In retrospect, I see more clearly that it will always be a highly human challenge founded, above all, on the primacy of judgment. Just as well, too. Because human value lies where things are most unquantifiable and most uncertain.

Saturday 27 March 2021

Aagamee Manushya Party / Human Future Party

 We the members believe: 

  1. Human knowledge and understanding are limited. We believe in a sceptical examination of all philosophies, knowledge systems and their methods.
  2. Life on planet earth appears to be on a downward spiral, and all attempts should be made to prevent the extinction of the human race and its environment.
  3. Achievement of political power is crucial to achieving our objectives and all methods are fair.
  4. Land, labour, money, risk… are fictitious concepts and we will aim to search for better fictions to prevent the extinction of the human race and its environment.

 The above principles will be used to guide our approach to any issue.

 Membership:

Anybody can become a member of the party by affirming the above four values and paying the requisite joining fee and annual membership charges.

 Anybody can leave the party by submitting their resignation to the appropriate authority in the party with six months’ notice.

 The party will evolve disciplinary policies after ascertaining that a member has violated its founding values.

 Governance:

 The party will have a Chairperson, a General Secretary and a Treasurer as a leadership troika. The troika will take decisions to achieve the party’s values. Each officer will have one vote on all operational issues, and decisions can be made by a majority vote. Pursuing a consensus should always be the initial approach.

 The values of the party may be amended with a 75% majority of the general membership.

 The leadership troika will have a term of three years. Elections will be held for each post every three years.

 The party may be dissolved with an 80% vote of the general membership.

 


Application form to join Aagamee Manushya Party / Human Future Party

 

 

I:                                                                                        

residing at:

 

 

hereby affirm:

 

  1. Human knowledge and understanding are limited. We believe in a sceptical examination of all philosophies, knowledge systems and their methods.
  2. Life on planet earth appears to be on a downward spiral, and all attempts should be made to prevent the extinction of the human race and its environment.
  3. Achievement of political power is crucial to achieving our objectives and all methods are fair.
  4. Land, labour, money, risk… are fictitious concepts and we will aim to search for better fictions to prevent the extinction of the human race and its environment.

 

I wish to join the Aagamee Manushya Party / Human Future Party and promise to work diligently to propagate its values and beliefs.

 

I enclose the amount                                                              towards membership and annual subscription charges.

 

 

 

 

Signature