
Tuesday, 29 August 2023

A level Economics: How to Improve Economic Forecasting

 Nicholas Gruen in The FT 


Today’s four-day weather forecasts are as accurate as one-day forecasts were 30 years ago. Economic forecasts, on the other hand, aren’t noticeably better. Former Federal Reserve chair Ben Bernanke should ponder this in his forthcoming review of the Bank of England’s forecasting. 

There’s growing evidence that we can improve. But myopia and complacency get in the way. Myopia is an issue because economists think technical expertise is the essence of good forecasting when, actually, two things matter more: forecasters’ understanding of the limits of their expertise and their judgment in handling those limits. 

Enter Philip Tetlock, whose 2005 book on geopolitical forecasting showed how little experts added to forecasting done by informed non-experts. To compare forecasts between the two groups, he forced participants to drop their vague weasel words — “probably”, “can’t be ruled out” — and specify exactly what they were forecasting and with what probability.  

That started sorting the sheep from the goats. The simple “point forecasts” provided by economists — such as “growth will be 3.0 per cent” — are doubly unhelpful in this regard. They’re silent about what success looks like. If I have forecast 3.0 per cent growth and actual growth comes in at 3.2 per cent — did I succeed or fail? Such predictions also don’t tell us how confident the forecaster is. 

By contrast, “a 70 per cent chance of rain” specifies a clear event with a precise estimation of the weather forecaster’s confidence. Having rigorously specified the rules of the game, Tetlock has since shown how what he calls “superforecasting” is possible and how diverse teams of superforecasters do even better.  
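Tetlock's tournaments make this concrete by scoring each forecast once the outcome is known, most commonly with a Brier score: the squared gap between the stated probability and what actually happened (1 if the event occurred, 0 if it did not), so lower is better. A minimal sketch in Python, with the example forecasts invented purely for illustration:

```python
def brier_score(prob: float, occurred: bool) -> float:
    """Squared error between a stated probability and the 0/1 outcome.

    0.0 is a perfect call; always saying 50% earns 0.25; a confident
    call in the wrong direction approaches 1.0.
    """
    outcome = 1.0 if occurred else 0.0
    return (prob - outcome) ** 2

print(brier_score(0.70, True))   # "70% chance of rain", and it rains: 0.09
print(brier_score(0.55, False))  # a hedged 55% call that misses:      0.3025
print(brier_score(0.95, False))  # a confident 95% call that misses:   0.9025
```

A point forecast such as "growth will be 3.0 per cent" cannot be scored this way until it is restated as a probability attached to a well-defined event, for example "a 60 per cent chance that annual growth exceeds 3.0 per cent".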

What qualities does Tetlock see in superforecasters? As well as mastering necessary formal techniques, they’re open-minded, careful, curious and self-critical — in other words, they’re not complacent. Aware, like Socrates, of how little they know, they’re constantly seeking to learn — from unfolding events and from colleagues. 

Superforecasters actively resist the pull to groupthink, which is never far away in most organisations — or indeed, in the profession of economics as a whole, as practitioners compensate for their ignorance by keeping close to the herd. The global financial crisis is just one example of an event that economists collectively failed to warn the world about. 

There are just five pages referencing superforecasting on the entire Bank of England website — though that’s more than other central banks. 

Bernanke could recommend that we finally set about the search for economic superforecasters. He should also propose that the BoE lead the world by open sourcing economic forecasting.  

In this scenario, all models used would be released fully documented and a “prediction tournament” would focus on the key forecasts. Outsiders would be encouraged to enter the tournament — offering their own forecasts, their own models and their own reconfiguration or re-parameterisation of the BoE’s models. Prizes could be offered for the best teams and the best schools and universities.  

The BoE’s forecasting team(s) should also compete. The BoE could then release its official forecasts using the work it has the most confidence in, whether it is that of its own team(s), outsiders or some hybrid option. Over time, we’d be able to identify which ones were consistently better.  
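Mechanically, such a tournament is little more than a shared set of resolvable questions, probability submissions from every entrant, and a leaderboard of average scores once outcomes are known. A hypothetical sketch of that leaderboard logic (team names and numbers are invented, not actual BoE practice):

```python
from statistics import mean

# Each entrant's (probability, outcome) pairs on the same set of
# resolved questions; outcome is 1 if the event happened, else 0.
submissions = {
    "in-house team":  [(0.60, 1), (0.80, 0), (0.30, 0)],
    "university lab": [(0.70, 1), (0.55, 0), (0.20, 0)],
    "hybrid model":   [(0.65, 1), (0.60, 0), (0.10, 0)],
}

def mean_brier(pairs):
    return mean((prob - outcome) ** 2 for prob, outcome in pairs)

# Lower average score = consistently better forecasts over time.
for team, pairs in sorted(submissions.items(), key=lambda kv: mean_brier(kv[1])):
    print(f"{team:15s} {mean_brier(pairs):.3f}")
```

With enough resolved questions, the ranking stops being noise and starts identifying which entrants, or which hybrids of them, deserve to anchor the official forecast.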

Using this formula, I predict that the Bank of England’s official forecasts would find their way towards the top of the class — in the UK, and the world.

Wednesday, 16 August 2023

A level Economics: Poverty of Theory and the Friedman-Twist

ChatGPT

"Poverty of theory" is a concept that refers to the limitations of a theory's explanatory power or its inability to accurately describe or predict real-world phenomena. It suggests that a theory might be overly simplistic, incomplete, or disconnected from the complexity and richness of actual observations and experiences. In other words, it highlights situations where a theoretical framework falls short of capturing the intricate nuances and intricacies of the real world.

Here are some examples to illustrate the concept of "poverty of theory" using scenarios where theories are distant from observed reality:

  1. Economic Models and Market Behavior: Economic theories often simplify human behavior and market dynamics, assuming rational decision-making and perfect information. However, in reality, human decisions are influenced by emotions, psychology, and imperfect information. For instance, the 2007-2008 financial crisis was largely unpredicted by prevailing economic models due to their "poverty of theory" in accounting for the irrational exuberance and complex interconnectedness of the financial markets.


  2. Climate Change Predictions: Some climate change models have faced criticism for their oversimplification of natural processes and their inability to account for unexpected feedback loops. The complexities of atmospheric interactions, ocean currents, and ecological changes can lead to a "poverty of theory" when these models fail to accurately predict the pace and extent of climate change impacts in various regions.


  3. Social Welfare Programs and Behavior: Theoretical models used to design social welfare programs often assume that individuals will respond to incentives in a linear and predictable manner. However, people's behavior is influenced by a multitude of factors beyond financial incentives, such as cultural norms, social ties, and psychological motivations. As a result, some welfare programs might not produce the expected outcomes due to their "poverty of theory" in understanding the complexities of human behavior.


  4. Healthcare Policy and Patient Choices: Healthcare policies sometimes rely on simplified models of patient behavior and medical outcomes. These models may not fully consider the various factors that influence health decisions, such as personal beliefs, cultural practices, and socioeconomic constraints. This can lead to a "poverty of theory" when policy interventions fail to yield the desired health outcomes.


  5. Education and Learning Theories: Educational theories might assume uniform learning processes and overlook individual differences in cognitive development and learning styles. When these theories do not account for variations in how students absorb and process information, they exhibit a "poverty of theory" by failing to provide effective instructional strategies for diverse learners.


  6. Political Models and Voter Behavior: Political theories sometimes oversimplify voter decision-making, assuming that people vote solely based on rational evaluations of policy positions. However, emotions, identity, and social factors often play a significant role in shaping political preferences. The "poverty of theory" in such models can lead to unexpected election outcomes that defy theoretical predictions.

In all these examples, the "poverty of theory" stems from an inadequate understanding of the complexities, uncertainties, and contextual factors that shape real-world phenomena. The gap between theoretical assumptions and observed reality highlights the need for more comprehensive, nuanced, and adaptive theoretical frameworks that can better capture the intricacies of the world around us.

---

The quote you're referring to comes from Milton Friedman's 1953 essay "The Methodology of Positive Economics": "The ultimate goal of a positive science is the development of a 'theory' or 'hypothesis' that yields valid and meaningful predictions about phenomena not yet observed." It captures Friedman's view that the primary criterion for evaluating economic theories and models is their ability to make accurate predictions, regardless of whether their underlying assumptions are realistic.

However, the concept of "poverty of theory" can be seen as a counterpoint to Friedman's quote. While Friedman emphasizes prediction as the ultimate goal of a theory, the idea of "poverty of theory" highlights the limitations of focusing solely on prediction without considering the underlying assumptions and the degree to which the theory reflects the complexities of reality.

In the context of the examples provided earlier, we can link the "poverty of theory" concept to Friedman's quote as follows:

  1. Economic Models and Market Behavior: Friedman's quote emphasizes the importance of accurate prediction. If an economic model accurately predicts market behavior, it might be considered successful according to his perspective. However, the "poverty of theory" concept highlights that even if a model makes accurate predictions in the short term, it might fail to explain the underlying causes of market behaviors or predict long-term systemic risks, as seen in the 2007-2008 financial crisis.


  2. Climate Change Predictions: If a climate change model can accurately predict certain aspects of temperature change, it might align with Friedman's idea of a successful theory. Nevertheless, the "poverty of theory" concept reminds us that if the model overlooks key feedback loops, intricate interactions, and tipping points, its predictions might become less reliable in the face of unexpected events and long-term impacts.


  3. Social Welfare Programs and Behavior: Friedman's perspective would likely value models that predict how individuals respond to incentives in welfare programs. However, the "poverty of theory" concept highlights that solely focusing on prediction might ignore the fact that human behavior is influenced by numerous factors beyond monetary incentives, leading to the need for more comprehensive theories to guide effective policy-making.


  4. Healthcare Policy and Patient Choices: Friedman's quote could be applied to healthcare models that predict patient behaviors based on incentives. Nonetheless, the "poverty of theory" concept points out that such models might not accurately capture the complexities of patient decision-making, potentially leading to misguided policy recommendations that fail to address the broader health outcomes.


  5. Education and Learning Theories: Friedman's emphasis on prediction could lead to valuing learning theories that accurately predict student performance. However, the "poverty of theory" concept emphasizes that these theories might fall short if they neglect individual differences and diverse learning styles, calling for a more holistic understanding of education.


  6. Political Models and Voter Behavior: Friedman's perspective might prioritize models that predict voter decisions based on policy positions. Yet, the "poverty of theory" concept reminds us that relying solely on predictive accuracy might overlook the influence of emotional appeals, social dynamics, and identity politics in actual elections.

In essence, while Friedman's quote underscores the importance of predictive power, the concept of "poverty of theory" cautions against disregarding the limitations and potential inaccuracies that arise from overly simplistic or disconnected theories. Balancing accurate prediction with a rich understanding of underlying mechanisms and complexities is crucial for developing theories that are truly insightful and useful.

---

The idea that unrealistic models can still make accurate predictions challenges traditional notions of theory validity. The position is often referred to as the "F-Twist", a label the economist Paul Samuelson coined for Milton Friedman's methodological stance; Friedman famously argued that "the more significant the theory, the more unrealistic the assumptions." This suggests that theories with highly simplified or unrealistic assumptions can sometimes yield surprisingly accurate predictions in specific contexts. However, while this position has some merit, it also comes with certain weaknesses and limitations.

Explanation of Why Unrealistic Models Can Make Accurate Predictions:

  1. Cancellation of Errors: In some cases, unrealistic assumptions in a model can lead to errors that cancel each other out. These errors might occur in different parts of the model, ultimately resulting in a prediction that aligns with observed data. While the underlying assumptions might not accurately represent reality, their combined effects can produce a seemingly accurate prediction (see the toy sketch after this list).


  2. Approximation of Complex Reality: Unrealistic models often simplify complex systems to a manageable level. In doing so, they might capture certain dominant factors that have a significant impact on the phenomenon being studied. These dominant factors can drive predictions that hold reasonably well even if other, more realistic but less influential factors are omitted.


  3. Constant Relationships: Certain unrealistic assumptions might inadvertently enforce constant relationships between variables, even if such relationships don't hold in the real world. In specific scenarios, these constant relationships could lead to accurate predictions for a limited range of conditions.
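A toy illustration of the first point, with every number invented: suppose real demand falls with price and rises with income, while the modeller ignores income and understates the price effect. If, in the data the modeller happened to see, income moved in step with price, the two mistakes offset and the forecasts look impeccable; once that co-movement breaks, so does the model.

```python
# All numbers are invented for illustration.
def true_demand(price, income):
    return 100 - 2.0 * price + 0.5 * income   # the "real" process

def naive_model(price):
    # Ignores income and understates price sensitivity: two errors.
    return 120 - 1.0 * price

# In the sample the modeller saw, income happened to track price
# (income = 40 + 2 * price), so the two errors cancel exactly:
for price in (5, 15, 20):
    income = 40 + 2 * price
    print(true_demand(price, income), naive_model(price))     # identical

# When income stops tracking price, the cancellation breaks down:
for price in (5, 15, 20):
    print(true_demand(price, income=60), naive_model(price))  # diverge
```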

Weaknesses of the Truism:

  1. Limited Generalizability: Unrealistic models are often tailored to specific conditions or ranges of data. While they might predict well within these confines, they could fail dramatically when applied beyond their intended scope. This limits the generalizability and utility of such models.


  2. Lack of Insights: Unrealistic models might provide accurate predictions, but they often lack explanatory power. They don't offer insights into the underlying mechanisms and interactions that drive the observed outcomes. This can hinder our understanding of the phenomenon in question.


  3. Long-Term Reliability: Unrealistic models might accurately predict short-term outcomes but struggle to forecast events over the long term. Factors that are omitted or oversimplified could become critical as time progresses, leading to predictions that deviate from reality.


  4. Unsustainable Assumptions: Where assumptions are not only unrealistic but also unsustainable over time, predictions can become highly unreliable. As circumstances change, these assumptions might break down, rendering the model useless.


  5. Risk of Misapplication: Relying on unrealistic models that provide accurate predictions in certain contexts can lead to misapplication when the same model is used in situations where its assumptions no longer hold. This can result in costly errors and misinformed decisions.


  6. Stifling Progress: Overreliance on unrealistic models might discourage the development of more realistic, nuanced, and comprehensive theories. This can hinder the advancement of understanding and the discovery of more accurate models.

In summary, while the concept that unrealistic models can make accurate predictions challenges our conventional thinking, it's important to approach this truism with caution. Unrealistic models can indeed provide useful insights in specific contexts, but their limitations and potential pitfalls should be acknowledged. Balancing predictive accuracy with a deeper understanding of underlying mechanisms and potential future scenarios is crucial for ensuring the robustness and applicability of scientific theories.

---


Real World Examples:

  1. Newtonian Mechanics and Planetary Orbits: Newton's laws of motion and gravitation, while highly simplified and not accounting for relativistic effects, accurately predict the motion of planets in our solar system. Despite breaking down in regimes they were never built for (speeds near the speed of light, very strong gravitational fields, and fine details such as the precession of Mercury's orbit), these laws are still highly useful for calculating planetary orbits.


  2. Gas Laws and Ideal Gases: The ideal gas law assumes that gas molecules have no volume and no interactions, which is not the case in reality. However, under certain conditions, such as low pressures and high temperatures, the ideal gas law provides accurate predictions for the behavior of real gases.


  3. Epidemiological Models for Disease Spread: Some disease spread models assume homogeneous mixing of populations, ignoring spatial and social complexities. These models can still offer valuable insights during early stages of outbreaks and can guide public health responses, even if they don't fully capture real-world dynamics.


  4. Financial Risk Assessment Models: Some financial models assume that asset returns follow a normal distribution, disregarding extreme events (fat tails). Despite this unrealistic assumption, these models might work well under normal market conditions, but they can grossly understate the likelihood of major market crashes (see the sketch after this list).


  5. Weather Forecasting Models: Weather models use highly complex simulations to predict weather patterns, yet they still simplify atmospheric processes. They can provide accurate short-term forecasts, but their skill fades beyond roughly a week to ten days as small errors in the simplified physics and in the initial conditions compound; long-term climate trends are the domain of a different class of models.


  6. Economic Growth Models: Economic growth models often make assumptions about constant technological progress and uniform resource allocation. While these assumptions don't hold in reality, such models can still provide insights into factors that contribute to economic growth in specific time frames.


  7. Linear Regression in Social Sciences: Linear regression models might assume a linear relationship between variables, ignoring potential nonlinearities and interactions. However, these models can still yield useful insights into correlation trends within the studied data.


  8. Travel Time Estimations: Navigation apps often provide accurate travel time predictions, assuming constant traffic flow and ideal driving conditions. While these models might not consider real-time variations and congestion dynamics, they can be very helpful for planning routes.
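To put a number on the fourth example: under a normal model, a daily return four standard deviations below the mean should be roughly a once-in-a-century event, yet equity indices have produced drops of that size far more often than that. A small sketch of the normal-model arithmetic, using only the Python standard library (252 trading days per year is the usual convention):

```python
import math

def normal_tail(z: float) -> float:
    """P(X < -z) for a standard normal variable."""
    return 0.5 * math.erfc(z / math.sqrt(2))

p = normal_tail(4.0)                      # about 3.2e-05
trading_days_per_year = 252
expected_wait_years = 1 / (p * trading_days_per_year)

print(f"P(daily return < -4 sigma) = {p:.2e}")
print(f"Expected once every {expected_wait_years:.0f} years of trading")
# ~125 years under the normal model; real markets have delivered such
# drops far more frequently, which is exactly the fat-tail problem.
```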

Final Thoughts:

The concept that unrealistic models can still yield accurate predictions showcases the intricate relationship between theory and reality. These examples demonstrate that while models might depart from observed reality, they can provide valuable insights and predictions in specific circumstances. However, acknowledging the limitations and weaknesses of such models is crucial for responsible and informed decision-making. Balancing the trade-offs between predictive accuracy, realism, and explanatory power remains a challenge in various scientific fields. Therefore, embracing a cautious and thoughtful approach to the application and development of models is essential to maximize their usefulness while avoiding pitfalls.

---

The statement "All models are wrong, but some are useful" is attributed to statistician George Box. This aphorism encapsulates the understanding that while no model can perfectly capture the complexities of reality, models can still offer valuable insights and predictions that are helpful for making decisions and understanding phenomena. Let's evaluate this statement in the context of the concepts discussed earlier: "poverty of theory," the truism of unrealistic models making accurate predictions, and the weaknesses of this perspective.

  1. Poverty of Theory: The idea of "poverty of theory" aligns well with the statement. When a theory is limited in its ability to accurately represent the full complexity of reality, it can be considered "wrong" in the sense that it falls short of a perfect description. However, even such theories can be "useful" if they provide insights, predictions, or frameworks that aid in understanding or decision-making.


  2. Unrealistic Models and Accurate Predictions: This truism is directly connected to the concept of unrealistic models making accurate predictions. When models with simplified, unrealistic assumptions yield accurate predictions, they demonstrate the paradox that models can be "wrong" in terms of their assumptions but "useful" in terms of their predictive capabilities.


  3. Weaknesses of the Truism: The weaknesses of the "unrealistic models" perspective provide nuance to the statement. These weaknesses highlight that while some models might be useful in making predictions, they might lack explanatory power, generalizability, and sustainability. This means that although the models might be "useful," their limitations and potential risks should be acknowledged.

Comprehensive Evaluation:

  1. Context Matters: The statement recognizes that models are simplifications of reality and are created within specific contexts. Models should be evaluated based on whether they serve their intended purpose within that context.


  2. Trade-Off between Accuracy and Utility: Models often involve a trade-off between accuracy and utility. Perfectly accurate models might be excessively complex and unwieldy, making them impractical to use. Instead, models that strike a balance between accuracy and simplicity can be more useful for decision-making.


  3. Insights vs. Predictions: While some models might not accurately predict outcomes, they can still provide valuable insights into the underlying processes or relationships. These insights might guide further research and help refine models in the future.


  4. Overfitting vs. Underfitting: Models can both overfit (fit noise in the data) and underfit (miss important patterns) reality. Finding the right level of complexity and realism is essential to achieve a useful balance, as the sketch after this list illustrates on toy data.


  5. Cautious Application: The statement underscores the importance of cautious application. Users should be aware of the model's limitations and the context in which it was developed, ensuring that its predictions or insights are not misapplied.


  6. Iterative Process: The development of models is an iterative process. Models that are initially "wrong" in certain aspects can become more "useful" as they are refined, incorporating more realistic assumptions and accounting for previously overlooked factors.


  7. Continual Improvement: The recognition that "all models are wrong" encourages ongoing efforts to improve models and theories. Scientists and researchers are driven to refine models to better align with observed reality while maintaining their usefulness.
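A standard way to see the overfitting point is to fit polynomials of increasing degree to a small noisy sample and compare how they do on data they were not trained on. The sketch below assumes NumPy is available; the quadratic "true" curve and the noise are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_curve(x):
    return 0.5 * x ** 2 - x + 2            # the invented "reality"

# A small noisy training sample and a separate test sample.
x_train = np.linspace(-3, 3, 12)
y_train = true_curve(x_train) + rng.normal(0, 1.0, x_train.size)
x_test = np.linspace(-3, 3, 100)
y_test = true_curve(x_test) + rng.normal(0, 1.0, x_test.size)

for degree in (1, 2, 9):                   # underfit, about right, overfit
    coefs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.2f}, test MSE {test_mse:.2f}")
# Typically the degree-1 line is poor on both sets (underfit), the degree-9
# fit looks best on the training points but worse on new data (overfit),
# and the degree-2 fit balances the two.
```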

In conclusion, the statement "All models are wrong, but some are useful" captures the inherent imperfection of models while acknowledging their practical value. While models may never be perfect representations of reality, they can still provide valuable insights, guide decision-making, and drive progress in understanding complex phenomena. The key lies in understanding the limitations of models, critically evaluating their strengths and weaknesses, and using them appropriately in their intended contexts.


---

Here are real-world examples to illustrate each point of the argument:

  1. Poverty of Theory: Climate Change Models - Climate change models often struggle to capture the full complexity of Earth's climate system. However, they are "useful" for predicting broad trends and highlighting potential impacts, aiding policymakers in developing strategies to mitigate climate change.


  2. Unrealistic Models and Accurate Predictions: Stock Price Models - Some stock price models assume efficient markets and rational behavior, which don't fully align with real-world market dynamics. Yet certain simplified models have at times captured short-term price movements well enough to inform trading decisions, even if any such edge tends to be small and short-lived.


  3. Weaknesses of the Truism: Economic Growth Models - Economic growth models might predict GDP growth based on specific factors like investment and population growth. While "useful" for short-term forecasts, they might lack explanatory power in accounting for unexpected events like technological breakthroughs or financial crises.

Comprehensive Evaluation:

  1. Context Matters: Medical Diagnosis Algorithms - Algorithms used for medical diagnosis might not encompass all potential symptoms and variations in patients. However, they can be "useful" tools for providing preliminary insights to doctors and facilitating faster diagnoses.


  2. Trade-Off between Accuracy and Utility: Flight Simulators - Flight simulators used for pilot training simplify the physics of flight to a level that is manageable for training purposes. While not perfectly accurate, these simulators strike a balance between realism and usability.


  3. Insights vs. Predictions: Psychological Personality Tests - Personality tests like the Myers-Briggs Type Indicator are criticized for their simplicity and limited validity. However, they can offer "useful" insights into individual preferences and tendencies for self-awareness.


  4. Overfitting vs. Underfitting: Machine Learning Models - In machine learning, overfitting can result in models that perform exceptionally well on training data but poorly on new data. "Useful" models find the right balance, generalizing well to new data without overfitting.


  5. Cautious Application: Climate Change Projections - Climate models can project a range of temperature increases, but they might not account for all future uncertainties. These projections are "useful" for understanding potential climate scenarios but should be interpreted with caution.


  6. Iterative Process: Drug Development Models - Early drug development models might not accurately predict how a drug will behave in humans. However, they provide "useful" initial data that guides further research and testing in the drug development pipeline.


  7. Continual Improvement: Spacecraft Trajectory Calculations - Calculating spacecraft trajectories involves complex physics and assumptions. These models improve over time as new data from space missions is collected, refining our understanding of celestial bodies' gravitational interactions.

In summary, these real-world examples underscore the dynamic nature of modeling, where trade-offs between accuracy and utility are made based on specific contexts. While no model can perfectly capture the entirety of reality, their "usefulness" in offering insights, predictions, and frameworks is what makes them valuable tools for decision-making and understanding complex phenomena. The evaluation of models should consider their limitations, their application in the appropriate context, and the iterative process of refining models over time.