Modeling Heuristics

Create a model

  • Use modeling to establish context, increase transparency, and guide iteration
  • Do more modeling when experimentation is impractical or environmentally damaging
  • Do less modeling when cheap, iterative prototyping or experimentation is possible
  • Build models based on fundamental analysis of the system when possible
  • Sample real-world results when the underlying structure cannot be fully modeled from fundamental analysis
  • Maintain and improve models as understanding grows
  • Don’t do more modeling than the available data warrants
  • Use some of the data to build the model and some to check its predictions (see the first sketch after this list)
  • If competing models have similar explanatory power, choose the simplest one (see the second sketch after this list)
  • From more to less explanatory: finished products, prototypes, mock-ups, models, visuals, documentation, hand-waving
  • Set boundaries wide enough to see the effects of changing model parameters
  • Set boundaries wide enough to include what generates the behavior of interest
  • Reduce model complexity when noise or mis-measurement risk is high
  • Increase model complexity when data is copious, representative, and high quality
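
A minimal sketch of the build/check split in Python, assuming a noisy linear process and an ordinary least-squares fit (both invented for illustration):

    import random

    random.seed(0)

    # Synthetic "observations": y = 3x + 2 plus measurement noise.
    data = [(x, 3 * x + 2 + random.gauss(0, 1.5)) for x in range(40)]
    random.shuffle(data)
    build, check = data[:30], data[30:]   # some data to build, some to check

    def fit_line(points):
        # Ordinary least-squares fit of y = a*x + b.
        n = len(points)
        mx = sum(x for x, _ in points) / n
        my = sum(y for _, y in points) / n
        a = (sum((x - mx) * (y - my) for x, y in points)
             / sum((x - mx) ** 2 for x, _ in points))
        return a, my - a * mx

    def rmse(points, a, b):
        # Root-mean-square prediction error.
        return (sum((y - (a * x + b)) ** 2 for x, y in points) / len(points)) ** 0.5

    a, b = fit_line(build)
    print(f"model: y = {a:.2f}x + {b:.2f}")
    print(f"error on build data:    {rmse(build, a, b):.2f}")
    print(f"error on held-out data: {rmse(check, a, b):.2f}")

A held-out error much worse than the build error is a sign the model has fit noise rather than structure.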

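For the simplicity tie-break, one sketch (assuming NumPy; the polynomial candidates and the 5% "similar explanatory power" tolerance are arbitrary choices for illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 10.0, 60)
    y = 3.0 * x + 2.0 + rng.normal(0.0, 1.5, x.size)   # the real process is linear

    idx = rng.permutation(x.size)                      # random build/check split
    tr, te = idx[:40], idx[40:]

    scores = {}
    for degree in (1, 2, 3, 5):                        # competing polynomial models
        coeffs = np.polyfit(x[tr], y[tr], degree)
        resid = np.polyval(coeffs, x[te]) - y[te]
        scores[degree] = float(np.sqrt(np.mean(resid ** 2)))
        print(f"degree {degree}: held-out RMSE = {scores[degree]:.2f}")

    # Among models scoring within 5% of the best, keep the simplest.
    best = min(scores.values())
    chosen = min(d for d, e in scores.items() if e <= 1.05 * best)
    print(f"chosen: degree {chosen}")
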
Set model expectations

  • Secure agreement on model inputs and assumptions before presenting results
  • Identify known areas of uncertainty explicitly
  • Define what the model does not do
  • Prioritize developing insight over making predictions
  • Prioritize consequences over probabilities
  • Use modeling to support, not replace, decision-making
  • Recognize that models are generally optimistic and don’t fully account for real-world complexity
  • Limit the precision of conclusions to the precision of the inputs (see the sketch after this list)
  • Use models to falsify rather than confirm assumptions
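
One way to honor the precision limit is to propagate input uncertainty by Monte Carlo; the profit model and the ± figures below are invented for illustration:

    import random
    import statistics

    random.seed(2)

    # Hypothetical inputs, each known only to limited precision:
    # demand ~ 1000 ± 100 units, unit margin ~ 5.00 ± 0.50.
    runs = []
    for _ in range(10_000):
        demand = random.gauss(1000, 100)
        margin = random.gauss(5.0, 0.5)
        runs.append(demand * margin)      # the model: profit = demand * margin

    mean = statistics.mean(runs)
    sd = statistics.stdev(runs)
    print(f"profit ≈ {mean:,.0f} ± {sd:,.0f}")
    # Reporting "profit = 5,012.37" would claim precision the inputs cannot support.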

Before investing in model complexity

  • Verify the model was built as intended
  • Assess if more complexity creates significantly more explanatory or predictive power
  • Improve quality of input data and assumptions
  • Test input data for consistency and outliers (see the outlier check after this list)
  • Validate model fit and predictive power against multiple sets of independent data
  • Compare results with outcomes from real-world examples
  • Create and compare multiple independent models
  • Evaluate and adjust model constraints
  • Use calibration techniques to adjust the model’s fit to the data (see the calibration sketch after this list)
  • Strike a balance between complexity (overfitting) and simplicity (underfitting)
  • Build a better conceptual model to reduce perceived complexity
  • Design the normal case in a way that automatically handles the special cases
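
A minimal calibration sketch: scan candidate parameter values and keep the one that best fits the data (the decay model and the observations are hypothetical):

    import math

    # Hypothetical observations of a decaying quantity at times 0..5.
    observed = [(0, 100.0), (1, 74.1), (2, 54.9), (3, 40.7), (4, 30.1), (5, 22.3)]

    def model(t, k, n0=100.0):
        # Assumed structure: exponential decay from n0.
        return n0 * math.exp(-k * t)

    def sse(k):
        # Sum of squared errors between model and observations.
        return sum((model(t, k) - n) ** 2 for t, n in observed)

    # Calibrate: grid-search the decay rate and keep the best-fitting value.
    best_k = min((k / 1000 for k in range(100, 501)), key=sse)
    print(f"calibrated decay rate k ≈ {best_k:.3f}")   # ≈ 0.30 for this data

Gradient-based optimizers do the same job faster; the grid search keeps the idea visible.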

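And a simple consistency/outlier screen on input data (the readings are invented; a robust median-based test would be preferable when outliers are large):

    import statistics

    readings = [9.8, 10.1, 9.9, 10.3, 10.0, 57.2, 9.7, 10.2]   # one suspect value

    mean = statistics.mean(readings)
    sd = statistics.stdev(readings)
    flagged = [x for x in readings if abs(x - mean) > 2 * sd]
    print(f"flagged for review: {flagged}")   # -> [57.2]
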
Use models to create insights

  • Think of models as anticipatory decision-making
  • Explain and summarize evidence
  • Identify options that don’t or won’t work
  • Identify which factors have more or less impact
  • Model long periods to identify system cycles
  • Use visual tools, such as flow charts, pictures, and maps to build group understanding and encourage debate
  • Identify system dynamics elements: buffers, stocks, flows, parameters, rules, rates of change, goals, and feedback loops (a stock-and-flow sketch follows this list)
  • Model the effects of design changes
  • Drive the model to extremes to see what emerges
  • Assess model sensitivity over a wide range of inputs, scenarios, and high-consequence events (see the sensitivity sweep after this list)
  • Challenge conventional wisdom and provoke thought outside familiar timescales and frameworks
  • Find ways to eliminate or minimize dependencies
  • Find ways to minimize the amount of information that is important
  • Find ways to minimize and concentrate exception handling
  • Explore contradictory and incomplete information
  • Search for tendencies, clusters, and anomalies
  • Surface and test assumptions
  • Investigate the effects of time delays
  • Test interventions and identify scenarios
  • Evaluate incremental costs and benefits
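
A toy stock-and-flow sketch, with all numbers invented: an inventory (stock) drained by steady demand (outflow), refilled by deliveries (inflow) that arrive DELAY weeks after ordering, and governed by a target-seeking feedback rule:

    # Stock-and-flow elements: stock = inventory, inflow = deliveries,
    # outflow = demand, time delay = shipping lag, feedback = ordering rule.
    DELAY, TARGET, DEMAND = 4, 100.0, 10.0
    stock = 60.0                          # start below target
    pipeline = [DEMAND] * DELAY           # orders already in transit

    for week in range(24):
        delivery = pipeline.pop(0)                           # placed DELAY weeks ago
        stock += delivery - DEMAND
        order = max(0.0, DEMAND + 0.5 * (TARGET - stock))    # feedback rule
        pipeline.append(order)
        print(f"week {week:2d}: stock {stock:6.1f}  order {order:5.1f}")

Because corrections arrive four weeks late, the stock overshoots the target and cycles; shortening DELAY damps the oscillation, which is the kind of time-delay insight these bullets point at.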

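And a one-at-a-time sensitivity sweep that also drives each input toward extremes (toy profit model with invented numbers; a fuller analysis would vary factors jointly):

    # Sweep each input over a wide range to see which factor moves the output most.
    base = {"volume": 1000.0, "price": 12.0, "unit_cost": 7.0, "fixed": 3000.0}

    def profit(p):
        return p["volume"] * (p["price"] - p["unit_cost"]) - p["fixed"]

    for name in base:
        for factor in (0.1, 0.5, 1.0, 2.0, 10.0):   # include extreme values
            trial = dict(base)
            trial[name] = base[name] * factor
            print(f"{name:9s} x{factor:4.1f}: profit {profit(trial):12,.0f}")
        print()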