I’ve worked as a futures analyst for almost ten years now, and six of those years were spent on the Global Strategic Trends Programme for the UK Ministry of Defence.  During this time, futures analysis (or horizon scanning, as it’s sometimes called in the UK) has seen a lot of change, and it’s probably fair to say that, as a discipline, we know more today than we did back when I first started working on it in 2005.

I’ve focused this blog on mistakes I’ve made, which is perhaps a little crude.  In reality, these are learning points.  At the time we didn’t know these weren’t necessarily the best ways of doing things; it was just how it was.  Now, with advances like big data and the growing sophistication of analysis tools and techniques, futures analysis is improving.  But to improve you have to make mistakes; trial and error is difficult sometimes, but worthwhile, as long as you learn from it.  So here are my top six mistakes (believe me, I’ve got more, but it seemed wise to stop at six!), along with some thoughts on what they mean for the future of futures analysis.

1.    Not being an organized ‘fox’

Experts are important for making predictions.  Experts have a lot of knowledge about certain things (that’s why they’re called experts!).  The use of experts is covered extensively in ‘The Signal and the Noise’ by Nate Silver, who works with the established analogy of ‘hedgehogs’ and ‘foxes’.  In this analogy, experts are hedgehogs: they have one approach to an issue (like the spikes of a hedgehog, which represent its one tactic, a ‘prickly’ defence).  Foxes, on the other hand, are generalists.  In contrast to the hedgehog, the fox doesn’t go for one deep specialisation; instead it has lots of little bits of knowledge about many things.

Remembering this analogy is useful because it helps futures analysts frame how they should engage with experts and what they actually do themselves.  Futures analysts need to keep track of lots of small pieces of information about what could happen in the future.  This sounds simple, but in reality it can be a daunting task.  With the bewildering array of information sources and experts out there, a fox needs to be increasingly organized, keeping tabs on these things so that they can be referenced in a meaningful way and brought together into a balanced assessment.  One thing a futures analyst doesn’t need to be is a deep expert, or hedgehog.  They need to be aware of all the little things, all the very many threads of possibility, and then be able to quantify them and bring them together into an assessment.  (I’m tempted to write ‘spin them into gold’, but that would be wrong for a whole host of other reasons, which I’ll get to later.)

2.  Confusing power with wisdom

Futures analysts will often find themselves rating ideas, or hypotheses, about the future.  Such rating exercises can relate to sources (is a peer-reviewed academic paper more reliable than a fringe discussion site?) but, more delicately, they can also relate to people.  If you’re making an assessment in a completely new area, how do you determine who the experts are when you are a complete novice in the field?  For this you need to look at the data institutions generate and rate its reliability and accuracy.  Similarly, for experts you need to study their outputs and assess the arguments they’ve made on particular issues.

When assessing experts you need to be clear on what you’re looking for.  The fame and status of an individual, or of an institution with a lot of public influence, does not mean they will be better; occasionally it can prove the opposite.  This is especially the case for experts from large, powerful organisations.  An individual who carries a lot of weight strategically, or in the day-to-day operations of a business or government department, generally doesn’t have greater knowledge of what could happen in the future, even if they do have a higher rank or position.  If anything, they are less likely to know what’s happening, as they rarely need to keep lots of little pieces of data themselves and instead have people to do this for them; be they a General or a University Professor, this tends to hold.  If assessments are tailored to the beliefs of such groups (or worse, to what a group believes these people want to hear), then they quickly become skewed and biased.

3. Fear of reputational damage

Forecasts can take you into some strange areas.  I was once cautioned for not taking my job seriously because I came up with a hypothesis relating Michael Jackson’s death to a breakdown of critical infrastructure (my hypothesis centred on the impacts a shock social event, like the death of a cultural icon, could have on communication structures).  Even discussing such an issue was seen as ‘ridiculous’, and by labelling it as such we started to limit ourselves.  This was at a time when an individual’s reputation was considered paramount to their future career; people were actively discouraged from exploring ‘blue sky’ ideas for fear of committing career suicide, or because of the perception that everyone around them was some kind of arbiter of standards.  In the civil service especially, for every person with ideas there are another three sucking their teeth and saying ‘hmmm…I can just see the Daily Mail headline now.’

Such overriding concerns can mean people become preoccupied with ‘face’.  Nobody wants to be the one who looks stupid.  No one is prepared to say something outside the norm, for fear of looking like the ‘departmental jester’.  Such fear plays out constantly, and I suspect most futures analysts have fallen foul of it at some point (they find themselves questioning their assessment: ‘should I write that, can I say that?’).  If an organization gets it wrong, the assessment, or worse the author, is labelled as outlandish, and they can be marginalized, or worse ridiculed, for their ideas.  This is very wrong; if analysts have an idea that they can illustrate with a hypothesis, data and an outline prediction of how it could happen then, at the very least, their ideas deserve to be heard.  To dismiss an idea unheard, or worse, to use such assessments as ‘evidence’ that futures assessment is nonsense, is very bad, but it is often a sad reality of how strategy formulation works.

What is even worse is when futures analysis as a discipline is treated with the same level of ridicule.  As a discipline, futures analysis or ‘horizon scanning’ is often seen as the poorer relative of intelligence analysis.  People often dismiss ‘futurology’ as non-quantitative hand-waving, and such an attitude means many powerful individuals see it as nonsense or a waste of time.  The ‘crystal ball gazing’ caricature in the popular press further reinforces this.  Could such reputational concern be a driving force behind why many government departments seek to keep such activity at ‘arm’s length’ from ‘proper’ policy?

4.  Avoiding probability and predictions

Sometimes, perhaps because of some of the other mistakes listed here, there is a desire to avoid specific predictions and an overall reluctance to use probabilities.  Making a specific ‘x will happen by y, with a likelihood of z’ prediction is often avoided.  This is often because of the factors I described above: a prediction, in some senses, is a benchmark to which a forecaster can be held accountable.  If people are worried that they might look stupid, or that the assessment might reflect badly on the institution they work for, you can see why there is reticence to give specifics or quantifiable metrics.  But if you don’t use probabilities or make specific assessments, then how do you test yourself?  How do you use a consistent, non-subjective benchmark for arguments, ideas and predictions?

There is a mistrust of probability amongst many managers and decision makers.  In my experience, it comes out when someone quotes a number.  As soon as someone takes their belief in something and expresses it on a numerical scale, it raises a series of questions: how subjective is this best guess, and how quantifiable or reliable is this estimate?  Often, attitudes to probability cause people to split into two camps:

  • Areas that can generate ‘hard’, quantifiable data that allows rigorous modelling of future predictions, such as climate science.  Such disciplines are seen as being at the more ‘scientific’ end of forecasting.
  • Areas that deal with political or social trends.  Here the data is harder to quantify; a lot of economic data fits into this category as well, but mostly it covers social and political data, which is notoriously difficult to measure.  With such data, probabilities and predictions tend to be non-specific and without numbers.

A lot of people fear boiling their belief down into a number; perhaps it’s too blunt, or too direct, or perhaps people fear that it will be wrong, or that such an exercise can’t be conducted without more analysis.  But unless you start doing this, unless you start collecting and generating probabilities, you will never get better at prediction.  It has to start somewhere, and however inaccurate or imprecise these first ‘best guesses’ are, they will still form the foundation for better future assessments.  Without probabilities you have no benchmark against which to assess what could happen in the future, and that means you never have a particularly firm basis to improve from and move forward.
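To make the idea of a benchmark concrete, here is a minimal sketch of what recording and scoring your own probabilistic predictions could look like.  The claims, numbers and field names are hypothetical, invented purely for illustration; the scoring rule is the standard Brier score, which rewards well-calibrated probabilities.

```python
# A minimal sketch of benchmarking your own forecasts, assuming you record each
# prediction as a probability (0-1) and later note whether it actually happened.
# The example claims and probabilities below are invented for illustration only.

predictions = [
    {"claim": "Event X occurs by 2020", "probability": 0.7, "occurred": True},
    {"claim": "Event Y occurs by 2020", "probability": 0.2, "occurred": False},
    {"claim": "Event Z occurs by 2020", "probability": 0.9, "occurred": False},
]

def brier_score(forecasts):
    """Mean squared difference between stated probability and outcome (0 = perfect)."""
    return sum((f["probability"] - float(f["occurred"])) ** 2 for f in forecasts) / len(forecasts)

print(f"Brier score: {brier_score(predictions):.3f}")  # lower is better
```

However rough the first guesses are, a record like this gives you something consistent to improve against over time, which is exactly the benchmark the paragraph above argues for.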

5. Ignoring ‘old’ data.

‘Trying to predict the future is like trying to drive down a country road at night with no lights while looking out the back window.’ (Peter F. Drucker)

There is often confusion over the reliability of historical data.  A lot of forecasts I worked on started from ground zero and ignored the data contained in other forecasts or historical trends.  Hopefully this isn’t because people take Peter Drucker’s words literally!  At its simplest, historical analysis is a way of looking through the data to find the ideas and assessments other people have already made.  At its most complex, searching as widely as possible for research that may be addressing the same things as your analysis will increase the range of your data and may also show you the limitations in your own data, or challenge your interpretation of it.

Detailed, considered historical analysis means you have a knowledge base to work from.  The worst I’ve seen of this was a project where all data, any data, historical or otherwise, was consistently ignored because of trust issues within the team doing the analysis.  This led to a whole range of last-minute panics (not to mention terrible morale, as the issues were never openly voiced), in which assessments were made ad hoc at the last minute using general internet searches to provide the data.  This isn’t to say internet searches are bad; it’s just that when they’re your only source, consulted at the last minute, your assessment is likely to be limited and to feature a lot of Wikipedia references (again, not necessarily bad, but that represents only a very small fraction of the data available for an assessment).

6.    Not stating your assumptions, bias, or logic.

Again, this relates to many of the other things I’ve learnt, rather tortuously, over the past ten years.  It sounds naïve and perhaps obvious, but if you can’t establish a practice and a culture that allows you to state your assumptions and to present your biases (as far as you are consciously able), then any futures analysis you make will be difficult, if not flawed.

Of all the lessons I’ve learned, this is perhaps the hardest.  The realization that you’ve produced a forecast that is completely biased towards your own world view, and has been unconsciously drafted to reflect what you perceive the target market wants to hear, is not a pleasant one.  In my defence, back in 2007-2011 we didn’t really know any different, but as futures analysis develops, we know more now.  We know that we can and do produce biased assessments.  We know that experts can be biased, and we know that events can be designed around experts who reflect those biases, so much so that you can write whatever future you want.  This may achieve some resonance with the people you want to influence, or with those who have a particular policy requirement they wish your analysis to fit, but it will not give a more reliable assessment.  (Another Peter Drucker quote: ‘The best way to predict the future is to create it.’)

I came to this realization slowly, and the simple activity of writing down your beliefs, biases and assumptions around an issue as the first step in your analysis has taught me a lot.  Although a seemingly simple exercise, it has also proved the most immediately controversial.  Such a basic exercise can expose a wide variety of personal, behavioural and cultural issues that may be long standing, but it’s better to surface them and be aware of them at the outset, rather than trying to fight them issue by issue as you compile a forecast.

The future of futures analysis

Reflecting on all these things, on the things I tried, the things that worked and the things that failed, has led me to seek to develop futures analysis that:

  • Provides a simple-to-understand assessment of future outcomes, expressed using numerical probabilities.
  • Doesn’t ignore other forecasts.  Adopting principles from the fabled ‘scan of scans’, or producing an ‘aggregate forecast’, allows you to draw from the material that other analysts and researchers have generated.  Whatever you’re doing, it’s likely that lots of other people have done similar work before, and it is worth seeking that work out to get as much data as you can (a deliberately simple sketch of this kind of aggregation follows after this list).
  • Provides a full, navigable means of showing why and how the probabilities and assessments have been generated.  (This is also not without controversy: senior leaders generally don’t want to read such detail, whereas analysts do, and without proper editing this can lead to outputs that read like ‘War and Peace’, another criticism sometimes levelled against producing forecasts.)
  • Is up front and open about the biases you have.  As a first exercise, diagnose your bias, as it is this that drives the data you collect, and you need to be aware of it and fully open about it.
  • Is clear on the value experts provide.  Expert opinion is certainly important, but facilitated sessions with experts aren’t the only technique for thinking about the future.  A group of clever people thinking about the future can be tremendously useful as a creative exercise, or as a means of surfacing a wider range of beliefs around certain issues.  But if it isn’t properly facilitated, such activity can easily be biased, as described in ‘Quiet: The Power of Introverts in a World That Can’t Stop Talking’ by Susan Cain (I’ve also blogged about my own experiences of this on the Messy Fringe).  If done improperly, type-A personalities can dominate, and often their beliefs reflect those of the audience that invited them (especially if that’s why they were invited in the first place).
  • Doesn’t bore your target audience, but doesn’t pander to them either.  Producing a forecast is challenging for all the reasons I’ve described.  But if you’re trying to do all the above to answer a specific policy demand, or to tell the chief executive what you think they want to hear, then you’re going to doom yourself to inaccuracy.  You need to be aware of the pragmatic reality of who has requested your forecast, and perhaps tailor your output so that senior people can read the key assessments quickly, but you still need to show your working.  To be accurate and honest you do, unfortunately, have to compile ‘War and Peace’; you just need to get your main ideas onto the first page, which, heart-breaking as it sounds, is all that your readers will read, if you’re lucky.
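As promised above, here is a deliberately simple, hypothetical sketch of an ‘aggregate forecast’: pooling probability estimates for the same outcome from several existing forecasts, weighted by how reliable you judge each source to be.  The sources, weights and numbers are invented purely for illustration and are not drawn from any real report.

```python
# A hypothetical sketch of aggregating probability estimates for one outcome
# from several existing forecasts. Sources, weights and numbers are invented.

sources = [
    {"name": "Foresight report A", "probability": 0.6, "weight": 1.0},
    {"name": "Academic paper B",   "probability": 0.4, "weight": 1.5},
    {"name": "Expert workshop C",  "probability": 0.7, "weight": 0.5},
]

def aggregate(estimates):
    """Weighted average of the individual probability estimates."""
    total_weight = sum(e["weight"] for e in estimates)
    return sum(e["probability"] * e["weight"] for e in estimates) / total_weight

print(f"Aggregate probability: {aggregate(sources):.2f}")
```

In practice the weights would come from the kind of source and expert assessment described under mistake 2, and the aggregation rule itself should be recorded so that the reasoning behind the final number stays navigable.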

Currently, at Simplexity we are working to take all these things forward.  Through the Open Futures project we are building an open source, fully searchable database that brings together trend data from both published foresight reports and social media feeds.  This is a lot of data, but using data visualisation tools and analysis techniques we are now better able to navigate and understand such large datasets and to produce meaningful, unbiased groupings.  These groups can then be assessed and probabilities assigned to particular context-relevant outcomes.  Using big data, long data and the ever-expanding range of open data sources means the technology for futures analysis is becoming more and more sophisticated; perhaps, hopefully, the way we present our assessments will as well.
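As a rough illustration only, and not a description of the actual Open Futures pipeline, the sketch below shows one common way of grouping short trend statements drawn from many sources so they can be reviewed together.  It assumes scikit-learn is available and uses its TF-IDF vectoriser with k-means clustering; the example statements are invented.

```python
# A rough sketch (not the actual Open Futures pipeline) of grouping trend
# statements from many sources so related items can be assessed together.
# Requires scikit-learn; the example statements below are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

trends = [
    "Urban populations continue to grow in coastal megacities",
    "Automation displaces routine manufacturing jobs",
    "Coastal cities face rising flood risk",
    "AI systems take over repetitive factory work",
]

# Turn each statement into a TF-IDF vector, then cluster similar statements.
vectors = TfidfVectorizer(stop_words="english").fit_transform(trends)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, text in sorted(zip(labels, trends)):
    print(label, text)
```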
