For our own amusement we’ve built a simulator of English football Premiership results (we’re like that) which replicates the real thing very faithfully. The results are a case-study lesson in understanding events beyond your control.
The chart shows the number of points scored by each team, and their position in the table, after 12 games of the season as the results emerged from one run of the simulation (blue line). So the team leading the table has scored 27 points, while the bottom team has scored only 5. Meanwhile, the real-world situation after 12 games this season is shown by the orange line. As you can readily see, the simulation very faithfully replicates what has actually happened, and it would be very hard to tell them apart. We also looked at average points per game, goal difference, and home and away results, and they too look very much like the real thing.
Looking at this, we suspect that the poor manager of the team at the bottom of the table will be feeling a certain amount of heat. Maybe they will even lose their job in the very near future, if they haven’t already. In contrast, the manager and players of the leading team will be looking forward to a richly deserved bonus cheque and a very healthy transfer fee.
All well and good, you might say. But there is a catch. The results from the simulation are generated entirely at random! Absolutely no skill, form, cunning tactics or mojo involved whatsoever. With all the money that rides on results in the Premiership, not to mention the bragging rights and arguments in the pub after each result, this is truly startling.
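To give a flavour of how little is needed to produce a plausible-looking league table, here is a minimal sketch of a pure-chance season in Python. It is not our actual simulator: the random pairing of fixtures each round and the rough home-win/draw/away-win frequencies are illustrative assumptions.

```python
import random

def simulate_season(n_teams=20, n_rounds=12, seed=None):
    """Simulate n_rounds of fixtures where every result is pure chance.

    Each round pairs the teams at random and draws each result from
    roughly top-flight-like outcome frequencies (illustrative figures,
    not fitted values): home win ~46%, draw ~26%, away win ~28%.
    Points: 3 for a win, 1 for a draw, 0 for a loss.
    """
    rng = random.Random(seed)
    points = {f"Team {i + 1}": 0 for i in range(n_teams)}
    teams = list(points)
    for _ in range(n_rounds):
        rng.shuffle(teams)
        # Pair off consecutive teams as (home, away) fixtures.
        for home, away in zip(teams[0::2], teams[1::2]):
            r = rng.random()
            if r < 0.46:        # home win
                points[home] += 3
            elif r < 0.72:      # draw
                points[home] += 1
                points[away] += 1
            else:               # away win
                points[away] += 3
    # League table: highest points first.
    return sorted(points.items(), key=lambda kv: kv[1], reverse=True)

table = simulate_season(seed=1)
for pos, (team, pts) in enumerate(table, start=1):
    print(f"{pos:2d}. {team:8s} {pts:2d} pts")
```

Run it a few times with different seeds and you will see convincing-looking leaders pulling away and strugglers adrift at the bottom, despite no team having any "quality" at all.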
Now it could well be that consistency is the key: maybe 12 games isn’t long enough for true expertise to shine through, and if we ran the simulation to the end of the season the cream would rise to the top*. But this may be small comfort to the unfortunate manager of the bottom-placed team if they have already been asked to step aside.
The clear moral of the story is that you simply can’t afford to ignore randomness. In our simulation models of business processes, we always take this explicitly into account. Sometimes we find that applying a particular strategy to improve a service works wonderfully, but on a second run it doesn’t look so good, purely as a result of random factors: staff absence, work arriving in clusters at some times and more evenly at others, getting more than your fair share of ‘difficult’ work in some periods, and so on. In the real world you don’t get to run the year several times, so how could you know whether your improvement strategy failed due to bad luck, or succeeded due to good luck?
As football managers are likely to say: unless you want to risk the ire of the team chairman for results that are just pure luck, it’s always worth understanding how much random factors can influence your results!
Of course the other moral from this is – don’t bet on the pools!
*We don’t think it would be all that different. We ran the simulation 1,000 times, and mostly it looks much the same as the real world. Overall, teams in second and third place tend to do better in the real world than in the random model, and those at the bottom of the table appear to do a little worse in the real world. You could say, therefore, that the teams at these extreme positions at either end of the table perform slightly better or worse than pure luck. The difference from chance is that the teams at the top have won about one more game out of twelve than chance alone would suggest, and those at the bottom have lost about one more game than pure chance.