Some people don’t think football managers make much difference at all. The guys at Soccerbrain, for instance, are quite skeptical about the influence a manager has:
“Coaches don’t matter so long as they are competent, they have the dressing room with them, and they are pragmatic about the tactics they employ given the standard of opposition they are facing.”
But what do the statistics say? We have all sorts of metrics for measuring player performance, but are sadly lacking in objective measures for managers. And despite the skepticism, I think that’s an important gap in our knowledge, and one which I am going to look at in this post.
To determine the influence of a manager, we need to control for the ability of the players available to him, so we can subtract out that component of performance. The simplest indicator of a team’s ability (though an indirect one) is its market value, i.e. the aggregated transfer value of its players. This, as is now well known, is a very strong predictor of team performance. The chart below plots seasonal goal differences against team market value (as a percent of the total league market value) for each of the seasons 2010 to 2015.
The substantial correlation between team value and goal difference (about 0.79) is perhaps not surprising. After all, the transfer market is reasonably efficient and the best players cost more. But how much difference, if any, does the manager make? The question here is whether a manager can improve performance beyond that implied by his team’s market value, or conversely whether he can have a negative impact and induce under-performance. To keep things reasonably topical I used matches from the last six seasons of the Premier League (2010-2015). The appropriate managers were assigned to each match, but to avoid having more managers than necessary, managers with eight or fewer games in charge were designated caretakers and assigned a single shared identifier. This gave me a database of 2,780 matches played by 32 different teams under 66 different managers (including the caretaker).
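The caretaker grouping can be sketched in a few lines. This is an illustrative sketch, not my actual data-preparation code; the manager names and the helper name are invented:

```python
from collections import Counter

def assign_caretakers(match_managers, threshold=8):
    """Replace managers with `threshold` or fewer matches in charge by one
    shared "Caretaker" label, so rarely-seen managers don't each get their
    own regression coefficient."""
    counts = Counter(match_managers)
    return [m if counts[m] > threshold else "Caretaker"
            for m in match_managers]

# Illustrative data: a stand-in with only 3 matches gets relabelled.
managers = ["Wenger"] * 10 + ["Stand-in"] * 3
print(assign_caretakers(managers))
```

The threshold of eight games is the one used in the analysis above; anything at or below it is pooled into the single caretaker identifier.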
To determine manager effects I use a regression methodology, with goal difference (goals scored by team_i − goals scored by team_j) in a particular match as the outcome variable. Match goal differences are approximately normally distributed, so we can use linear regression. In addition, the goal difference aggregated over the season is closely correlated with league position, so it is a credible indicator of success.
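The near-normality of goal differences is easy to check with a quick simulation: if each side’s goals are roughly Poisson (the rate of 1.35 goals per team per match below is an illustrative figure, not taken from the data), their difference follows a Skellam distribution, which at these rates is already close to a symmetric bell curve:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: each team's goals ~ Poisson(1.35).
# The difference of two independent Poissons is Skellam-distributed.
home = rng.poisson(1.35, size=100_000)
away = rng.poisson(1.35, size=100_000)
diff = home - away

# Mean near 0, standard deviation near sqrt(1.35 + 1.35) ≈ 1.64.
print(round(diff.mean(), 2), round(diff.std(), 2))
```

This is only a plausibility check for using linear regression on match goal differences; the real distribution has extra structure (home advantage, team strength) that the model’s predictors absorb.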
I used the following predictors of goal difference:
- Match Venue. This is included because teams playing at home have an advantage of about 0.35 goals per match. Match venue was encoded as 0 if team_i was playing at home, or 1 if team_i was playing away.
- Market Value. This variable represents the difference in player market values between the two teams in the match. Market values were extracted from the transfermarkt website, and were calculated for each season. However, instead of using the raw market values I use a percentage value denoted V: V = 100 × Raw Market Value / Total League Value. Dividing by the total league value takes care of the inflation of market values over time. The market value variable was encoded as V_i − V_j, where once again i and j are the two teams in the match.
- Control Variables. I used two controls intended to capture other characteristics of the clubs that might influence performance, but which are not attributable to the manager.
- Turnover. A high turnover indicates a big club with superior assets, like a large staff and modern training facilities, allowing it to maximize the performance of its players. So a high-turnover club might get more from a team than a low-turnover club with the same player market value. To take account of increasing turnover over the period, turnover was expressed as a percentage of the league total each season. The turnover control variable was the difference in percentage turnover (T) between the two teams, T_i − T_j.
- Past performance. This was the average points scored in the five years prior to 2010. (Teams spending a season outside the Premier League were assigned 29 points for that season, which is the average points total of a relegated club.) The idea here is that a consistently high level of performance over five years indicates a well-run club with good systems in place. The performance control variable was the difference in average points (P) between the teams, P_i − P_j.
- The next two predictors represented the influence of the respective managers, and we want these encoded so that the manager effect is something like (influence of manager_i) − (influence of manager_j). To achieve this I used one indicator variable for each manager, encoded +1 for the manager of team_i, −1 for the manager of team_j, and 0 otherwise.
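Putting the predictors together, one row of the design matrix might be built as follows. This is a sketch under the encoding described above; the function name, manager names, and all the numbers are illustrative:

```python
import numpy as np

def match_row(venue_away, v_i, v_j, t_i, t_j, p_i, p_j,
              mgr_i, mgr_j, managers):
    """Encode one match as a predictor row:
    [venue, V_i - V_j, T_i - T_j, P_i - P_j, manager indicators],
    with +1 for team i's manager and -1 for team j's."""
    mgr_cols = np.zeros(len(managers))
    mgr_cols[managers.index(mgr_i)] = 1.0
    mgr_cols[managers.index(mgr_j)] = -1.0
    return np.concatenate(([venue_away, v_i - v_j, t_i - t_j, p_i - p_j],
                           mgr_cols))

managers = ["Ranieri", "Wenger", "Caretaker"]
row = match_row(0, 2.0, 5.0, 1.5, 4.0, 50.0, 68.0,
                "Ranieri", "Wenger", managers)
print(row)  # venue, value diff, turnover diff, points diff, ±1 manager flags
```

In the real model there are 66 manager columns rather than three, but each row still contains exactly one +1 and one −1 among them.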
The regression equation then looked something like this:
G_ij = β_0 + β_1·Venue_i + β_2·V_ij + β_3·T_ij + β_4·P_ij + β_mgr(i)·M_i + β_mgr(j)·M_j
Here, G_ij is the match goal difference and β_0 is the intercept; V_ij, T_ij and P_ij are respectively the market value, turnover and past-performance differences between the teams; M_i and M_j are the indicator variables for the two managers; and the β_mgr coefficients represent the manager influences.
As it stands this equation is not identifiable, so a sum-to-zero constraint was imposed across the manager coefficients: ∑ β_mgr = 0.
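One standard way to impose the sum-to-zero constraint is to reparametrise: express the last manager’s coefficient as minus the sum of the others, which amounts to subtracting the last manager’s column from every other manager column and dropping it. A sketch (the function names and the three-manager example are illustrative):

```python
import numpy as np

def sum_to_zero_design(M):
    """Fold the sum-to-zero constraint into manager columns M
    (n_matches x n_managers): subtract the last column from each of
    the others and drop it, leaving n_managers - 1 free columns."""
    return M[:, :-1] - M[:, -1:]

def recover_full(beta_reduced):
    """Map the n-1 free coefficients back to all n, summing to zero."""
    return np.append(beta_reduced, -beta_reduced.sum())

# Sanity check on a 3-manager encoding: both parametrisations give
# the same fitted values when the coefficients sum to zero.
M = np.array([[1., -1., 0.],
              [0., 1., -1.]])
beta = np.array([0.4, 0.1, -0.5])   # sums to zero
print(np.allclose(sum_to_zero_design(M) @ beta[:-1], M @ beta))
```

The reduced design has one fewer column, which restores identifiability without changing what the model can fit.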
The regression equation was estimated using Bayesian methods. Non-informative normal priors with mean 0 and precision 0.0001 (i.e. a standard deviation of 100) were placed on all the βs, and a non-informative uniform prior was specified for the variance of G_ij.
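With priors this flat, the posterior means essentially coincide with ordinary least squares, so the machinery can be sketched without MCMC. The example below is entirely synthetic (invented manager effects, venue and value coefficients chosen arbitrarily, no noise) and just shows the reduced design recovering known coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic example: 200 matches, 4 managers, known "true" effects.
n, n_mgr = 200, 4
true_mgr = np.array([0.5, 0.2, -0.3, -0.4])   # sums to zero
venue = rng.integers(0, 2, n).astype(float)
v_diff = rng.normal(0, 3, n)
pairs = np.array([rng.choice(n_mgr, 2, replace=False) for _ in range(n)])

M = np.zeros((n, n_mgr))
M[np.arange(n), pairs[:, 0]] = 1.0    # team i's manager
M[np.arange(n), pairs[:, 1]] = -1.0   # team j's manager

# Design matrix with the sum-to-zero constraint folded in.
X = np.column_stack([np.ones(n), venue, v_diff, M[:, :-1] - M[:, -1:]])
y = 0.35 - 0.7 * venue + 0.08 * v_diff + M @ true_mgr   # noise-free

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
mgr_hat = np.append(beta[3:], -beta[3:].sum())   # restore the dropped one
print(np.round(mgr_hat, 3))
```

The real analysis has noise, controls, and a full posterior, of course; the point here is only the mechanics of estimating manager effects under the constraint.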
Because the market value and control variables are highly correlated, their individual coefficients aren’t really meaningful; the key results to explore are the manager coefficients, which represent manager influence. These are expressed in terms of the goal difference per match, beyond that explained by the non-manager variables, with the average Premier League manager having a coefficient of 0. The 10 best and 10 worst managers are shown in the chart below.
Are The Results Plausible?
I was somewhat surprised by some of the findings. For example, Ranieri’s 2015 Leicester team outperformed their expectations by 1.4 goals per match, which over a 38-game season amounts to about 53 goals - a figure that sounds unlikely, and gave me pause for thought. But a little more analysis showed it is not quite so outrageous as it seems. Let’s look at some of the predictions collated by Simon Gleave at the start of the 2015 season. Most journalists and experts predicted Leicester would finish 18th or 19th, which corresponds to a goal difference of around -28; the statistical modellers were more optimistic, predicting a finish in 13th place, corresponding to a goal difference of about -10. Leicester’s actual goal difference was +32, which is 60 more than the experts expected and 42 more than the modellers. So I’m not alone in finding a huge over-performance in Leicester’s numbers.
Now digging into my own model, I find that Leicester’s goal difference predicted from market value and the control variables alone is -15; but this underestimates the club in two ways. First, the model used a market value of £78m, which was Leicester’s value at the beginning of the 2015 season. By the time the season had ended, the team was valued at £183m - more than double. So some of the over-performance I found was because Leicester’s players were undervalued in the model. Second, the past-performance control variable was based on performance between 2005 and 2009, clearly inappropriate for an insurgent team like Leicester. Fixing these two factors added a further 10 goals to Leicester’s predicted total, so the predicted goal difference becomes -5, corresponding to a league position of 12th. So instead of being responsible for 53 added goals, the “Ranieri” effect is reduced to 43.
Now do I believe all 43 additional goals can be attributed to Ranieri? No, I don’t. Some of them are doubtless due to other factors omitted from the model; some could in principle be included if we had the data, and some perhaps could not. For instance, Leicester might have benefited from being underestimated by their opponents, an effect that is difficult to quantify. Moreover, Ranieri’s data includes only one season with one club, so it is difficult to distinguish Ranieri’s influence from influences belonging to the club itself that were not captured by my other variables. But do I believe Ranieri can take credit for a big chunk of Leicester’s enhanced performance? I do. A glance at his record shows a CV studded with success …
| Club | Achievements |
| --- | --- |
| Cagliari | From C1 to Serie A in successive seasons |
| Napoli | Qualified for UEFA Cup |
| Fiorentina | Won promotion at first attempt; Coppa Italia and the Supercoppa Italiana |
| Valencia | Copa del Rey, UEFA Intertoto Cup, Champions League qualification |
| Chelsea | Runners-up 2004; Champions League semi-final |
| Monaco | Promotion from Ligue 2 in his first season; Ligue 1 runner-up in his second season |
We will know more about Ranieri’s true Premier League ranking at the end of this season, and no, he won’t top the list any more.
The complete results are tabulated below.
| Rank | Manager | Matches | Influence (goals per match) |
| --- | --- | --- | --- |
| 14 | Quique Sánchez Flores | 38 | 0.33 |
| 27 | Louis van Gaal | 76 | 0.11 |
| 39 | Roberto Di Matteo | 48 | -0.07 |
| 57 | Paolo Di Canio | 12 | -0.49 |
| 63 | Ole Gunnar Solskjær | 18 | -1.02 * |
All models have their flaws, and this one is no different. But while I would never claim to have succeeded in sorting Premier League managers in the right order, the highly ranked managers do include some acknowledged greats, as well as some surprises, and the lowest ranked managers include a high proportion of generally agreed duffers. So despite its limitations, I do think this analysis goes some way towards distinguishing the good from the bad, and hopefully others will be able to improve on this in the future.
One more thing. Recent England managers are distinguished by their mediocrity. None of them make my top 20. Sam Allardyce is the best of them, ranked 26/66; Hodgson is a distinctly average 32/66; and Steve McClaren props up most of the Premier League with a rank of 60/66.