Experiments on Clustering Possession Sequences (Part 1)


The fluidity of football is one of the main obstacles to analysis. Unlike baseball or cricket, which are divided into discrete one-v-one plays, play in football consists of passing and dribbling sequences which may be long or short, cover the whole pitch or some small area of it, and involve varying numbers of players. It would be useful if we could come up with a way of classifying these sequences, and this is what I will discuss in this post. The aim is to identify a manageable number of distinct sequence types.

The unsupervised statistical procedure for assigning data examples to distinct types is known as cluster analysis. I’ll first describe a clustering experiment, employing a type of neural network called a convolutional autoencoder. (Spoiler alert: it didn’t work.)

Clustering Sequences Using a Convolutional Autoencoder

An autoencoder is a type of neural network model. Its purpose is to produce a low-dimensional representation of its inputs, in much the same way as principal components analysis converts a large number of variables into a small number of components that preserve most of the original information. An autoencoder consists of two parts, an encoder and a decoder. The encoder compresses the input into a low-dimensional space, and the decoder then expands it back. The model is trained by making the output of the decoder resemble the original input as much as possible. When this has been achieved, we know that the compressed data produced by the encoder is a fair representation of the input. This compressed version may then be used downstream in other analyses. A basic explanation of how autoencoders work can be found here.

My inputs to the autoencoder were RGB images of 20,000 pass sequences selected randomly from the 2015-2017 seasons of the Big 5 European leagues. The sequences were drawn as 420*272 pixel images, and then down-sized to 104*68 pixels. The start of a sequence was represented by a green blob, and the end of a sequence by a red blob (see Figure 1 below). The geometry of the sequence was traced out by a white line, with thick strokes representing a pass and thinner strokes representing a dribble. The number of features in the input was 21,216 (104*68*3), obviously a very large number for most statistical procedures to handle.
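As a rough illustration, a rendering along these lines could be done with matplotlib; the event format and function below are my own sketch, not the exact pipeline used here:

```python
# Minimal sketch: render one pass sequence as a 420*272 RGB image,
# then down-size it to 104*68. Event format (x, y, is_pass) is assumed.
import matplotlib.pyplot as plt
from PIL import Image

def render_sequence(events, out_path="sequence.png"):
    """events: list of (x, y, is_pass) points in pitch coordinates (105 x 68)."""
    fig, ax = plt.subplots(figsize=(4.2, 2.72), dpi=100)   # 420 x 272 pixels
    fig.subplots_adjust(0, 0, 1, 1)
    ax.set_xlim(0, 105); ax.set_ylim(0, 68); ax.axis("off")
    ax.set_facecolor("black"); fig.patch.set_facecolor("black")
    xs, ys, is_pass = zip(*events)
    for i in range(len(events) - 1):
        lw = 3 if is_pass[i + 1] else 1        # thick stroke = pass, thin = dribble
        ax.plot(xs[i:i + 2], ys[i:i + 2], color="white", linewidth=lw)
    ax.plot(xs[0], ys[0], "o", color="green", markersize=8)   # start blob
    ax.plot(xs[-1], ys[-1], "o", color="red", markersize=8)   # end blob
    fig.savefig(out_path, dpi=100)
    plt.close(fig)
    Image.open(out_path).resize((104, 68)).save(out_path)     # down-size
```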

I built the encoder using convolutional layers, which are typically used in image-processing applications. Convolutional neural networks (CNNs) learn local image features rather than patterns tied to fixed positions; a regular network can be trained to recognise, say, the figure “2” at a particular location within an image, but would not recognise the same figure at a different location. A convolutional network, on the other hand, can be trained to recognise a figure “2” located anywhere in the image. For an introduction to CNNs see here.

I experimented with various configurations of autoencoder. For the technically inclined, the encoder producing the encoded image in Figure 1a was shallow, consisting of a single convolutional layer with four filters and ReLU activation. The compressed layer had 1664 units, a substantial reduction from the 21,216 input features, but as we can see the decoded output is rather rough. The encoder producing the image in Figure 1b was deeper, having two convolutional layers with 32 and four filters respectively, and 3328 units in its compressed layer. Clearly, this model captured more information at the expense of lower compression.
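For concreteness, the shallow configuration might look like the following tf.keras sketch. The filter count and activation follow the description above, but the pooling sizes (and hence the exact size of the compressed layer) are my own illustrative choices:

```python
# Sketch of a shallow convolutional autoencoder: one conv layer with four
# filters and ReLU activation, pooled down to a compressed representation.
# Layer sizes are illustrative; the 1664-unit bottleneck described in the
# text depends on padding/pooling details not reproduced here.
import tensorflow as tf
from tensorflow.keras import layers, models

inp = layers.Input(shape=(104, 68, 3))

# Encoder
x = layers.Conv2D(4, (3, 3), activation="relu", padding="same")(inp)
encoded = layers.MaxPooling2D((4, 4))(x)       # 26 x 17 x 4 = 1768 units here

# Decoder: mirror the encoder back up to the input size
x = layers.UpSampling2D((4, 4))(encoded)
decoded = layers.Conv2D(3, (3, 3), activation="sigmoid", padding="same")(x)

autoencoder = models.Model(inp, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
# autoencoder.fit(images, images, epochs=50, batch_size=128)

encoder = models.Model(inp, encoded)   # yields the compressed representation
```

The deeper variant would simply stack a second convolution/pooling stage (32 filters, then four) before the bottleneck.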

The next step was to cluster the compressed data. I tried both K-means clustering and Gaussian mixture modelling on both types of encoded data. What we hope to see is some evidence for a preferred number of clusters, and we typically look for this by examining how a fit measure such as the Bayesian Information Criterion (BIC) evolves as a function of the number of clusters. The preferred number of clusters is the point at which the fit measure reaches a minimum, ceases to improve as the number of clusters increases, or shows an “elbow” in the curve. However, under various fit measures, there was no clear clustering solution; model fit just kept improving smoothly as the number of clusters increased. Figure 2 shows a typical result.

Figure 2. Example of a model fitting curve
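The scan itself is straightforward; here is a minimal scikit-learn sketch, assuming `codes` holds the encoded sequences as a 2-D array (the diagonal covariances are my own choice, to keep the high-dimensional fit tractable):

```python
# Sketch: record BIC for a range of cluster counts, as in Figure 2.
import numpy as np
from sklearn.mixture import GaussianMixture

ks = range(2, 101)
bics = []
for k in ks:
    gmm = GaussianMixture(n_components=k, covariance_type="diag",
                          random_state=0).fit(codes)
    bics.append(gmm.bic(codes))

# A preferred cluster count would appear as a minimum or an elbow in `bics`;
# here the curve just kept falling as k increased.
best_k = list(ks)[int(np.argmin(bics))]
```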

To continue down this path, I would have needed to choose the number of clusters more or less arbitrarily, introducing a decision-making bias that I wanted to avoid. So I decided to abandon this method and try a more traditional approach.

Clustering Sequences using Sequence Attributes

In this approach, instead of using abstract encoded features, I attempted to cluster the sequences on observable attributes (a sketch of how these might be computed follows the list). The attributes I used were:

  • Start location: Pitch x,y co-ordinates

  • End location: Pitch x,y co-ordinates

  • Box width: Width of bounding box

  • Box length: Maximum x co-ordinate minus minimum x co-ordinate

  • Verticality: End x co-ordinate minus start x co-ordinate

  • Number of passes

  • Total pass length: Sum of individual pass lengths

  • Total dribble length: Sum of all dribble lengths

  • Number of different players involved

  • Average x,y co-ordinate

  • Zigzag: Sum of changes in pass angles
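A hedged sketch of how these attributes might be computed from raw event data (the event format and field names are my own assumptions):

```python
# Sketch: compute the attributes listed above for one sequence.
# Each event is assumed to be a dict with "x", "y", "player" and
# "type" ("pass" or "dribble") keys, ordered in time.
import math

def sequence_features(events):
    xs = [e["x"] for e in events]
    ys = [e["y"] for e in events]
    legs = list(zip(events, events[1:]))   # consecutive pairs of events

    def length(a, b):
        return math.hypot(b["x"] - a["x"], b["y"] - a["y"])

    # Zigzag: sum of changes in pass angle (ignoring wrap-around at +/-pi)
    angles = [math.atan2(b["y"] - a["y"], b["x"] - a["x"])
              for a, b in legs if b["type"] == "pass"]
    zigzag = sum(abs(t2 - t1) for t1, t2 in zip(angles, angles[1:]))

    return {
        "start_x": xs[0], "start_y": ys[0],
        "end_x": xs[-1], "end_y": ys[-1],
        "box_width": max(ys) - min(ys),
        "box_length": max(xs) - min(xs),
        "verticality": xs[-1] - xs[0],
        "n_passes": sum(1 for _, b in legs if b["type"] == "pass"),
        "pass_length": sum(length(a, b) for a, b in legs
                           if b["type"] == "pass"),
        "dribble_length": sum(length(a, b) for a, b in legs
                              if b["type"] == "dribble"),
        "n_players": len({e["player"] for e in events}),
        "mean_x": sum(xs) / len(xs),
        "mean_y": sum(ys) / len(ys),
        "zigzag": zigzag,
    }
```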

The pass sequences were clustered using both K-means and a Gaussian mixture model. No preferred number of clusters could be identified using K-means, but as shown in Figure 3, the BIC fit curve for a Gaussian mixture model suggested 50 clusters would be a reasonable choice.

Figure 3. Fit measures for Gaussian Mixture Model

The 50 clusters are illustrated in the carousel below. Each image shows the three most typical pass sequences in the cluster (i.e. those identified by the clustering algorithm as having the highest membership score for their cluster). The images can be expanded for viewing.
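A minimal sketch of how such exemplars might be extracted with scikit-learn, assuming `features` is the standardised attribute matrix:

```python
# Sketch: pick the three "most typical" sequences per cluster, i.e. those
# with the highest membership probability under the fitted mixture model.
import numpy as np
from sklearn.mixture import GaussianMixture

gmm = GaussianMixture(n_components=50, random_state=0).fit(features)
probs = gmm.predict_proba(features)      # shape (n_sequences, 50)
labels = probs.argmax(axis=1)

exemplars = {}
for c in range(50):
    members = np.where(labels == c)[0]
    ranked = members[np.argsort(probs[members, c])[::-1]]
    exemplars[c] = ranked[:3]            # indices of the three exemplars
```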

Table 1 lists the characteristics of each cluster.

Table 1. Cluster characteristics
Cluster | % of Sequences | % Shots | Passes | Players | Start x | End x | Start y | End y | Box Length | Box Width | Verticality | Pass Length | Dribble Length | Zigzag
------- | -------------- | ------- | ------ | ------- | ------- | ----- | ------- | ----- | ---------- | --------- | ----------- | ----------- | -------------- | ------
0 | 2.4% | 1.1% | 6.2 | 4.9 | 11.8 | 47.7 | 33.8 | 33.4 | 41.5 | 43.0 | 36.0 | 102.8 | 19.5 | 1.6
1 | 2.1% | 10.0% | 4.4 | 4.0 | 69.0 | 74.8 | 8.8 | 56.1 | 22.7 | 53.3 | 5.8 | 70.9 | 13.0 | 1.2
2 | 2.2% | 8.8% | 6.4 | 4.5 | 67.9 | 70.7 | 55.3 | 52.6 | 27.9 | 27.2 | 2.8 | 78.3 | 15.4 | 2.0
3 | 2.7% | 0.8% | 10.6 | 6.6 | 41.6 | 39.0 | 35.9 | 34.9 | 41.7 | 52.6 | -2.6 | 174.0 | 29.3 | 2.0
4 | 0.8% | 18.0% | 16.8 | 8.7 | 26.0 | 81.5 | 34.2 | 34.3 | 68.2 | 61.6 | 55.5 | 283.8 | 59.7 | 1.9
5 | 1.4% | 23.6% | 12.0 | 7.7 | 16.8 | 87.3 | 33.5 | 37.1 | 77.2 | 56.5 | 70.5 | 206.4 | 45.6 | 1.7
6 | 1.1% | 25.0% | 4.0 | 3.0 | 93.8 | 85.6 | 1.6 | 19.5 | 18.5 | 21.3 | -8.2 | 42.7 | 6.0 | 1.6
7 | 1.1% | 7.6% | 12.2 | 7.5 | 32.9 | 72.9 | 36.0 | 12.1 | 53.7 | 61.2 | 40.0 | 202.8 | 43.1 | 1.8
8 | 1.8% | 15.0% | 4.1 | 3.7 | 16.6 | 76.7 | 37.4 | 50.7 | 62.4 | 30.5 | 60.1 | 78.5 | 19.1 | 1.1
9 | 1.9% | 0.3% | 3.5 | 3.2 | 47.4 | 21.0 | 47.6 | 39.4 | 29.4 | 22.6 | -26.5 | 46.9 | 6.7 | 1.6
10 | 1.3% | 20.3% | 6.6 | 5.4 | 22.3 | 83.0 | 34.1 | 51.0 | 66.3 | 45.8 | 60.7 | 119.5 | 25.8 | 1.5
11 | 3.8% | 12.2% | 3.6 | 3.2 | 63.2 | 72.0 | 42.4 | 47.2 | 16.7 | 25.3 | 8.8 | 41.2 | 8.0 | 1.3
12 | 2.4% | 1.1% | 6.3 | 4.9 | 42.2 | 50.1 | 37.3 | 56.8 | 30.0 | 42.4 | 7.9 | 98.9 | 15.2 | 1.8
13 | 1.9% | 9.6% | 4.6 | 3.4 | 68.1 | 79.8 | 11.7 | 12.2 | 20.8 | 17.3 | 11.7 | 45.4 | 9.1 | 1.7
14 | 3.4% | 0.4% | 3.2 | 2.7 | 29.8 | 32.0 | 53.8 | 54.7 | 11.8 | 12.7 | 2.2 | 26.7 | 3.7 | 1.4
15 | 1.8% | 28.0% | 4.0 | 3.5 | 44.9 | 90.3 | 25.6 | 18.7 | 48.9 | 30.2 | 45.5 | 67.7 | 17.6 | 1.1
16 | 1.8% | 8.5% | 5.5 | 4.8 | 42.6 | 76.1 | 9.8 | 57.3 | 42.8 | 57.2 | 33.5 | 92.7 | 23.0 | 1.3
17 | 3.6% | 18.2% | 3.0 | 2.9 | 66.0 | 81.9 | 24.6 | 20.6 | 19.8 | 22.7 | 15.9 | 36.8 | 7.3 | 0.9
18 | 1.8% | 1.0% | 3.2 | 3.0 | 18.5 | 44.8 | 25.6 | 15.9 | 29.0 | 22.1 | 26.2 | 44.5 | 7.7 | 0.9
19 | 2.6% | 3.6% | 4.0 | 3.5 | 24.8 | 63.9 | 22.1 | 13.0 | 45.6 | 23.8 | 39.2 | 68.2 | 12.0 | 1.3
20 | 2.0% | 8.1% | 8.1 | 5.9 | 41.3 | 77.0 | 29.4 | 11.3 | 48.2 | 50.7 | 35.8 | 132.4 | 29.1 | 1.7
21 | 1.7% | 11.1% | 10.8 | 6.9 | 59.1 | 74.5 | 22.5 | 33.6 | 35.8 | 56.0 | 15.3 | 162.6 | 33.8 | 1.9
22 | 1.9% | 13.0% | 3.6 | 2.6 | 89.4 | 87.0 | 67.2 | 58.3 | 13.9 | 12.1 | -2.4 | 29.0 | 4.7 | 1.6
23 | 2.3% | 12.4% | 4.2 | 3.9 | 72.3 | 74.5 | 61.9 | 14.1 | 22.9 | 52.8 | 2.2 | 69.6 | 12.4 | 1.1
24 | 3.0% | 0.3% | 3.2 | 2.7 | 25.9 | 24.4 | 16.2 | 17.0 | 13.9 | 14.5 | -1.5 | 30.0 | 3.7 | 1.4
25 | 2.7% | 0.8% | 3.8 | 3.3 | 20.5 | 44.7 | 46.8 | 56.0 | 29.7 | 21.9 | 24.2 | 52.3 | 8.8 | 1.3
26 | 1.1% | 13.7% | 14.9 | 7.9 | 62.7 | 75.7 | 34.1 | 32.6 | 46.5 | 57.6 | 13.0 | 234.9 | 41.6 | 2.0
27 | 1.2% | 12.5% | 8.6 | 6.5 | 18.2 | 80.6 | 35.5 | 12.3 | 67.7 | 54.2 | 62.4 | 148.6 | 35.0 | 1.6
28 | 1.6% | 3.6% | 4.2 | 3.9 | 39.7 | 67.2 | 55.3 | 8.6 | 34.7 | 52.0 | 27.5 | 72.5 | 16.9 | 1.0
29 | 1.2% | 8.4% | 4.9 | 4.4 | 23.7 | 78.9 | 43.6 | 9.3 | 59.2 | 49.6 | 55.2 | 94.2 | 22.9 | 1.1
30 | 2.3% | 17.5% | 6.1 | 4.6 | 65.6 | 79.9 | 22.2 | 20.1 | 29.4 | 34.7 | 14.3 | 85.2 | 16.0 | 1.8
31 | 1.5% | 29.3% | 5.4 | 4.6 | 14.1 | 91.2 | 31.6 | 28.8 | 79.6 | 40.0 | 77.0 | 105.2 | 29.2 | 1.2
32 | 1.9% | 1.6% | 6.1 | 5.0 | 51.5 | 55.3 | 39.8 | 12.4 | 26.9 | 46.6 | 3.8 | 93.7 | 14.6 | 1.7
33 | 0.2% | 13.7% | 18.1 | 8.6 | 56.3 | 73.5 | 35.0 | 31.0 | 52.6 | 63.7 | 17.2 | 313.8 | 66.4 | 1.9
34 | 2.5% | 21.5% | 4.3 | 3.6 | 51.0 | 85.5 | 44.9 | 50.6 | 39.4 | 27.2 | 34.5 | 62.7 | 15.0 | 1.3
35 | 1.7% | 10.3% | 8.5 | 6.2 | 72.1 | 72.2 | 63.7 | 28.1 | 35.2 | 59.8 | 0.1 | 135.3 | 25.7 | 1.7
36 | 3.1% | 4.0% | 3.2 | 2.7 | 62.9 | 71.2 | 59.6 | 59.0 | 15.7 | 10.5 | 8.2 | 26.7 | 5.2 | 1.3
37 | 2.0% | 0.8% | 7.6 | 5.6 | 37.4 | 43.5 | 35.6 | 24.9 | 29.4 | 53.7 | 6.1 | 120.7 | 25.1 | 1.7
38 | 1.3% | 9.1% | 9.3 | 6.6 | 19.9 | 72.4 | 32.9 | 53.7 | 59.4 | 53.3 | 52.5 | 152.6 | 36.4 | 1.7
39 | 0.9% | 2.1% | 15.2 | 7.9 | 43.7 | 48.6 | 33.8 | 38.3 | 47.2 | 61.6 | 4.9 | 255.5 | 50.6 | 2.0
40 | 1.7% | 9.4% | 7.7 | 5.8 | 73.4 | 66.9 | 3.4 | 40.5 | 36.1 | 57.7 | -6.4 | 124.5 | 21.6 | 1.8
41 | 2.5% | 0.3% | 5.5 | 4.5 | 52.0 | 18.2 | 31.0 | 31.6 | 41.6 | 35.6 | -33.9 | 86.9 | 12.9 | 2.0
42 | 1.7% | 0.6% | 3.8 | 3.6 | 36.3 | 42.5 | 11.6 | 57.7 | 20.4 | 49.5 | 6.3 | 61.8 | 11.7 | 0.9
43 | 1.3% | 17.6% | 10.8 | 7.0 | 44.1 | 81.4 | 40.0 | 51.7 | 52.7 | 55.6 | 37.3 | 176.0 | 39.0 | 1.8
44 | 2.2% | 1.6% | 4.1 | 3.6 | 54.5 | 49.0 | 24.2 | 22.0 | 19.1 | 28.6 | -5.5 | 50.9 | 9.6 | 1.6
45 | 2.3% | 2.8% | 3.2 | 2.3 | 75.4 | 77.1 | 1.4 | 4.8 | 10.2 | 7.4 | 1.7 | 20.1 | 3.1 | 1.6
46 | 2.5% | 10.1% | 3.0 | 3.0 | 32.3 | 71.3 | 37.0 | 51.3 | 42.6 | 28.7 | 39.0 | 56.5 | 12.8 | 0.8
47 | 2.5% | 23.1% | 6.8 | 5.3 | 50.4 | 84.8 | 44.4 | 39.1 | 45.6 | 48.3 | 34.4 | 114.2 | 23.9 | 1.6
48 | 3.1% | 0.4% | 4.3 | 3.6 | 26.5 | 30.3 | 40.9 | 21.2 | 19.0 | 37.5 | 3.8 | 62.9 | 12.2 | 1.4
49 | 2.3% | 1.2% | 3.3 | 2.9 | 47.0 | 54.6 | 13.6 | 13.3 | 16.0 | 13.0 | 7.6 | 30.3 | 4.6 | 1.4

Most of the columns in Table 1 are self-explanatory, but the % Shots column is the percentage of sequences that are followed by a shot within 8 seconds. So, for example, we can see that 2.4% of sequences belong to Cluster 0; sequences in this cluster begin with the goalkeeper, or low down the pitch, and have moderate verticality, and only 1.1% of them are followed by a shot. The average sequence in Cluster 5, on the other hand, also originates low down the pitch, but has high verticality and is followed by a shot 23.6% of the time.
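The shot flag itself is easy to derive; here is a minimal pandas sketch, assuming a DataFrame with one row per sequence, its cluster label, its end time, and the time of the next shot by the same team (NaN if there is none) — the column names are illustrative assumptions:

```python
# Sketch: % Shots per cluster from per-sequence timing data.
import pandas as pd

df["followed_by_shot"] = (df["next_shot_time"] - df["end_time"]) <= 8.0
percent_shots = df.groupby("cluster")["followed_by_shot"].mean() * 100
```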

Many of the clusters illustrate recognisable trajectories – Cluster 1 is a switch of play in the final third, Cluster 5 is an end-to-end attack, Cluster 6 contains corners, and Cluster 8 is a movement down the flank. Clusters like 33 and 39 represent sustained periods of possession covering large areas of the pitch, while others like Cluster 13 contain compact sequences of a few passes quickly terminated by the opposition.

Validating the Clusters

The next step is to validate the clusters, i.e. determine whether they have any utility. We might expect different teams to display different mixes of clusters. To illustrate this, Table 2 below shows the frequency of Cluster 5 sequences for six selected teams. The differences in percentage are highly significant, and it certainly seems that this cluster at least carries some meaning.

Table 2. Population of Cluster 5 sequences for selected teams
Team | No. of Cluster 5 Sequences | Total No. of Sequences | Percentage
---- | -------------------------- | ---------------------- | ----------
Bayern Munich | 170 | 5894 | 2.9%
Barcelona | 159 | 6073 | 2.6%
Manchester City | 148 | 5619 | 2.6%
Burnley | 16 | 2094 | 0.8%
West Brom. | 24 | 3315 | 0.7%
Leicester | 16 | 3596 | 0.4%
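The significance claim can be checked directly from the counts in Table 2 with a chi-squared test; a quick sketch:

```python
# Sketch: chi-squared test on (Cluster 5, other) counts per team from Table 2.
from scipy.stats import chi2_contingency

counts = {                       # (Cluster 5 sequences, total sequences)
    "Bayern Munich":   (170, 5894),
    "Barcelona":       (159, 6073),
    "Manchester City": (148, 5619),
    "Burnley":         (16, 2094),
    "West Brom.":      (24, 3315),
    "Leicester":       (16, 3596),
}
table = [[c5, total - c5] for c5, total in counts.values()]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
```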

To validate the clusters more comprehensively, I conducted a multidimensional scaling (MDS) analysis on the cluster percentages. Essentially this encodes each team's cluster usage in two dimensions. (I used percentages because using the raw numbers would simply group teams together by amount of possession.) I mapped the EPL teams in Figure 4 below. Teams that are close together have similar scores on both dimensions, and hence similar patterns of cluster usage.

Figure 4. MDS of Cluster usage mapped against Performance
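A minimal scikit-learn sketch of this step, assuming `usage` is a (team-seasons x 50) array of cluster percentages:

```python
# Sketch: embed team cluster-usage profiles in two dimensions with MDS.
from sklearn.manifold import MDS

mds = MDS(n_components=2, random_state=0)
coords = mds.fit_transform(usage)   # one (dim 1, dim 2) point per team-season
```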

We can see that some teams preserve their location across seasons – others move around but generally stay in the same region of the map for at least two seasons.

More persuasively perhaps, the cluster mapping is congruent with performance. The coloured background shows how goals/match varies with the dimension scores, and we see that the elite Premier League teams are located in the high-performing region of the map. This indicates a relationship between the dimensions (and therefore the pattern of cluster usage) and performance. In fact, goals scored per season increases strongly with the percentage of certain clusters (the cluster percentages for Clusters 4, 5, 27 and 43 all correlate at 0.63 to 0.69 with goals scored), and a regression analysis shows that the two dimensions explain 60.0% of the variance.
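Under the same assumptions as the sketch above (with `goals` holding goals scored per team-season), the correlation and regression checks are short:

```python
# Sketch: correlations of selected cluster percentages with goals, and the
# R^2 of a regression of goals on the two MDS dimensions.
import numpy as np
from sklearn.linear_model import LinearRegression

for c in (4, 5, 27, 43):
    print(c, round(np.corrcoef(usage[:, c], goals)[0, 1], 2))

r_squared = LinearRegression().fit(coords, goals).score(coords, goals)
```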

The Bottom Line

Clustering sequences can help organize the passing patterns of teams into useful categories. However, it will be interesting to try other analytical approaches, which I will look at in a future post.
