[03] The date of Easter

“Hey, Easter is late next year.” You may have heard this said about Easter 2017 too. It is true that next year, Easter Sunday will fall on April 16, i.e. 20 days later than in 2016 (March 27). I wondered what the distribution of Easter dates looks like, and when one can reasonably say that “Easter falls early/late this year”.

Wikipedia tells us that “Easter is the Sunday following the 14th day of the Moon that reaches that age on March 21 or immediately afterwards”… which does not help us much! Fortunately, the famous mathematician Gauss devised an algorithm that computes this famous date with a rather simple sequence of operations. We then get the frequency of each date, summarized in this little graph:
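
The post does not show the computation, so here is a minimal R sketch using the Meeus/Jones/Butcher variant of this family of algorithms (not necessarily the exact formulation used to produce the graph below; the easter_date helper is a name I made up):

# Meeus/Jones/Butcher algorithm for the Gregorian Easter date
# (a variant of Gauss's method, shown here only as an illustration).
# Letters follow the published algorithm; "cc" replaces "c" to avoid masking base::c.
easter_date <- function(year) {
  a  <- year %% 19
  b  <- year %/% 100
  cc <- year %% 100
  d  <- b %/% 4
  e  <- b %% 4
  f  <- (b + 8) %/% 25
  g  <- (b - f + 1) %/% 3
  h  <- (19 * a + b - d - g + 15) %% 30
  i  <- cc %/% 4
  k  <- cc %% 4
  l  <- (32 + 2 * e + 2 * i - h - k) %% 7
  m  <- (a + 11 * h + 22 * l) %/% 451
  month <- (h + l - 7 * m + 114) %/% 31        # 3 = March, 4 = April
  day   <- ((h + l - 7 * m + 114) %% 31) + 1
  c(month = month, day = day)
}

easter_date(2016)  # March 27
easter_date(2017)  # April 16

# Frequency of each date over a range of years:
# dates <- t(sapply(1600:10000, easter_date))
# sort(table(paste(dates[, "month"], dates[, "day"])), decreasing = TRUE)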

Frequency of Easter dates from year 1600 to year 100000

The results, in no particular order:

  • Easter can only fall between March 22 and April 25
  • The “average” Easter date is April 8
  • 10% of Easter Sundays fall before March 27 (so we could indeed say that Easter fell early in 2016!) and 10% after April 20 (the next such occurrence will be in 2019, on April 21)
  • April 19 is slightly more frequent than the other dates (the next Easter on that date will be in 2071)
  • Easter is slightly more likely to fall on an odd day than on an even one (52% vs. 48%)

The distribution is summarized in a more visual way in this little calendar:

Distribution of Easter dates over the calendar. The redder the shade, the more frequent the date

See you tomorrow for another little advent calendar article!

Riddler and Voter Power Index

Oliver Roeder has a nice puzzle: the Riddler. Just like last week’s, this week’s puzzle has an interesting application to the US election, and I enjoyed it so much that I figured I might just write a blog post 🙂 In this article, we’ll solve this week’s Riddler in two different ways (just because :p) and discuss an indicator used in FiveThirtyEight’s prediction model for the election: the Voter Power Index.

Exact solution and Stirling approximation

I won’t restate the problem and notations, but you can find them here. We’ll also assume N is odd (as Ollie later clarified on Twitter). This assumption won’t matter much because we’ll only look at applications for large values of N. Let’s write:

\(\mathbb{P} = \Pr(you~decide~the~election)\)

 

Your vote is obviously going to be decisive if there is a tie among the N-1 other votes (conveniently, N-1 is even). The votes are all independent with the same probability p = 1/2, so they are Bernoulli trials. Consequently, the probability we’re looking for is the probability that exactly half of these Bernoulli trials succeed, which is by definition given by the binomial distribution. Thus:

\(\mathbb{P} = {{N-1}\choose{\frac{N-1}{2}}} p^{\frac{N-1}{2}} {(1-p)}^{\frac{N-1}{2}} \)

 

As p=0.5, the exact value for the probability of your vote being decisive is thus:

\(\fbox{$\mathbb{P} = \frac{{{N-1}\choose{\frac{N-1}{2}}}}{{2}^{N-1}}$}\)

 

So, here is the exact solution, but it’s not super useful as is. Much more interesting is how this varies with N (with N sufficiently large). We can use Stirling’s approximation:

\(\begin{aligned}\log \mathbb{P} &= \log {{N-1}\choose{\frac{N-1}{2}}} - (N-1) \log 2 \\ &\sim N \log N - \frac{N}{2} \log \frac{N}{2} - \frac{N}{2} \log \frac{N}{2} + \frac{1}{2} \left( \log N - \log \frac{N}{2} - \log \frac{N}{2} - \log 2\pi \right) - N \log 2 \\ &\sim -\frac{1}{2} \log N + \log 2 - \frac{1}{2} \log 2\pi \end{aligned}\)

Thus for sufficiently large N, the probability your vote is the decisive vote varies like the inverse of the square root of N:

\(\fbox{$\mathbb{P}\sim \sqrt{\frac{2}{N\pi}} \approx \frac{0.8}{\sqrt{N}}$}\)

A very simple solution for large N

Actually, we could have obtained this result for large N much more simply. We know that asymptotically the binomial distribution converges to a normal distribution. The event that your vote is the decisive one is actually the most probable single outcome, as the probability that each other voter picks either candidate is 1/2. So the solution to the Riddler can easily be computed using the density of the normal distribution at its mode:

\(\mathbb{P} = \phi(0) = \frac{1}{\sqrt{2\pi \sigma^2}}\)

with:

\(\sigma^2 = Np(1-p) = \frac{N}{4}\)

(the variance of the binomial distribution), we get the same result as in the first paragraph:

\(\fbox{$\mathbb{P}\sim \sqrt{\frac{2}{N\pi}} \approx \frac{0.8}{\sqrt{N}}$}\)
Mode of normal distribution for various standard deviations. © W. R. Leo
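
As a quick sanity check (not from the original post), the exact binomial value and the asymptotic formula can be compared numerically:

# Exact probability that your vote breaks a tie among the N-1 others (N odd, p = 1/2),
# versus the asymptotic approximation sqrt(2 / (N * pi)).
N <- 1001
exact  <- dbinom((N - 1) / 2, size = N - 1, prob = 0.5)  # C(N-1, (N-1)/2) / 2^(N-1)
approx <- sqrt(2 / (N * pi))
c(exact = exact, approx = approx)
# both are ~ 0.0252, i.e. roughly 0.8 / sqrt(N)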

Voter Power Index

Caption from FiveThirtyEight’s model

In the US presidential election, voters don’t directly elect their preferred candidate, but “electors” who will eventually get to vote for the president. For example, California gets 55 electors while Wyoming only gets 3. But once divided by the number of voters in each of these states, it appears that there are approximately 510,000 voters per elector in California, while only 150,000 voters get to decide an electoral vote in Wyoming. If we assume that the probabilities of voting for each candidate are equal in these states, we can use our formula to get the relative likelihood that a single vote changes the outcome of the election in these two states:

\(\sqrt{\frac{510000}{150000}} \approx 1.8\)

So in a way, a vote cast in Wyoming is nearly twice as likely to be decisive as a vote cast in California!
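
For reference, the same ratio in R (using the rounded voters-per-elector figures quoted above):

# Relative likelihood (~ 1 / sqrt(N)) of casting the decisive vote:
# ~510,000 voters per elector in California vs ~150,000 in Wyoming.
sqrt(510000 / 150000)
# [1] 1.843909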

Of course, the probabilities are far from equal for this year’s two candidates in California and Wyoming. And as Michael Vartan noted, the value of this probability matters very much!

With all parameters taken into account (including the different configurations of the electoral college in the other states), this is what Nate Silver calls the Voter Power Index. For this year, the probability that a single vote will change the outcome of the whole election is highest in New Hampshire and lowest in DC.

Featured image: Number of electoral votes per voter for each state. Made using the awesome tilegram app

[Sampling] Icarus and calibration on minimal bounds at the 9th Francophone Survey Sampling Colloquium

From October 11 to 14, we were in Gatineau (Québec) to take part in the 9th Francophone Survey Sampling Colloquium of the SFdS. Congratulations to the whole organizing team for the high-quality scientific content and the very enjoyable social program!

We gave the following presentations:

Data analysis of the French football league players with R and FactoMineR

This year we’ve had a great summer for sporting events! Now autumn is back, and with it the Ligue 1 championship. Last year, we created this data analysis tutorial using R and the excellent package FactoMineR for a course at ENSAE (in French). The dataset contains the physical and technical abilities of French Ligue 1 and Ligue 2 players. The goal of the tutorial is to determine with our data analysis which position is best for Mathieu Valbuena 🙂

The dataset

A small clarification that could prove useful: no advanced knowledge of football is required to follow this tutorial. Only a few notions about the positions of the players on the field are needed, and they are summed up in the following diagram:

Positions of the football players on the field

The data come from the video game Fifa 15 (which is already 2 years old, so there may be some differences with the current Ligue 1 and Ligue 2 squads!). The game rates each player’s abilities in various aspects of the game. Originally, the grades are quantitative variables (between 0 and 100), but we transformed them into categorical variables (we will discuss why we chose to do so later on). All abilities are thus coded on 4 levels: 1. Low / 2. Average / 3. High / 4. Very high.
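
The recoding step itself is not shown in this post; here is a minimal sketch of how such a binning could be done, assuming the raw 0-100 grades were available in the data frame (quartile-based thresholds and the recode_ability name are my own choices, not necessarily those used for the dataset):

# Hypothetical recoding of a 0-100 ability grade into 4 ordered levels.
# With discrete grades, ties may require unique() breaks or fixed thresholds.
recode_ability <- function(x) {
  x <- as.numeric(x)
  cut(x,
      breaks = quantile(x, probs = seq(0, 1, 0.25)),
      labels = c("1", "2", "3", "4"),
      include.lowest = TRUE)
}

# Example, assuming a quantitative 'finishing' column:
# frenchLeague$finishing <- recode_ability(frenchLeague$finishing)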

Loading and prepping the data

Let’s start by loading the dataset into a data.frame. The important thing to note is that FactoMineR requires factors. So for once, we’re going to let the (in)famous stringsAsFactors parameter be TRUE!

> frenchLeague <- read.csv2("french_league_2015.csv", stringsAsFactors=TRUE)
> frenchLeague <- as.data.frame(apply(frenchLeague, 2, factor))

The second line also transforms the integer columns into factors. FactoMineR uses the row.names of the data frames on the graphs, so we’re going to set the players’ names as row names:

row.names(frenchLeague) <- frenchLeague$name
frenchLeague$name <- NULL

Here’s what our object looks like (we only display the first few lines here):

> head(frenchLeague)
                     foot position league age height overall
Florian Thauvin      left       RM Ligue1   1      3       4
Layvin Kurzawa       left       LB Ligue1   1      3       4
Anthony Martial     right       ST Ligue1   1      3       4
Clinton N'Jie       right       ST Ligue1   1      2       3
Marco Verratti      right       MC Ligue1   1      1       4
Alexandre Lacazette right       ST Ligue1   2      2       4

Data analysis

Our dataset contains categorical variables. The appropriate data analysis method is Multiple Correspondence Analysis (MCA). This method is implemented in FactoMineR in the function MCA. We choose to treat the variables “position”, “league” and “age” as supplementary:

> library(FactoMineR)
> mca <- MCA(frenchLeague, quali.sup=c(2,3,4))

This produces three graphs: the projection on the factorial axes of categories and players, and the graph of the variables. Let’s just have a look at the second one of these graphs:

Projection of the players on the first two factorial axes (click to enlarge)

Before trying to go any further into the analysis, something should alert us: there clearly are two clusters of players here! Yet data analysis techniques like MCA assume that the point cloud being analyzed is homogeneous. We’ll have to restrict the analysis to one of the two clusters in order to continue.

On the previous graph, supplementary variables are shown in green. The only supplementary variable that appears to correspond to the cluster on the right is the goalkeeper position (“GK”). If we take a closer look at the players in this second cluster, we can easily confirm that they are indeed all goalkeepers. This makes a lot of sense: in football, goalkeeper is a very different position, and we should expect these players to be really different from the others. From now on, we will only focus on the positions other than goalkeeper. We also remove from the analysis the abilities that are specific to goalkeepers, which are not important for other players and can only add noise to our analysis:

> frenchLeague_no_gk <- frenchLeague[frenchLeague$position!="GK",-c(31:35)]
> mca_no_gk <- MCA(frenchLeague_no_gk, quali.sup=c(2,3,4))

And now our graph features only one cluster.

Interpretation

Obviously, we have to start by reducing the analysis to a certain number of factorial axes. My favorite method to choose the number of axes is the elbow method. We plot the graph of the eigenvalues:

> barplot(mca_no_gk$eig$eigenvalue)

 

Graph of the eigenvalues

Around the third or fourth eigenvalue, we observe a drop in the values (which represent the percentage of variance explained by the MCA). This means that the marginal gain of retaining one more axis for our analysis is lower after the first 3 or 4 axes. We thus choose to reduce our analysis to the first three factorial axes (we could also have justified choosing 4 axes). Now let’s move on to the interpretation, starting with the first two axes:

> plot.MCA(mca_no_gk, invisible = c("ind","quali.sup"))
Projection of the abilities on the first two factorial axes

We could start the analysis by reading off the graph the names of the variables and modalities that seem most representative of the first two axes. But first, we have to keep in mind that some of the modalities with high coordinates may have a low contribution, which makes them less relevant for the interpretation. And second, there are a lot of variables on this graph, and reading directly from it is not that easy. For these reasons, we chose to use one of FactoMineR’s dedicated functions, dimdesc (we only show part of the output here):

> dimdesc(mca_no_gk)
$`Dim 1`$category
                      Estimate       p.value
finishing_1        0.700971584 1.479410e-130
volleys_1          0.732349045 8.416993e-125
long_shots_1       0.776647500 4.137268e-111
sliding_tackle_3   0.591937236 1.575750e-106
curve_1            0.740271243  1.731238e-87
[...]
finishing_4       -0.578170467  7.661923e-82
shot_power_4      -0.719591411  2.936483e-86
ball_control_4    -0.874377431 5.088935e-104
dribbling_4       -0.820552850 1.795628e-117

The most representative abilities of the first axis are, on the right side of the axis, a weak level in attacking abilities (finishing, volleys, long shots, etc.) and, on the left side, a very strong level in those same abilities. Our interpretation is thus that axis 1 separates players according to their offensive abilities (better attacking abilities on the left side, weaker on the right side). We proceed with the same analysis for axis 2 and conclude that it discriminates players according to their defensive abilities: better defenders will be found at the top of the graph whereas weak defenders will be found in the bottom part of the graph.

Supplementary variables can also help confirm our interpretation, particularly the position variable:

> plot.MCA(mca_no_gk, invisible = c("ind","var"))
Projection of the supplementary variables on the first two factorial axes

And indeed we find on the left part of the graph the attacking positions (LW, ST, RW) and on the top part of the graph the defensive positions (CB, LB, RB).

If our interpretation is correct, the projection onto the second bisector of the graph will be a good proxy for the overall level of a player. The best players will be found in the top-left area while the weaker ones will be found in the bottom-right of the graph. There are many ways to check this, for example by looking at the projection of the modalities of the variable “overall”. As expected, “overall_4” is found in the top-left corner and “overall_1” in the bottom-right corner. Also, on the graph of the supplementary variables, we observe that “Ligue 1” (first division of the French league) lies in the top-left area while “Ligue 2” (second division) lies in the bottom-right area.

With only these two axes interpreted there are plenty of fun things to note:

  • Left wingers seem to have a better overall level than right wingers (if someone has an explanation for this I’d be glad to hear it!)
  • Age is irrelevant to explain the level of a player, except for the younger ones who are in general weaker.
  • Older players tend to have more defensive roles

Let’s not forget to deal with axis 3:

> plot.MCA(mca_no_gk, invisible = c("ind","var"), axes=c(2,3))
Projection of the variables on the 2nd and 3rd factorial axes

The modalities most representative of the third axis are technical weaknesses: the players with the lowest technical abilities (dribbling, ball control, etc.) are at the extremes of the axis, while the players with the highest grades in these abilities tend to be found at the center of the axis:

Projection of the supplementary variables on the 2nd and 3rd factorial axes

We note with the help of the supplementary variables, that midfielders have the highest technical abilities on average, while strikers (ST) and defenders (CB, LB, RB) seem in general not to be known for their ball control skills.

Now we see why we chose to make the variables categorical instead of quantitative. If we had kept the original (quantitative) variables and performed a PCA on the data, the projections would have preserved the ordering of each variable, unlike what happens here for axis 3. And after all, isn’t it better like this? Ordering players according to their technical skills isn’t necessarily what you look for when analyzing player profiles. Football is a very rich sport, and some positions don’t require Messi’s dribbling skills to be an amazing player!

Mathieu Valbuena

Now we add the data for a newcomer in the French league, Mathieu Valbuena (actually Mathieu Valbuena arrived in the French league in August 2015, but I warned you that the data was a bit old ;)). We’re going to compare Mathieu’s profile (as a supplementary individual) to the other players, using our data analysis.

> columns_valbuena <- c("right","RW","Ligue1",3,1
 ,4,4,3,4,3,4,4,4,4,4,3,4,4,3,3,1,3,2,1,3,4,3,1,1,1)
> frenchLeague_no_gk["Mathieu Valbuena",] <- columns_valbuena

> mca_valbuena <- MCA(frenchLeague_no_gk, quali.sup=c(2,3,4), ind.sup=912)
> plot.MCA(mca_valbuena, invisible = c("var","ind"), col.quali.sup = "red", col.ind.sup="darkblue")
> plot.MCA(mca_valbuena, invisible = c("var","ind"), col.quali.sup = "red", col.ind.sup="darkblue", axes=c(2,3))

The last two lines produce the graphs with Mathieu Valbuena on axes 1 and 2, then 2 and 3:

Axes 1 and 2 with Mathieu Valbuena as a supplementary individual (click to enlarge)
Axes 2 and 3 with Mathieu Valbuena as a supplementary individual (click to enlarge)

So, Mathieu Valbuena seems to have good offensive skills (left part of the graph), but he also has a good overall level (his projection onto the second bisector is rather high). He also lies at the center of axis 3, which indicates that he has good technical skills. We should thus not be surprised to see that the positions that suit him best (statistically speaking, of course!) are midfield positions (CAM, LM, RM). With a few more lines of code, we can also find the French league players that have the most similar profiles:

> mca_valbuena_distance <- MCA(frenchLeague_no_gk[,-c(3,4)], quali.sup=c(2), ind.sup=912, ncp = 79)
> distancesValbuena <- as.data.frame(mca_valbuena_distance$ind$coord)
> distancesValbuena[912, ] <- mca_valbuena_distance$ind.sup$coord

> euclidianDistance <- function(x, y) {
    return(dist(rbind(x, y)))
  }

> distancesValbuena$distance_valbuena <- apply(distancesValbuena, 1, euclidianDistance, y=mca_valbuena_distance$ind.sup$coord)
> distancesValbuena <- distancesValbuena[order(distancesValbuena$distance_valbuena),]

> names_close_valbuena <- c("Mathieu Valbuena", row.names(distancesValbuena[2:6,]))

And we get: Ladislas Douniama, Frédéric Sammaritano, Florian Thauvin, N’Golo Kanté and Wissam Ben Yedder.

There would be so many other things to say about this data set but I think it’s time to wrap this (already very long) article up 😉 Keep in mind that this analysis should not be taken too seriously! It just aimed at giving a fun tutorial for students to discover R, FactoMineR and data analysis.

 

[Sports] Fifa and data analysis

After a summer full of sport, autumn and Ligue 1 are gradually taking over again. It is a good occasion to go through a data analysis exercise designed for a course at ENSAE. The goal is to analyze categorical data (physical characteristics, tactical traits and abilities relating to certain technical aspects of the game) describing the players of the French football league, and ultimately to determine “statistically” at which position Mathieu Valbuena should play 🙂 We use the R language and the excellent data analysis package FactoMineR.

The data

As stated in the exercise handout, it is not necessary to know football well to follow this article. Only a rough idea of where players stand on the field depending on their position (corresponding to the “position” column of the dataset) is helpful. Here is a small diagram to help the less familiar:

Positions of the players on the field

The data come from the video game Fifa 15 (connoisseurs will have noticed that the data are therefore already two seasons old, so there may be some differences with the current squads!), which provides many statistics for each player, including an evaluation of their abilities. The Fifa data are quantitative (each ability is rated out of 100, for example), but for this article we made them categorical, on 4 levels: 1. Low / 2. Average / 3. High / 4. Very high. We will see the benefit of proceeding this way a bit further on!

Preparing the data

Let’s start by loading the data. Note the use of the option stringsAsFactors=TRUE (more explanations about this famous stringsAsFactors parameter here). Indeed, for once, FactoMineR needs factors to perform the data analysis!

> champFrance <- read.csv2("td3_donnees.csv", stringsAsFactors=TRUE)
> champFrance <- as.data.frame(apply(champFrance, 2, factor))

The second line transforms the columns of type int created by read.csv2 into factors.

FactoMineR uses the “row.names” attribute of R data.frames to label the graphs. We therefore indicate that the “nom” column should be used as row.names to make the graphs easier to read:

> row.names(champFrance) <- champFrance$nom
> champFrance$nom <- NULL

Here is what our data.frame now looks like (only the first few lines are shown):

> head(champFrance)
                      pied position championnat age taille general
Florian Thauvin     Gauche      MDR      Ligue1   1      3       4
Layvin Kurzawa      Gauche       AG      Ligue1   1      3       4
Anthony Martial      Droit       BU      Ligue1   1      3       4
Clinton N'Jie        Droit       BU      Ligue1   1      2       3
Marco Verratti       Droit       MC      Ligue1   1      1       4
Alexandre Lacazette  Droit       BU      Ligue1   2      2       4

Data analysis

We are dealing with a table of categorical variables: the appropriate method is Multiple Correspondence Analysis, implemented in FactoMineR by the MCA function. For the moment we exclude the variables “position”, “championnat” and “age” from the analysis (we treat them as supplementary variables):

> library(FactoMineR)
> acm <- MCA(champFrance, quali.sup=c(2,3,4))

Three graphs appear in the output: the projection of the modalities and of the individuals on the first two factorial axes, and the graph of the variables. At this stage, only the second one interests us:

Projection of the individuals on the first two factorial axes

Before even trying to go any further in the analysis, something should jump out at us: there are clearly two clouds of points! Yet our data analysis methods assume that the cloud being analyzed is homogeneous. We will therefore have to restrict the analysis to one of the two clouds observed on this graph.

To identify which individuals the right-hand cloud corresponds to, we can use the supplementary variables (green points). We observe that the projection of the goalkeeper position (“G”) matches this cloud well. Looking more closely at the names of the individuals concerned, we can confirm that they are indeed all goalkeepers.

For the rest of the article, we will focus on the outfield players. We also take this opportunity to remove the columns that only concern goalkeeping abilities, which are not important for outfield players and can only add noise to our analysis:

> champFrance_nogoals <- champFrance[champFrance$position!="G",-c(31:35)]
> acm_nogoals <- MCA(champFrance_nogoals, quali.sup=c(2,3,4))

And the graphical output confirms that we now have a homogeneous cloud of points.

Interpretation

We start by reducing our analysis to a certain number of factorial axes. My favorite method is the “elbow rule”: on the graph of the eigenvalues, we look for a drop (the “elbow”) followed by a regular decrease, and we select a number of axes corresponding to the number of eigenvalues preceding the drop:

> barplot(acm_nogoals$eig$eigenvalue)

 

Scree plot of the eigenvalues

Here we can choose, for example, 3 axes (but retaining 4 axes could also be justified). Now let’s move on to the interpretation, starting with the graphs of the projections on the first two axes retained for the study.

> plot.MCA(acm_nogoals, invisible = c("ind","quali.sup"))
Projection of the modalities on factorial axes 1 and 2 (click to enlarge)

We could, for example, read off this graph the names of the modalities with the largest coordinates on axes 1 and 2 and start the interpretation that way. But with such a large number of modalities, reading directly from the graph is not that easy. We can also get this information from FactoMineR’s dedicated text output, dimdesc (only part of the output is shown here):

> dimdesc(acm_nogoals)
$`Dim 1`$category
                         Estimate       p.value
finition_1            0.700971584 1.479410e-130
volees_1              0.732349045 8.416993e-125
tirs_lointains_1      0.776647500 4.137268e-111
tacle_glisse_3        0.591937236 1.575750e-106
effets_1              0.740271243  1.731238e-87
[...]
finition_4           -0.578170467  7.661923e-82
puissance_tir_4      -0.719591411  2.936483e-86
controle_balle_4     -0.874377431 5.088935e-104
dribbles_4           -0.820552850 1.795628e-117

The modalities most characteristic of axis 1 are, on the right, a weak level in offensive abilities (finishing, volleys, long shots), and on the other side a very strong level in these same abilities. The natural interpretation is thus that axis 1 discriminates according to offensive abilities (the best attackers on the left, the weakest on the right). We proceed in the same way for axis 2 and observe the same phenomenon, but with defensive abilities: the best defenders are at the top, and the weakest defenders at the bottom.

The supplementary variables can also help with, and confirm, this interpretation, in particular the position variable:

> plot.MCA(acm_nogoals, invisible = c("ind","var"))
Projection of the supplementary variables on factorial axes 1 and 2 (click to enlarge)

We do indeed find the offensive positions (BU, AIG, AID) on the left of the graph and the defensive positions (DC, AD, AG) at the top.

One consequence of this interpretation is that we can expect to find the best players laid out along the second bisector, with the best players in the top-left quadrant and the weakest ones in the bottom-right quadrant. There are many ways to check this, but we will simply look, on the graph of the modalities, at where the levels of the “general” variable (which summarizes a player’s overall level) are located. As expected, “general_4” is in the top-left quadrant and “general_1” in the bottom-right quadrant. We can also look at the placement of the supplementary variables “Ligue 1” and “Ligue 2” to convince ourselves 🙂

At this stage, there are already plenty of interesting things to note! Among those that amuse me the most:

  • Left wingers seem to have a better level than right wingers (if a football specialist could explain why, that would be great!)
  • Age does not explain a player’s level, except for the youngest players, who are weaker
  • The oldest players have more defensive roles.

Let’s not forget to deal with axis 3:

> plot.MCA(acm_nogoals, invisible = c("ind","var"), axes=c(2,3))
Modalities projected on axes 2 and 3

The modalities most characteristic of this third axis are technical weaknesses: the least technical players are at the extremes of the axis, and the most technical players at the center. We confirm this on the graph of the supplementary variables: strikers and center-backs are indeed less renowned for their technical abilities, while all the midfield positions are found at the center of the axis:

Supplementary variables on axes 2 and 3 (click to enlarge)

This is the benefit of having made these variables categorical. If we had kept the quantitative nature of the original Fifa data and performed a PCA, the projections of each characteristic on each axis would have been ordered by level, unlike what happens here on axis 3. And after all, discriminating players by their technical level does not necessarily reflect the full richness of football: some positions call for technicians, while for others, physical qualities are preferred!

Mathieu Valbuena

We will now add the data of a newcomer to the French league, Mathieu Valbuena (yes, I warned you, the data are getting a bit old :p), and compare him to the other players using our analysis.

> columns_valbuena <- c("Droit","AID","Ligue1",3,1
 ,4,4,3,4,3,4,4,4,4,4,3,4,4,3,3,1,3,2,1,3,4,3,1,1,1)
> champFrance_nogoals["Mathieu Valbuena",] <- columns_valbuena

> acm_valbuena <- MCA(champFrance_nogoals, quali.sup=c(2,3,4), ind.sup=912)
> plot.MCA(acm_valbuena, invisible = c("var","ind"), col.quali.sup = "red", col.ind.sup="darkblue")
> plot.MCA(acm_valbuena, invisible = c("var","ind"), col.quali.sup = "red", col.ind.sup="darkblue", axes=c(2,3))

The last two lines plot Mathieu Valbuena on axes 1 and 2, then on axes 2 and 3:

Factorial axes 1 and 2 with Mathieu Valbuena as a supplementary point (click to enlarge)
Factorial axes 2 and 3 with Mathieu Valbuena as a supplementary point (click to enlarge)

The result of our analysis: Mathieu Valbuena has a rather offensive profile (left part of axis 1), but also a good overall level (his projection onto the second bisector is quite high). He also has good technical abilities (center of axis 3). Finally, his qualities seem to fit the attacking midfield (MOC) or left midfield (MG) positions rather well. With a few lines of code, we can also find the league players whose profiles are closest to Valbuena’s:

> acm_valbuena_distance <- MCA(champFrance_nogoals[,-c(3,4)], quali.sup=c(2), ind.sup=912, ncp = 79)
> distancesValbuena <- as.data.frame(acm_valbuena_distance$ind$coord)
> distancesValbuena[912, ] <- acm_valbuena_distance$ind.sup$coord

> euclidianDistance <- function(x, y) {
    return(dist(rbind(x, y)))
  }

> distancesValbuena$distance_valbuena <- apply(distancesValbuena, 1, euclidianDistance, y=acm_valbuena_distance$ind.sup$coord)
> distancesValbuena <- distancesValbuena[order(distancesValbuena$distance_valbuena),]

# Look at the profiles of the 5 closest individuals
> nomsProchesValbuena <- c("Mathieu Valbuena", row.names(distancesValbuena[2:6,]))

And we get: Ladislas Douniama, Frédéric Sammaritano, Florian Thauvin, N’Golo Kanté and Wissam Ben Yedder.

There would be plenty of other things to say about this dataset, but I’d rather stop this already long article here 😉 Finally, keep in mind that this analysis is not really meant to be taken seriously: it mainly serves as a fun example for discovering FactoMineR and data analysis.

 

[Sports] What the splines model for UEFA Euro 2016 got right and wrong

UEFA Euro 2016 is over! After France’s heartbreaking loss to Portugal in the final, it’s now time to assess the performance of our “splines model”. On the main page of the project you can now find the initial predictions we made before the start of the competition. I also added a link to the archives of the odds we updated after each day (EDIT: I realized I made a mistake with a match that was played on Day 2, I’ll correct this asap – results should not be altered much though).

Screenshot of the Euro 2016 predictions

What went well (Portugal, Hungary, Sweden)

Let’s begin with our new European champions: Portugal. They were our 5th favorite, with an estimated 8.3% chance of winning the title. To everyone’s surprise (including ours, to be honest 😉 ), they finished 3rd in group F. However, the odds of this happening were estimated at 20%, so we can hardly say the splines model was completely stunned by this outcome! In fact, except for the initial draw against Iceland, we had all calls correct for Portugal’s games!

Hungary were described by some as the weakest team of the tournament, so by extension as the weakest team of group F. But they won it! Our model didn’t agree with those pundits, estimating the chances of advancing to the second round for Gábor Király‘s teammates at almost 3 out of 4.

Sweden certainly had one of the best players in the world in Zlatan Ibrahimovic. But our model was never a fan of their squad, and they did end up in last place in group E. Similarly, Ukraine were often referred to as a potential second-rounder but ended up in last place (losing all their games), which was the most likely outcome according to the splines model.

What went wrong (Iceland, Austria, England)

Austria were seen by the splines model as outsiders for this competition (a 4.7% chance of becoming champions – for comparison, Italy’s chances were estimated at 4.2%). We evaluated their chances of advancing to the second round at greater than 70%. They ended up in last place in Group F with a single point.

On the contrary, Iceland were seen as one of the weakest teams of the competition and a clear favorite for last place in Group F. Eventually, they were astonishingly successful! On their way to the quarter-finals, they eliminated England. Our model gave England a good 85% probability of winning that match. But, surprising as it was, this result alone does not prove our model was unreliable (more on upsets in the next paragraph). Yet we can’t consider the projections for the Three Lions anything other than a failure, because they also finished second in group B when we thought they would easily win the group.

Spain lost in round of 16 to Italy and in the group phase to Croatia. The estimated probabilities for these events were 40% and 16%.

Hard to say

We almost included Turkey in the previous paragraph: after all, we gave them the same chances as Italy of winning the tournament, and we estimated their odds of advancing to the round of 16 at more than 70%, yet they failed. In addition, their level was described by experts as rather poor. But paradoxically, the splines model had all calls correct for Turkey’s games! What doomed them was the 3-0 loss against the defending champions, Spain. With a final goal difference of -2 and 3 points, they couldn’t reach the second round as one of the four best third-placed teams.

Wales unexpectedly beat Belgium, one of our favorites, in the quarter-finals. But is this a sign of a bad model or just bad luck? Upsets happen, and they’re not necessarily a sign that a team’s strength was incorrectly estimated.

Home field advantage

Our model stood out from others (examples here, here or here) in its predictions for France. As a matter of fact, it valued home field advantage much less than the other models did. But France didn’t win the Euro! Similarly, nearly all models predicted a Brazil victory in the 2014 World Cup, mostly because of home field advantage… and we all know what happened!

To us, it is unclear whether home field advantage during a Euro or a World Cup is comparable to home field advantage for a friendly match or a qualifier. I hope someone studies this particular point in the future!

Conclusion

We had a lot of fun building this model and it helped us enjoy the competition! I hope you guys enjoyed it too!

[Sports] Les Bleus’ opponent in the round of 16

After France secured first place in their group, Baptiste Desprez of Sport24 wondered today who the most likely opponent for Les Bleus in the round of 16 is.

That’s convenient: we happen to have a model that can compute probabilities for Euro matches. I’ll let you read the Sport24 article if you want to understand all the subtleties concocted by UEFA for this first 24-team Euro. Here, we’ll simply run the model to get the probabilities. We obtain (rounded):

Northern Ireland: 72% ; Republic of Ireland: 14% ; Germany: 8% ; Belgium: 4% ; Poland: 2%


So, it is extremely likely that France’s next opponent will be named “Ireland” 🙂 . Curiously, the probability of facing Germany is much higher than that of facing Poland, even though the model gives a high probability that Germany finish first in their group, ahead of Poland… The Euro bracket is a complex thing! We’ll still keep our fingers crossed not to cross paths with Müller and co. this early in the bracket!

It is also amusing to note that, although possible, a round-of-16 match against a team from group D (Czech Republic, Turkey or Croatia) is highly improbable (<0.2% chance according to our simulations). It seems that the configurations allowing these teams to qualify as one of the best third-placed teams are incompatible with the configurations that would send them up against France in the round of 16. If a bookmaker offered you that bet, I’d strongly advise you to avoid it 😉

[Sampling] Talk at ISNPS – Avignon

I’m in the beautiful city of Avignon for the 3rd ISNPS conference, which is held in the extraordinary Palace of the Popes convention center. I’ve been invited by Ricardo Cao to give a talk Wednesday morning on sampling methods for big graphs.

[Sports] UEFA Euro 2016 predictions – Comments

Last week we published the results of our prediction model for UEFA Euro 2016. Here are a few comments.

This Euro is undecided

Our model gives fairly close probabilities of winning. To us, this suggests that the competition is fairly open and that no team is a clear favorite before the competition starts (we really hope to see that change after a few matches!)

One could object that this merely shows that our model is unable to predict an outcome with adequate confidence, so we re-ran the simulations after injecting an artificially low variance into our model, and the results for the top teams turned out to be very similar: no clear favorite emerged.

Historically, the European Championship has always been somewhat undecided. Whereas only 8 different teams have ever become world champions (in the 20 editions of the World Cup held so far), 9 different teams have already won a European Championship in just 14 editions! In some cases, complete underdogs eventually won the title (for example Denmark in 1992 or Greece in 2004).

France is one of the favorites

The home team is every other model’s favorite! Check out Goldman Sachs’ model or the one built by Austrian researchers based on bookmakers’ odds. Clearly, home advantage is key here, although France has recently proven able to score quite a number of goals, which is an important feature of our model. Our model’s favorite is Belgium (who are, at the start of the competition, 2nd in the Fifa rankings), but Germany, Spain and England are very close.

Interestingly, even though our model selects France as one of its favorites, it predicts that the group phase won’t be as easy as it seems. For example, the most likely outcome for the opening match is a draw. The probability of reaching the second round is high (86%), but it’s only the fifth highest value (which might be surprising if you consider that France’s first-round opponents are really far behind in the Fifa/Elo rankings). This is very different from other models and bookies, who make France a very heavy favorite to finish first in its group.

As France supporters, this brings back a few (good) memories. In 2000, France only finished second in its group behind the Netherlands (and still won the competition), while in the 2006 World Cup, France barely qualified from a group of relatively weak teams and still managed to reach the final.

Zlatan may not be enough

Although Sweden, partly thanks to their legendary striker Zlatan Ibrahimovic, are generally said to be a fairly good team, they have the lowest probability of reaching the round of 16 (24.1%), slightly behind Iceland and Albania. In fact, Sweden were very unlucky in the draw and ended up in the so-called “group of death” along with two top-tier teams (Belgium and Italy) and an outsider (Ireland) that our model does not rate so badly.

Are Switzerland and Hungary undervalued?

The biggest difference between our model and the bookies’ odds (or the other models) is the relatively high probability we put on a Switzerland win (6% for us versus 1% for Goldman Sachs, for instance). It’s hard to say exactly why our model predicts they’ll fare so well, but we’re definitely impatient to see if this checks out 🙂

Our model also says Hungary is generally underestimated: according to it, the heirs of the “Magikus Magyarok” might very well fight for second place in Group F, whereas most predictions have them finishing dead last.

EDIT: a previous version of this article presented France as the model’s favorite, which was the consequence of a bug in the second-round probabilities. It has now been corrected. The other conclusions are unchanged.

[Sports] UEFA Euro 2016 Predictions – Model

Today we’re launching our own predictions for UEFA Euro 2016 that starts next week.

A model for football: state of the art

There are many ways to build a model to predict football results. The Elo ranking system is commonly used. As its name indicates, it relies on a ranking of the international football teams, either the official FIFA ratings (which are widely known to be poorly predictive of a team’s strength) or a custom-made Elo ranking. A few Elo rankings are available on the Internet, so one possibility was to use one of these to compute probabilities for each game (via a very simple analytical formula). But we wanted to do something different.
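
For reference, the analytical formula in question typically looks like the standard Elo expected score below (a sketch; football-specific Elo variants also adjust for draws, home advantage and goal margin):

# Expected score of team A against team B under a standard Elo rating system.
elo_expected <- function(rating_a, rating_b, scale = 400) {
  1 / (1 + 10 ^ ((rating_b - rating_a) / scale))
}

elo_expected(2000, 1900)  # ~0.64: the higher-rated team is favored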

When FiveThirtyEight created a nice viz of their own showing odds for the men’s and women’s World Cups, their model was based on ESPN’s Soccer Power Index (SPI). The principle of the SPI is quite simple: compute the expected goals scored and conceded for each team under the assumption that it plays against an “average” football squad, then run a logistic regression to predict the outcome of a game between any two teams, based on their expected performance against that “average” team. SPI takes an impressive number of relevant parameters into account (including player performance), and has generally proven reliable (although FiveThirtyEight’s predictions have always seemed a tad overconfident to me!).

Our very own model

For our model, we liked the principle of the SPI very much, but we wanted to try our own little variation. So we kept the core feature of the SPI, computing the expected goals scored and conceded for each team, but we chose to plug these results directly into our simulations (i.e. without the logistic regression). Of course, due to lack of time and resources, our model is far less sophisticated than ESPN’s (there was no way we could include player performance, for example), but the results might still be worth analyzing!

So, for each of the 24 competing teams, we’re trying to predict a quantitative variable: the number of goals scored (and conceded) in each game. Of course, we’re going to use all our knowledge of machine learning to achieve this 😉 Our training data consists of the 1795 international games played between 2008 and 2016 by the 24 teams that qualified for UEFA Euro 2016 (excluding the Olympics, which are too peculiar in football to be relevant).

For each of these 1795 observations, we know: the location of the game, the teams that played, the final score and the type of game (friendly, World Cup qualifier, etc.). We matched each team to its FIFA ranking at the closest available date, and determined which team (if any) had home field advantage.

Then we ran the simplest of regression models: a linear regression on year, team (as a categorical variable), a dummy variable indicating whether the team plays at home or away, type of match and the FIFA rankings of both teams. Before even thinking of using this model for simulations, we have to look at how it performs. And a lot of things indicate that it is too unsophisticated. The most telling example might be the prediction of large numbers of goals. Let’s plot the number of goals scored against the FIFA ranking of the opposing team.

Number of goals scored with respect to strength of the opponent. Black points: observed ; red points: modeled (linear regression).

You can see on the right side of the plot that it’s not that rare for a large number of goals to be scored, especially when playing a very weak team. However, the linear model is unable to predict more than 4 goals scored in a game. This can be a huge problem for simulations, as ties are broken by the number of goals scored at the Euro.
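
The fitting code is not included in the post; a minimal sketch of the kind of baseline regression described above could look like this (the data frame games and column names such as goals, team, home, match_type, team_fifa and opponent_fifa are hypothetical):

# Baseline linear model: goals scored as a function of year, team, home advantage,
# match type and both FIFA rankings. All names are illustrative.
baseline_fit <- lm(goals ~ year + team + home + match_type +
                     team_fifa + opponent_fifa,
                   data = games)

# Observed vs. fitted goals against the opponent's FIFA ranking,
# similar in spirit to the plot above:
# plot(games$opponent_fifa, games$goals, pch = 16)
# points(games$opponent_fifa, predict(baseline_fit), col = "red", pch = 16)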

The idea is thus to combine several linear models to get a more sensible prediction. This can be done using regression splines, for which the parameters are chosen using cross-validation.
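
Again, the original code is not shown; one way to set this up, keeping the hypothetical column names from the previous sketch, is to put a natural spline basis on the opponent's FIFA ranking and pick its degrees of freedom by K-fold cross-validation:

library(splines)

# Cross-validated mean squared error for a given number of spline degrees of freedom.
# This is only a sketch: in practice, factor levels missing from a training fold
# would need to be handled before calling predict().
cv_mse <- function(df_spline, data, K = 10) {
  folds <- sample(rep(seq_len(K), length.out = nrow(data)))
  fold_err <- sapply(seq_len(K), function(k) {
    fit <- lm(goals ~ year + team + home + match_type + team_fifa +
                ns(opponent_fifa, df = df_spline),
              data = data[folds != k, ])
    pred <- predict(fit, newdata = data[folds == k, ])
    mean((data$goals[folds == k] - pred)^2)
  })
  mean(fold_err)
}

# cv_scores   <- sapply(2:8, cv_mse, data = games)
# best_df     <- (2:8)[which.min(cv_scores)]
# splines_fit <- lm(goals ~ year + team + home + match_type + team_fifa +
#                     ns(opponent_fifa, df = best_df), data = games)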

Number of goals scored with respect to strength of the opponent. Black points: observed ; green points: modeled (regression splines).

In a number of ways, this model is much more satisfying than the first one. In particular, regarding large numbers of goals scored, the plot above shows that our model is now able to predict them 🙂

Simulations and results

Our model gives us expected values for the number of goals scored and conceded, as well as a model variance. We then simulate the number of goals with a normal error around the expected value. We do this 10,000 times for each match and finally get Monte Carlo probabilities for the outcome of each group-phase match, as well as odds for each team to finish at each place in its group and to qualify for each round of the knockout phase.
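
Here is a minimal sketch of that Monte Carlo step for a single match; the rounding and truncation to non-negative integer scores are my own assumptions, since the post only mentions a normal error around the expected values:

# Simulate one match n_sim times from the model's expected goals and residual
# standard deviation, then estimate outcome probabilities. Inputs are illustrative.
simulate_match <- function(mu_home, mu_away, sigma, n_sim = 10000) {
  goals_home <- pmax(0, round(rnorm(n_sim, mean = mu_home, sd = sigma)))
  goals_away <- pmax(0, round(rnorm(n_sim, mean = mu_away, sd = sigma)))
  c(home_win = mean(goals_home > goals_away),
    draw     = mean(goals_home == goals_away),
    away_win = mean(goals_home < goals_away))
}

# simulate_match(mu_home = 1.8, mu_away = 0.9, sigma = 1.1)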

The results can be found here, and I will post another article later to comment on them (which really is the fun part after all!).