Mackerels and French departments

This week, FiveThirtyEight's "Classic" riddle (which you can find here) asks for words that have no letters in common with one and only one US state. For example, "mackerel" shares letters with every state except Ohio.

The problem can be adapted to the French case: which words have no letters in common with one and only one French department?

Reusing the list of words from our article on Motus and "Des Chiffres et des Lettres", which are at most 10 letters long (as per the rules of those games), we find more than 15,000 words matching our definition! One of them happens to be "MAQUEREAUX" (mackerels), which shares (at least) one letter with every department except the Lot.

The Lot is in fact the department most often responsible for a word making the list; that is fairly logical, since it contains neither A, nor E, nor I (letters that are very common in the other department names) and is only three letters long.

Using the Lexique dictionary, we can also search among words of more than 10 letters for those matching our definition. The longest such word is then MULTIDIMENSIONNELLE, with 19 letters, thanks to the department of Var.

We can go further if we agree to count all the characters of a word rather than only its letters. In that case, with 22 characters, the matching word is SUIVEZ-MOI-JEUNE-HOMME (yes, this word exists; no, it has nothing to do with the other mackerels), since it has no letters in common with the Gard.

How to find these words: a description of the algorithm

The program that finds the words satisfying the required condition (no letters in common with one and only one department) breaks down into several steps.

First, we build matrices (tables) in which each column is a letter of the alphabet and each row corresponds to either a word of the dictionary or a department. At the intersection of a row and a column sits a TRUE/FALSE indicator (later recoded as 1/0) telling whether or not that letter appears in the corresponding word. For this we use the str_detect function from the stringr package.

sapply(LETTERS,FUN=str_detect,string="AIN")
A     B     C     D     E     F     G     H     I     J     K     L     M     N     O     P 
TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE TRUE FALSE FALSE 
Q     R     S     T     U     V     W     X     Y     Z 
FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE 
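
For reference, here is a minimal sketch of how the two full indicator matrices could be built from this idea, assuming dico is a character vector of (upper-case) dictionary words and departements a character vector of department names written in capital letters (both object names are mine, not from the original code):

library(stringr)

# Indicator matrix: one row per word, one column per letter of the alphabet
lettres_presentes <- function(mots) {
  t(sapply(mots, function(m) sapply(LETTERS, str_detect, string = m)))
}

tableau_dico <- lettres_presentes(dico) * 1          # recoded as 1/0
tableau_dep  <- lettres_presentes(departements) * 1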

We have thus built one table for the dictionary, tableau_dico, and one table for the departments, tableau_dep. The following command computes, for a given word mot of the dictionary, the number of departments sharing no letter with it.

f <- function(x,ajout) {
  return(max(x + ajout))
}

sum(apply(tableau_dep, MARGIN = 1, f, ajout = tableau_dico[mot,]) == 1)

Breaking the code down a bit:

  • the function f defined here takes two vectors x and ajout and computes the maximum of their element-wise sum;
  • here, these two vectors are the row of tableau_dico associated with the chosen word and a row of tableau_dep, i.e. vectors of 0s and 1s indicating whether each letter appears in the word or department considered;
  • f can only return 1 or 2 here: if it returns 1, there is no letter in common; otherwise, at least one letter would get the value 2 (the sum of a 1 in each of the two vectors), and f would therefore return a maximum of 2;
  • we apply this function to all the departments in tableau_dep (apply with MARGIN = 1 to indicate that we are working on rows);
  • we then count the number of departments for which this output is 1.

Once this command has been applied to every word of the dictionary, we can finally isolate the "mackerels" by keeping the words for which the number of departments sharing no letter with them is exactly 1.
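
As a rough sketch, this final step could look like the following (assuming the row names of tableau_dico are the words themselves; the object names below are mine):

# For every word, count the departments sharing no letter with it,
# then keep the words for which this count is exactly 1
nb_dep_sans_lettre <- sapply(rownames(tableau_dico), function(mot) {
  sum(apply(tableau_dep, MARGIN = 1, f, ajout = tableau_dico[mot, ]) == 1)
})

maquereaux <- names(nb_dep_sans_lettre)[nb_dep_sans_lettre == 1]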

Image: Atlantic mackerels (Petar Milošević)

Rolling some dice

Today, a quick post providing an answer to this week's Riddler Classic on FiveThirtyEight:

The fifth edition of Dungeons & Dragons introduced a system of “advantage and disadvantage.” When you roll a die “with advantage,” you roll the die twice and keep the higher result. Rolling “with disadvantage” is similar, except you keep the lower result instead. The rules further specify that when a player rolls with both advantage and disadvantage, they cancel out, and the player rolls a single die. Yawn!

There are two other, more mathematically interesting ways that advantage and disadvantage could be combined. First, you could have “advantage of disadvantage,” meaning you roll twice with disadvantage and then keep the higher result. Or, you could have “disadvantage of advantage,” meaning you roll twice with advantage and then keep the lower result. With a fair 20-sided die, which situation produces the highest expected roll: advantage of disadvantage, disadvantage of advantage or rolling a single die?

Extra Credit: Instead of maximizing your expected roll, suppose you need to roll N or better with your 20-sided die. For each value of N, is it better to use advantage of disadvantage, disadvantage of advantage or rolling a single die?

This problem is quite similar to some problems we've already tackled on this website, such as Tennis vs Badminton, archery points (FR), or even how the Eurovision rules impact the French rankings (FR). In order to answer it, we're going to use simulations with R. The main set of functions needed is described here:

dice <- function() {return(sample(1:20,1))}

advantage <- function() {
  a <- dice()
  b <- dice()
  return(max(a,b))
} 

disadvantage <- function() {
  a <- dice()
  b <- dice()
  return(min(a,b))
} 

disadvantage_of_advantage <- function() {
   a <- advantage()
   b <- advantage()
   return(min(a,b))
 }

advantage_of_disadvantage <- function() {
   a <- disadvantage()
   b <- disadvantage()
   return(max(a,b))
 } 

Then, we just need to simulate a fair number (say, n = 10,000 or 100,000) of rolls with each method. The first question to answer is: which method gives the best rolls on average? The following graph answers it nicely, showing that the "disadvantage of advantage" method leads to better results than just rolling a single die, and even better ones than "advantage of disadvantage".
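
A minimal version of this simulation, for reference (the sample size and the approximate values in the comments are mine):

n <- 1e5
mean(replicate(n, dice()))                       # around 10.5
mean(replicate(n, advantage_of_disadvantage()))  # around 9.8
mean(replicate(n, disadvantage_of_advantage()))  # around 11.7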

Will this method also work best for the other problem, which is maximising our chances to roll at or above a given threshold (hitting the dragon, for instance, instead of failing miserably in our attempt)? The answer is yes… but only for lower values of the threshold!

As the graph shows, as soon as the threshold is 14 or higher (and killing a dragon might well be that hard!), it's better to just roll a single die. Why? Because the advantage and disadvantage modifiers tend to concentrate the distribution of the values obtained: getting a 20 with "disadvantage of advantage" means getting a 20 on both of the underlying rolls with advantage, which means getting at least one 20 in each of them; this is extremely unlikely!
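
Since the distributions here are simple, the simulations can also be cross-checked exactly. Writing q = (21 − N)/20 for the probability that a single d20 rolls at least N, the short sketch below (my own, not part of the original post) returns the thresholds where the single die wins:

N <- 1:20
p_single  <- (21 - N) / 20                      # single die
p_adv_dis <- 1 - (1 - ((21 - N) / 20)^2)^2      # advantage of disadvantage
p_dis_adv <- (1 - ((N - 1) / 20)^2)^2           # disadvantage of advantage

# Thresholds for which the plain single roll beats both combined methods
N[p_single > pmax(p_adv_dis, p_dis_adv)]        # should return 14 to 20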

Eurovision 2020 – “predictions”

Eurovision 2020, like many cultural and sporting events, will not take place this year because of the pandemic. The songs submitted by the participating countries have nevertheless been put online: you can find them here.

Even though it is of no real consequence (nobody will win a contest that will not take place), we can still run our prediction model (as in previous years, in 2018 and 2019), which uses the data attached to each video on YouTube: number of views, “likes” (thumbs up) and “dislikes” (thumbs down).

The model uses the data (YouTube statistics and contest results) from previous years and then predicts the score each country would obtain. The idea is that the number of people watching a video online (and their reactions, via YouTube's dedicated buttons) is a good predictor of a song's success at Eurovision; this is obviously very limited, for many reasons: the existence of a double vote (public and jury), differences between the official video and the live performance, etc.
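
The model behind these predictions is the same linear regression as in the 2019 post; as a hedged sketch (the data frame and column names below are hypothetical, not the actual code used here), it could look like:

# Train on previous editions (YouTube stats + final scores), predict 2020
model <- lm(score ~ views + likes + dislikes, data = training_data)

predictions_2020 <- predict(model, newdata = youtube_2020)
youtube_2020$predicted_score <- predictions_2020
head(youtube_2020[order(-youtube_2020$predicted_score), c("country", "predicted_score")])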

Now for the “results”: our twelve points go to… Russia!

Indeed, as with Israel in 2018, their video is an outlier (an observation whose characteristics are very different from the others), gathering more than 85 million views, far above all the others; the song is “Uno”, which you can (and I encourage you to) listen to below:

Unsurprisingly, they are therefore the favourites according to our model! The rest of the top 5 (with much smaller gaps, and therefore much less certainty) consists of Armenia, Azerbaijan, Ukraine and Bulgaria.

We can compare these results (even though we will never know whether the model would have been right!) with the bookmakers' odds (see here), obviously frozen since the cancellation was announced. They did not place Russia first, but still ranked it among the five favourites.

Causal Inference cheat sheet for data scientists

Being able to make causal claims is a key business value for any data science team, no matter its size.
Quick analytics (in other words, descriptive statistics) are the bread and butter of any good data analyst working in quick cycles with their product team to understand their users. But sometimes important questions arise that need more precise answers. Business value sometimes means distinguishing true insights from incidental noise – insights that will hold up versus temporary marketing material. In other words: causation.

When answering these questions, absolute rigour is required. Failing to understand key mechanisms could mean missing out on important findings, rolling out the wrong version of a product, and eventually costing your business millions of dollars, or crucial opportunities.
Ron Kohavi, former director of the experimentation team at Microsoft, has a famous example: changing the place where credit card offers were displayed on amazon.com generated millions in revenue for the company.

The tech industry has picked up on this trend in the last 6 years, making Causal Inference a hot topic in data science. Netflix, Microsoft and Google all have entire teams built around some variations of causal methods. Causal analysis is also (finally!) gaining a lot of traction in pure AI fields. Having an idea of what causal inference methods can do for you and for your business is thus becoming more and more important.

The causal inference levels of evidence ladder

Hence the causal inference ladder cheat sheet! Beyond the value for data scientists themselves, I’ve also had success in the past showing this slide to internal clients to explain how we were processing the data and making conclusions.

The “ladder” classification explains the level of proof each method will give you. The higher, the easier it will be to make sure the results from your methods are true results and reproducible – the downside is that the set-up for the experiment will be more complex. For example, setting up an A/B test typically requires a dedicated framework and engineering resources.
Methods further down the ladder require less effort on the set-up (think: observational data), but more effort on the rigour of the analysis. Making sure your analysis has true findings and is not just describing noise (or worse, is plain wrong) is a process called robustness checks. It's arguably the most important part of any causal analysis method. The further down the ladder your method is, the more robustness checks I'll require if I'm your reviewer 🙂

I also want to stress that methods on the lower rungs are not less valuable – it's almost the contrary! They are brilliant methods that let you draw conclusions from observational data, and I would not be surprised if people like Susan Athey and Guido Imbens, who have made significant contributions to these methods in the last 10 years, were awarded the Nobel prize one of these days!

The causal inference levels of evidence ladder – click on the image to enlarge it

Rung 1 – Scientific experiments

On the first rung of the ladder sit typical scientific experiments. The kind you were probably taught in middle or even elementary school. To explain how a scientific experiment should be conducted, my biology teacher had us take seeds from a box, divide them into two groups and plant them in two jars. The teacher insisted that we made the conditions in the two jars completely identical: same number of seeds, same moistening of the ground, etc.
The goal was to measure the effect of light on plant growth, so we put one of our jars near a window and locked the other one in a closet. Two weeks later, all our jars close to the window had nice little buds, while the ones we left in the closet barely had grown at all.
The exposure to light being the only difference between the two jars, the teacher explained, we were allowed to conclude that light deprivation caused plants to not grow.

Sounds simple enough? Well, this is basically the most rigorous you can be when you want to attribute cause. The bad news is that this methodology only applies when you have a certain level of control over both your treatment group (the one that receives light) and your control group (the one in the cupboard) – enough control, at least, that all conditions are strictly identical except for the one parameter you're experimenting with (light in this case). Obviously, this doesn't apply in social sciences, nor in data science.

Then why do I include it in this article, you might ask? Well, basically because this is the reference method. All causal inference methods are in a way hacks designed to reproduce this simple methodology in conditions where you shouldn't be able to draw conclusions if you strictly followed the rules explained by your middle school teacher.

Rung 2 – Statistical Experiments (aka A/B tests)

Probably the most well-known causal inference method in tech: A/B tests, a.k.a. Randomized Controlled Trials for our biostatistics friends. The idea behind statistical experiments is to rely on randomness and sample size to mitigate the inability to put your treatment and control groups in the exact same conditions. Fundamental statistical results like the law of large numbers, the Central Limit Theorem or Bayesian inference give guarantees that this will work, and provide a way to derive estimates and their precision from the data you collect.
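
As a toy illustration (the counts below are made up), analysing a simple A/B test on a conversion rate in R can be as short as:

# Converted users and exposed users in each variant (fictional numbers)
conversions <- c(A = 1820, B = 1925)
exposed     <- c(A = 20000, B = 20000)

# Two-sample test for equality of proportions, with a confidence interval
prop.test(conversions, exposed)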

Arguably, an experimentation platform is one of the first projects any data science team should invest in (once all the foundational levels are in place, of course). The impact of setting up an experimentation culture in tech companies has been very well documented and has earned companies like Google, Amazon, Microsoft, etc. billions of dollars.

Of course, despite being pretty reliable on paper, A/B tests come with their own sets of caveats. This white paper by Ron Kohavi and other founding members of the Experiments Platform at Microsoft is very useful.

Rung 3 – Quasi-Experiments

As awesome as A/B tests (or RCTs) can be, in some situations they just can’t be performed. This might happen because of lack of tooling (a common case in tech is when a specific framework lacks the proper tools to set up an experiment super quickly and the test becomes counter-productive), ethical concerns, or just simply because you want to study some data ex-post. Fortunately for you if you’re in one of those situations, some methods exist to still be able to get causal estimates of a factor. In rung 3 we talk about the fascinating world of quasi-experiments (also called natural experiments).

A quasi-experiment is the situation when your treatment and control group are divided by a natural process that is not truly random but can be considered close enough to compute estimates. In practice, this means that you will have different methods that will correspond to different assumptions about how “close” you are to the A/B test situation. Among famous examples of natural experiments: using the Vietnam war draft lottery to estimate the impact of being a veteran on your earnings, or the border between New Jersey and Pennsylvania to study the effect of minimum wages on the economy.

Now let me give you a fair warning: when you start looking for quasi-experiments, you can quickly become obsessed with them and start thinking about clever data collection in improbable places… Now you can't say you haven't been warned 😜 I have more than a few friends who were lured into a career in econometrics by the sheer love of natural experiments.

The most popular methods in the world of quasi-experiments are differences-in-differences (the most common one, according to Scott Cunningham, author of the Causal Inference Mixtape), Regression Discontinuity Design, Matching, and Instrumental Variables (which is an absolutely brilliant construct, but rarely useful in practice). If you're able to observe (i.e. gather data on) all the factors that explain how treatment and control are separated, then a simple linear regression including all those factors will give good results.
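
As an illustration, a basic differences-in-differences estimate is often just a linear model with an interaction term; the sketch below assumes a hypothetical data frame df with an outcome, a treated-group indicator and a pre/post indicator:

# The coefficient on treated:post is the diff-in-diff estimate of the effect
did_model <- lm(outcome ~ treated * post, data = df)
summary(did_model)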

Rung 4 – The world of counterfactuals

Finally, you will sometimes want to try to detect causal factors from data that is purely observational. A classic example in tech is estimating the effect of a new feature when no A/B test was done and you don’t have any kind of group that isn’t receiving the feature that you could use as a control:

Slightly adapted from CausalImpact‘s documentation

Maybe right now you're thinking: wait… are you saying we can simply look at the data before and after and draw conclusions? Well, the trick is that it often isn't that simple to make a rigorous analysis, or even to compute an estimate. The idea here is to create a model that allows you to compute a counterfactual control group. Counterfactual means "what would have happened had this feature not existed". If you have a model of your number of users that you trust enough to make robust predictions, then you basically have everything you need.

There is a catch though. When using counterfactual methods, the quality of your prediction is key. Without getting too much into the technical details, this means that your model not only has to be accurate enough, but also needs to “understand” what underlying factors are driving what you currently observe. If a confounding factor that is independent from your newest rollout varies (economic climate for example), you do not want to attribute this change to your feature. Your model needs to understand this as well if you want to be able to make causal claims.

This is why robustness checks are so important when using counterfactuals. Some cool causal inference libraries like Microsoft's doWhy do these checks automagically for you 😲 Sensitivity methods like the one implemented in the R package tipr can also be very useful to check some assumptions. Finally, how could I write a full article on causal inference without mentioning DAGs? They are a widely used tool to state your assumptions, especially in the case of rung 4 methods.

(Quick side note: right now with the unprecedented Covid-19 crisis, it’s likely that most prediction models used in various applications are way off. Obviously, those cannot be used for counterfactual causal analysis)

Technically speaking, rung 4 methods look very much like methods from rung 3, with some small tweaks. For example, synthetic diff-in-diff is a combination of diff-in-diff and matching. For time series data, CausalImpact is a very cool and well-known R package. causalTree is another interesting approach worth looking at. More generally, models carefully crafted with domain expertise and rigorously tested are the best tools for doing causal inference with only counterfactual control groups.
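
To give an idea of what this looks like in practice, here is a hedged sketch of a CausalImpact analysis, assuming a daily response series y (e.g. active users), a covariate x1 unaffected by the rollout, and placeholder dates:

library(CausalImpact)

data <- zoo::zoo(cbind(y, x1), order.by = dates)
pre.period  <- as.Date(c("2020-01-01", "2020-02-15"))
post.period <- as.Date(c("2020-02-16", "2020-03-15"))

impact <- CausalImpact(data, pre.period, post.period)
summary(impact)
plot(impact)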

Hope this cheat sheet will help you find the right method for your causal analyses and be impactful for your business! Let us know about your best #causalwins on our Twitter, or in the comments!

How can we explain the drop in turnout at the 2020 French municipal elections?

Last Sunday, 15 March 2020, France held the first round of its municipal elections, after having announced the closure of schools and then of restaurants and non-essential shops. Turnout came in at 44.64%, a 20-point drop compared with 2014, the date of the previous municipal elections (see a very nice map from Le Monde here, quite illustrative of the situation).

This quick post will not dwell on whether or not these elections should have been held (the second round has been postponed); here we try to identify the factors that explain the drop in turnout, and whether those factors may have favoured one or several political parties.

An IFOP election-day poll [edited: in the initial version I mistakenly referred to an IPSOS poll; that one can be found here, and it gives yet other results, with higher turnout on the right of the political spectrum than on the left] (available here) showed that the Covid-19 factor weighed heavily among the reasons for not voting (more than 50% of surveyed non-voters considered it one of the decisive reasons), but also showed a disparity between political families, with higher abstention among EELV voters (60%) and lower abstention among En Marche supporters (37%).

A fine-grained analysis of the results, polling station by polling station, makes it possible to identify the stations where abstention increased the most between 2014 and 2020 (restricting the comparison to the same municipal ballot) and then, once those stations have been identified, to analyse the political results they recorded in the first round of the 2017 presidential election. As always, the data is on data.gouv.fr (here for the 2020 municipal elections), many thanks to them!

The graph below summarises the results obtained:

We see that the results do not match those of the election-day poll. It would seem that a first-round Macron or Le Pen vote in 2017 is a good indicator of a stronger drop in turnout at the 2020 municipal elections. This does not mean, however, that the voters who chose these two candidates are more sensitive to the risks linked to Covid-19; it may rather be linked to a political context which, for the 2020 municipal elections, was not favourable to En Marche for example, even without a pandemic.

Methodology: the data for the first rounds of the 2014 and 2020 municipal elections, as well as for the 2017 presidential election, is aggregated at the polling station level (excluding polling stations that disappeared, merged or were created in the meantime). On the slightly more than 60,000 remaining stations, we then compute a turnout differential between 2014 and 2020, which we regress on the share of votes cast for each candidate in the first round of the 2017 presidential election.
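
In R, the regression itself boils down to something like the sketch below (column names are hypothetical; one row per polling station):

# Turnout differential (2020 - 2014) regressed on 2017 first-round vote shares
model <- lm(diff_turnout_2014_2020 ~ share_macron + share_lepen + share_fillon +
              share_melenchon + share_hamon, data = polling_stations)
summary(model)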

Micromorts – how much risk of death would you accept?

tl;dr
 – A 1% death risk is deceptively high
 – Micromorts (1/10,000th of a percent) are a useful scale to model death risks
 – Statistical models for human life use light-tailed distributions. High values are extremely rare

Today a short post that I had in my drafts for a long time. I didn’t expect that it would (unfortunately) be so relevant to today’s context.

Life is finite and all human activities are risky. Although we all face a certain (hopefully low) risk of dying each time we breathe, that's not enough of a reason to stop doing anything and live isolated in bubbles. But exactly how much risk of dying is acceptable? How much risk to your own life would you be willing to accept?

Living in a bubble would be fun! – xkcd.com

Turns out most people, sometimes even trained scientists, are bad at estimating death risk probabilities. They often underestimate how bad even seemingly low probabilities of dying can turn out to be. During a dramatic time of my life, I cared for a child who suddenly became sick. The head surgeon told us she had an 85% chance of making it.
So maybe you're thinking: oh, it wasn't that bad! And I mean, I understand, 85 is pretty close to 100, the situation's looking fairly good, right?

I was terrified.

To put this number into perspective, imagine if all patients admitted to a department faced such a risk. Let's say each doctor sees one patient per hour, works 10 hours a day, and that the department has 10 doctors. This represents approximately 35,000 patients per year, which seems to fit this UK data. With a 15% death rate, this department would have to deal with more than 5,000 deaths over the course of the year, which is the number of people who died in the entire city of San Francisco in 2018! That is one death every 1 hour and 40 minutes!

An activity with a 99% chance of survival would certainly kill you in less than a year

In fact, routine surgical procedures with risks greater than 5% are classified as high risk. Even a 99% chance of survival doesn't look so good. If you were enough of a daredevil to engage every day in an activity that exposes you to "only" a 1% death probability, then you'd almost certainly be dead within a year.
(You can see this with an easy rule of thumb: consider a random event occurring with probability p. Then there is a roughly 95% chance that the event will occur within 3/p tries. In this case, this would be 3 / 0.01 = 300 days, which is less than a year.)
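
You can check this rule of thumb directly in R:

p <- 0.01
(1 - p)^(3 / p)  # probability of surviving 300 days ~ 0.049, i.e. a ~95% chance the event has occurred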

Micromort – the right scale for death risks

As it turns out, percents are not the right scale to talk about death risks. Ronald A. Howard realized this in 1979 and created the notion of micromort. A micromort represents a one-in-a-million chance of dying.

A micromort is one in a million chance of dying – it is equivalent to tossing 20 coins and getting 20 heads

Wikipedia has a list of how much risk some activities expose you to:

One day alive at age 20 – 1 micromort
Skydiving (one jump) – 10 micromorts
One day alive at age 90 – 400 micromorts
Being infected by the Spanish flu – 3000 micromorts
An ascent to Mt Everest – 40000 micromorts

Using this scale, my child's illness exposed her to 150,000 micromorts… which suddenly looks much more frightening, and is a much more intuitive representation of the risk she was actually exposed to.
(Side note if you’re wondering: my kid is fine, and I am really grateful for this 🙏 Diane, if you ever read this you are the sunshine of my life! ❤️☀️)

If this starts feeling scary, don't worry too much. A risk of one micromort is equivalent to tossing 20 coins and getting only heads. It's pretty unlikely! The problem is that you're playing this game every day, and that every once in a while you have to remove some coins. By the time you're 50, you only have 17 coins left; by the time you're 90, you only have 11 left.
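
A quick check of the coin analogy in R:

0.5^20  # ~ 9.5e-07, about one in a million: one micromort
0.5^17  # ~ 7.6e-06, the daily risk with 17 coins left
0.5^11  # ~ 4.9e-04, the daily risk with 11 coins left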

Statistical models of death

A neat thing about micromorts is that they also make a good and intuitive statistical model of age at death for humans. Let's consider this very simple model based on the "game" described in the previous paragraph. Every day you play the game, with a certain risk of dying (for the sake of simplicity, let's forget about modelling childhood and concentrate only on adult life):

  • Between the ages of 20 and 80, the risk is Floor(age / 10) – 1 micromorts (for example, every day of your 26th year you face a 1 micromort risk, and every day of your 63rd year you face 5 micromorts)
  • At age 80, the risk jumps to 100 micromorts, and then each year you have to add 50 additional micromorts (which means 150 micromorts at age 81, 200 micromorts at age 82, etc.)

We can run a few simulations in R to see what life expectancy looks like with this simple model. First let’s define a vector of the risks that match the model we just described:

max_age <- 300
min_age <- 20
age_cut <- 80

# Daily risk between ages 20 and 79: Floor(age / 10) - 1 micromorts,
# i.e. 1 micromort per day in your twenties, 2 in your thirties, etc.
risks1 <- rep(1:6, each = 365 * 10)
# From age 80 onwards: 100 micromorts per day, plus 50 more for each extra year
risks2 <- rep(((age_cut:max_age) - age_cut) * 50 + 100, each = 365)
# Convert micromorts into daily probabilities
risks <- c(risks1, risks2) / (1e6)

Then we can run a few simulations to get a vector of age at death for 10 000 people playing this “game”. Note that all simulated values use the vectorization capabilities of the function rbinom:

N <- length(risks)
N_sims <- 1e4

days_sims <- matrix(rbinom(N_sims*N, 1, risks), ncol = N_sims, byrow = F)

death_ages_days <- apply(days_sims, 2, function(x) { 
    day_death <- which.max(x) 
    if(day_death > 1) {
      return(day_death / 365 + min_age)
    } else {
      return(max_age)
    } 
  })

The mean and max age at death are:

> mean(death_ages_days)
[1] 84.9348
> max(death_ages_days)
[1] 103.0274

Not too far from what we observe in most Western countries! For example, life expectancy for Canadian women in 2018 was 84.3 years, and that same year, the oldest Canadian man alive was 109 years old.

We can plot the distribution of age at death:

library(ggplot2)

model_plot <- 
   ggplot(data.frame(age=death_ages_days)) +   
     geom_histogram(aes(x=age, y=..density..), fill="#4b86b4") + 
     geom_density(aes(x=age, y=..density..), colour="#2a4d69") +
     xlab("Age at death") +
     ylab("Frequency") +
     theme_bw()
 print(model_plot)

In addition to being close to the actual demographic values, the most interesting property of this model is that it generates a light-tailed distribution.

A light-tailed distribution is one whose density falls off quickly for the highest values. Unlike quantities such as revenue, where extreme values (i.e. values several standard deviations greater than the mean) are routinely observed, extreme values are very unlikely when it comes to human life. The oldest person we know of reached 122 years of age, and people who make it past 100 are still a tiny minority. It is extremely unlikely that we would observe someone living to be 200 or even 150 years old.
Even a good old normal distribution would yield extreme values more often (a distribution is said to have a "light" (resp. "fat") tail if extreme values are less (resp. more) likely to happen than with a normal distribution). We can see this on this plot from a StackExchange post, where the life expectancy distribution is plotted against the best normal fit:

One of the best statistical distributions for modelling human lifespans is the Weibull distribution. It is a variant of the exponential distribution, which is often studied in high school and is typically used to model memoryless failures (i.e. a failure probability that does not depend on time). The Weibull distribution is very similar, except that the failure rate increases with time, mimicking an aging process. The statistical model I used for this article is in fact very close to how the Weibull distribution works.
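
As a quick illustration (the shape and scale values below are purely illustrative choices of mine, not fitted to any life table), a Weibull with an increasing failure rate produces ages at death in the same ballpark as the micromort model above:

# shape > 1 mimics an aging process (increasing failure rate)
sim_weibull <- min_age + rweibull(1e4, shape = 8, scale = 68)
mean(sim_weibull)                    # roughly 84
quantile(sim_weibull, c(0.5, 0.999)) # even the top 0.1% stays around 105-110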

Do you like data science and would be interested in building products that help entrepreneurs around the world start and grow their businesses? Shopify is hiring 🎉 for many locations in Canada 🇨🇦 and around the world 🌎! Feel free to reach out directly to me 🙂

Eurovision 2019 – predictions

Following the same approach as last year (and, we hope, with as much success!), we are going to attempt our predictions for Eurovision 2019, again with a model based on the statistics of the videos published on YouTube (the list of this year's competing videos is here).

The data

Reminder: we use the information available on the YouTube videos: number of views, number of “Likes” and number of “Dislikes”. We retrieve this information with the R package tuber, which lets us send requests to the YouTube API and thus collect, for each video of a playlist, the information needed for the model. This information is then completed (for 2016, 2017 and 2018) with the number of points obtained and the rank in the final standings.
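
To give an idea, the data retrieval with tuber looks roughly like the sketch below (credentials and playlist id are placeholders, and the exact field names may differ slightly between package versions):

library(tuber)

# Authenticate with the YouTube API
yt_oauth(app_id = "MY_APP_ID", app_secret = "MY_APP_SECRET")

# Videos of the (hypothetical) official Eurovision 2019 playlist
items <- get_playlist_items(filter = c(playlist_id = "PLAYLIST_ID"), max_results = 50)
video_ids <- as.character(items$contentDetails.videoId)

# View / like / dislike counts for each video
stats <- lapply(video_ids, get_stats)
views    <- as.numeric(sapply(stats, `[[`, "viewCount"))
likes    <- as.numeric(sapply(stats, `[[`, "likeCount"))
dislikes <- as.numeric(sapply(stats, `[[`, "dislikeCount"))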

Compared with last year, we therefore have one more year of data to train the model (as a reminder, the way points are computed changed in 2016, so earlier editions cannot easily be used).

The model

The main goal is to estimate the score each country will obtain, in order to build the final ranking. The method is still a linear regression on the available variables (number of views, number of “Likes”, number of “Dislikes”).

For the model using only the 2016 and 2017 data, fitted last year, remember that the number of views had no significant effect, the number of Likes only a very minor one, and the number of Dislikes a very clear one, with a positive relationship: the more thumbs down, the higher the score. This atypical result can be explained by the fact that the winning Ukrainian video of 2016 has more than 40,000 thumbs down.

(Update: a first version of this post contained errors in the second model.) The model that also includes the 2018 data (collected a few weeks before the final) changes things slightly. It gives less weight to the number of Dislikes (which can be explained by the fact that the 2018 Russian entry had a very large number of thumbs down but did not reach the final, and therefore did not get a very good result), but now associates the number of views positively with the score (which, surprisingly, was not the case before).

This model, built from more data, is the one we favour; we will nevertheless compare the results obtained by the two models at the end of this article!

The results

Here are the predictions obtained using the model trained on the 2016-2018 data and applied to the 2019 YouTube data:

The big winner would be Malta. They are not among the bookmakers' favourites: see here, for example, or more broadly the betting data here: at the date the data was extracted, 30 April, Malta was ranked 8th.

Here is the video submitted by Malta for Eurovision 2019 (whose artist, Michela Pace, illustrates this post):

The Netherlands come second in our model. They are the current big favourites, with “Arcade”:

And what about France? 10th according to our model, 11th according to the bookmakers: a priori, we will not be the kings of the competition:

But, knowing that the video on Bilal's personal YouTube channel has almost 6 million views, could there be a bias here? To be continued…

Update, 7 May: the model run on the latest YouTube data gives identical results:

And with the other model?

Using the model that includes only the 2016 and 2017 data (and therefore not 2018), we get the following results, with, in blue, the countries whose predicted score would have been higher, and, in red, those for which it would have been lower:

This model favours the Netherlands and sends Malta much further down the ranking; but it relies heavily on the Dislikes, which did not prove relevant in 2018… To be continued as well!

The Mrs. White probability puzzle

tl;dr – I don’t remember how many games of Clue I’ve played, but I do remember being surprised by Mrs White being the murderer in only 2 of those games. Can you give an estimate and an upper bound for the number of games I have played?
We solve this problem by using Bayes theorem and discussing the data generation mechanism, and illustrate the solution with R.

The characters in the original game of Clue. Mrs White is third from the left on the first row (and is now retired!)

Making use of external information with Bayes theorem

Having been raised a frequentist, I first tried to solve this using a max likelihood method, but quickly gave up when I realized how intractable it actually was, especially for the upper bound.
This is a problem where conditioning on external knowledge is key, so the most natural way to tackle it is to use Bayes theorem. This will directly yield an interpretable probability for what we’re looking for (the most probable number of games and an uncertainty interval).
 
Denote by n > 3 the (unknown) number of games played, and by k_Mrs_White = 2 the number of games in which Mrs White was the murderer (our notations for the problem).
 
What we want can be written as a simple product of quantities we can compute, thanks to Bayes theorem:

P(n | k_Mrs_White = 2) ∝ P(k_Mrs_White = 2 | n) × P(n)
Notice that there is a “proportional to” sign instead of an equals sign. This is because the denominator is just a normalization constant, which we can easily take care of after computing the likelihood and the prior.

Likelihood
The likelihood indicates the odds of us observing the data (in this case, that k_Mrs_White = 2) given the value of the unknown parameter (here the number of games played). Since at the beginning of each game the murderer is chosen uniformly at random among the 6 characters, the number of times Mrs White ends up being the culprit can be modeled as a binomial distribution with parameters n and 1/6.
 
This is easily obtained using the dbinom function, which directly gives the exact value of P(X = k) for a binomial distribution with parameters n and p. Let’s first import a few useful functions that I put in our GitHub repo to save some space in this post, and set a few useful parameters:

library(tidyverse)
source("clue/clue_functions.R")

## Parameters
k_mrs_white <- 2 # Number of times Mrs. White was the murderer
prob <- 1/6 # Probability of Mrs. White being the murderer for one game

Note that we can’t exactly obtain the distribution for any number of games from 1 to infinity, so we actually compute the distribution for 1 to 200 games (this doesn’t matter much in practice):

x <- 1:200 # Reduction of the problem to a finite number of games

## Likelihood
dlikelihood <- dbinom(k_mrs_white, x, prob)

Easy enough 🙂
 
Side note: when I was a student, I kept forgetting that the distribution functions existed in R and whenever I needed them I used to re-generate them using the random generation function (rbinom in this case) 🤦‍♂️
Prior

There are a lot of possible choices for the prior, but here I’m going to consider that I don’t have any real reason to favour one number of games over another, and assume a uniform probability for any number of games between 3 and 100:

dprior1 <- dunifdisc(x,3,100)
plot_clue_prior(x, dprior1)

Uniform prior for all numbers of games between 3 and 100
First posterior

Using the likelihood and the prior, we can easily compute the posterior, normalize it and plot it:

dposterior1 <- dlikelihood * dprior1
dposterior1 <- dposterior1 / sum(dposterior1)
plot_clue_posterior(x, dposterior1)

Plot of the first posterior computed
We can also compute directly the estimates we’re looking for. The most probable number of games played is 11:
> which.max(dposterior1)
[1] 11
And there is a 95% chance that the number of games is less than 40:
> threshold_val <- 0.975
> which(cumsum(dposterior1) > (threshold_val))[1]
[1] 40

A more realistic data generation mechanism

I find this result very unsatisfying. It doesn’t “feel” right to me that someone would be surprised by only 2 occurrences of Mrs White being guilty in such a small number of games! For example, I simulated 40 games, a number that was supposedly suspiciously high according to the model:
set.seed(9)
N_sim_games <- 40
sim_murderer <- runifdisc(N_sim_games, 6)

plot_murderer <- ggplot(tibble(x=1:N_sim_games, y=sim_murderer), aes(y, stat(count))) +
  geom_histogram(aes(y =..count..),
                 bins=6, fill="white",colour="black") +
  ylab("Frequency - murderer") +
  xlab("Character #") +
  scale_x_continuous(breaks = 1:6)
print(plot_murderer)
Simulating 40 games of Clue. Character #4 and #5 were the murderer in respectively only 2 and 3 games
We observe that characters #4 and #5 are the murderers in respectively only 2 and 3 games!
 
In the end I think what really counts is not the likelihood that *Mrs White* was the murderer 2 times, but that the *minimum* number of times one of the characters was the culprit was 2 times!
 
I think it’s a cool example of a problem where just looking at the data available is not enough to make good inference – you also have to think about *how* the data was generated (in a way, it’s sort of a twist of the Monty Hall paradox, which is one of the most famous examples of problems where the data generation mechanism is critical to understand the result).
 
I wrote a quick and dirty function based on simulations to generate this likelihood, given a certain number of games. I saved the resulting distribution directly in the GitHub repo (and called it Gumbel since it kinda looks like an extreme value problem) so that we can load it and do the same thing as above:

gumbelclue_2 <- readRDS("clue/dcluegumbel_2.rds")
gumbelclue_2 <- gumbelclue_2[x]

dposterior_gen <- gumbelclue_2 * dprior1
dposterior_gen <- dposterior_gen / sum(dposterior_gen)

plot_clue_posterior(x, dposterior_gen)
Posterior with the new data generation mechanism
The new posterior has the same shape but appears shifted to the right. For example, N_games = 50 seems much more likely now! The most probable number of games is now 23:
> which.max(dposterior_gen)
[1] 23
And the upper bound of the uncertainty interval is now 51:
> threshold_val <- 0.975
> which(cumsum(dposterior_gen) > (threshold_val))[1]
[1] 51
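
If you would rather regenerate this likelihood than load the saved file, a quick simulation sketch could look like this (the function below is my own illustration, not the exact code used to produce dcluegumbel_2.rds):

# For each candidate number of games n, estimate the probability that the
# *minimum* count over the 6 characters is exactly k
min_count_likelihood <- function(n, k = 2, n_sim = 1e4) {
  sims <- replicate(n_sim, {
    murderers <- sample(1:6, n, replace = TRUE)
    min(tabulate(murderers, nbins = 6))
  })
  mean(sims == k)
}

dlikelihood_min <- sapply(x, min_count_likelihood)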

Credits for title image: Yeonsang

Ranking places with Google to create maps

Today we’re going to use the googleway R package, which lets its users make requests to the Google Maps Places API. The goal is to create maps of specific places (restaurants, museums, etc.) with information from Google Maps rankings (the number of stars given by other people). I already discussed this in French here, to rank swimming pools in Paris. Let’s start by loading the three libraries I’m going to use: googleway, leaflet to create interactive maps, and RColorBrewer for color ranges.

library(googleway)
library(leaflet)
library(RColorBrewer)

First things first. To make API requests to Google, we need an API key; you can ask for one here. We’ll use this key for the rest of the program, so let’s declare it as a global variable:

api_key <- "YourKeyHereIWontGiveYouMine"

Retrieving Google Data

We’re going to use the google_places function to get a list of places matching a description, stored in the variable research in my program (for instance: “Restaurant, Paris, France”). The output has several fields; I’m going to store the place ID and the rating. I’ll also store the search token; that will be explained later.

gmaps_request <- google_places(search_string = research, language = language, key = api_key)
gmaps_data <- gmaps_request$results

place_id <- gmaps_data$place_id
rating <- gmaps_data$rating

token <- gmaps_request$next_page_token

This function returns up to 20 places associated with the search by Google. If you want more than 20, you need to use the token previously stored in order to ask the Google Places API for the next results, by tweaking the call this way:

gmaps_request <- google_places(search_string = research, language = language, key = api_key, page_token = token)

There are two caveats to this function. Firstly, the token can be NULL; in this case, there are no further results to fetch, which automatically happens once you reach 60 results. Secondly, the API needs time to refresh the token (see here); that’s why we make R wait a few seconds, using Sys.sleep(time), between our requests. Our complete retrieval code is therefore:

gmaps_request <- google_places(search_string = research, language = language, key = api_key)
gmaps_data <- gmaps_request$results

place_id <- gmaps_data$place_id
rating <- gmaps_data$rating

token <- gmaps_request$next_page_token
Sys.sleep(5)

# Keep requesting the next page of results as long as a token is returned
while (!is.null(token)) {
  gmaps_request <- google_places(search_string = research, language = language, key = api_key, page_token = token)
  gmaps_data <- gmaps_request$results
  
  place_id <- c(place_id, gmaps_data$place_id)
  rating <- c(rating, gmaps_data$rating)
  
  token <- gmaps_request$next_page_token
  Sys.sleep(5)
}

Now we’re going to look up the spatial coordinates of the places we found. To this end, we’re going to use the google_place_details function of the package, and retrieve latitude and longitude with these two functions:

get_lat <- function(id, key, language) {
  id <- as.character(id)
  details <- google_place_details(id, language = language, key=key)
  return(details$result$geometry$location$lat)
}

get_lng <- function(id, key, language) { 
  id <- as.character(id)
  details <- google_place_details(id, language = language, key=key)
  return(details$result$geometry$location$lng) 
}

All these blocks add up to the complete function:

get_gmaps_data <- function(research, api_key, language) {
  
  gmaps_request <- google_places(search_string = research, language = language, key = api_key)
  gmaps_data <- gmaps_request$results
  
  place_id <- gmaps_data$place_id
  rating <- gmaps_data$rating
  
  token <- gmaps_request$next_page_token
  Sys.sleep(5)
  
  # Keep requesting the next page of results as long as a token is returned
  while (!is.null(token)) {
    gmaps_request <- google_places(search_string = research, language = language, key = api_key, page_token = token)
    gmaps_data <- gmaps_request$results
    
    place_id <- c(place_id, gmaps_data$place_id)
    rating <- c(rating, gmaps_data$rating)
    
    token <- gmaps_request$next_page_token
    Sys.sleep(5)
  }
  
  lat <- sapply(place_id, get_lat, key = api_key, language = language)
  lng <- sapply(place_id, get_lng, key = api_key, language = language)
  
  return(data.frame(place_id, rating, lat, lng))
}

Map plot

The next part is more classical. We’re going to rank the ratings in the data frame built by the previous function in order to split the places into different groups. Each group will be associated with a color on the map. If we want number_colors groups with the color scheme color (for instance, “Greens”), we use the following instructions:

color_pal <- brewer.pal(number_colors, color)
pal <- colorFactor(color_pal, domain = seq(1, number_colors))

plot_data <- gmaps_data
plot_data$ranking <- ceiling(rank(plot_data$rating) * number_colors / nrow(plot_data))

The final function just needs the addition of the leaflet call:

show_map <- function(number_colors, gmaps_data, color = "Greens") {
  
  color_pal <- brewer.pal(number_colors, color)
  pal <- colorFactor(color_pal, domain = seq(1, number_colors))
  
  plot_data <- gmaps_data
  plot_data$ranking <- ceiling(rank(plot_data$rating) * number_colors / nrow(plot_data))
  
  leaflet(plot_data) %>%
    addTiles() %>%
    addCircleMarkers(
      radius = 6,
      fillColor = ~pal(ranking),
      stroke = FALSE, fillOpacity = 1
    ) %>%
    addProviderTiles(providers$CartoDB.Positron)
}

Examples

I just need to combine these two functions into one, and then run some food-related examples!

maps_ranking_from_gmaps <- function(research, api_key, language, number_colors=5, color="Greens") {
  show_map(number_colors, get_gmaps_data(research, api_key, language), color)
}

maps_ranking_from_gmaps("Macaron, Paris, France", api_key, "fr")
maps_ranking_from_gmaps("Macaron, Montreal, Canada", api_key, "fr")

maps_ranking_from_gmaps("Poutine, Montreal, Canada", api_key, "fr", 5, "Blues")
maps_ranking_from_gmaps("Poutine, Paris, France", api_key, "fr", 5, "Blues")

which returns the following maps:

Macaron in Paris, France

Macaron in Montreal, Canada

Poutine in Montreal, Canada

Poutine in Paris, France (I guess French people are not ready for this)