Tuesday, 11 August 2020

Cleaning up the tail

I saw an interesting post on a Facebook cricket group recently, where a Pakistani fan said that they thought that Pakistan were the worst team at cleaning up the tail in world cricket. A bunch of Indian fans jumped in saying that India was, in fact, the worst. Then some English fans decided that England was actually the worst at cleaning up the tail. 

It led me to run a small poll, and I found that roughly 2/3 of respondents felt that their team was the worst at cleaning up the tail. Most who commented were adamant that not only was their team the worst at it, they were the worst by some margin.

There seemed to be a general cricket-fan type one error. A type one error is seeing a pattern that does not exist (or, more generally, coming to an incorrect conclusion based on evidence that seems conclusive but is not). Perhaps this is because when we watch our own team struggle to clean up the tail it takes a long time, while a team cleaning up the tail efficiently takes much less time, and so occupies less of our memory. Or perhaps it is just because cricket teaches us to think negatively. Mark Richardson even wrote a whole book about the power of negative thinking in cricket.

That led me to a question. What team is actually the worst?

Tuesday, 4 August 2020

Changes in test performance

I had a go at using animation to visualise the changes in teams' performances in tests over time.

I really enjoyed making this - I hope you enjoy watching it!


Tuesday, 28 July 2020

T20i batsmen charts updated 2017-2020

A while ago I put together some charts of batsmen's scoring rates and how they scored their runs in various formats.

I thought that it would be time to look at those again.

The two rates that I mention are as follows: scoring rate is the proportion of balls scored off, and boundary rate is the proportion of balls hit for a 4 or a 6.

In previous graphs I used activity rate (which gave a bonus for players who ran twos and threes more often). I decided against that this time, as most other analysts tend to use scoring rate, and I'm seeing some value in consistency.

I've also coloured by the balls per dismissal.
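For anyone wanting to reproduce these rates, here is a minimal sketch (my own illustration, not the code behind the charts) computing them from a list of per-ball run outcomes:

```python
# Illustrative sketch: the two rates defined above, computed from a
# hypothetical list of runs scored off each ball faced.
def scoring_rate(balls):
    """Proportion of balls scored off (any run, including boundaries)."""
    return sum(1 for r in balls if r > 0) / len(balls)

def boundary_rate(balls):
    """Proportion of balls hit for a 4 or a 6."""
    return sum(1 for r in balls if r in (4, 6)) / len(balls)

# Hypothetical 10-ball sequence: dot, single, four, dot, two, six, dot, dot, single, four
balls = [0, 1, 4, 0, 2, 6, 0, 0, 1, 4]
print(scoring_rate(balls))   # 0.6 - scored off 6 of the 10 balls
print(boundary_rate(balls))  # 0.3 - 3 boundaries in 10 balls
```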

I've started with the overall results.


The issue with this is that different times in the innings call for different levels of risk.

So I've also broken it down based on the time. I had to put very low limits on the balls faced at the death to have enough batsmen to actually put together a chart, so there's considerable room for sampling error in the proportions with some of the samples as low as 78 balls.
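To put a rough number on that sampling error, here's a quick back-of-the-envelope check (my own sketch, using the standard normal approximation for a proportion) of the 95% margin of error on a rate measured over just 78 balls:

```python
import math

# Approximate 95% margin of error for an observed proportion p over n balls,
# using the normal approximation to the binomial.
def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# A boundary rate of 20% measured over only 78 balls carries a margin of
# error of roughly +/- 9 percentage points.
print(round(margin_of_error(0.20, 78), 3))  # 0.089
```

In other words, a measured 20% boundary rate on a 78-ball sample is consistent with a true rate anywhere from about 11% to 29%.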


Tuesday, 21 July 2020

When to declare?

One of the unique things about test cricket is the prospect of declarations.

While there are situations in motor racing where a driver might go slow to conserve fuel, or in sailing where a racer might give up the lead to reach a more favourable position, there's not really any other sport where a team or individual can opt to stop scoring to ensure that they get a win.

The prospect of a draw encourages positive play from the team that is on top, and allows an out for teams who are losing.

Deciding when to declare also provides interesting talking points for fans and commentators alike, and the decisions are much easier in hindsight.

However, it's essentially a statistics problem. There are two variables, and a whole lot of historical data. The key variables are the overs left at the start of the innings and the target to win. 

While there are other issues (the "throw out a carrot" theory - if the target is close enough, teams will take more risks) and no two teams are the same, we can build a model based on that data and use that to predict the chances of winning, based on when a team declares.

I've built a very basic model, based on the 130 most recent matches where the target was under 400.
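The post doesn't publish the model itself, so as an illustration only, here is a toy sketch of the general shape such a model might take: a softmax over linear scores in the two key variables (overs left and target). The coefficients below are invented for demonstration and would need to be fitted to the historical data to be meaningful.

```python
import math

# Toy sketch of a three-outcome declaration model. The coefficients are
# made up for illustration; a real model would fit them to match data.
def outcome_probs(overs_left, target):
    """Return probabilities for (chasing-side win, draw, declaring-side win)."""
    required_rate = target / overs_left
    # Invented linear scores: a chase gets harder as the required rate
    # rises, and a draw gets likelier as the overs remaining shrink.
    scores = {
        "chase":   2.0 - 1.0 * required_rate,
        "draw":    1.5 - 0.02 * overs_left,
        "declare": 0.5 * required_rate - 0.5,
    }
    exps = {k: math.exp(v) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

# The scenario discussed below: a target of 312 in 85 overs.
probs = outcome_probs(overs_left=85, target=312)
print(probs)  # three probabilities summing to 1
```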

Given that data, the optimum declaration point changes based on the runs per over that the team scored.

In the end they set West Indies a target of 312 in 85 overs, which was too much, and the model suggested that it would be. It output probabilities of 9.8% for a West Indies victory, 27.5% for a draw and 62.6% for an England win (all values rounded, which is why they don't add to exactly 100%). The most likely outcome is what happened.

Part of the reason England had such a good chance of success was that they scored so quickly: 92 runs in 11 overs. However, declaring a couple of overs earlier might have done even better - the model gives them a 64.4% chance of winning in that case.

I put together a graph showing the impact of the scoring rate on the chance of winning and the optimum time to declare. It's often said that strike rate isn't relevant in test matches, but this match showed the value of scoring quickly in tests.


Tuesday, 14 July 2020

Good years as an all rounder

Over the last three years, Jason Holder has produced some incredible numbers. His bowling stats are like something out of the 1800s, and he has averaged over 40 with the bat at the same time.

It made me wonder about how well he fitted in compared to other all rounders from history.

Hot on the heels of my first ever animated graph last week, I have another one today, showing every player who had at least five three-year periods where they were in the top 23% of run scorers and wicket takers, and where they had a batting average over 17. (That last condition was necessary due to a period in the 1880s where the batsmen were rotated regularly, resulting in the top run scorers including some bowlers who averaged below 10 with the bat.)

Tuesday, 7 July 2020

Towards more useful metrics for test bowling - part 2 - defensiveness

In the last article I introduced a replacement for average - wickets per hundred runs. 

This is more intuitive than bowling average for three reasons. Firstly it means that the higher the number, the better the performance. 

Secondly, it puts the rarer event (a wicket) as the numerator, rather than the denominator. In mathematical terms, changing the numerator has a smaller impact than changing the denominator, so the numbers are less prone to massive swings after a few matches.

Thirdly it can allow a quick estimate of average over a time period by just looking at series averages.

I was challenged on that final point, so I ran some simulations with 2000 pretend bowlers, each with 25 series, giving them series averages based on the distribution of averages from the last 600 completed series by bowlers. The graphs below show the result of estimating each bowler's true average two ways: by basing it on wickets per hundred runs, and by taking the mean of the series averages. The top graph shows that the wickets-per-hundred-runs estimates were generally reasonable, although they tended to overstate a bowler's ability, while the bottom graph shows that the mean of the series averages often produced a result that did not really resemble the bowler's actual average, and was always worse.
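A small deterministic example (my own, simpler than the simulation described above) shows why the mean of series averages misleads, while aggregating wickets per hundred runs stays closer to the true career figure:

```python
# Two hypothetical series for one bowler.
series = [
    {"runs": 100, "wickets": 5},  # series average 20.0
    {"runs": 60,  "wickets": 1},  # series average 60.0
]

# True career average: total runs conceded per total wickets.
true_average = sum(s["runs"] for s in series) / sum(s["wickets"] for s in series)

# Naive approach: average the two series averages.
mean_of_averages = sum(s["runs"] / s["wickets"] for s in series) / len(series)

# Alternative: average wickets per hundred runs, then convert back.
mean_wp100 = sum(100 * s["wickets"] / s["runs"] for s in series) / len(series)
implied_average = 100 / mean_wp100

print(round(true_average, 2))  # 26.67 - the true career figure
print(mean_of_averages)        # 40.0  - badly overstates the average
print(implied_average)         # 30.0  - noticeably closer to the truth
```

The one big series with many wickets dominates the true average, but the naive mean gives it equal weight with the poor series.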


The next statistic that I wanted to get was one that talked about the style of bowler that they were.

I tried a number of formulations to get this, but I had three main criteria:

1. It had to show that Darren Gough, Waqar Younis and Malcolm Marshall were attacking bowlers, while the likes of Morne Morkel, Lance Gibbs and Ewen Chatfield needed to come out as defensive. 

2. It had to be able to be worked out with a cellphone calculator - nothing like finding eigenvalues, z-scaling or any complicated calculations with logs or exponents. One of the beauties of the bowling average is that a player with reasonable mental arithmetic skills can calculate a reasonable estimate of it in their head.

3. It had to separate the players based on style not ability. I wanted to find a metric that distinguished the approach of the bowler, not just how successful it was. That's impossible to do without using ball by ball data (and still very difficult to do with ball by ball data) but it is possible to get as close as possible.

The formulation that I ended up using was: balls per run, minus wickets per hundred balls, minus 0.5.

I subtracted 0.5 because when I subtracted those two numbers I got a range (from the top wicket-takers) of -0.7 to 2.6, with a median at roughly 0.5. By subtracting 0.5 it meant that the normal player was at 0, and the number represented how far away from normal/average they were.
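As a worked example (with illustrative career figures, not any real bowler's), the metric can be computed with nothing more than a cellphone calculator:

```python
# The defensiveness metric described above:
# balls per run, minus wickets per hundred balls, minus 0.5.
def defensiveness(runs, balls, wickets):
    balls_per_run = balls / runs
    wickets_per_100_balls = 100 * wickets / balls
    return balls_per_run - wickets_per_100_balls - 0.5

# A hypothetical bowler conceding 3 runs per over (economy 3.0) with a
# strike rate of 60: 6000 balls, 3000 runs, 100 wickets.
# 2.0 - 1.667 - 0.5 = -0.167, i.e. slightly on the attacking side of normal.
print(round(defensiveness(runs=3000, balls=6000, wickets=100), 3))  # -0.167
```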

This stat really comes into its own when plotted against wickets per hundred runs.


For this graph, I've coloured the points based on the strike rate (balls per wicket) with bright red being a strike rate over 100 and bright green being a strike rate under 50. That gives an indication using more familiar metrics. The colour gradient indicates how strike rate is a measure of both style (attacking or defensive) and effectiveness. 

An obvious distinction to look at is the type of bowler in terms of pace. Just using the basic separator of spin/pace gives this graph:


While there's a reasonable overlap, there's a clear difference. The spin bowlers tended to be more defensive and also be less effective in general, in terms of wickets per 100 runs.

That can be shown better on these box and whisker graphs:


(I've removed Sobers, Greig and Johnston because they bowled a mixture of pace and spin, and 3 points does not make a sensible box and whisker graph.)

It's worth remembering here that these are only the 183 bowlers who have taken the most wickets since 1945. This only represents bowlers who were good enough, durable enough, and who played for teams that had enough matches scheduled to make it into this group. The overall numbers for all players will be different.

That issue feeds into the next set of graphs - looking at the players' eras. I've grouped the players by the decade that their middle year was in. So a player who played from 1999 to 2018 (for example, Rangana Herath) is counted as a 2000s player, while Shane Warne (1992-2007) is counted as a 1990s player. This is not a perfect way of grouping players, but it gives a reasonable indication of their era.
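The grouping rule is simple enough to express in a couple of lines (a sketch of the rule as I read it, using integer midpoint and truncation to the decade):

```python
# Assign a player to an era by the decade of the midpoint of their career.
def era_decade(first_year, last_year):
    middle = (first_year + last_year) // 2
    return (middle // 10) * 10

print(era_decade(1999, 2018))  # 2000 - Rangana Herath counts as a 2000s player
print(era_decade(1992, 2007))  # 1990 - Shane Warne counts as a 1990s player
```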


There are roughly three times as many players in the chart from the 2010s as there are from the 1960s. This is more an indication of the greater proliferation of test matches played than a comment on the ability of the players. Very few players from before 1970 played more than 40 matches, so to make this list from that era most players had to take about 3.5 wickets per test. Almost three times as many players have played that many tests since 2000, so a number of players have been able to make the list without being standout bowlers in their generation. As an example, Johnny Wardle played for 9 years and dominated almost every team he played against, yet ended up with fewer test wickets than Paul Harris, who played for 4 years and never truly established himself in his role.

As a result, we would expect the groups from the earlier eras to have taken more wickets per 100 runs, because they represent the top 5%, as opposed to the top 12%. That certainly shows for the 1950s, but it is not as evident for other decades.


The key difference is how much more attacking the bowling is (at least in terms of results - it could be argued that the batting is more reckless).

Basically the bowlers are getting similar figures, but they're getting them in fewer overs. This might be the impact of one day cricket (note the drop in the 1970's when ODIs started) but it's possible that the changes in pitch preparation techniques have also had a lot to do with it.

Finally, I thought I'd have a go at producing an animated gif showing where all the players on this list sit over time.


There's a whole heap more that I'd love to explore with this, so stay tuned this time next week.