The easiest fix is filtering out games with fewer than a certain number of ratings.
Notably, those are all very recent games with relatively few ratings.[^jotl] Some might consider this a feature, not a bug, but when your intention is to create a list of the best board games, you probably do want to give a nod to proven classics, and not just the latest hotness. How to balance these two ends of the spectrum is ultimately a choice you have to make, and no matter what it is, People on the Internet™ will not like it.
The way both IMDb and BGG chose to tackle this issue is by essentially not trusting the ratings – at least not too much. The method boils down to assigning a new item in the database (be it movie or game) a predefined average by default, and only gradually trusting the ratings' average as thousands and thousands of users have cast their votes. More concretely, the rankings are calculated by adding a number of dummy ratings with a chosen average value, say \\(5.5\\), to each game's regular ratings. The result is that initially each game will have a score close to \\(5.5\\), but as more users rate the game, that score will move closer and closer to the conventional mean of the actual ratings.
BGG calls this their **geek score**. Mathematically speaking, it is a *Bayesian average*, and is calculated as follows:
\\[ \textrm{geek score} = \frac{\textrm{sum of ratings} + \textrm{number of dummies} \cdot \textrm{dummy value}}{\textrm{number of ratings} + \textrm{number of dummies}}, \\]
where \\(\textrm{sum of ratings}\\) can be calculated either by, well, summing up all ratings or via \\(\textrm{number of ratings} \cdot \textrm{average rating}\\). Don't worry too much about the details though – *adding dummy ratings* is really all you need to understand.
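To make this concrete, here's a minimal sketch in Python. The function name and the default of \\(1500\\) dummies are mine, purely for illustration – we'll hunt for the real number below:

```python
def bayesian_average(ratings, num_dummies=1500, dummy_value=5.5):
    """Average of the actual ratings plus `num_dummies` dummy
    ratings, each worth `dummy_value`."""
    return (sum(ratings) + num_dummies * dummy_value) / (len(ratings) + num_dummies)

# A handful of enthusiastic ratings barely moves the needle...
print(bayesian_average([9, 10, 8]))        # ~5.51, close to the dummy value
# ...but a flood of real ratings drowns out the dummies.
print(bayesian_average([8] * 100_000))     # ~7.96, close to the true average
```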
OK, so that's the concept, but the crucial details are still missing. You still need to choose *how many* dummy ratings you want to add and *what value* they should take. Since People on the Internet™ who disagree with your ranking will try to manipulate it in whatever way they can, sites are usually very cagey about said details. [IMDb used to be more transparent](https://en.wikipedia.org/wiki/IMDb#Rankings), [as was BGG](https://www.boardgamegeek.com/thread/103639/new-game-ranking-system), but now we have to dig a little deeper.
Let's start with the easier of the two: the value of the dummy ratings. It is commonly chosen to represent some *prior mean*, i.e., some decent estimate of the rating a new game in the database would have. A frequent choice is the average rating across *all* games. It's a fair assumption – without further information about a game, we don't know if it's any better or worse than the average game. However, Scott Alden actually gave away the answer in that interview from the beginning: BGG chose the dummy value to be **\\(5.5\\)**. Their rationale is that ratings range from \\(1\\) through \\(10\\), so \\(5.5\\) is the midpoint. Of course, people tend to play and rate the games they like, and so the average rating is around \\(7\\). Opting for the lower value here is part of the design of the ranking: it means a new game enters the ranking near the back of the pack. Using the mean as the dummy value, on the other hand, places a new game more or less in the middle. It is worth mentioning that IMDb does use the mean (or at least used to), but they only ever publish the top 250 movies, and don't care about the crowd behind them.
The other value, the *number* of dummy ratings, requires more work. Because some of the details and data are unknown, we cannot actually pin down the exact number that BGG is using. Instead, we'll try three different approaches, and compare their results.
# Formula
On the surface, this should be super easy to solve: in the formula above, we know every single value but the number of dummy ratings. BGG publishes the number of ratings, their arithmetic mean, and the "geek score" or Bayesian average for every game, and we know that the dummy value is \\(5.5\\). With a little high school algebra we solve the formula for *number of dummies*:
\\[ \textrm{number of dummies} = \textrm{number of ratings} \cdot \frac{\textrm{average rating} - \textrm{geek score}}{\textrm{geek score} - \textrm{dummy value}} \\]
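Translated into code, solving for the number of dummies is a one-liner. Here's a sketch – the example numbers at the end are entirely made up:

```python
def implied_num_dummies(num_ratings, average_rating, geek_score, dummy_value=5.5):
    """Solve the Bayesian average formula for the number of dummy ratings."""
    return num_ratings * (average_rating - geek_score) / (geek_score - dummy_value)

# A hypothetical game: 50,000 ratings, average 8.0, geek score 7.9.
print(implied_num_dummies(50_000, 8.0, 7.9))  # ~2083 implied dummy ratings
```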
Now we should be able to plug in those values for any given game.
So, there are about \\(1830\\) dummy ratings, end of story. Right? Unfortunately, not quite. When computing this formula for different games, the results vary *wildly*, as you can see from this histogram of the same calculation applied to other games:
{{< img src="num_dummies_hist" alt="Histogram over the number of dummy ratings calculated by explicit formula" >}}
And this plot is even cropped: the results range from \\(-1.4\\) million to \\(+810\\) thousand, though some \\(90\%\\) lie within the above range, with a mean of around \\(1604\\) and a median of around \\(1590\\).
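For reference, here's roughly how such a histogram comes about, reusing the `implied_num_dummies` function from above. The file and column names (`bgg_games.csv`, `num_ratings`, `avg_rating`, `geek_score`) are assumptions about how one might store the scraped BGG data:

```python
import pandas as pd

# One row per game, with the three values BGG publishes for each of them.
games = pd.read_csv("bgg_games.csv")

# The formula is pure arithmetic, so it vectorises over whole columns.
games["num_dummies"] = implied_num_dummies(
    games["num_ratings"], games["avg_rating"], games["geek_score"]
)
print(games["num_dummies"].describe())  # mean, median, spread across games
games["num_dummies"].hist(bins=100)     # roughly the plot above
```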
Trying out a whole range of candidate numbers and checking how well the resulting ranking correlates with BGG's, the best correlation of around \\(0.996\\) is achieved with **\\(1488\\) dummy ratings**.
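In case you want to replicate this, a brute-force search over candidates could look like the sketch below, reusing the `games` table from above. Spearman rank correlation is my assumption here; any measure of rank agreement would do:

```python
import numpy as np
from scipy.stats import spearmanr

def recomputed_scores(num_dummies, games, dummy_value=5.5):
    """Bayesian averages as we would calculate them with the given parameters."""
    return (
        games["num_ratings"] * games["avg_rating"] + num_dummies * dummy_value
    ) / (games["num_ratings"] + num_dummies)

def rank_correlation(num_dummies, games):
    """How well does our recomputed ranking agree with BGG's geek scores?"""
    rho, _ = spearmanr(recomputed_scores(num_dummies, games), games["geek_score"])
    return rho

# Try every tenth value between 100 and 5000 dummies and keep the best.
candidates = np.arange(100, 5_000, 10)
best = max(candidates, key=lambda n: rank_correlation(n, games))
print(best, rank_correlation(best, games))
```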
# Optimisation
What we have here at hand is actually a classic optimisation task: a real-valued function in one unknown (or two, if we allow a variable dummy value as well) which we'd like to maximise. This is a well-studied field, with many fast and simple implementations that give us the solution in no time. Unsurprisingly, we get the same result as above: the best possible correlation is \\(0.996\\) with around **\\(1488\\) dummy ratings**.
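With SciPy, that one-dimensional maximisation is a few lines, reusing the helpers from the search sketch above. The bounds are a guess, and note that a rank correlation is a step function of the number of dummies, so a scalar optimiser may prefer a smoother objective – treat this strictly as a sketch:

```python
from scipy.optimize import minimize_scalar

# SciPy minimises, so we hand it the *negative* correlation.
result = minimize_scalar(
    lambda n: -rank_correlation(n, games),
    bounds=(1, 10_000),
    method="bounded",
)
print(result.x)  # number of dummy ratings with the best correlation
```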
But since we made it this far, let's take it one step further. So far, we tried to optimise the correlation in order to recreate BGG's ranking. However, we can also try to recreate the actual *geek scores*. That is, we can look for the number of dummy ratings that brings our calculated scores closest to the actual geek scores. What exactly we mean by "closest" is up to us to define. A common metric is the *mean squared error*.[^root] It's not worth getting into the maths here either, but the general idea is that we want to punish outliers in our estimates more (quadratically so) the further away they lie from the actual data point. Long story short, this yields a minimum for around **\\(1636\\) dummy ratings**.
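The squared error version only swaps out the objective; a sketch, again reusing `recomputed_scores` and `games` from above:

```python
def mean_squared_error(num_dummies, games):
    """Average squared gap between our recomputed scores and BGG's geek scores."""
    errors = recomputed_scores(num_dummies, games) - games["geek_score"]
    return (errors ** 2).mean()

result = minimize_scalar(mean_squared_error, bounds=(1, 10_000), args=(games,), method="bounded")
print(result.x)  # number of dummy ratings with the smallest squared error
```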
Let's take one last swing and see what happens if we don't fix the dummy value at \\(5.5\\), but instead optimise both the number and the value of the dummy ratings. This yields:
* the best correlation with **\\(1942\\) dummy ratings of \\(5.554\\)**, and
* the least squared error with **\\(1616\\) dummy ratings of \\(5.494\\)**.
Either of those improvements in the performance metrics is hardly noticeable (in fact, invisible after rounding), but they do nicely confirm a dummy value of \\(5.5\\).
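For completeness, here's what that two-parameter optimisation could look like, once more reusing the helpers from above; the starting point is just our best one-parameter guess:

```python
from scipy.optimize import minimize

def mean_squared_error_2d(params, games):
    """Same squared error, but with both parameters free."""
    num_dummies, dummy_value = params
    errors = recomputed_scores(num_dummies, games, dummy_value) - games["geek_score"]
    return (errors ** 2).mean()

result = minimize(mean_squared_error_2d, x0=[1_600, 5.5], args=(games,), method="Nelder-Mead")
print(result.x)  # (number of dummies, dummy value)
```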
# Conclusion
Just like IMDb publishes only their top 250 movies, we can consider the same and look only at the very top of the list:
9. {{% game 31260 %}}Agricola{{% /game %}}
10. {{% game 120677 %}}Terra Mystica{{% /game %}}
The most recent release on this list is {{% game 174430 %}}Gloomhaven{{% /game %}}, but we also meet old BGG #1s {{% game 3076 %}}Puerto Rico{{% /game %}} and {{% game 31260 %}}Agricola{{% /game %}} again.
## Combining both!
The effects of more, but higher-valued dummy ratings seem to almost cancel each other out.
[^min-votes]: Throughout this article I only considered games with at least \\(100\\) ratings, mostly to ensure that the very long tail of games with few ratings won't unduly skew the results. However, most of the calculations would only change by some negligible decimals when including all games.
[^jotl]: {{% game 291457 %}}Jaws of the Lion{{% /game %}} is something of an exception here and will undoubtedly shoot into the BGG top 10 very soon. In fact, it might be the only game with the potential to unseat {{% game 174430 %}}Gloomhaven{{% /game %}} as the number one.
[^root]: It's probably even more common to use the *root* mean squared error, but for boring mathematical reasons, it doesn't make a difference when it comes to optimisation. In fact, we could even drop the word *mean* from our metric and still obtain the same optimal point, so let's not dwell on this.