
A new study finds just how inflated video game review scores are

More than 16,000 texts were re-reviewed by a machine learning algorithm.


It's commonly believed that scores in video game reviews are somewhat inflated and that, on a scale of 1-10, the midpoint is not 5 but closer to 7. As a result, readers have had to recalibrate their expectations to work out what counts as a good, bad, mediocre, excellent or average game. Now a study conducted in Spain by Gamereactor's former editor Sergio Figueroa sheds light on the issue and offers the chance to put some of the writers you know to the test.

The research gathered data from more than 16,000 reviews published by five major Spanish outlets (including Gamereactor), recording platform(s), developer, genre(s), author and, of course, the human score. Then an NLP (Natural Language Processing) algorithm read the same texts and scored them according to the sentiment they convey to the reader. Do the words match the scores?
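
For readers curious about how a sentiment-to-score mapping of that kind might look in practice, here is a minimal sketch in Python. The study's actual model is not named in this article, so VADER (an English-only analyser) stands in purely for illustration; real Spanish review texts would need a Spanish-capable model, and the 1-10 rescaling shown here is an assumption, not the study's method.

    # Minimal sketch: map a review's sentiment onto a 1-10 score.
    # VADER is only a stand-in for whatever model the study actually used.
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)  # lexicon VADER needs

    def text_to_score(review_text: str) -> float:
        """Rescale VADER's compound sentiment ([-1, 1]) to a 1-10 review score."""
        sia = SentimentIntensityAnalyzer()
        compound = sia.polarity_scores(review_text)["compound"]
        return round(1 + (compound + 1) / 2 * 9, 1)

    print(text_to_score("A brilliant, polished adventure with only minor flaws."))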

The first conclusion is that, as expected, average scores sit above 7 in both the human and the algorithmic reviews. But there is also a deviation at every multiplatform website, with human scores inflated by between 3.7% and 5.4%. The analysis goes on to isolate different factors, as you can see in the interactive visualisation online (TCC). The second conclusion revolves around how human scores tend towards the extremes, and what that means for each company.
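
The article doesn't spell out exactly how those inflation percentages are calculated, but one plausible reading is the relative gap between an outlet's average human score and its average algorithmic score, sketched below with made-up averages (the 7.6 and 7.2 are illustrative only):

    # Hypothetical reading of the "inflation" figure: relative gap between
    # the average human score and the average algorithmic score for one outlet.
    def inflation_pct(human_avg: float, algo_avg: float) -> float:
        return (human_avg - algo_avg) / algo_avg * 100

    print(round(inflation_pct(7.6, 7.2), 1))  # ~5.6% for these invented averages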

A fun feature of the web app lets you look up any writer included in the study and check their average score and how it fares against the algorithm's.
