Forget About Ratings

Whenever an item is described by very few keywords, we need another way to collect information about it. The easiest approach is usually to visit some movie website/database and look up the movie's average rating. However, as noted several times earlier, ratings are not calibrated, so the mean is only a very rough indicator. For instance, a sequel that recently came to cinemas received very mixed reviews. We visited different sites, with very different results. Two sites where the ratings come from ‘professional’ reviewers looked like this: 1.5/5 and 4.2/10. If we normalize the values to [0, 1], we get 0.3 and 0.42, which are close enough to say that the tendency of both is negative. Sites where the ratings come from users paint a different picture: 3.2/5 and 6.2/10, which means 0.64 and 0.62 after normalization. The user ratings are not stunning either, but far more positive than those of the reviewers. And as noted earlier, it is very likely that users and reviewers apply different criteria when rating movies. In the end, however, a user needs a way to decide how to treat and utilize those ratings.
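
To make the comparison concrete, here is a minimal sketch of the normalization used above. It simply divides each rating by the maximum of its scale (the numbers are the examples from the text); a min-max variant that also subtracts the scale minimum would work as well.

```python
def normalize(rating: float, scale_max: float) -> float:
    """Map a rating onto [0, 1] by dividing by the scale maximum."""
    return rating / scale_max

# Reviewer scores from the two review sites mentioned above
print(normalize(1.5, 5), normalize(4.2, 10))   # 0.3, 0.42 -> both lean negative
# User scores from the two community sites
print(normalize(3.2, 5), normalize(6.2, 10))   # 0.64, 0.62 -> mildly positive
```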

A closer look at the user ratings revealed that about 76% rated the movie with 6 stars or higher, and only 12% gave a rating below 5 stars. The fact that the final user ratings from different sites roughly agree indicates a positive consensus among users who watched the movie. In the end, it is the old story again: it seems that most users who were in doubt about the movie did not rate/see it, while those who rated/saw it already had a slightly positive bias. In other words, for an unbiased user who knows nothing about the movie and just relies on the mean of the user ratings, the mean values might not be very useful.
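
This self-selection effect is easy to demonstrate. The following sketch uses purely hypothetical numbers: every user forms a ‘true’ opinion on a 1-10 scale, but the probability of actually submitting a rating grows with enthusiasm, so the observed mean ends up noticeably above the true mean.

```python
import random

random.seed(0)

# Every user forms a "true" opinion on a 1-10 scale (hypothetical distribution).
true_opinions = [min(10.0, max(1.0, random.gauss(5.5, 2.0))) for _ in range(10_000)]

def submits_rating(opinion: float) -> bool:
    # Assumption: the more a user liked the movie, the more likely they rate it.
    return random.random() < opinion / 10.0

observed = [o for o in true_opinions if submits_rating(o)]

print(f"true mean:     {sum(true_opinions) / len(true_opinions):.2f}")
print(f"observed mean: {sum(observed) / len(observed):.2f}")  # noticeably higher
```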

This brings us to the question of whether disappointed users tend not to rate movies they have seen, at least not if too many steps are required for the actual rating. By the latter, we mean that a user has to go to a website, log in, find the movie, rate it and leave. Of course this could be done with an app, but even then each step takes noticeable time. In the case of a streaming portal, the situation is totally different, because a single platform provides the whole experience: find movies, watch movies, rate movies and get new suggestions. On such platforms, the reward for a rating is (hopefully) an improved suggestion, while there is no such reward on a general movie website.

Bottom line: quantity is no replacement for quality if the ratings lack diversity, because for a mean to be expressive, both positive and negative ratings are required. In other words, we believe that the quality of the ratings is correlated with how tightly users are integrated into a website, and that a reward-based approach is required to avoid a biased distribution of ratings.