Imagine that a promising movie will be on TV today. The crowd surely has an opinion about it, and there are sites that provide a digested version of that opinion, usually a condensed rating of some kind. For the movie we have in mind, without naming it, the ratings look like this: 5.0/10.0, 6.6/10.0 and 5.3/10.0. For comparison, we also checked some TV magazines, and they all gave the movie a full star rating. That makes it clear that there is no real consensus about the movie.
An analysis of the results is quite complex because of the diversity of the users who contributed to the final rating, and the situation is further complicated by the fact that there is no common interpretation of the rating levels. In other words, we doubt that a 5 or a 10 means the same thing to all users.
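A minimal sketch of one common way to compensate for differing interpretations of the scale: subtract each user's own mean rating, so only relative preferences remain. All names and ratings below are invented for illustration.

```python
# Hypothetical ratings on a 1-10 scale: "alice" tends to rate everything
# high, "bob" tends to rate everything low.
ratings = {
    "alice": {"movie_a": 9, "movie_b": 10, "movie_c": 8},
    "bob":   {"movie_a": 3, "movie_b": 5,  "movie_c": 1},
}

def mean_center(user_ratings):
    """Subtract the user's own mean, leaving relative preferences."""
    mean = sum(user_ratings.values()) / len(user_ratings)
    return {movie: r - mean for movie, r in user_ratings.items()}

centered = {user: mean_center(r) for user, r in ratings.items()}
# After centering, both users agree in relative terms: movie_b is above
# their personal average and movie_c is below it, even though their raw
# numbers look very different.
print(centered["alice"])  # {'movie_a': 0.0, 'movie_b': 1.0, 'movie_c': -1.0}
print(centered["bob"])    # {'movie_a': 0.0, 'movie_b': 2.0, 'movie_c': -2.0}
```

Mean-centering does not make raw ratings comparable in magnitude, but it removes the per-user offset, which is exactly the "a 5 means different things to different users" problem.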
That means a rating cannot be right or wrong, since it is the personal preference of some user for a specific movie. Some hate it, some love it, and probably many people think something in between. In the end, the mean rating is an indicator that reflects the opinion of all users who contributed to it. But what does it mean for us? Not much, because even if all users gave a low rating, a specific user might still like this kind of movie, and on the other hand, a blockbuster that got high ratings might not be the kind of movie that user likes.
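A tiny numerical example of why the mean is a weak signal for an individual user: the ratings below are invented, but they show how two opposite opinions average out to a middling score that matches neither camp.

```python
# Half the users hate the movie, half love it (hypothetical ratings).
user_ratings = [1, 1, 10, 10]

mean_rating = sum(user_ratings) / len(user_ratings)
print(mean_rating)  # 5.5 -- yet not a single user actually rated it 5.5
```

The aggregate makes the movie look mediocre, while in reality it is polarizing; without knowing which camp a given user belongs to, the mean tells us very little.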
Bottom line: as a baseline such information is valuable, for instance when no other information is available yet, but as long as we do not know how “compatible” a user is with the crowd, the value of condensed ratings is pretty low.