A Nostalgic Summary of 2015

For ML in general, the last year was definitely a good one: the community came up with a lot of cool ideas, nifty algorithms, and new methods to squeeze valuable information out of raw data. In particular, the widespread adoption of GPUs led to a huge step forward, because the amount of data that can be processed grew enormously. However, if we focus on personalization and recommendation, the improvement seems more modest, at least in the domain of ‘product’ recommendations, which includes movies.

This is a bit surprising, since we are drowning in data: with all the mobile devices around, there is a constant stream of user data that can easily be assigned to individuals in order to collect preferences. However, in contrast to the classic approach of having users rate items, movies for instance, and using those ratings to train a model, a more plausible scenario is to learn user preferences from the session stream in the absence of explicit negative data. For example, many social networks have a “like” (+1) button, but no “dislike” (-1) button. Therefore, only positive feedback is collected, plus perhaps weak negative feedback in the form of click-through data.
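To make the setting concrete, here is a minimal sketch of how a session stream with positive-only signals could be turned into weighted implicit feedback. The event names and confidence weights are illustrative assumptions, not part of the original post:

```python
# Hypothetical sketch: aggregating a session stream into implicit feedback.
# Event names ("like", "click") and their weights are illustrative choices.
from collections import defaultdict

# A "like" is a strong positive signal; a mere click is only a weak one.
EVENT_WEIGHT = {"like": 5.0, "click": 1.0}

def implicit_feedback(events):
    """Aggregate (user, item, event) tuples into per-pair confidence scores.

    Note that there are no negative labels: unobserved pairs simply stay
    absent, which is exactly the weakly supervised setting described above.
    """
    scores = defaultdict(float)
    for user, item, event in events:
        scores[(user, item)] += EVENT_WEIGHT.get(event, 0.0)
    return dict(scores)

stream = [
    ("alice", "hobbit", "click"),
    ("alice", "hobbit", "like"),
    ("bob", "iron_man_3", "click"),
]
print(implicit_feedback(stream))
# {('alice', 'hobbit'): 6.0, ('bob', 'iron_man_3'): 1.0}
```

The point is that every observed event only ever *adds* confidence; there is no way to express explicit dislike in such a stream.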

In other words, the problem is not really supervised, but weakly supervised at best. This is a serious issue, because the most widely used methods, like collaborative filtering (CF), require explicit labels to work. But even when label data is available, the cold-start problem and the long tail often limit the benefits of such models, at least in the early stages. Insufficient data can lead to rather bizarre “suggestions”, such as that people who enjoyed The Hobbit also liked Iron Man 3, which is definitely true for some people, but not very useful in general. That brings us to the next problem.
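That said, CF can be adapted to positive-only data. The following is a minimal sketch in the spirit of confidence-weighted matrix factorization for implicit feedback: unobserved pairs are treated as weak negatives and observed ones as confident positives. The matrix, rank, and hyperparameters are illustrative assumptions, and the plain gradient loop stands in for the alternating-least-squares solvers usually used in practice:

```python
# Sketch: collaborative filtering from positive-only data via
# confidence-weighted matrix factorization. All numbers are toy values.
import numpy as np

rng = np.random.default_rng(0)

# Binary preference matrix: 1 = observed interaction, 0 = unobserved.
# There are no true negatives -- a zero just means "nothing seen".
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 0.0]])
# Confidence weights: trust observed entries far more than missing ones.
C = 1.0 + 10.0 * P

k, lam, lr = 2, 0.1, 0.01                        # rank, L2 penalty, step size
U = 0.1 * rng.standard_normal((P.shape[0], k))   # user factors
V = 0.1 * rng.standard_normal((P.shape[1], k))   # item factors

def loss(U, V):
    """Confidence-weighted squared error plus L2 regularization."""
    return float((C * (P - U @ V.T) ** 2).sum()
                 + lam * ((U ** 2).sum() + (V ** 2).sum()))

loss0 = loss(U, V)
for _ in range(2000):                 # plain gradient descent on the loss
    E = C * (P - U @ V.T)             # confidence-weighted residuals
    U, V = U + lr * (E @ V - lam * U), V + lr * (E.T @ U - lam * V)

scores = U @ V.T  # higher score = stronger predicted preference
```

The confidence matrix `C` is what lets the model learn without explicit negatives: missing entries are pulled weakly towards zero rather than treated as hard dislikes.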

Recommender systems pursue different goals. In the case of online shopping, there is definitely a bias towards maximizing profit, which may conflict with results that are optimal with respect to the true preferences of a user. Stated differently, larger websites with access to huge amounts of data to train good models are usually driven by monetary motives, while niche websites often do not have enough data to train a reasonable model that improves user satisfaction. On top of that, some domains have no content-based features at all, which means sites have to resort to CF or come up with something entirely new.

Especially for personalized TV, the landscape has changed a lot: streaming services are on the rise and fewer people watch ordinary TV. The good thing is that those services usually come with a recommendation engine that can utilize all the user data and metadata available within the service. The bad thing is that there is usually no way to extract user data, like ratings or playlists, from the service. Without such an export feature, which is unlikely to appear, users are locked into a single service with no chance of manual tweaking.

Regardless of how discouraging the future may look, we will continue our research, because the insights are universal and not limited to a specific feature set or algorithm. Furthermore, we believe that preference-based learning can really improve the user experience, and it is also a good example that machine learning does not require big computational machinery to be successful in practice.
