The more research we do in the area of recommendation systems, the clearer it becomes that no “classical” algorithm will provide a real solution. What we are trying to say is that we doubt any existing classifier, regardless of how powerful its handcrafted features are, will ever deliver satisfying performance over long periods of time. The main problem is that such a model only captures the trends present in the data seen so far, and even with on-line learning it hardly evolves or reacts to trends that pop up frequently and then soon vanish into thin air. What we need is a model with a component that addresses short-term events as well as long-term ones. In other words, we need some kind of memory to fight the catastrophic forgetting that can happen in neural nets. Like the brain, we want something that remembers recent events, but also encodes regular patterns and is able to get rid of older memories by overwriting them. Such a dynamic component would allow lifelong learning, which is a perfect fit for preference-based learning.
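To make the idea a bit more concrete, here is a minimal sketch of such a memory component. Everything about it is illustrative, not our actual model: a fixed set of slots holds key/value pairs for recent events, reads retrieve the best-matching slot by cosine similarity, and once the memory is full, the least recently accessed slot gets overwritten, which is exactly the “remember recent events, forget stale ones” behavior described above.

```python
import numpy as np


class ExternalMemory:
    """Toy external memory with a fixed number of slots.

    Illustrative sketch only: new events fill free slots first; once the
    memory is full, the least recently accessed slot is overwritten.
    """

    def __init__(self, num_slots, key_dim):
        self.keys = np.zeros((num_slots, key_dim))
        self.values = [None] * num_slots
        self.age = np.zeros(num_slots)              # steps since last access
        self.used = np.zeros(num_slots, dtype=bool)

    def write(self, key, value):
        free = np.where(~self.used)[0]
        if free.size > 0:
            slot = int(free[0])                     # take a free slot first
        else:
            slot = int(np.argmax(self.age))         # overwrite the stalest slot
        self.keys[slot] = key
        self.values[slot] = value
        self.used[slot] = True
        self.age += 1
        self.age[slot] = 0                          # written slot is "fresh"

    def read(self, query):
        """Return the value whose key is most similar to the query."""
        if not self.used.any():
            return None
        keys = self.keys[self.used]
        sims = keys @ query / (
            np.linalg.norm(keys, axis=1) * np.linalg.norm(query) + 1e-8
        )
        slot = int(np.where(self.used)[0][np.argmax(sims)])
        self.age += 1
        self.age[slot] = 0                          # reading refreshes the slot
        return self.values[slot]
```

With two slots, writing a third event evicts the oldest untouched one, so recent and frequently accessed memories survive while stale ones are silently replaced. A learned version would of course replace the hard argmax read and the age-based eviction with differentiable attention, but the bookkeeping stays the same.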
Thus, we decided to change our direction of research towards adding external memory to networks, though on a much smaller scale than what other researchers have done. The next post will shed some light on the issue, but we still need to figure out how to fully utilize the power of such a component and, of course, how to phrase our problem as a learning task for such a network.