Just Getting ML Things Done

It’s true that at some point, you might need full control of the situation, with access to the loss function and maybe even the gradients. But sometimes, all you need is a hammer and a pair of nails without fine-tuning anything. Why? Because you just want to get the job done, ASAP. For example, we had a crazy idea that is likely NOT to work, but if we just need 10 minutes of coding and 30 minutes of waiting for the results, why not try it? Without a proper framework, we would need to write and tune our own code, which can be a problem if there are lots of “best practices” one needs to consider, as is the case for recurrent nets. Then it’s probably a better idea to use a mature implementation to avoid common pitfalls, so you can just focus on the actual work. Otherwise frustration is only a matter of time, and spending hours on a problem that someone else has already solved will lead you nowhere.

Long story short, what we needed was a front-end for Theano with a clean interface and without the need to write tons of boilerplate code. After some investigation, and with a focus on support for recurrent nets, we decided to give Keras a try. The actual code to build and train the network is less than 20 lines, thanks to the well thought-out design of the framework. In other words, it allows you to get things done in a nice, but also fast way, if you are willing to sacrifice the ability to control every aspect of the training.
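To give an impression, a small recurrent net in Keras might look roughly like the sketch below. This is not the actual code from our experiment; the layer sizes, the data, and the training settings are placeholders, and exact argument names may differ slightly between Keras versions.

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

# token IDs -> embedding -> LSTM -> binary prediction
model = Sequential()
model.add(Embedding(input_dim=10000, output_dim=64))  # vocabulary of 10k tokens (placeholder)
model.add(LSTM(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# X_train: padded sequences of token IDs, y_train: binary labels (placeholders)
model.fit(X_train, y_train, batch_size=32, epochs=5)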

After testing the code with a small dataset, we generated the actual dataset and started the training. What can we say? The whole procedure was painless. Installation? No problem. Writing the code? Piece of cake. The training? Smooth without any fine-tuning. Model deployment? After installing h5py it worked without any problems ;-).

Bottom line, we are still advocates of Theano, but writing all the code yourself can be a bit of a burden, especially if you throw all the stuff away after a single experiment. Furthermore, if your experiment uses a standard network architecture without the necessity to tune or adjust it, mature code can avoid lots of frustration in the form of hours of bug hunting. Plus, it’s likely that the code contains some heuristics to work around known problems that you might not be aware of.

For clarification, we do not say that Keras is just a high-level front-end that does not allow any customization; what we say is that it does a great job of providing one in case you don’t need all the expert stuff! And last but not least, it allows you to switch the back-end in case you want something other than Theano. We like the concept of a unified API for different back-ends a lot, because back-ends have different strengths and weaknesses, and the freedom to choose allows you to write code against one API but switch back-ends as you need it.

Predicting The Next ‘Thing’

It is the dream of all machine learning guys to find a way to use the available data in combination with some unsupervised learning algorithm to train a useful representation of the data. Yes, we are drastically simplifying things here, but the point is to learn without the necessity to label the data, which is very expensive.

For example, there are tons of documents available that could be used for learning, but the problem is: what cost function do we want to optimize? In the case of word2vec and friends, we try to predict surrounding or center words without explicit labels. This works very well, but the result is an embedding of words, and beyond simple aggregation methods, there is no general way to represent documents with a learned word embedding in a meaningful way. Still, it is a simple but powerful approach that can easily utilize huge amounts of unlabeled text data to learn a useful representation.

Another example is a recently published paper [arxiv:1704.01444] that also uses a large text corpus without labels, at least for the first model, to just predict the next character of a data block. So far, this is nothing new, but it is remarkable that a single unit learned to predict the sentiment of a data block. In other words, all those models learn by predicting the next “thing”, which can be, for instance, a word, a character, or some other token.

The interesting part is that such an “autoregressive” model can be learned by just taking a sequence, removing the last item, and trying to predict it given the previous data. This also works for sets, but the process is not straightforward since sets are not ordered. Furthermore, it is not obvious how to select the item to predict, since there is no “previous” data.
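As a small illustration (our own sketch, not taken from any of the cited papers), building such training pairs could look like this:

def pairs_from_sequence(seq):
    # the context is everything before the last item, the target is the last item
    return seq[:-1], seq[-1]

def pairs_from_set(items):
    # sets have no order, so every element can play the role of the item to predict
    items = list(items)
    return [(items[:i] + items[i+1:], items[i]) for i in range(len(items))]

For a sequence the pair is unique, while for a set every element is a potential target, which already hints at the ambiguity mentioned above.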

Bottom line, it has been demonstrated several times that it is possible to learn a good representation of data by just predicting the next token. Nevertheless, such methods are often limited to short texts, since processing longer texts requires remembering lots of data or context, especially for models based on RNNs.

However, since we are usually dealing with short descriptions of items, except that we handle sets rather than sequences, we adapted the method and trained a model to predict a keyword from the set, given the rest of the set, with moderate results. Despite the problems we encountered, we still believe that no (strongly) supervised model will ever be able to learn a powerful but also general representation of data. Thus, it seems a good idea to follow this research track and address the existing problems one by one, until we eventually find a method that addresses the major hurdles.

Top-K-Gating With Theano

In a recently published paper [arxiv:1701.06538], the authors introduced a mixture of experts, which is not new. However, the twist is to use only a small subset of those experts, which cannot be done with an ordinary softmax, since the output of a softmax is always -slightly- positive. The idea is to keep only the truly top-k experts by setting the values of all non-top-k experts to a large negative value before applying the softmax operation. The result is that the actual output value at the corresponding positions is practically zero.

With numpy, where x is a vector, this is actually straightforward:

import numpy as np

def keep_topk(x, k, neg=-10):
    # set all but the k largest entries of x to a large negative value
    rest = x.shape[0] - k
    idx = np.argsort(x)[0:rest]  # indices of the |x|-k smallest entries
    x[idx] = neg
    return x

We just sort the values of x, get the indices of the |x|-k smallest positions and set the corresponding values to -10.

But since we want to use all the nice features of Theano, we need to port the code to the tensor world. Frankly, this is no big deal either, but it requires a tiny adaptation since we cannot assign values to tensors directly.

import theano.tensor as T

def keep_topk(x, k, neg=-10):
    # same idea, but tensors cannot be assigned to directly
    rest = x.shape[0] - k
    idx = T.argsort(x)[0:rest]
    return T.set_subtensor(x[idx], neg)

And that’s it.

The reason why we spent some time on the porting is that we also had the idea to use soft attention to model the final prediction as a decision of a small set of experts. The experts might have different opinions, and with the gating, we can blend different confidence levels with different outputs.
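To sketch the blending in numpy (our own illustration with made-up scores and the numpy keep_topk from above, not the code from the paper): the gate keeps the top-k experts, the softmax turns them into weights, and the weights combine the expert outputs.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

scores = np.array([0.3, 2.1, -0.5, 1.7])        # hypothetical gating scores, one per expert
gates = softmax(keep_topk(scores.copy(), k=2))  # only the top-2 experts get non-negligible weight
expert_outputs = np.random.randn(4, 8)          # hypothetical outputs of the 4 experts
blended = gates.dot(expert_outputs)             # weighted combination of the expert outputs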

Adaptable Data Processing And Presentation

When we began to rewrite our TV recommender application from scratch, some patterns kept repeating which are not new, but extremely useful. The basic pipeline looks like this: we have chunked id/key/value data that is converted into a coherent CSV file, which is then typified, evaluated and imported into some back-end.

The chunked key-value data is nothing special and we only need to keep track of the current ID to combine all pairs into one item. However, streaming makes our life a lot easier, because then we can treat the encoding as a pipeline with adaptable stages. In the pythonic world, we are using iterators which can be easily chained to model a pipeline. To complete the process, we need two operations: (1) filter out items that match certain criteria and (2) map items by modifying existing values or adding new ones.

In this setting, ML procedures can often be seen as an additional mapping step. For instance, we could use the method to build data that is required for a special model, but we could also apply existing models directly to the raw data, for example to estimate the preference for movies, to determine an “interaction” score for series, or to annotate items with tags/concepts. And last but not least, it is also possible to build data for ML step-wise, or to model dependencies from one stage to another.

With this in mind, a simple pipeline could look like this:

import sys

csv = ChunkedReader(sys.stdin)
pipe = Pipeline(csv.__iter__())
pipe.add(TypifyMap())    # convert strings to proper data types
pipe.add(ExpireFilter()) # filter out expired items
pipe.add(DedupFilter())  # ignore duplicates
pipe.add(PrefModelMap()) # score movies
pipe.add(IndexMap())     # tokenize relevant data

The chunked data is read from stdin and as soon as there is a complete item, it is passed to the next stage, in this case a module that converts the strings into more useful data types, like datetime() for timestamps, integers for IDs and lists for keys that have multiple values. Next, we want to skip items that have already been aired, or whose primary ID has already been seen in the stream of items. Then, we use an ML model to access the meta data -if available- of movie items and assign a preference score to them. Finally, all values of relevant keys of the item are tokenized, unified and indexed to allow searching the content.

What is important is that the sequence of the stages might be internally reorganized, since filters need to be executed first to decide whether any mapping should be done at all. In other words, if any filter rejects an item, the pipeline skips it altogether and requests a new item from the stream.

So, the basic ingredients are pretty simple: we have a stream of items -the iterator- and we have filters to skip items and maps to adjust them. At the end of the pipeline, a transformed item emerges that can be stored in some back-end. The beauty is that we can add arbitrary stages with new functionality without modifying existing modules, as long as the design-by-contract is not violated.
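A minimal sketch of how such a pipeline could be implemented (our illustration; the class names above are the ones from our code, but the bodies here are assumptions):

class Pipeline:
    def __init__(self, source):
        self.source = source   # an iterator that yields complete items
        self.stages = []

    def add(self, stage):
        self.stages.append(stage)

    def __iter__(self):
        for item in self.source:
            for stage in self.stages:
                item = stage(item)   # a map returns the (modified) item, a filter returns None
                if item is None:
                    break
            if item is not None:
                yield item

A filter is then just a callable that returns None to reject an item, while a map returns a possibly modified item.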

Later, when we use dialogs to present aggregated data to the user, we can use the same principle. For instance, a user might request all western movies with a positive score, which is nothing more than two filters: reject non-western movies and reject items with a negative score. Another example is when a user clicks on a tag to get a list of all items that share this tag, and there are dozens of other examples like that.

The pattern can be described like this: we need a basic iterator to access the data, probably with a strong filter to reduce the amount of it, and then we need to adapt the data with combinations of filters/maps to access only the relevant parts. This approach has the advantage that we can implement basic building blocks that are reused all over the place, and new functionality can be built by combining elementary blocks.

However, despite the flexibility we gain with such an approach, it is very challenging to integrate this into a GUI since many views require customized widgets to preserve this flexibility. Furthermore, recommenders should also “learn” the optimal layout with respect to the preferences of users which introduces dynamics that further increase the complexity of the graphical interface.

Bottom line, the success of a recommender system largely depends on the implementation of the graphical user interface and requires an application that is both intuitive and flexible to avoid frustrated users. Hopefully we can find some time to elaborate on this in a later post.

Joint Representation Learning of Attributes and Items

Learning dense embeddings for graph-like data is still tremendously popular. For instance there is word2vec, pin2vec, node2vec, doc2vec or tweet2vec and there is no end in sight. The idea to capture the semantic information of tokens in a feature representation is very versatile and, despite its simplicity, also very powerful. However, it is not obvious how to appropriately(!) convert items, for example a document which is a sequence of tokens, into a single representation. The average of all tokens, or the sum, works well, but does not consider the order of the tokens and also neglects other possible structural information. To be clear, our proposal does not address the whole issue but at least allows us to capture the statistics of items from the dataset.

As our domain is not text but movies, there is no clear notion of a sequence for the meta data, but we can treat the problem as a bipartite graph with the items on the “left” side and the attributes on the other side. In this graph, items are not directly connected, but meshed by common attributes. In other words, the length of the shortest path from item A to item B is 2, which means A->some_node->B. A simple example is that A and B are both sci-fi movies with a common theme of {spaceship, alien} along with other individual attributes, and thus they should be treated as at least latently similar.

In this setting, item nodes can be seen as anchors that are used to shape the feature space by using the local neighborhood, but also by walks from a source node to an arbitrary node that is reachable from the source. The power of the embedding lies in the sampling, but for now let’s just focus on the objective: min -log P(N(u)|u), where u is the source node and N(u) is the set of all neighbors of u, with P(n_i|u) = exp(f(n_i)*f(u)) / sum_v exp(f(v)*f(u)) for each neighbor n_i of N(u). In plain English, we want to maximize the probability of observing the neighbors N(u) for the source node u. By using the softmax, we are pushing all pairs (n_i, u) closer together while we are pulling the other nodes apart.
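As a small numerical illustration (our own sketch with placeholder names, not our actual training code), the loss for a single source node could be computed like this:

import numpy as np

def neg_log_prob_neighbors(F, u, neighbors):
    # F: (num_nodes, dim) embedding matrix, u: index of the source node,
    # neighbors: indices of the nodes in N(u)
    scores = F.dot(F[u])                               # f(i)*f(u) for every node i
    log_probs = scores - np.log(np.exp(scores).sum())  # log softmax over all nodes
    return -log_probs[neighbors].sum()                 # -log P(N(u)|u), assuming independent neighbors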

This is closely related to the word2vec objective, with an adapted method to generate training samples. In the original setup, we select a word from a sentence and try to predict the surrounding words, while here we select a node from the graph and try to predict the selected neighborhood. By customizing the sampling strategies for the neighborhood, we can model different aspects of the graph and thus guide the learned representation.

Bottom line, instead of learning an embedding just for the attributes, we jointly learn an embedding for movies and attributes. This combines a transductive setting, since new movies cannot be embedded without re-training, with an inductive one, since we can at least approximate the embedding of a new movie if we know its tags.

When Unsuper Is Actually Super, Or Not?

It is no secret that most of the energy has been put into advancing supervised approaches for machine learning. One reason is that lots of problems can actually be phrased as predicting labels, often with very good results. So the question is, especially for commercial solutions where time and resources are very limited, whether it isn’t better to spend some time labeling data and training a classifier to get some insights about the data. We got some inspiration from a recent Twitter post that suggested a similar approach.

For instance, if we want to predict whether an “event” is an outlier or not, we have to decide between supervised and unsupervised methods. The advantage of the latter is that we have access to lots of data, but we have no clear notion of “outliers”, while for the former we need labeled events, with the risk that the data is not very representative and therefore the trained model might be of limited use.

In other words, it is the old story again: a supervised model is usually easier to train if we have sufficient labeled data, at the expense that we get what we “feed”. Thus, more labeled data is likely to improve the model, but we can never be sure when we have captured all irregularities. On the other hand, unsupervised learning might be able to (fully) disentangle the explanatory factors of the data and thus lead to a more powerful model, but coming up with a proper loss function and the actual training can be very hard.

Bottom line, there is some truth in the advice that if you cannot come up with a good unsupervised model, but you can partly solve the problem with a supervised one, you should start with the latter. With some luck, the simple model will lead to additional insights that might eventually lead to an unsupervised solution.

Forming New Memory

Augmenting neural networks with external memory is definitely a hot topic these days. However, if a method aims for large-scale learning, it is usually not fully differentiable, and if it is fully differentiable, it usually does not scale. Furthermore, the memory is often only applicable to situations where training is done based on stories or sequences.

What we have in mind is a memory that is formed by on-line training, with support for one-shot learning, and that has some kind of structure with a focus on minimal memory footprint. In other words, we do not want to fix the layout before training but we want it to evolve over time. In terms of known concepts, we want something like competitive learning related to winner-takes-all approaches to distribute knowledge among memory nodes.

Similar to support vectors, we are looking for examples that lie on the boundary “line” that separates the feature space. For example, we have two nodes with different classes and the edge between them crosses the boundary of those two classes. We were a little disappointed because the idea has already been described in [arxiv:1505.02867]. However, their approach assumes a fixed feature representation, which is not suited for our needs. After we experimented with ways to learn a good feature representation for the nodes, we stumbled upon a new paper [arxiv:1702.08833] that enhances Boundary Trees with backprop to learn the features.

But let’s start with an intuitive example first. We have a set of binary ratings from a user for movies, which should be used to estimate a function that predicts the class of unseen examples. The idea is to combine elements from memory networks with boundary trees to learn something like a decision tree that allows us to classify unseen examples with a minimal memory footprint. So let’s start.

At the beginning we have nothing, so we sample a rating from the dataset which becomes the root of our tree. We measure the “match” of a query, an unseen sample, with a node by using the l2 distance between the query and the node: T.sum((query - node)**2). To find the best match, we traverse the tree iteratively by considering the best local match and continuing with its children until we reach a leaf node or the parent node is a better match than any of its children. If the final node already has the same class as the selected sample, we discard the sample, otherwise we add it as a child of the final node. In case of a fixed representation the method is straightforward, but it is only logical to assume that a learned representation is more efficient, since we do not know a priori what is actually useful. And now comes the tricky part where we have to learn a representation.
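A minimal sketch of this query/insert step with a fixed representation (our own illustration in plain numpy; the class and function names are made up):

import numpy as np

class Node:
    def __init__(self, x, label):
        self.x, self.label, self.children = x, label, []

def best_match(root, query):
    # greedy traversal: follow the closest child until the current node
    # is at least as close to the query as all of its children
    node = root
    while True:
        best = min(node.children, key=lambda c: np.sum((query - c.x)**2), default=None)
        if best is None or np.sum((query - node.x)**2) <= np.sum((query - best.x)**2):
            return node
        node = best

def insert(root, x, label):
    node = best_match(root, x)
    if node.label != label:  # only store samples the current tree would misclassify
        node.children.append(Node(x, label))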

In case you cannot learn something end-to-end, it has some tradition in optimization to alternate between two steps: for instance, to keep one parameter fixed and update the other, and then do the opposite. We can use the same approach here and build the memory tree with a fixed feature representation derived from the current model parameters. Then we fix the tree and use random queries to update the representation, i.e. the model parameters. The two steps are repeated until convergence.
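In pseudo-code the alternating scheme looks roughly like this (build_boundary_tree, update_params and sample_random_queries are hypothetical helpers, not functions from the cited papers):

for round in range(num_rounds):
    # step 1: fix the representation, rebuild the tree from the current parameters
    tree = build_boundary_tree(samples, embed_fn, params)
    # step 2: fix the tree, update the representation with random queries via backprop
    params = update_params(tree, sample_random_queries(dataset), params)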

We hope to find some time to elaborate on the details in later posts, but the core idea is very simple: build a tree, refine it and continue as long as the model improves. At the end, we hopefully have a model -a tree- that consists of a manageable number of nodes and allows us to correctly predict the class of unseen examples by traversing the tree to the corresponding leaf node. At least for the domain of images it has been shown that trained models have an interpretable structure, and we hope to show the same for the domain of high-dimensional sparse input data.