
The Power Of Linear Models

Without a doubt, Deep Learning is extremely successful, but is it also always required to solve a particular problem? Stated differently, if we can solve a problem with a linear model, we can surely also solve it with a neural net, since the latter is more expressive. However, in such a case we likely waste resources, both time and space, and we may even need to tune the net just to match the performance of the linear model. Of course, some domains are biased, like images, where linear models are known to lack the expressiveness to solve even simple classification problems. But if we take, for instance, a classification problem with thousands of human-generated features, even a linear SVM is likely to suffice, since the high-dimensional space greatly increases the chance that the data is already linearly separable, or at least not as entangled as in the case of images.

So, let’s start with a linear SVM by minimizing the hinge loss, which is max(0, 1 – y*y_hat). Because there are lots of off-the-shelf solutions available, all we have to take care of is the sparsity of the input. In the case of cross-features it can easily happen that we have to cope with 250,000 features while each sample only has about 25 of them. If we use Python, there is a pretty good chance that sparse matrices are supported via scipy/numpy and we are done. Nevertheless, to demonstrate how little effort is required to train such a model without any sophisticated libraries, we started from scratch.

The method is straightforward: First, we create a lookup L to map each word to a unique feature ID. The model W is an empty defaultdict of type float. We draw a random sample (x, y), with y in {-1, +1}, and sum up the weights of all present features to get y_hat:
y_hat = sum([W[L[word]] for word in x])
At first the model is empty and y_hat equals 0. Thus, the loss is max(0, 1 – y*y_hat) = 1. Next, we derive the gradient, which is extremely simple for the hinge loss, and update the parameters accordingly:

for word in x:
 W[L[word]] -= lrate * (-y)

Since we treat all words as equally important, for the sake of simplicity, the update rule just increases or decreases a feature's weight proportionally to the learning rate, according to the label:

y = +1: w_i = w_i - lrate * (-1)  =>  w_i += lrate
y = -1: w_i = w_i - lrate * (+1)  =>  w_i -= lrate

This is neither magical nor unexpected, since we want features related to +1 samples to end up with positive weights and, vice versa, features related to -1 samples with negative weights.

The beauty of this method is that we only need to update the model parameters, i.e. the features, that are present in the drawn sample, which is very efficient. Furthermore, with the lookup and the dictionary, every actual parameter update can also be performed very efficiently, and there is no scalability problem even with millions of features. Thanks to the modest model size, on-line learning is also possible, which means we just keep updating the model parameters continuously for each new sample (x, y). The advantage is that we can automatically cope with drifts in the preferences over time, which means nothing more than slowly shifting weight from some features to others.
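To make the procedure more concrete, here is a minimal sketch of the whole training loop. The generator draw_samples() is hypothetical and just stands for whatever yields random (x, y) pairs, where x is a list of words; the margin check reflects that the hinge-loss gradient is zero once y*y_hat >= 1.

from collections import defaultdict

L = {}                      # word -> feature ID
W = defaultdict(float)      # feature ID -> weight
lrate = 0.01

def feature_id(word):
    # assign a new ID the first time a word is seen
    if word not in L:
        L[word] = len(L)
    return L[word]

def train_step(x, y):
    ids = [feature_id(word) for word in x]
    y_hat = sum(W[i] for i in ids)
    if y * y_hat < 1:           # hinge loss max(0, 1 - y*y_hat) is non-zero
        for i in ids:
            W[i] -= lrate * (-y)

for x, y in draw_samples():     # hypothetical source of random samples
    train_step(x, y)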

Once the model W is trained, the parameters can easily be serialized as a Python dictionary, along with the lookup L. At test time, the prediction then just requires summing the weights of the features present in a new sample x, which is very fast:

y_hat = sum([W[L[word]] for word in x])
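A quick sketch of the serialization mentioned above; the file name is made up, and W is converted to a plain dict so the defaultdict factory is not needed at load time:

import pickle

with open("linear_model.pkl", "wb") as fh:
    pickle.dump({"W": dict(W), "L": L}, fh)

# at test time: load the model and score a new sample x (a list of words)
with open("linear_model.pkl", "rb") as fh:
    model = pickle.load(fh)
y_hat = sum(model["W"].get(model["L"].get(word, -1), 0.0) for word in x)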

Bottom line, it is no secret that the power of a linear model lies in the features. However, if features are already available and there is no need to handcraft them, a simple feature combination and a linear model might already provide a very strong baseline. In other words, even if Deep Learning is awesome, we should always start with the simplest model and only increase the capacity if it is really needed.


fastText: Tinkering With Character N-Grams

Inspired by the recent advances in multi-task learning, we also wanted to investigate the capabilities of those methods for simpler tasks. As researchers have pointed out, learning your own word embedding can easily lead to overfitting, which is why we wanted to use pre-trained embeddings. However, to address out-of-vocabulary words, it is mandatory that the model can also embed words that were unknown during training. fastText, for instance, comes with n-gram support that allows exactly this and, furthermore, it also provides models for languages other than English.

However, with a vocabulary of 2M words and 300 dimensions, those models can be a bit cumbersome to work with. You need approximately 8 GB of RAM just to hold the data in memory, which is no problem for modern servers, but on your machine at home it might be quite a lot, since you also need to run other services. A further question is the interface between the data and your favorite language. There is a very nice API for Python, but it requires loading the full binary model and also introduces some minor overhead.

So, we decided to analyze the binary format of the model to see if we can represent it more compactly. The good thing is that the format is quite simple: there is a dictionary, some additional model parameters, and at the end of the file there are the input and the output matrix.

For the pre-trained models, we have 2M n-gram buckets and 2M words, and each row has 300 dimensions (float32). A matrix is represented by a fixed 16-octet header, int64 rows and int64 cols, followed by the rows as sequences of 300 float32 values. Between the input and the output matrix there is a single octet that indicates whether quantization has been used. With these numbers it is easy to seek to the correct offset in the file and read the data.
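As a hedged illustration (not the official reader), a matrix whose header starts at the current file offset could be parsed like this; the little-endian int64 encoding is an assumption on top of the layout described above:

import struct
import numpy as np

def read_matrix(fh):
    # 16-octet header: int64 rows, int64 cols, followed by rows*cols float32 values
    rows, cols = struct.unpack('<qq', fh.read(16))
    data = np.frombuffer(fh.read(rows * cols * 4), dtype=np.float32)
    return data.reshape(rows, cols)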

The size of a matrix is rows*cols*sizeof(float32), which is 2M * 300 * 4 ~ 2.2 GB. Actually, the input matrix is a concatenation of the vocabulary words and the buckets for the n-grams. Thus, the indices 0..2M reference the words and 2M..4M reference the buckets, which is why 2M is added to the index returned by the hash function. So, we have three such blocks with a total size of ~7 GB.

Now we come to the coding part, which is pretty easy with Python. You just need to calculate the total size of the data plus one octet for the quantization flag and seek to this position relative to the end of the file: lseek(fd, -total_size, 2). After you have parsed the header, you can simply read each vector with vec = unpack('300f', fd.read(300 * sizeof(float32))) and store it in a numpy array. It is also possible to use float16 for the array, since the loss of precision should not be noticeable for similarity lookups. Since there is no direct lookup for each n-gram, we also need to port the hash function, which is used to retrieve the row ID for each n-gram, from C++ to Python. Example:

def fasttext_hash(s, bucket_size=2000000):
    h = 2166136261
    for c in s:
        h = h ^ ord(c)
        h = h * 16777619
        h = h & 0xffffffff
    return h % bucket_size

Because Python integers are arbitrary precision and do not overflow like the 32-bit integer type used in C++, we emulate the 32-bit boundaries with the AND mask; bucket_size is 2M. To retrieve the embedding of a specific n-gram, we use the hash function:

v_emb = W_ngrams[fasttext_hash("%wh")]

To convert a new word into a lookup vector, we first generate all its n-grams with a size between 3 and 6. Then we convert each n-gram into a row ID, sum up all the corresponding vectors and divide by the length of the n-gram list, which is nothing more than the average:

id_list = [fasttext_hash(x) for x in ngrams("%where%")]
emb = np.sum(W_ngrams[id_list], 0) / len(id_list)
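The ngrams() helper is not part of the snippets above; a minimal sketch, assuming the word is already wrapped in the '%' boundary markers and using the sizes 3 to 6 (details may differ from the original implementation):

def ngrams(word, n_min=3, n_max=6):
    # word is expected to include the boundary markers, e.g. "%where%"
    grams = []
    for n in range(n_min, n_max + 1):
        for i in range(len(word) - n + 1):
            grams.append(word[i:i + n])
    return grams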

It is not complicated to wrap the whole procedure into a single class that outputs an embedding vector for an arbitrary word. So, instead of fiddling with 7 GB, we just need 2.2 GB as a single numpy array in case we want to generate vectors for unknown words.
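A hedged sketch of such a wrapper class, reusing fasttext_hash() and ngrams() from above; W_ngrams is assumed to be the 2M x 300 bucket matrix already loaded into numpy:

import numpy as np

class NgramEmbedder:
    def __init__(self, W_ngrams, bucket_size=2000000):
        self.W_ngrams = W_ngrams          # (bucket_size, 300) array
        self.bucket_size = bucket_size

    def embed(self, word):
        # hash each n-gram to its row ID and average the corresponding vectors
        grams = ngrams("%" + word + "%")
        id_list = [fasttext_hash(g, self.bucket_size) for g in grams]
        return np.sum(self.W_ngrams[id_list], 0) / len(id_list)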

But there is a minor issue we need to address. The representation of a word from the vocabulary is a combination of its actual embedding vector plus the sum of its n-gram vectors. In other words, to perform a similarity lookup we need both matrices. Since this accumulated representation does not change and the plain embedding vector is never used stand-alone, we decided to perform the pre-processing step once and store the result as a separate matrix. Now, we can use this matrix to directly output word embeddings and/or to perform nearest-neighbor queries.

W_words = [..]  # (2M, 300) plain word vectors
W_grams = [..]  # (2M, 300) n-gram bucket vectors
id_list = [fasttext_hash(x) for x in ngrams("%" + vocab[0] + "%")]
vecs = np.vstack((W_words[0], W_grams[id_list]))
W_word = np.sum(vecs, 0) / vecs.shape[0]

Finally, we have two matrices: (1) the accumulated vocabulary matrix, which is comparable to the output of word2vec with n-gram support, and (2) the matrix of the 2M n-gram buckets, which can be used with the hash function for lookups.

Bottom line, it is very helpful to have access to pre-trained embeddings that were trained on a large-scale corpus, since very often the data at hand is not sufficient to train 'unbiased' embeddings that generalize well. Furthermore, the n-gram support allows you to also embed out-of-vocabulary words, which will definitely occur, and neither random initializations nor an UNK token are really appropriate to address them.

How to Handle Out-of-Vocabulary Words Pragmatically

Using pre-trained word embeddings can be a huge advantage if you don’t have enough data to train a proper embedding, but also if you are afraid of overfitting to the task at hand when training it jointly with your model. For English you can find quite a lot of publicly available word embedding models, but it’s different for other languages like French, German or Spanish. There are some available, like the ones provided with fastText, but depending on the language it can be challenging. So, let’s assume that you are lucky and there is a pre-trained model; then the next challenge waits just around the corner. The problem is that for specific tasks there are definitely words that are not present in the vocabulary. There are some dirty solutions, like mapping all those words to a fixed token like ‘UNK’ or using random vectors, but none of these approaches is really satisfying.

In the case of fastText there is a clever, built-in way to handle OOV words: n-grams. The general idea of n-grams is to also consider the structure of a word. For instance, with n=3 and word=’where’: [%wh, whe, her, ere, re%]. The % markers are used to distinguish between the word “her” and the n-gram “her”, since the former is encoded as “%her%”, which leads to [%he, her, er%]. For a new word, the sum of its n-grams is used to encode it, which means that as long as you have seen the n-grams, you can encode any new word you require. For a sufficiently large text corpus, it is very likely that a large portion, or even all, of the required n-grams are present.
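As a tiny illustration of the mechanics with n=3 and the '%' markers (not the original implementation):

def char_ngrams(word, n=3):
    # wrap the word in boundary markers and return all n-grams of length n
    w = "%" + word + "%"
    return [w[i:i + n] for i in range(len(w) - n + 1)]

print(char_ngrams("where"))   # ['%wh', 'whe', 'her', 'ere', 're%']
print(char_ngrams("her"))     # ['%he', 'her', 'er%']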

Since most implementations, including fastText, use the hashing trick, you cannot directly export the n-gram-to-vector mapping; however, there is a command to query the n-grams for a given word:

$ fasttext print-ngrams my_model "gibberish"

There is an open pull request (#289) to export all n-grams for a list of words, but right now you have to call fasttext once for each new word, which is very inefficient in case of huge models. Without knowing about the pull request, we did exactly the same. First, we slightly modified the code to accept a list of words from stdin, and then we performed a sort with duplicate elimination to get a distinct list of all n-grams that are present in the model.

Now we have a pre-trained model for the fixed vocabulary, but additionally we also have a model that allows us to map OOV words into the same vector space, as long as their n-grams are known. We also evaluated the mapping with slightly modified or related words that are not in the vocabulary, with very good results.

Bottom line, n-grams are without a doubt not a silver bullet, but they help a lot if you work with data that is dynamically changing, which includes spelling errors, variations and/or made-up words. Furthermore, publicly available models often already deliver solid results, which takes the burden off you to train a model yourself that might overfit to the problem at hand, or that is not satisfying at all because you don’t have enough data. In case a model does not come with n-gram support, there is also a good chance to transfer the knowledge encoded in its vectors into n-grams by finding an appropriate loss function that preserves this knowledge in the sum of the n-grams.

An Addendum to Batch Processing With Variable Sequences

We said that there is no example code for the whole processing pipeline, which was a little rash since there are some gist snippets, and we want to give credit to at least one of them:

gist.github.com/Tushar-N/dfca335e370a2bc3bc79876e6270099e

which is minimal but at the same time very well documented.

Just a quick note which might lead to the next blog post: After we trained our network, we wanted to do an under-the-hood analysis of the reset and update gates of the GRU cells for the few errors the network makes. However, since the parameters are stacked for performance reasons, a straightforward analysis needs some more preparation. In general the question is: if we use pre-defined modules, how can we debug the internal states of individual steps and units?

PyTorch – Batching With Recurrent Nets

Implementing RNNs in PyTorch works like a charm thanks to dynamic graph computation. All you have to do is write a loop where you feed the input to the network and keep track of the new hidden state. At the end, you can feed the last state (or the average of all states) into a new layer, which can be a classifier or whatever is required for the loss function at hand. That was the easy part. However, working with RNNs in single-batch mode is incredibly inefficient when you need to train on a very large dataset. The problem is the sequential nature of RNNs, which does not allow the input to be processed in parallel. With mini-batches, we can at least use hardware parallelism to speed up the pipeline, and we might get a more stable gradient because we use multiple inputs to estimate it.

What is astonishing is that PyTorch provides functionality to help you with the issue, but there is no tutorial or example code that contains all the steps. Sure, there are blogs and snippets on the web that explain it, but a stand-alone, fully working example often makes it much easier to retrace the whole process. Indeed, once you know all the details it is fairly simple to implement, since the PyTorch team did a very good job of hiding all the nasty details from the users.

So, let’s describe the actual problem: During training, RNNs deal with sequences of different lengths, which is no problem in single-batch mode. However, if you want to use batching, you have to pad all samples to the same length as a first step. This can be done by using an extra “dummy” entry (“padding_idx”) in the nn.Embedding module, which is appended to each input until all inputs in the batch have the same length. But that is only the first step, since the RNN must ignore all those padded tokens for each input sequence while deriving the gradient w.r.t. the loss function.

This sounds a bit complicated because we have to fiddle with the computational graph, but thankfully there are helper functions for this that keep us from getting our hands dirty. But let us start at the beginning. Assume that we have an input X = [A, B, C] and the length of each sequence X_len = [4, 2, 8]. First, we need to pad each sequence to get a uniform length, which requires sorting the input in decreasing order:

import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence

X = [torch.ones(4), torch.ones(2), torch.ones(8)]
X.sort(key=lambda x: x.shape[0], reverse=True)
X_pad = pad_sequence(X, batch_first=True, padding_value=0).long()
# tensor([[1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0, 0, 0]])
X_len = torch.LongTensor([x.shape[0] for x in X])

The option “batch_first” just ensures that the shape is always (batch, seq, feature).

As we can see, each sequence now has a length of 8, with “0”s as padding wherever required. Since we need the unpadded length of each sequence later, we also store X_len. With X_pad we can already perform a lookup in an nn.Embedding module:

emb = nn.Embedding(2, 5, padding_idx=0) #n_vocab, n_dim
X_emb = emb(X_pad)

Now, we are ready to feed the input to the RNN:

# setup network and initialize hidden states to zero
net = nn.GRU(5, 10, batch_first=True) #n_dim, n_units
hidden = torch.zeros(1, X_emb.shape[0], 10)
# pack batch
X_packed = pack_padded_sequence(X_emb, X_len, batch_first=True)
# forward step
out, hidden = net(X_packed, hidden)
# unpack batch
out, _ = pad_packed_sequence(out, batch_first=True)
# retrieve the last output of each sequence w.r.t. its original (unpadded) length
idx = torch.arange(0, len(X_len)).long()
out_final = out[idx, X_len - 1, :]

The required steps can easily be wrapped into a class that hides all the nasty details and returns the output of an arbitrary recurrent network for a batch of (text) sequences in a straightforward way.
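A hedged sketch of such a wrapper (all names are made up): it sorts a list of index sequences by length, pads, packs, runs a GRU and returns the last relevant output of each sequence, in the sorted order:

import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence

class BatchedGRU(nn.Module):
    def __init__(self, n_vocab, n_dim, n_units):
        super().__init__()
        # index 0 is reserved for padding
        self.emb = nn.Embedding(n_vocab, n_dim, padding_idx=0)
        self.rnn = nn.GRU(n_dim, n_units, batch_first=True)
        self.n_units = n_units

    def forward(self, seqs):
        # seqs: list of 1D LongTensors with different lengths
        seqs = sorted(seqs, key=lambda s: s.shape[0], reverse=True)
        lens = torch.LongTensor([s.shape[0] for s in seqs])
        emb = self.emb(pad_sequence(seqs, batch_first=True, padding_value=0))
        hidden = torch.zeros(1, emb.shape[0], self.n_units)
        out, hidden = self.rnn(pack_padded_sequence(emb, lens, batch_first=True), hidden)
        out, _ = pad_packed_sequence(out, batch_first=True)
        idx = torch.arange(0, len(lens)).long()
        return out[idx, lens - 1, :]    # last output w.r.t. the unpadded length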

However, there is a drawback we need to take care of. For example, if we train a classifier and we sample a mini-batch with the corresponding labels (X, Y), the procedure described above changes the order of X while Y remains the same. The problem arises because the sorting step is only applied to X, which makes the solution obvious: we also have to sort Y, namely by the original X_len, to get the identical order. The following code is not very nifty, but it works:

Y = [-1, 1, -1]
Y_ = zip(Y, X_len)  # X_len must hold the lengths in the original sample order
Y = [y for y, _ in sorted(Y_, key=lambda t: int(t[1]), reverse=True)]

Bottom line, single-batch use of RNNs is a piece of cake, but its performance neither allows you to train bigger networks or larger datasets, nor is the inference speed sufficient for real-world use. Even though the pad/pack/unpack scheme in PyTorch is not very complicated, it still takes some time to get used to. But once you have mastered it, the performance gain is more than noteworthy and allows RNNs to be used at a much larger scale.

How to Learn Good Features?

For quite some time we have been fiddling with movie data to learn a representation of the metadata that is both universal and powerful enough for various tasks. The major problem is that the data is horribly incomplete, biased towards popular items and, due to human nature, not always objective. But let’s assume for the moment that we can learn features based on the data; then the question is how to shape the learning process to get features that are as versatile as possible.

There is a very nice post about feature transformation on Distill that can be summarized as conditional layer normalization. For example, to answer relational questions about an image, the query is used as a context that guides the learning of the conv net. To be a bit more precise, depending on the question, the output of the filters is adapted by scaling and shifting. This way, units can be turned on or off, or their magnitude can be adjusted. The idea of not assuming a strong prior about the data is a clever trick to avoid manually engineering explicit knowledge into a network.
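To make the scaling and shifting a bit more tangible, here is a minimal PyTorch sketch of the core mechanic (not the architecture from that post): a context vector, e.g. an encoded question, predicts a per-channel gamma and beta that modulate the feature maps of a conv layer.

import torch
import torch.nn as nn

class ConditionalScaleShift(nn.Module):
    def __init__(self, n_context, n_channels):
        super().__init__()
        self.to_gamma = nn.Linear(n_context, n_channels)
        self.to_beta = nn.Linear(n_context, n_channels)

    def forward(self, feature_maps, context):
        # feature_maps: (batch, channels, h, w), context: (batch, n_context)
        gamma = self.to_gamma(context).unsqueeze(-1).unsqueeze(-1)
        beta = self.to_beta(context).unsqueeze(-1).unsqueeze(-1)
        return gamma * feature_maps + beta

mod = ConditionalScaleShift(n_context=64, n_channels=16)
x = torch.randn(8, 16, 32, 32)   # conv feature maps
q = torch.randn(8, 64)           # encoded question / context
y = mod(x, q)                    # modulated feature maps, same shape as x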

With text for the queries and images describing the content, the task is still challenging, but at least the data is complete in the sense that it is possible to answer the questions with a correctly learned net (whatever that looks like). In our case, we have at least two problems: First, we don’t know if the data at hand is sufficient even for simple tasks and, second, we need to find an appropriate context that is always available.

So, with all this in mind, a better question might be whether we can somehow measure if the input data is powerful enough to solve the formulated problem at all, assuming for the moment that we have all the computational power we need. In other words, even with the most powerful network and millions of GPUs to train it, it is still possible that no function f(x) exists that gives the correct answer given the particular input data x.

This is related to neural architecture search, where one tries to find the best network architecture for the data, but there the assumption is that such a function f(x) exists. And such an assumption is reasonable, since images usually contain enough details to let a network give the correct answer. In the case of human-engineered textual features, there is no such guarantee that they generalize beyond the purpose they were created for, which is usually purely informative.

At the end, we are back to where we started: Without external knowledge it seems almost hopeless to train networks with such data to do more than simple classification, at best. But the problem is not the networks, but the lack of proper data, or rather the question of how to enhance and incorporate such data. With Wikipedia a lot of knowledge is available, but it is not a trivial task to extract the relevant information and assemble it so it can be fed into a network.

There is a recent trend to make larger datasets publicly available, but it is wishful thinking that even big companies have all the data you need and/or the will to release it. Maybe it’s time to work harder on an Open Data Initiative (ODI) for machine learning, or at least to come up with a community-based service like a model zoo, but for data.

PyTorch: Convolutional Autoencoders Made Easy

Since we started our audio project, we have been thinking about ways to learn audio features in an unsupervised way. For instance, in the case of speaker recognition we are more interested in a condensed representation of the speaker characteristics than in a classifier, since there is much more unlabeled data available to learn from. However, without supervision there is always the risk that the learned representation does not help with the task at hand. Still, it’s worth a try since the data is available and so are suitable network architectures.

Autoencoders (AEs) have a long history in machine learning and, for some years now, the convolutional variant has become more and more popular. However, since conv AEs use inverse operations and some advanced tricks to recover information lost during the forward-propagation step, we thought it would be a good idea to provide a clean, minimal example with some additional hints that help to understand the workflow. Without a doubt there are other examples around, but we did not find one that exactly matched our domain (audio + conv1d), or at least not a minimal one that does not involve studying lots of unrelated code.

The conv AE consists of two modules, an encoder and a decoder, which is no different from a vanilla AE. The encoder part looks a lot like a common convnet, with some minor but important variations:

c1 = nn.Conv1d(in_size, 16, 3)
m1 = nn.MaxPool1d(2, return_indices=True)
i1 = None
c2 = nn.Conv1d(16, 16, 3)

The first layer c1 is an ordinary 1D convolution with the given in_size channels and 16 kernels of size 3. The next layer m1 is a max-pool layer with a kernel size of 2 and, by default, a stride of 2. Additionally, the indices of the maximal values are returned, since this information is required later in the decoder. The last layer is again a 1D conv layer.

The forward step looks like this:

_c1 = c1(x_in)
_m1, i1 = m1(_c1)
return c2(_m1)

Again, this should look pretty familiar, except for the pooling call, because it returns both the output and the indices of the maximal values.

Then comes the decoder, which uses the output and the pooling indices from the encoder step:

d1 = nn.ConvTranspose1d(16, 16, 3)
u1 = nn.MaxUnpool1d(2)
d2 = nn.ConvTranspose1d(16, in_size, 3)

The architecture is reversed, which means the last layer of the encoder maps to the first layer of the decoder. Thus, every layer is the inverse operation of the corresponding encoder layer: conv -> transposed conv, pool -> unpool. At the end, the full input is reconstructed again.

With the forward step as follows:

_d1 = d1(x_in)
_u1 = u1(_d1, i1)   # i1: pooling positions saved during the encoder pass
return d2(_u1)

Here we can see that the unpooling uses the position information from the encoder. This is required since, once the max pooling has been done, no reversing is possible without the index information.

For example: x = (5, 10), maxpool(x, size=2) = 10, but we no longer know at which position the value was located: (10, ?) or (?, 10)? With the indices from the encoder step we can at least recover the position of the maximal value, but we still have to set all other values to 0, since this data is no longer available: (0, 10). As a result, we still lose information, but we can at least undo the max-pool step.
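The toy example from above as actual code (the shapes are (batch, channels, length) and chosen arbitrarily):

import torch
import torch.nn as nn

pool = nn.MaxPool1d(2, return_indices=True)
unpool = nn.MaxUnpool1d(2)

x = torch.tensor([[[5., 10.]]])   # shape (1, 1, 2)
pooled, idx = pool(x)             # pooled = [[[10.]]], idx = [[[1]]]
recovered = unpool(pooled, idx)   # [[[0., 10.]]] -- the 5 is lost for good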

The workflow is easier to understand if we analyze the shape of each step:

Encoder: x_in=(1, 128, 44), c1=(1, 16, 42), m1=(1, 16, 21), c2=(1, 16, 19)
Decoder: x_hat_in=(1, 16, 19), d1=(1, 16, 21), u1=(1, 16, 42), d2=(1, 128, 44)

We can see that every shape in the decoder has a matching counterpart in the encoder: d2 <-> x_in, u1 <-> c1, d1 <-> m1, x_hat_in <-> c2.
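Putting everything together, a minimal version of the whole conv AE could look like the following sketch; in_size=128 and the input length of 44 are taken from the shape analysis above:

import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self, in_size):
        super().__init__()
        # encoder
        self.c1 = nn.Conv1d(in_size, 16, 3)
        self.m1 = nn.MaxPool1d(2, return_indices=True)
        self.c2 = nn.Conv1d(16, 16, 3)
        # decoder: the reversed architecture
        self.d1 = nn.ConvTranspose1d(16, 16, 3)
        self.u1 = nn.MaxUnpool1d(2)
        self.d2 = nn.ConvTranspose1d(16, in_size, 3)

    def forward(self, x_in):
        # encoder: keep the pooling indices for the unpooling step
        _c1 = self.c1(x_in)
        _m1, i1 = self.m1(_c1)
        code = self.c2(_m1)
        # decoder: reconstruct the input from the code
        _d1 = self.d1(code)
        _u1 = self.u1(_d1, i1)
        return self.d2(_u1)

net = ConvAE(128)
x = torch.randn(1, 128, 44)
x_hat = net(x)      # same shape as x: (1, 128, 44)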

Now, equipped with this knowledge, which can also be found in the excellent PyTorch documentation, we can move from this toy example to a real (deep) conv AE with as many layers as we need and, furthermore, we are not limited to audio: we can also build 2D convolutional AEs for images or even videos.