
How to Learn Good Features?

For quite some time we have been fiddling with movie data to learn a representation of the metadata that is both universal and powerful enough for various tasks. The major problem is that the data is horribly incomplete, biased towards popular items and, due to human nature, not always objective. But let’s assume for the moment that we can learn features from the data; then the question is how to shape the learning process to get features that are as versatile as possible?

There is a very nice post about feature transformation on Distill that can be summarized as conditional layer normalization. For example, to answer relational questions about an image, the query is used as a context that guides the learning of the conv net. To be a bit more precise, depending on the question the output of the filters is adapted by scaling and shifting. This way, units can be turned on/off or their magnitude can be adjusted. The idea of not assuming a strong prior about the data is a clever trick to avoid manually engineering explicit knowledge into the network.

With text for the queries and images describing the content, the task is still challenging, but at least the data is complete in the sense that a correctly learned net (whatever that looks like) could answer the questions. In our case, we have at least two problems: First, we don’t know if the data at hand is sufficient even for simple tasks and second, we need to find an appropriate context that is always available.

So, with all this in mind, a better question might be whether we can somehow measure if the input data is powerful enough to solve the formulated problem at all. And let’s assume for the moment that we have all the computational power we need. In other words, even with the most powerful network and millions of GPUs to train it, it is still possible that no function f(x) exists that gives the correct answer for the particular input data x.

This is related to neural architecture search, where one tries to find the best network architecture for the data, but there the assumption is that such a function f(x) exists. And such an assumption is reasonable, since images usually contain enough details to let a network give the correct answer. In the case of human-engineered textual features, there is no guarantee that they generalize beyond the purpose they were created for, which is usually purely informative.

In the end, we are back to where we started: Without external knowledge it seems almost hopeless to train networks with such data that can do more than simple classification, at best. But the problem is not the networks, but the lack of proper data, or rather how to enhance and incorporate such data. With Wikipedia a lot of knowledge is available, but it is not a trivial task to extract the relevant information and assemble it so it can be fed into a network.

There is a recent trend to make larger data sets publicly available, but it is wishful thinking that even big companies have all the data you need and/or the will to release it. Maybe it’s time to work harder on an Open Data Initiative (ODI) for machine learning, or at least to come up with a community-based service like a model zoo, but for data.


PyTorch: Convolutional Autoencoders Made Easy

Since we started with our audio project, we have been thinking about ways to learn audio features in an unsupervised way. For instance, in the case of speaker recognition we are more interested in a condensed representation of the speaker characteristics than in a classifier, since there is much more unlabeled data available to learn from. However, without supervision there is always the risk that the learned representation does not help in the task at hand. Still, it’s worth a try since the data is available and so are suitable network architectures.

Autoencoders (AEs) have a long history in machine learning and for some years the convolutional variant has also become more and more popular. However, since conv AEs use inverse operations and some advanced tricks to recover information lost during the forward-propagation step, we thought it is a good idea to provide a clean, minimal example with some additional hints that help to understand the workflow. Without a doubt there are other examples around, but we did not find one that exactly matched our domain (audio + conv1d), or at least not a minimal one that does not involve studying lots of unrelated code.

The conv AE consists of two modules, an encoder and a decoder, which is no different from a vanilla AE. The encoder part looks a lot like a common convnet with some minor, but important variations:

c1 = nn.Conv1d(in_size, 16, 3)             # 1D convolution: in_size channels -> 16 kernels of size 3
m1 = nn.MaxPool1d(2, return_indices=True)  # max-pooling that also returns the positions of the maxima
i1 = None                                  # placeholder for the pooling indices, needed by the decoder
c2 = nn.Conv1d(16, 16, 3)                  # another 1D convolution

The first layer c1 is an ordinary 1D convolution with the given in_size channels and 16 kernels of size 3. The next layer m1 is a max-pool layer with a kernel size of 2 (and, by default, a stride of 2). Additionally, the indices of the maximal values are returned, since this information is required in the decoder later. The last layer is again a 1D conv layer.

The forward step looks like that:

_c1 = c1(x_in)        # first convolution
_m1, i1 = m1(_c1)     # halves the time axis and stores the positions of the maxima
return c2(_m1)        # second convolution = latent representation

Again, this should look pretty familiar, except for the pooling call, because it returns both the output and the indices of the maximal values.

Then comes the decoder, which takes the output of the encoder as its input:

d1 = nn.ConvTranspose1d(16, 16, 3)       # inverse of c2
u1 = nn.MaxUnpool1d(2)                   # inverse of m1, requires the stored indices
d2 = nn.ConvTranspose1d(16, in_size, 3)  # inverse of c1

The architecture is reversed, which means the last layer of the encoder maps to the first layer of the decoder. Thus, every layer is the inverse operation of the corresponding encoder layer: conv -> transposed conv, pool -> unpool. At the end, the full input is reconstructed again.

With the forward step as follows:

_d1 = d1(x_in)
_i1 = encoder.i1   # pool positions stored by the encoder
_u1 = u1(_d1, _i1)
return d2(_u1)

Here we can see that the unpooling uses the position information from the encoder. This is required since, once max pooling has been done, the operation cannot be reversed without the index information.

For example: with x = (5, 10), maxpool(x, size=2) = 10, but we no longer know at which position the value was located: (10, ?) or (?, 10)? With the index from the encoder step, we can at least recover the position of the maximal value, but we still have to set all other values to 0, since this data is not available any longer: (0, 10). As a result, we still lose information, but we can at least undo the maxpool step.
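
To make this concrete, here is a tiny standalone sketch (dummy values of our own choosing) that shows the pool/unpool round trip:

import torch
import torch.nn as nn

pool = nn.MaxPool1d(2, return_indices=True)
unpool = nn.MaxUnpool1d(2)

x = torch.Tensor([[[5., 10., 7., 3.]]])  # shape (batch=1, channels=1, length=4)
y, idx = pool(x)                         # y = [[[10., 7.]]], idx = [[[1, 2]]]
x_hat = unpool(y, idx)                   # [[[0., 10., 7., 0.]]] -- the non-maximal values are lost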

The workflow is easier to understand if we analyze the shape of each step:

Encoder: x_in=(1, 128, 44), c1=(1, 16, 42), m1=(1, 16, 21), c2=(1, 16, 19)
Decoder: x_hat_in=(1, 16, 19), d1=(1, 16, 21), u1=(1, 16, 42), d2=(1, 128, 44)

We can see that every shape in the decoder has a matching counterpart in the encoder: d2 ↔ x_in, u1 ↔ c1, d1 ↔ m1, x_hat_in ↔ c2.

Now, equipped with this knowledge, which can also be found in the excellent PyTorch documentation, we can move from this toy example to a real (deep) conv AE with as many layers as we need. Furthermore, we are not limited to audio, but can also build 2D convolutional AEs for images or even videos.
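
For reference, here is a minimal, self-contained sketch that combines the snippets above into one module (the layer sizes match the shapes listed earlier; consider it a starting point rather than a polished implementation):

import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self, in_size):
        super(ConvAE, self).__init__()
        # encoder
        self.c1 = nn.Conv1d(in_size, 16, 3)
        self.m1 = nn.MaxPool1d(2, return_indices=True)
        self.c2 = nn.Conv1d(16, 16, 3)
        # decoder (mirrors the encoder)
        self.d1 = nn.ConvTranspose1d(16, 16, 3)
        self.u1 = nn.MaxUnpool1d(2)
        self.d2 = nn.ConvTranspose1d(16, in_size, 3)

    def forward(self, x):
        h = self.c1(x)
        h, idx = self.m1(h)   # keep the pooling indices for the decoder
        z = self.c2(h)        # latent representation
        h = self.d1(z)
        h = self.u1(h, idx)   # undo the pooling with the stored positions
        return self.d2(h)

x = torch.randn(1, 128, 44)   # e.g. one MEL frame: (batch, mel bands, time steps)
x_hat = ConvAE(128)(x)        # reconstruction with the same shape: (1, 128, 44)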

Let’s Make Some Noise

Sometimes it is a good idea to try a new direction when you are stuck. In other words, we needed some new inspiration and thought it was worth turning to a very different domain, in our case audio. Furthermore, for quite some time we have toyed with the idea of tagging a specific voice in an audio signal by somehow learning a representation of the speaker, so it felt like the way to go.

A possible scenario looks like this: We record a movie via DVB-S and extract the audio stream. Then we convert the raw audio into a more suitable representation and classify all time frames, or time windows, with our learned model as +1/-1. At the end, we have time markers where the trained voice has been detected: [at min 3.1, at min 37.3, ..]. So much for the theory; now let’s turn to reality.

For us it was settled that PyTorch is our framework of choice. Thus, as a first step we needed audio support. We hoped that, in the spirit of torchvision, there is also torchaudio, and we were not disappointed. The “load” function allows us to load arbitrary audio files in raw format and return the data as a tensor. However, this format requires a lot of computational resources, since every second is encoded as rate (e.g. 44,100) float values, per channel. Thus, the shape of the tensor is (rate * seconds, channels), which is huge for a full-length movie.

So we are interested in a more compact representation. As a first step, we converted stereo signals to mono (“transforms.DownmixMono”), which reduces the shape to (rate * seconds, 1). But since this is still a lot of data, we did some research to get an overview of popular transformations and decided to use MEL spectrograms, also because there is an interface in the torchaudio package (“transforms.MEL”). With default values from papers, and re-sampling to 22,050 Hz, each second of raw audio is now encoded as a (128, 22) matrix. In this setting, the rows are the frequency axis and the columns are the time axis. We further apply a log transformation to the data to avoid exploding gradients, since the magnitude of the spectrogram data can be very high.

Now the question is how to encode this information into a new representation that models the similarity between frames. There are several possible approaches. For instance, we could train an ordinary one-vs-rest classifier that outputs +1 if the frame is spoken by the speaker and -1 otherwise. But we opted for a triplet-based method to better model local neighborhoods. The drawback is that we cannot directly classify unseen frames, but need some kind of nearest-neighbor lookup to decide if a frame is a positive match. Thus, it makes sense that the positive data from training forms a memory component which, in combination with a threshold, acts like a classifier.

Next, we need to design our network architecture. With the chosen MEL transformation, we could easily train a feed-forward neural net; the input dim would be just 128*22=2816. But dense layers are not invariant to shifts in frequency [arxiv:1709.04396] and thus a minor change in the input can lead to a larger change in the feature space. Therefore, we decided to follow the early papers that use convolution over the time axis to learn a representation, which is a 1D convolution. The architecture is heavily inspired by the convnets from vision, with the exception that pooling and convolution operate over just one axis, not two.

Thanks to PyTorch we have everything we need and a prototype consists just of a few lines of Python. Here is a sketch of the network:

import torch
from torch.nn import Conv1d
from torch.nn import MaxPool1d
from torch.nn import Linear
from torch.autograd import Variable
from torch.nn import functional as F

x = Variable(torch.randn(1, 128, 22))                        # one MEL frame: (batch, mel bands, time steps)
c1 = Conv1d(in_channels=128, out_channels=32, kernel_size=3)
c2 = Conv1d(in_channels=32, out_channels=32, kernel_size=3)
m1 = MaxPool1d(2)
l1 = Linear(32, 16, bias=False)
h_2d = c2(m1(c1(x)))                                         # (1, 32, 20) -> (1, 32, 10) -> (1, 32, 8)
h = F.adaptive_avg_pool2d(h_2d, (32, 1)).squeeze()           # global average pooling over time: (32,)
out = l1(h)                                                  # final 16-dim embedding

First, there is a convolution, followed by max-pooling, followed by a convolution and, at the end, a global average pooling that returns the mean of each filter map, followed by an affine transformation that represents the final embedding space. Additional blocks like normalization and non-linear activation functions are omitted for clarity. Such an architecture has a lot of benefits: First, we can stack blocks of conv/norm/relu/pool to form a deep network; second, the network has very few trainable parameters; and last but not least, the forward step is computationally very efficient.

The training of the network is also pretty straightforward. The data set consists of spoken audio material by the person to recognize as positive examples, and arbitrary audio from other persons as negative examples. Without a doubt, the selection of “the rest” impacts the performance of the network, since if all samples are already sufficiently far away from the speaker samples, no learning is done. This issue requires more research, but even our naive selection of negative samples led to a solid performance.

Next, all audio files are pre-processed and split into frames of ~2 seconds, on which the transformation is applied. The order of the frames is not preserved, since the “classification” works on single frames. A learning step consists of sampling an anchor, a positive sample and an arbitrary negative sample. Each input to the network represents a single time frame, with the possibility to feed a batch of frames to the network. We l2-normalize all network outputs and use the cosine similarity to determine the triplet loss (with margin=0.3):
loss = torch.clamp(margin + dot(anchor, negative) - dot(anchor, positive), min=0)
In other words, if the negative sample is sufficiently far away from the anchor (>= margin) no learning is required; otherwise the parameters are adjusted to push the negative sample away from the anchor.
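
A minimal sketch of this loss in PyTorch (the helper name and the batched formulation are ours):

import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.3):
    # embeddings are l2-normalized, so the dot product equals the cosine similarity
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negative = F.normalize(negative, dim=-1)
    sim_pos = (anchor * positive).sum(dim=-1)
    sim_neg = (anchor * negative).sum(dim=-1)
    # a loss is only incurred if the negative is not at least `margin` farther away
    return torch.clamp(margin + sim_neg - sim_pos, min=0).mean()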

However, it can be challenging to find good negative samples, since at later stages of the training most samples are already well separated and thus have a loss of zero. This means we need to find violators outside the batch to further improve the model. This can be computationally expensive, since we need to calculate the loss on many samples until we find enough of them. However, the procedure is required to ensure that we learn a good model and that the learning converges.

When the model is trained, the positive samples are fed to the network and the representation is stored as a kind of “memory”. As a baseline, new frames are classified by performing a nearest-neighbor lookup (cosine similarity) on the memory, and a frame is marked as “positive” if the mean of the top-5 scores from the memory is above a threshold, like 0.7. Astonishingly, this baseline is pretty robust and already allows us to reliably mark relevant time windows of audio material without too many false positives.
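
The baseline lookup could look roughly like this (memory, threshold and k are placeholders for our setup):

import torch

def is_positive(frame_emb, memory, threshold=0.7, k=5):
    # memory: (N, dim) l2-normalized embeddings of the positive training frames
    # frame_emb: (dim,) l2-normalized embedding of an unseen frame
    scores = memory.mv(frame_emb)   # cosine similarities, shape (N,)
    top_k = scores.topk(k)[0]       # the k best matches
    return top_k.mean().item() >= threshold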

Bottom line: regardless of the domain, the machine learning pipeline stays pretty much the same. We have a problem, data, cleansing, optionally a transformation, and we need a good network architecture and a proper loss function to learn a good model. The next steps are more experiments to evaluate the model and to come up with a better way to classify unseen data based only on positive examples.

PyTorch: Identifying Computational Bottlenecks

It might happen that when we start with a new idea, we focus on the clarity of the code but not on the overall performance. Of course the model should not be slow as a snail, but often there is room for improvement. Still, at first it is more important to get it working than to be super fast. When everything works well, it’s time to take a closer look at the code and identify possible bottlenecks.

In our case, we often calculate dot products between vectors and matrices and there are different ways to do the math. For example:
(1) torch.sum(anchor * examples, 1) # shape: (1, dim) x (n, dim)
(2) examples.mm(anchor.view(-1, 1)) # shape: (n, dim) x (dim, 1)
For both methods there is not much overhead, at least not function-wise. However, after we did some profiling, we found out that method (2) is about 40% faster than the first one. This is probably related to hardware utilization, since (2) feels more “batched”.
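
A toy benchmark along these lines (dummy data; the exact numbers depend on hardware and the PyTorch version):

import time
import torch

n, dim = 100000, 64
anchor = torch.randn(1, dim)
examples = torch.randn(n, dim)

def bench(fn, repeats=100):
    start = time.time()
    for _ in range(repeats):
        fn()
    return (time.time() - start) / repeats

t1 = bench(lambda: torch.sum(anchor * examples, 1))   # (1) broadcasted product + reduction
t2 = bench(lambda: examples.mm(anchor.view(-1, 1)))   # (2) a single matrix-vector product
print("sum: %.3f ms, mm: %.3f ms" % (t1 * 1e3, t2 * 1e3))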

Frankly, this is nothing new, but it reminded us that for large-scale learning, using optimal numeric calculations can save you a day or a week, or give you the opportunity to train a little longer. In our case, by introducing padding we reduced the runtime by almost 50%, and now with the batched dot product we gained another 40%.

PyTorch: Faster Embedding Lookups With Padding

There are quite a few helper functions when it comes to recurrent nets, but in our case we just wanted to speed up the forward step of a model that only uses Embedding layers. Maybe there is also a helper for our problem, but in any case it’s a good idea to manually implement these steps to see how things work under the hood and to learn about possible side effects. Our setup is pretty simple: We have a batch of lists that contain individual tokens, and our network shall return the sum of the corresponding embeddings for each sample.

The naive implementation only works if all those token lists have the same size, otherwise we are not able to build a LongTensor:

torch.LongTensor([[0, 5, 10], [3, 33, 333]]) [okay]
torch.LongTensor([[0, 5, 10], [3, 33]]) [error]

Since this is a common problem, the nn.Embedding module of PyTorch supports padding via “padding_idx=PAD”. Whenever PAD is found in the LongTensor, the output is filled with zeros:

torch.LongTensor([3, 33, PAD]):
x_3_0 ... x_3_d
x_33_0 ... x_33_d
0 ... 0

In other words, this acts like a dummy embedding that does not change the gradient because no actual parameters are used. With this approach, we are able to return the aggregated embeddings (sum) for a batch of samples with different lengths, instead of forwarding each sample separately through the network.


batch = torch.LongTensor([
[0, 5, 10, 15],
[3, 33, PAD, PAD],
[17, PAD, PAD, PAD]])
batch_emb = net(Variable(batch))

We measured the runtime for both approaches and as expected, there is a notable performance gain by using batching: naive=14863 msecs. vs. batched=8294 msecs. which is an improvement of more than 40%.

Actually, there is no magic involved; you just need to make sure that you are working with the correct axis when you perform per-sample transformations. In our case, we normalized each aggregated vector (sum) so it has unit norm.

As a last step, let’s go through an example: If we assume that our embed_dim is 10 and we use batch as the input to the network, we get an output of shape (3, 4, 10), which means we have 3 samples, each with 4 embeddings of 10 dimensions. Now we want to calculate the sum of the embeddings for each sample in the batch: batch_emb_sum = torch.sum(batch_emb, 1), with a resulting shape of (3, 10), and finally the normalization step: batch_emb_final = batch_emb_sum / batch_emb_sum.norm(dim=-1, keepdim=True), and that’s it. Thanks to the padding, the zero vectors do not interfere with any of these steps, since adding zero to something does not change anything.

But we need to be careful when we use an operation that depends on the number of elements, like torch.mean, since the padding changes the number of rows per sample. To be more concrete, if a sample only has one real token but three PAD entries, its slice has shape (4, 10) and torch.sum(batch_emb, 1) / 4 would divide by 4 even though the last three rows do not hold any values. Thus, we need to re-calculate the length if padding has been used: actual_len = #rows – #pad_rows.
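
Putting it all together, a minimal end-to-end sketch (PAD, the vocabulary size and the token values are placeholders; note that with PAD = 0 the index 0 can no longer be used as a real token):

import torch
import torch.nn as nn

PAD = 0
net = nn.Embedding(num_embeddings=1000, embedding_dim=10, padding_idx=PAD)

batch = torch.LongTensor([
    [1, 5, 10, 15],
    [3, 33, PAD, PAD],
    [17, PAD, PAD, PAD]])

batch_emb = net(batch)                                  # (3, 4, 10)
batch_emb_sum = torch.sum(batch_emb, 1)                 # (3, 10); the PAD rows are zero and do not contribute
batch_emb_final = batch_emb_sum / batch_emb_sum.norm(dim=-1, keepdim=True)  # unit norm per sample

# for a mean, divide by the number of real tokens, not by the padded length
actual_len = (batch != PAD).sum(dim=1, keepdim=True).float()  # (3, 1)
batch_emb_mean = batch_emb_sum / actual_len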

Ask Your Neighbors For Advice

Since we have a rather unusual setup, ordinary classification often delivers a performance that is not satisfying. We tried to address the issue with a large-margin approach that uses a triplet loss to model local neighborhoods, but even this approach fails to solve our problems for some kinds of data. To put it simply, some samples with a rare combination of features might not find their place in the learned embedding space and thus probably end up at some “random” place. To guide the way of those little rascals, we introduced a memory component as in [arxiv:1703.03129].

First, the memory is filled uniformly with samples of different labels. After that, for each query we find the nearest neighbor with the same label, but also one with a different label. The idea is to ensure that memory slots with different labels are well separated in the embedding space. Since we do not backprop through the memory slots, we adjust the embedding parameters of the query to ensure the margin. This is done by a simple triplet loss:
max(0, margin + dot(query, nn_neg) - dot(query, nn_pos))
where all data is unit-normed.

The memory slot with the matching label is then updated:
mem[idx_pos] = l2_norm(query + mem[idx_pos])
where idx_pos is the position of the memory slot.
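
A rough sketch of one such step (the helper name and the selection of the nearest slots are our own simplifications of the procedure described above):

import torch
import torch.nn.functional as F

def memory_step(query, memory, labels, query_label, margin=0.1):
    # query:  (dim,) unit-normed embedding of the sample (requires grad)
    # memory: (slots, dim) unit-normed slot contents, a plain buffer without gradients
    # labels: (slots,) label of each slot
    scores = memory.mv(query)                          # cosine similarity to every slot
    neg_inf = torch.full_like(scores, float("-inf"))
    idx_pos = torch.where(labels == query_label, scores, neg_inf).argmax()
    idx_neg = torch.where(labels != query_label, scores, neg_inf).argmax()
    # push the nearest slot with a different label at least `margin` away from the query
    loss = torch.clamp(margin + scores[idx_neg] - scores[idx_pos], min=0)
    # update the matching slot without backpropagating through the memory
    with torch.no_grad():
        memory[idx_pos] = F.normalize(memory[idx_pos] + query.detach(), dim=-1)
    return loss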

The argument for why this helps to improve the performance is similar to the one in the paper: The additional memory helps to remember combinations of features that are rarely seen and is thus often able to infer the correct label even if the embedding has not “settled” at all.

Furthermore, the memory can also help to improve the embedding space by concatenating the embedding of a sample with its nearest memory slot, which leads to a cleaner separation of class boundaries: h_new = hidden || nn(mem, hidden).

Still, there are quite a few questions that need to be addressed by further research:
– The more a memory slot gets updated, the more likely it is that it will be closer to an arbitrary query. This will likely lead to lots of orphaned memory slots. How can we ensure that distinct feature combinations won’t get mixed into those slots?

– Shall we introduce some non-determinism to add noise and improve the utilization of the memory (to better preserve rare patterns)?

– Shall the memory be a circular buffer, or shall we average memory slots to converge to a stable state?

As long as there is no way to learn reliably, but in an unsupervised way, from sparse input data, we believe that external memory is a promising path to pursue. However, even with this adjustment there are still lots of challenges that need to be addressed before we can come up with a working solution.

Don’t Push Things Into Neural Corners

The last year was quite a ride, with lots of new ideas, controversial discussions and real-world examples showing that large-scale machine learning is actually more than a flash in the pan. However, we still have the feeling that more basic research should be done. For instance, a while ago we stumbled upon an excellent blog post[1] that explains neural nets in a very descriptive but nevertheless still formal way. Since manifolds are a very powerful tool to explain what the representation a net has learned looks like, we tried to find similar, but introductory literature. It was a bit surprising that we did not find much. Maybe we didn’t try hard enough, but it’s more likely that relevant literature is buried somewhere and cannot be easily accessed through search engines, or, in the worst case, there simply isn’t much at all.

Nevertheless, the post inspired us to rethink the way we tackle our current problems. For instance, we try to build a preference model that ranks items according to the known preferences of users. But instead of simply classifying new items, we would like to capture latent topics in known item pairs to be able to perform something like a k-NN search at test time, to better explain why a user might like an item or not. The reason we are working on alternatives is that, because the input space is very sparse and heterogeneous, generalization often fails. In other words, we get good scores for train/test, but unseen items still land on the wrong side of the decision boundary way too often.

To get a better understanding of what’s going wrong, we worked on ways to visualize decision boundaries. It is not hard to believe that a hyperplane is not always the best way to support fine-grained classification. A problem is that the +1 items are not really homogeneous, like some image category, but might contain items from very different categories, and the same is true for the -1 items. Thus, it might be much easier for the network to learn a local neighborhood instead. As a baseline we could use k-NN, but it is well known that the accuracy of the algorithm highly depends on the feature representation, and since our input space consists of highly entangled data, it does not work in the original space.

As a result, we need a neural net to disentangle the representation with some loss function; then we can use k-NN to perform a prediction that is more robust compared to an end-to-end classification. For example, let’s assume that we trained a classification model and we get a positive item x that lies on the wrong side of the decision boundary. However, there is also a wrongly classified positive item x' that is pretty close to x, and the next negative item x'' is farther away than x': dist(x, x'') > dist(x, x'). Therefore, while the classification of x would be -1, a 1-NN model would still get it right because we consider the neighborhood of x.

Of course it’s not always that trivial, but instead of pushing every category into a single corner of the feature space, we can make the life of the neural net easier by just separating positive and negative items by a fixed margin. This is nothing more than the good old triplet loss, but we still have one challenge to address!

The problem is that not all +1 items come from the same distribution. Thus, if we just sample uniformly, we likely sample items with different topics and force them to be close together in the feature space. This still works, but in the end we probably just learn a representation that is linearly separable again. So, the question is how to preserve the local neighborhood in each +1/-1 category. We could cheat by using tags or meta-data, or we could also extract topics with an NMF, as sketched below.
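
For instance, a hedged sketch of the NMF variant (using scikit-learn and random dummy data; the grouping is then used to sample anchors and positives from the same topic):

import numpy as np
from sklearn.decomposition import NMF

X_pos = np.random.rand(500, 2816)                   # non-negative features of the +1 items (dummy data)
topics = NMF(n_components=10).fit_transform(X_pos)  # (500, 10) topic weights per item
topic_of = topics.argmax(axis=1)                    # dominant topic per item

# triplet sampling: anchor and positive come from the same topic, the negative from the -1 items,
# so local neighborhoods inside the +1 category are preserved instead of being collapsed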

Of course the new expressive power won’t come for free. The beauty of a classifier is that we can make a prediction by just feeding the item to the network and getting the score for it. In case of a k-NN-based method, we also need to store labeled training data and perform a lookup each time a new item needs to be labeled. But depending on how much labeled training data is required for a good score, the overhead is often negligible, since we just need to store a matrix of size (N x dim) and perform a single matrix multiplication, followed by sorting the scores. And if all items have unit norm, this corresponds to the cosine score.
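
A small sketch of this lookup (dummy data; the top-5 vote is just one possible decision rule):

import torch
import torch.nn.functional as F

N, dim = 10000, 16
memory = F.normalize(torch.randn(N, dim), dim=-1)     # stored embeddings of labeled training items
labels = torch.randint(0, 2, (N,)) * 2 - 1            # +1 / -1 labels (dummy)
item = F.normalize(torch.randn(dim), dim=-1)          # embedding of a new item

scores = memory.mv(item)                              # one matrix-vector product = cosine scores
order = torch.argsort(scores, descending=True)        # argument sorting
prediction = labels[order[:5]].float().mean().sign()  # majority vote over the 5 nearest neighbors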

Of course we are not done here, but we have a strong feeling that pursuing this path will lead somewhere, even if it takes a lot of steps until we arrive.

[1] colah.github.io/posts/2014-03-NN-Manifolds-Topology/