30

I would like to build an LSTM network and feed it input arrays of different sizes. How is that possible?

For example, I want to take voice messages or text messages in different languages and translate them. So the first input might be "hello", but the second might be "how are you doing". How can I design an LSTM that can handle input arrays of different sizes?

I am using the Keras implementation of LSTM.

user3486308

2 Answers

47

The easiest way is to use Padding and Masking.

There are three general ways to handle variable-length sequences:

  1. Padding and masking (which can be used for (3)),
  2. Batch size = 1, and
  3. Batch size > 1, with equi-length samples in each batch.

Padding and masking

In this approach, we pad the shorter sequences with a special value to be masked (skipped) later. For example, suppose each timestamp has dimension 2, and -10 is the special value, then

X = [
    [[1, 1.1], [0.9, 0.95]],                 # sequence 1 (2 timestamps)
    [[2, 2.2], [1.9, 1.95], [1.8, 1.85]],    # sequence 2 (3 timestamps)
]

will be converted to

X2 = [
    [[1, 1.1], [0.9, 0.95], [-10, -10]],     # padded sequence 1 (3 timestamps)
    [[2, 2.2], [1.9, 1.95], [1.8, 1.85]],    # sequence 2 (3 timestamps)
]

This way, all sequences have the same length. Then, we use a Masking layer that skips those special timestamps as if they did not exist. A complete example is given at the end.

For cases (2) and (3), you need to set the sequence-length (timesteps) dimension of the LSTM input to None, e.g.

model.add(LSTM(units, input_shape=(None, dimension)))

This way, the LSTM accepts batches with different lengths, although the samples inside each batch must have the same length. Then, you need to feed a custom batch generator to model.fit_generator (instead of model.fit).

I have provided a complete example for the simple case (2) (batch size = 1) at the end. Based on that example, you should be able to build a generator for case (3) (batch size > 1); a sketch of option (a) is given below. Specifically, we either (a) return batch_size sequences of the same length, or (b) select sequences with almost the same length, pad the shorter ones as in case (1), and use a Masking layer before the LSTM layer to ignore the padded timestamps, e.g.

model.add(Masking(mask_value=special_value, input_shape=(None, dimension)))
model.add(LSTM(lstm_units))

where the first dimension of input_shape in Masking is again None, to allow batches with different lengths.
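A possible sketch for option (a) follows (this generator class is an added illustration, not part of the original code; the name and grouping logic are assumptions): group the sample indices by sequence length, so that every batch contains only equi-length sequences, and draw each batch from a single group.

from keras.utils import Sequence
import numpy as np

class EqualLengthBatchGenerator(Sequence):
    'Case (3), option (a): every batch contains sequences of one single length'
    def __init__(self, X, y, batch_size=32):
        self.X, self.y, self.batch_size = X, y, batch_size
        # bucket sample indices by sequence length
        groups = {}
        for i, x in enumerate(X):
            groups.setdefault(x.shape[0], []).append(i)
        # split each bucket into chunks of at most batch_size indices
        self.batches = [idx[i:i + batch_size]
                        for idx in groups.values()
                        for i in range(0, len(idx), batch_size)]

    def __len__(self):
        return len(self.batches)

    def __getitem__(self, index):
        idx = self.batches[index]
        Xb = np.stack([self.X[i] for i in idx])  # all sequences here have the same length
        yb = np.stack([self.y[i] for i in idx])
        return Xb, yb

Such a generator can be passed to model.fit_generator in the same way as MyBatchGenerator in the complete example below.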

Here is the code for cases (1) and (2):

from keras import Sequential
from keras.utils import Sequence
from keras.layers import LSTM, Dense, Masking
import numpy as np

class MyBatchGenerator(Sequence):
    'Generates data for Keras'

    def __init__(self, X, y, batch_size=1, shuffle=True):
        'Initialization'
        self.X = X
        self.y = y
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.on_epoch_end()

    def __len__(self):
        'Denotes the number of batches per epoch'
        return int(np.floor(len(self.y)/self.batch_size))

    def __getitem__(self, index):
        return self.__data_generation(index)

    def on_epoch_end(self):
        'Shuffles indexes after each epoch'
        self.indexes = np.arange(len(self.y))
        if self.shuffle == True:
            np.random.shuffle(self.indexes)

    def __data_generation(self, index):
        Xb = np.empty((self.batch_size, *self.X[index].shape))
        yb = np.empty((self.batch_size, *self.y[index].shape))
        # naively use the same sample over and over again within a batch
        for s in range(0, self.batch_size):
            Xb[s] = self.X[index]
            yb[s] = self.y[index]
        return Xb, yb


# Parameters
N = 1000
halfN = int(N/2)
dimension = 2
lstm_units = 3

# Data
np.random.seed(123)  # to generate the same numbers
# create sequence lengths between 1 and 9
seq_lens = np.random.randint(1, 10, halfN)
X_zero = np.array([np.random.normal(0, 1, size=(seq_len, dimension)) for seq_len in seq_lens])
y_zero = np.zeros((halfN, 1))
X_one = np.array([np.random.normal(1, 1, size=(seq_len, dimension)) for seq_len in seq_lens])
y_one = np.ones((halfN, 1))
p = np.random.permutation(N)  # to shuffle zero and one classes
X = np.concatenate((X_zero, X_one))[p]
y = np.concatenate((y_zero, y_one))[p]

# Batch size = 1
model = Sequential()
model.add(LSTM(lstm_units, input_shape=(None, dimension)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model.summary())
model.fit_generator(MyBatchGenerator(X, y, batch_size=1), epochs=2)

# Padding and masking
special_value = -10.0
max_seq_len = max(seq_lens)
Xpad = np.full((N, max_seq_len, dimension), fill_value=special_value)
for s, x in enumerate(X):
    seq_len = x.shape[0]
    Xpad[s, 0:seq_len, :] = x

model2 = Sequential()
model2.add(Masking(mask_value=special_value, input_shape=(max_seq_len, dimension)))
model2.add(LSTM(lstm_units))
model2.add(Dense(1, activation='sigmoid'))
model2.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model2.summary())
model2.fit(Xpad, y, epochs=50, batch_size=32)

Extra notes

  1. Note that if we pad without masking, the padded value will be treated as an actual value and thus becomes noise in the data. For example, a padded temperature sequence [20, 21, 22, -10, -10] would look the same as a sensor report with two noisy (wrong) measurements at the end. The model may learn to ignore this noise completely or at least partially, but it is reasonable to clean the data first, i.e. use a mask; a quick check of this is sketched below.
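As a quick sanity check of the masking behavior (the snippet below is an added illustration, not part of the original example), an LSTM placed behind a Masking layer produces the same output for a padded sequence and for its unpadded original:

from keras import Sequential
from keras.layers import LSTM, Masking
import numpy as np

special_value, dimension, lstm_units = -10.0, 2, 3           # same values as in the example above

x_short  = np.array([[[1, 1.1], [0.9, 0.95]]])               # shape (1, 2, 2): no padding
x_padded = np.array([[[1, 1.1], [0.9, 0.95], [-10, -10]]])   # shape (1, 3, 2): one padded timestamp

check = Sequential()
check.add(Masking(mask_value=special_value, input_shape=(None, dimension)))
check.add(LSTM(lstm_units))

# True: the masked timestamp is skipped, so both inputs yield the same output
print(np.allclose(check.predict(x_short), check.predict(x_padded)))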
Stephen Rauch
Esmailian
  • Wow a great answer! It's called bucketing, right? – Aditya Apr 08 '19 at 03:39
  • @Aditya Thanks Aditya! I think bucketing is the partitioning of a large sequence into smaller chunks, but sequences in each batch are not necessarily chunks of the same (larger) sequence; they can be independent data points. – Esmailian Apr 08 '19 at 11:23
  • It looks like for #1 Padding and Masking, in your code, you pad to the right, by adding -10 (the padding character) to the end. Keras's sequence.pad_sequences function pads to the left or the beginning by default. I'm wondering if it matters whether we pad to the left or the right...would you know? – flow2k Aug 19 '19 at 00:14
  • @flow2k It does not matter, pads are completely ignored. Take a look at this question. – Esmailian Aug 19 '19 at 16:33
  • Thanks @Esmailian - just what I was looking for. On another note, I was investigating how to make this work when there is an Embedding layer before the LSTM layer. It seems we can't use the Masking layer before that, since Embedding must be the first layer. But it turns out Embedding supports using the integer 0 as a special value, with the mask_zero argument (a sketch is given after these comments): https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding#aliases – flow2k Aug 21 '19 at 06:57
  • I have desperately been trying to implement this with batch size > 1 and have not been able to. Similarly, I have tried tensorflow.keras's data.experimental.bucket_by_sequence_length and have not been able to either. Would you be willing to expand on this answer for batch size > 1 if I post a question? – funmath Mar 09 '21 at 18:40
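Following up on the comment about Embedding above, here is a minimal sketch of that approach (the vocabulary size, dimensions, and token ids below are made up for illustration): with mask_zero=True, the Embedding layer generates the mask itself, so index 0 is reserved for padding and no separate Masking layer is needed.

from keras import Sequential
from keras.layers import Embedding, LSTM, Dense
from keras.preprocessing.sequence import pad_sequences

vocab_size, embed_dim, lstm_units = 10000, 64, 32            # made-up sizes
tokens = pad_sequences([[12, 7, 256], [3, 9]], maxlen=5)     # zero-padded integer sequences

model = Sequential()
model.add(Embedding(vocab_size, embed_dim, mask_zero=True))  # index 0 is treated as padding and masked
model.add(LSTM(lstm_units))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop')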
4

We can use LSTM layers with inputs of different sizes, but you need to process them before they are fed to the LSTM.

Padding the sequences:

You need to pad the sequences of varying length to a fixed length. For this preprocessing, you need to determine the maximum sequence length in your dataset.

The sequences are usually padded with the value 0. You can do this in Keras with:

y = keras.preprocessing.sequence.pad_sequences(x, maxlen=10)
  • If the sequence is shorter than the max length, zeros are added (at the beginning, by default) until its length equals the max length.

  • If the sequence is longer than the max length, it is trimmed (from the beginning, by default) to the max length; a small illustration follows below.
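For instance (a small, made-up illustration of the default behavior):

from keras.preprocessing.sequence import pad_sequences

x = [[1, 2, 3], [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]]   # lengths 3 and 11
print(pad_sequences(x, maxlen=10))
# [[ 0  0  0  0  0  0  0  1  2  3]    <- zero-padded at the beginning (padding='pre' is the default)
#  [ 5  6  7  8  9 10 11 12 13 14]]   <- trimmed from the beginning (truncating='pre' is the default)

Pass padding='post' and/or truncating='post' to pad or trim at the end instead.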

Shubham Panchal
  • Padding everything to a fixed length is a waste of space. – Aditya Apr 08 '19 at 03:39
  • I agree with @Aditya, and it incurs computation cost, too. But is it not the case that simplistic padding is still widely used? Keras even has a function just for this. Perhaps this is because other, more efficient and challenging solutions do not provide significant model performance gain? If anyone has experience or has done comparisons, please weigh in. – flow2k Aug 19 '19 at 00:22
  • Actually, padding is the most efficient way, because Keras can then allocate fixed-length tensors and do everything on the GPU without memory misalignment; keeping sequences of different lengths would be less efficient. The best way is to use padding + masking, as explained by Esmailian. – Steve3nto Jul 28 '20 at 16:06