Processing before deep neural networks
Before feeding the data into any neural network, we must first tokenize it and then convert it to sequences. For this purpose, we use the Keras Tokenizer provided with TensorFlow, setting a vocabulary limit of 200,000 words and a maximum sequence length of 40. Any question longer than 40 words is consequently truncated to 40 words:
import tensorflow as tf

Tokenizer = tf.keras.preprocessing.text.Tokenizer
pad_sequences = tf.keras.preprocessing.sequence.pad_sequences

tk = Tokenizer(num_words=200000)
max_len = 40
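To see what pad_sequences will do to the questions, here is a small toy example that is not part of the pipeline: sequences shorter than maxlen are padded with zeros and longer ones are truncated (with the Keras defaults, both padding and truncation happen at the start of the sequence):
demo = pad_sequences([[1, 2], [1, 2, 3, 4, 5]], maxlen=3)
print(demo)
# [[0 1 2]
#  [3 4 5]]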
After setting up the Tokenizer, tk, we fit it on the concatenated list of the first and second questions, thus learning all the word terms present in the learning corpus:
# Fit the tokenizer on both question columns so the vocabulary covers all questions
tk.fit_on_texts(list(df.question1) + list(df.question2))
# Convert each question to a sequence of word indices, padded/truncated to max_len
x1 = tk.texts_to_sequences(df.question1)
x1 = pad_sequences(x1, maxlen=max_len)
x2 = tk.texts_to_sequences(df.question2)
x2 = pad_sequences(x2, maxlen=max_len)
# Dictionary mapping each word to its integer index
word_index = tk.word_index
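As a quick sanity check (the exact vocabulary size depends on the corpus, so no specific numbers are assumed here), we can inspect the shapes of the resulting arrays and the size of the learned vocabulary:
# x1 and x2 are 2-D arrays of word indices, one row per question
print(x1.shape, x2.shape)    # both (number_of_rows_in_df, 40)
# word_index maps each word seen during fitting to an integer index
print(len(word_index))       # vocabulary size learned from the corpus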
In order to keep track of the work of...