TensorFlow
TensorFlow is a computation library developed by Google. It is quite popular and is used by many companies to build their neural network models. The logic behind augmenting TensorFlow models with fastText is the same as what you have already seen with Keras.
Word embeddings in TensorFlow
To create word embeddings in TensorFlow, you first need an embedding matrix in which every token in your list of documents has a unique ID, so that each document becomes a vector of these IDs. Now, let's say you have an embedding in a NumPy array called word_embedding, with vocab_size rows and embedding_dim columns, and you want to create a tensor W from it. Taking a specific example, the sentence "I have a cat." can be split up into ["I", "have", "a", "cat", "."], and the tensor of the corresponding word_ids will have shape [5]. To map these word IDs to vectors, create the word embedding variable and use the tf.nn.embedding_lookup function:
word_embeddings = tf.get_variable("word_embeddings", [vocab_size, embedding_dim])
embedded_words = tf.nn.embedding_lookup(word_embeddings, word_ids)
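Under the hood, the lookup is simply row indexing into the embedding matrix. The following NumPy sketch illustrates this for the "I have a cat." example; the toy vocabulary, the token IDs, and the random matrix standing in for pretrained fastText vectors are all assumptions for illustration:

```python
import numpy as np

# Assumed toy vocabulary: each token gets a unique ID.
vocab = {"I": 0, "have": 1, "a": 2, "cat": 3, ".": 4}
vocab_size, embedding_dim = len(vocab), 8

# Stand-in for a pretrained embedding matrix: one row per token ID
# (in practice these rows would come from fastText vectors).
rng = np.random.default_rng(0)
word_embedding = rng.standard_normal((vocab_size, embedding_dim))

# The sentence "I have a cat." as a vector of word IDs, shape (5,).
tokens = ["I", "have", "a", "cat", "."]
word_ids = np.array([vocab[t] for t in tokens])

# tf.nn.embedding_lookup is equivalent to selecting rows by ID:
embedded = word_embedding[word_ids]
print(embedded.shape)  # (5, 8)
```

Each of the five rows of embedded is the embedding_dim-dimensional vector for the corresponding token, which is exactly what the TensorFlow lookup returns as a tensor.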