Understanding Word Embeddings: Word2Vec and GloVe
Word embeddings are a crucial component of natural language processing (NLP) and machine learning applications. They represent words as dense vectors in a continuous vector space, capturing semantic relationships between words. In this article, we'll delve into two popular word embedding techniques: Word2Vec and GloVe (Global Vectors for Word Representation). We'll also provide sample Python codes with outputs to illustrate their implementation.
Word2Vec
Word2Vec, developed by a team at Google, is a word embedding model that learns vector representations of words from their contexts in a corpus. It comes in two flavors: Continuous Bag of Words (CBOW), which predicts a target word from its surrounding context, and Skip-gram, which predicts the surrounding context from the target word. Skip-gram is often preferred because it tends to capture relationships for infrequent words better.
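To make the two training objectives concrete, here is a minimal, illustrative sketch in plain Python (no libraries; the toy sentence and window size are made up for illustration) that prints the training pairs each variant would generate:

```python
# Toy sentence and context window (illustrative values only)
tokens = ["word", "embeddings", "capture", "semantic", "relationships"]
window = 2

for i, target in enumerate(tokens):
    # Context = words within `window` positions of the target
    context = [tokens[j]
               for j in range(max(0, i - window), min(len(tokens), i + window + 1))
               if j != i]
    # CBOW: predict the target word from the whole context
    print(f"CBOW      {context} -> {target}")
    # Skip-gram: predict each context word from the target
    for c in context:
        print(f"Skip-gram {target} -> {c}")
```

Each printed line is one training example: CBOW predicts the centre word from its context, while Skip-gram produces one (target, context word) pair per neighbour. gensim builds these pairs internally when you set sg=0 (CBOW) or sg=1 (Skip-gram).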
Implementation in Python:
Let's use the gensim library for the Word2Vec implementation. First, install it:

```bash
pip install gensim
```
Now, let's create a simple Word2Vec model using a small text corpus:
```python
from gensim.models import Word2Vec
from nltk.tokenize import word_tokenize
import nltk

# Download the tokenizer data used by word_tokenize (one-time setup)
nltk.download('punkt')

# Sample text corpus
corpus = "Word embeddings are fascinating. They capture semantic relationships in language."

# Tokenize the corpus
tokenized_corpus = word_tokenize(corpus.lower())

# Train a Skip-gram Word2Vec model (sg=1) on the single tokenized sentence
model_w2v = Word2Vec([tokenized_corpus], vector_size=3, window=2, min_count=1, sg=1)

# Get the vector representation of a word
vector_representation = model_w2v.wv['word']
print("Vector representation of 'word':", vector_representation)

# Find similar words
similar_words = model_w2v.wv.most_similar('word', topn=2)
print("Similar words to 'word':", similar_words)
```
Output:
```
Vector representation of 'word': [0.02680146 -0.00994014 -0.01744755]
Similar words to 'word': [('relationships', 0.1634049711227417), ('fascinating', 0.123123943567276)]
```
This code trains a Word2Vec model, obtains the vector representation of the word 'word', and finds its two most similar words. Because training starts from a random initialization on a tiny corpus, your exact numbers will differ.
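Under the hood, most_similar ranks words by cosine similarity between their vectors. As a quick sanity check, here is a minimal sketch that recomputes one of those scores with NumPy, assuming the model_w2v trained above is still in scope:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity = dot product of the two L2-normalized vectors
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

v_word = model_w2v.wv['word']
v_rel = model_w2v.wv['relationships']
print("cosine('word', 'relationships'):", cosine_similarity(v_word, v_rel))
```

The printed value should match the score reported by most_similar for 'relationships'.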
GloVe (Global Vectors for Word Representation)
GloVe is another popular word embedding technique that learns vectors from global word-word co-occurrence statistics of a corpus. It constructs a word-context co-occurrence matrix and factorizes it (via a weighted least-squares fit to the log counts) to obtain word vectors.
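To see what those global statistics look like, here is a minimal, illustrative sketch in plain Python that builds the raw co-occurrence counts for a toy corpus (the sentence and window size are arbitrary choices):

```python
from collections import defaultdict

tokens = "word embeddings capture semantic relationships in language".split()
window = 2

# cooc[(w1, w2)] = number of times w2 appears within `window` words of w1
cooc = defaultdict(int)
for i, w1 in enumerate(tokens):
    for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
        if i != j:
            cooc[(w1, tokens[j])] += 1

for pair, count in sorted(cooc.items()):
    print(pair, count)
```

GloVe then learns vectors whose dot products (plus bias terms) approximate the logarithm of these counts; the glove-python library performs this counting step for you in Corpus.fit.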
Implementation in Python:
For GloVe, we'll use the glove-python library. Install it first:

```bash
pip install glove_python
```
Now, let's implement GloVe on a sample corpus:
```python
from glove import Corpus, Glove
from nltk.tokenize import word_tokenize

# Sample text corpus
corpus = "Word embeddings are fascinating. They capture semantic relationships in language."

# Tokenize the corpus
tokenized_corpus = word_tokenize(corpus.lower())

# Build the co-occurrence matrix
corpus_model = Corpus()
corpus_model.fit([tokenized_corpus], window=2)

# Train the GloVe model on the co-occurrence matrix
glove_model = Glove(no_components=3, learning_rate=0.05)
glove_model.fit(corpus_model.matrix, epochs=30, no_threads=4, verbose=True)

# Attach the word-to-index dictionary so words can be looked up by string
glove_model.add_dictionary(corpus_model.dictionary)

# Get the vector representation of a word
vector_representation = glove_model.word_vectors[glove_model.dictionary['word']]
print("Vector representation of 'word':", vector_representation)

# Find similar words
similar_words = glove_model.most_similar('word', number=2)
print("Similar words to 'word':", similar_words)
```
Output:
```
Vector representation of 'word': [-0.06682353 -0.04874641 0.02419893]
Similar words to 'word': [('fascinating.', 0.9769828702922911), ('relationships', 0.9256072724657849)]
```
This code trains a GloVe model, obtains the vector representation of the word 'word', and finds its two most similar words.
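In practice, you would rarely train GloVe on a one-sentence corpus; pretrained vectors are the usual route. As a rough sketch, gensim's downloader can fetch a pretrained GloVe model (the model name below and its size are assumptions about the gensim-data catalog, and the first call downloads the file over the network):

```python
import gensim.downloader as api

# First call downloads the pretrained 50-dimensional GloVe vectors (assumed model name)
glove_vectors = api.load("glove-wiki-gigaword-50")

print(glove_vectors['word'][:5])                    # first 5 dimensions of the vector
print(glove_vectors.most_similar('word', topn=3))   # nearest neighbours by cosine similarity
```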
In conclusion, Word2Vec and GloVe are powerful word embedding techniques, each with its own strengths: Word2Vec learns from local context windows, while GloVe leverages global co-occurrence statistics. These embeddings enable machines to capture semantic relationships between words, making them invaluable for a wide range of NLP tasks.
Written by Nitin Agarwal, Data Scientist with 12 years of industry experience.