As an example, we will run the HOPE embedding algorithm. We first need to import it from karateclub:
from karateclub import HOPE
Then, similarly to what we would do with scikit-learn, we can create the model instance:
hope = HOPE(dimensions=10)
The dimensions parameter sets the size d of the resulting embedding vectors.
Once the model is created, it can be fitted on a networkx graph:
hope.fit(G)
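Here, G is any networkx graph. Its construction is not shown in this excerpt; as an assumption consistent with the 34-node matrix mentioned below, you could use Zachary's Karate Club graph:
import networkx as nx
G = nx.karate_club_graph()  # 34-node example graph (an assumption, not from the original)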
To extract the embedding from the fitted model, we use the get_embedding method:
embeddings = hope.get_embedding()
And here we are: you can check that the embeddings variable is a matrix of size 34 (the number of nodes) × 10 (the embedding dimension).
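As a quick sanity check (a minimal sketch, assuming embeddings is the NumPy array returned above):
print(embeddings.shape)  # expected output: (34, 10)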
Finally, we can try to visualize the embedding vectors. Since visualization is easier in a two-dimensional space, a Principal Component Analysis (PCA) is performed on the embedding to reduce its dimensionality to 2 before plotting:
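A minimal sketch of this step, using scikit-learn's PCA and matplotlib (assumed here; the original plotting code is not shown):
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

# Project the 10-dimensional embedding onto its first two principal components
pca = PCA(n_components=2)
embeddings_2d = pca.fit_transform(embeddings)

# Scatter plot of the nodes in the reduced two-dimensional space
plt.scatter(embeddings_2d[:, 0], embeddings_2d[:, 1])
plt.xlabel("First principal component")
plt.ylabel("Second principal component")
plt.show()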
In order to evaluate the quality of the embedding, let's draw the...