Graph Neural Networks (GNNs) are the current hot topic [1], and the interest is well justified, as GNNs are all about learning latent representations of graphs in vector space.
Representing an entity as a vector is nothing new. There are many examples in NLP, such as word2vec and GloVe embeddings, which transform a word into a vector.
What makes such representations powerful is that:

- these vectors incorporate a notion of similarity, i.e. two words that are similar to each other tend to be closer in the vector space (their dot product is large), as illustrated in the sketch after this list, and
- they are useful in diverse downstream tasks such as classification, clustering, etc.
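To make the similarity point concrete, here is a minimal sketch using plain NumPy with hypothetical 4-dimensional toy vectors (the values are made up for illustration, not taken from a trained word2vec or GloVe model):

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical toy embeddings (a real model would use 100-300 dimensions).
vectors = {
    "cat": np.array([0.9, 0.8, 0.1, 0.0]),
    "dog": np.array([0.8, 0.9, 0.2, 0.1]),
    "car": np.array([0.1, 0.0, 0.9, 0.8]),
}

print(cosine_similarity(vectors["cat"], vectors["dog"]))  # high: similar words
print(cosine_similarity(vectors["cat"], vectors["car"]))  # low: unrelated words
```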
This is what makes GNNs interesting: while there are many solutions to embed a word or an image as a vector, GNNs laid the foundation to do the same for graphs.
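To hint at how a GNN produces such vectors, below is a minimal sketch of one round of neighborhood aggregation (averaging each node's neighbors and then applying a linear transform, in the spirit of a GCN layer). The toy graph, feature sizes, and random weights are all assumptions for illustration, not a real trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy graph: 4 nodes, undirected edges as an adjacency matrix.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

X = rng.normal(size=(4, 8))   # initial node features (4 nodes, 8 dims)
W = rng.normal(size=(8, 8))   # weights (learnable in practice, random here)

# One message-passing step: average each node's neighborhood (plus itself),
# then apply the linear transform and a nonlinearity.
A_hat = A + np.eye(4)                               # add self-loops
A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)   # row-normalize (mean aggregation)
H = np.maximum(A_norm @ X @ W, 0)                   # ReLU(aggregate -> transform)

print(H.shape)  # (4, 8): one latent vector per node; pooling these embeds the graph
```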