This paper discusses the theory and practice of vector embeddings for structured data, representations that are central to machine learning and data analysis. It reviews embedding methods ranging from hand-crafted feature vectors to learned embeddings, and argues for a theoretical perspective, focusing in particular on the Weisfeiler-Lehman algorithm and homomorphism vectors. The paper then identifies key theoretical questions concerning the expressivity, computational complexity, and dimension of the latent spaces arising in embeddings of graphs and relational structures.
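To make the Weisfeiler-Lehman algorithm mentioned above concrete, the following is a minimal sketch of one-dimensional WL color refinement, the procedure underlying many graph embeddings. The function names (`wl_colors`, `wl_histogram`) and the fixed round count are illustrative choices, not taken from the paper:

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """1-dimensional Weisfeiler-Lehman (color refinement).
    adj: dict mapping each node to a list of its neighbors."""
    colors = {v: 0 for v in adj}  # start with a uniform coloring
    for _ in range(rounds):
        # New signature: current color plus the sorted multiset of neighbor colors
        signatures = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                      for v in adj}
        # Compress signatures back to small integer color labels
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return colors

def wl_histogram(adj, rounds=3):
    # The multiset of final colors is a graph invariant: isomorphic graphs
    # always get the same histogram (though some non-isomorphic pairs do too)
    return Counter(wl_colors(adj, rounds).values())

# A triangle and a 3-node path are distinguished by their color histograms
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
assert wl_histogram(triangle) != wl_histogram(path)
```

Counting the colors produced after each round (or, analogously, counting homomorphisms from small pattern graphs) yields exactly the kind of vector representation whose expressivity the paper's theoretical questions address.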