Humans intuitively understand the meaning of words: which words are similar, which are opposites, and which are related to each other. Our machine learning models do not have this intuition. Word embeddings are numeric vectors that represent text, typically learned with neural networks. The objective when training these vectors is to capture as much “meaning” as possible: related words should lie closer together than unrelated words, and the vectors should preserve mathematical relationships between words, such as the well-known analogy king − man + woman ≈ queen.
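To make this concrete, here is a minimal sketch of both properties, assuming the gensim library and its downloader for pretrained GloVe vectors. The model name `glove-wiki-gigaword-50` and the example words are illustrative choices, not something fixed by this post:

```python
# Sketch: the "closeness" and analogy properties of word embeddings,
# using pretrained 50-dimensional GloVe vectors via gensim's downloader.
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")  # downloads the vectors on first run

# Related words should be closer together than unrelated words
# (cosine similarity: higher means more similar).
print(model.similarity("king", "queen"))   # expected: high
print(model.similarity("king", "carrot"))  # expected: low

# Vector arithmetic preserves relationships:
# vec(king) - vec(man) + vec(woman) should land near vec(queen).
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```

Here `most_similar` performs exactly the king − man + woman arithmetic and returns the nearest remaining word, which for these vectors is typically “queen”.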

