Unlocking Sentence Similarity: A Practical Guide to Enhancing Language Models

Introduction


Understanding how similar two sentences are is at the core of building intelligent language models. Whether it's powering chatbots, enhancing search engines, or summarizing documents, accurately measuring sentence similarity ensures that models grasp the nuances of human language. As discussed in our previous explorations of word embedding and sentence embedding, machines interpret text by transforming it into numerical representations. This post dives into practical and effective methods to evaluate sentence similarity, providing a step-by-step guide to equip you with the tools to enhance your language models' performance.

Dot Product Similarity

To understand dot product similarity, let's consider a simple example. Imagine a dataset of four sentences, each represented by a 2-dimensional embedding — similar to coordinates on a Cartesian plane, where each sentence is assigned two values (one for the x-axis and one for the y-axis). The embeddings for the sentences are as follows:

  • Lagos is the commercial capital of Nigeria – [6, 5]

  • Cairo is the capital of Egypt – [7, 4]

  • Away – [7, 0]

  • I’ve got a job – [0, 5]

By analyzing the first value of each embedding, you’ll notice that the first three sentences have high x-axis values, while the fourth sentence, I’ve got a job, has an x-axis value of 0. This suggests that the first three sentences share a common feature – they describe locations, whereas the fourth does not. On the other hand, the second value (y-axis) is high for all sentences except for Away, which has a value of 0. This could indicate that the second dimension represents a different characteristic, such as population or significance, which Away lacks.

Sentence | Value 1 (x) | Value 2 (y)
Lagos is the commercial capital of Nigeria | 6 | 5
Cairo is the capital of Egypt | 7 | 4
Away | 7 | 0
I’ve got a job | 0 | 5

Now, let's calculate the similarity between the first and second sentences, as well as the third and fourth sentences. At first glance, it’s clear that the first two sentences share a strong similarity, while the third and fourth sentences differ significantly. To quantify this, we need to establish a similarity score — one that is high for the pair [Lagos is the commercial capital of Nigeria, Cairo is the capital of Egypt] and low for the pair [Away, I’ve got a job].

This can be achieved by computing the dot product of their embeddings:

  • Dot product for [Away, I’ve got a job]:
    7×0 + 0×5 = 0

  • Dot product for [Lagos is the commercial capital of Nigeria, Cairo is the capital of Egypt]:
    6×7 + 5×4 = 42 + 20 = 62

The results show that the first two sentences exhibit a much higher similarity score (62) compared to the last two sentences (0). This demonstrates that Lagos is the commercial capital of Nigeria and Cairo is the capital of Egypt are more closely related, while Away and I’ve got a job share little to no similarity.
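
To make this concrete, here is a minimal Python sketch written for this post (not code from the Cohere lesson) that computes the same two dot products with NumPy. The dictionary and function names are illustrative choices; the vectors are the toy 2-dimensional embeddings from the table above.

```python
import numpy as np

# Toy 2-dimensional embeddings from the table above
embeddings = {
    "Lagos is the commercial capital of Nigeria": np.array([6, 5]),
    "Cairo is the capital of Egypt": np.array([7, 4]),
    "Away": np.array([7, 0]),
    "I've got a job": np.array([0, 5]),
}

def dot_product_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of the element-wise products of two embedding vectors."""
    return float(np.dot(a, b))

# Similar pair: high score
print(dot_product_similarity(
    embeddings["Lagos is the commercial capital of Nigeria"],
    embeddings["Cairo is the capital of Egypt"],
))  # 62.0

# Unrelated pair: score of zero
print(dot_product_similarity(embeddings["Away"], embeddings["I've got a job"]))  # 0.0
```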

Cosine Similarity

Another effective method for measuring sentence similarity is to evaluate the angle between their vector representations (embeddings). Three cases illustrate the idea:

  • Similar sentences have embeddings that form a small angle, indicating strong similarity.

  • Unrelated sentences exhibit an angle close to 90 degrees, reflecting little to no relationship.

  • Opposite or contrasting sentences form an angle near 180 degrees, suggesting complete dissimilarity.

In mathematical terms, when the angle between two vectors is small, the cosine of that angle approaches 1, as in the first case. As the angle grows, the cosine decreases: it reaches 0 at 90 degrees and approaches -1 near 180 degrees, as in the second and third cases. This makes cosine similarity a powerful tool for quantifying the relationship between sentences in large language models.
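
As a companion to the dot product sketch above, here is a minimal Python example, again our own illustration rather than code from the Cohere lesson, that computes cosine similarity as the dot product divided by the product of the two vector norms, using the same toy embeddings.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: dot product over the product of their norms."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

lagos = np.array([6, 5])  # Lagos is the commercial capital of Nigeria
cairo = np.array([7, 4])  # Cairo is the capital of Egypt
away = np.array([7, 0])   # Away
job = np.array([0, 5])    # I've got a job

print(round(cosine_similarity(lagos, cairo), 2))  # ~0.98: small angle, very similar
print(round(cosine_similarity(away, job), 2))     # 0.0: 90-degree angle, unrelated
```

Unlike the raw dot product, this score always falls between -1 and 1, which makes it easy to interpret: values near 1 indicate similar sentences, values near 0 indicate unrelated ones, and negative values indicate contrasting ones.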

Conclusion

Measuring sentence similarity is a foundational step in building more intelligent and nuanced language models. By leveraging techniques like the dot product and cosine similarity, we can translate textual relationships into quantifiable metrics, enabling models to better understand and process human language. The dot product offers a straightforward way to evaluate similarity based on the magnitudes of the vector components, while cosine similarity provides a more robust measure by focusing on the angle between embeddings, so that even sentences of varying lengths, whose embeddings may differ in magnitude, can be compared effectively.

As language models continue to evolve, mastering these similarity techniques will not only enhance model performance but also open new possibilities in areas like information retrieval, machine translation, and text summarization. By consistently refining our understanding of sentence embeddings and similarity metrics, we lay the groundwork for more accurate and human-like AI systems.

References

Cohere, "What is Similarity Between Sentences?," LLM University. [Online]. Available: https://cohere.com/llmu/what-is-similarity-between-sentences