Embedding Models

Vector Search vs Semantic Search


With the growing popularity of large language models (LLMs) like OpenAI's GPT-4, there's been a surge in interest in embedding models and vector search. Yet, there's some confusion surrounding vector similarity search, its capabilities, and its relationship with semantic search. 

To put it simply, vector search and semantic search are interconnected but fundamentally different concepts. Vector search acts as a building block for semantic search, enabling data retrieval based on relevance. In this article, we’ll explore them in more detail and explain their differences. 

Semantic search is all about context and meaning. It employs a blend of natural language processing (NLP) and natural language understanding (NLU) techniques to interpret the nuances, synonyms, and relationships inherent in language. The aim is to deliver search results that are not just textually similar but meaningfully relevant to the user's search intent, even if the exact words used in the query aren't present in the content.

For instance, a search for "climate change effects" could return relevant documents that discuss "global warming impacts," even if the exact phrase isn't used, thanks to the system's semantic understanding of the two phrases.


Now, how do we translate this nuanced understanding into something computers can work with? That's where vector search comes in. Vector search transforms words, sentences, or entire documents into vectors—think of them as points in a multidimensional space. These vectors are not just random points; they're calculated in such a way that similar meanings are positioned closer together. For instance, vectors for "trucks" and "cars" would be neighbors despite being different words.

This transformation is done using embedding models: AI models trained to capture the subtle meanings and relationships between words. When you perform a search, the model converts your query into a vector and then looks for other vectors (documents, web pages, etc.) that are close by in this multidimensional space. The closer they are, the more relevant they're deemed to be.
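To make the "closer means more relevant" idea concrete, here is a minimal sketch using hand-picked three-dimensional vectors as stand-ins for real embeddings (which typically have hundreds or thousands of dimensions). The numbers are illustrative assumptions, not output from an actual embedding model:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Pretend embeddings: "trucks" and "cars" point in similar directions,
# while "bananas" points elsewhere.
embeddings = {
    "trucks":  [0.9, 0.8, 0.1],
    "cars":    [0.8, 0.9, 0.2],
    "bananas": [0.1, 0.2, 0.9],
}

query = embeddings["trucks"]
ranked = sorted(embeddings,
                key=lambda w: cosine_similarity(query, embeddings[w]),
                reverse=True)
print(ranked)  # "cars" ranks above "bananas" for the query "trucks"
```

Ranking by cosine similarity is exactly what a vector search engine does at scale, just with approximate nearest-neighbor indexes instead of a brute-force sort.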


Leveraging Vector Search for Semantic Understanding

So, how does vector search turn into semantic search? It's all about leveraging those embeddings to capture the essence of your query's intent. By analyzing the positions and distances of vectors, we can infer semantic relationships, such as synonyms, related concepts, or even nuanced thematic links between seemingly unrelated terms.

To leverage vector search for semantic search, systems typically follow a multi-step process:

  1. Embedding generation for the content: the content to be searched is transformed into vectors using embedding models.

  2. Storing the content and embeddings in a vector database: both the content and its embedding are stored in a vector database that then allows performant search on the embeddings.

  3. Embedding generation for the query: the query is transformed into a vector using the same embedding model we used for the content.

  4. Retrieving relevant data from the vector database: the database is then asked to return all items whose embeddings are closest to the query's embedding. For this task, the vector database uses a distance function between vectors, such as cosine or Euclidean distance.
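The four steps above can be sketched end to end in a few lines. In this toy version, a character-bigram counter stands in for a real embedding model (it matches on spelling, not meaning) and a plain Python list stands in for a vector database such as PostgreSQL with pgvector:

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': normalized character-bigram counts.
    A real embedding model would capture semantics, not spelling."""
    bigrams = Counter(text[i:i + 2] for i in range(len(text) - 1))
    norm = math.sqrt(sum(c * c for c in bigrams.values()))
    return {bg: c / norm for bg, c in bigrams.items()}

def cosine(a, b):
    # Both vectors are already unit-length, so the dot product is the cosine.
    return sum(v * b.get(k, 0.0) for k, v in a.items())

# Steps 1-2: embed the content and store it alongside its vector.
documents = ["global warming impacts", "quarterly sales report", "melting polar ice"]
store = [(doc, embed(doc)) for doc in documents]

# Steps 3-4: embed the query with the same model, then retrieve the
# stored items ranked by similarity to the query vector.
query_vec = embed("global warming")
results = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
print(results[0][0])  # "global warming impacts"
```

In a production system, steps 1 and 3 would call the same real embedding model, and steps 2 and 4 would be handled by the vector database's indexing and distance operators rather than an in-memory sort.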

Semantic search is a powerful concept that enables much more useful computer systems. Instead of users having to figure out the exact keyword to search for, the system returns relevant content for a much broader range of queries. Vector search, with its ability to process and understand the geometry of meanings, provides the foundation to develop an advanced semantic search system. This synergy not only enhances the accuracy of search results but also makes digital interactions more intuitive and human-like.

Understanding these concepts is crucial, especially for those venturing into the fields of AI and data science. If you’re building AI applications, check out Timescale's open-source PostgreSQL stack for AI applications. It includes pgvector along with two open-source extensions developed by the Timescale team: pgai and pgvectorscale.

While pgai makes it easier for developers to build search and retrieval-augmented generation (RAG) applications by bringing more AI workflows into PostgreSQL, pgvectorscale enables developers to build more scalable AI applications with higher-performance embedding search and cost-efficient storage.

Both extensions are available for you to install in the pgai and pgvectorscale GitHub repositories (GitHub stars are much appreciated!). For a seamless developer experience with greater time series and analytics capabilities, try Timescale Cloud, which provides ready access to pgvector, pgvectorscale, and pgai, plus a fully managed PostgreSQL cloud database experience.