High-performance vector DB with filtering
A Rust-based vector database optimized for filtered similarity search. Supports named vectors, JSON payloads, and hybrid search. Self-hostable or available as a managed cloud service.
The memory layer — stores and retrieves vector embeddings for RAG and semantic search
AIchitect's Genome scanner detects Qdrant in your project via these signals:
`@qdrant/js-client-rest`, `@qdrant/qdrant-js`, `qdrant-client`, `QDRANT_URL`, `QDRANT_API_KEY`

LangChain has a native Qdrant vectorstore integration: pass a Qdrant client and it handles embedding storage and similarity search.
→ Semantic retrieval inside any LangChain chain or agent without writing custom retrieval code.
LlamaIndex stores and retrieves document embeddings from Qdrant via its QdrantVectorStore adapter inside a VectorStoreIndex.
→ Production-grade semantic retrieval with Qdrant's filtered search and payload metadata inside LlamaIndex pipelines.
Haystack has a native Qdrant document store integration — Qdrant becomes a retrieval backend in Haystack pipelines.
→ Production-grade vector retrieval inside Haystack pipelines using Qdrant's filtered search and payload storage.
Dify connects to a self-hosted Qdrant instance as its knowledge base vector store — documents are chunked, embedded, and stored in Qdrant.
→ Self-hosted knowledge retrieval inside Dify workflows, keeping document data on your own infrastructure.
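Concretely, a self-hosted deployment points Dify's Docker environment at the Qdrant instance. The variable names below are assumed from Dify's Docker `.env` template and should be verified against your Dify version.

```ini
# docker/.env for self-hosted Dify (variable names assumed; verify per version)
VECTOR_STORE=qdrant
QDRANT_URL=http://qdrant:6333
QDRANT_API_KEY=<your-qdrant-api-key>
```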
Apps built with the Vercel AI SDK call Qdrant directly for retrieval in RAG endpoints, fetching context before passing it to the SDK's `generateText` or `streamText` call.
→ Semantic retrieval in Vercel AI SDK streaming endpoints — context from Qdrant enriches every generation without breaking streaming.
[Explore the full AI landscape](https://aichitect.dev/tool/qdrant)
See how Qdrant fits into the bigger picture — browse all 207 tools and their relationships.