⚠️ RedisVL TypeScript is in active development. APIs may change before the v1.0 release.
Redis Vector Library (RedisVL) is the TypeScript/Node.js client for building AI applications on Redis.
| Core Capabilities | AI Extensions | Dev Utilities |
|---|---|---|
| **Index Management**<br>Schema design, data loading, CRUD ops | **Semantic Caching**<br>Reduce LLM costs & boost throughput | **Vectorizers**<br>8+ embedding provider integrations |
| **Vector Search**<br>Similarity search with metadata filters | **LLM Memory**<br>Agentic AI context management | **Rerankers**<br>Improve search result relevancy |
| **Hybrid Queries**<br>Vector + text + metadata combined | **Semantic Routing**<br>Intelligent query classification | |
| **Multi-Query Types**<br>Vector, Range, Filter, Count queries | **Embedding Caching**<br>Cache embeddings for efficiency | |
RedisVL helps you build production-ready AI applications:
- RAG Pipelines - Combine vector similarity search with metadata filtering to retrieve the most relevant context for your LLMs
- Semantic Caching - Cache LLM responses based on semantic similarity to improve response times and reduce costs
- AI Agents - Give your agents memory that persists across conversations and sessions, plus semantic routing for fast, intelligent decision-making
- Recommendation Systems - Find similar items quickly and rerank results based on user preferences or business logic
Install redisvl into your Node.js (>=22.0.0) environment using npm:
```bash
npm install redisvl
```

Or using yarn:

```bash
yarn add redisvl
```

Or using pnpm:

```bash
pnpm add redisvl
```

Choose from multiple Redis deployment options:
- Redis Cloud: Managed cloud database (free tier available)

- Redis Stack: Docker image for development

  ```bash
  docker run -d --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:latest
  ```

- Redis Enterprise: Commercial, self-hosted database

- Redis Sentinel: High availability with automatic failover

  ```typescript
  // Connect via Sentinel
  const redisUrl = 'redis+sentinel://sentinel1:26379,sentinel2:26379/mymaster';
  ```

- Azure Managed Redis: Fully managed Redis Enterprise on Azure
Enhance your experience and observability with the free Redis Insight GUI.
RedisVL is a TypeScript client for building AI applications on Redis. It sits on top of node-redis and handles the common patterns you need: managing indexes, loading data, generating embeddings, vector search, and techniques like semantic caching and LLM memory to improve the performance of your AI applications at scale.
What it does:
- Schema Management - Define indexes with YAML or objects
- Vector Search - Semantic similarity search with metadata filtering
- Data Operations - Batch loading with validation, TTL, and preprocessing
- Embeddings - Generate vectors with HuggingFace (local, no API key)
- Type Safety - Full TypeScript support
📖 Read the full documentation →
Define your data structure with fields for text, tags, numbers, geo locations, and vectors:
```typescript
import { IndexSchema } from 'redisvl';

const schema = IndexSchema.fromObject({
  index: { name: 'products', prefix: 'product:', storage_type: 'json' },
  fields: [
    { name: 'title', type: 'text' },
    { name: 'category', type: 'tag' },
    { name: 'price', type: 'numeric' },
    {
      name: 'embedding',
      type: 'vector',
      attrs: { algorithm: 'hnsw', dims: 768, distance_metric: 'cosine' },
    },
  ],
});
```

Create and manage search indexes:
```typescript
import { createClient } from 'redis';
import { SearchIndex } from 'redisvl';

const client = createClient();
await client.connect();

const index = new SearchIndex(schema, client);
await index.create();
```

Load documents and retrieve them by key:
```typescript
const documents = [
  { id: '1', title: 'Product A', price: 99 },
  { id: '2', title: 'Product B', price: 149 },
];

// Load with explicit IDs
await index.load(documents, { idField: 'id' });

// Fetch documents
const doc = await index.fetch('1');
const docs = await index.fetchMany(['1', '2']);
```

Learn more about CRUD operations →
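Under the hood, the key each document is stored under combines the schema's `prefix` with the value of `idField` (so with `prefix: 'product:'`, document `'1'` can later be fetched by that id). A minimal sketch of that convention; the helper name `makeKey` is ours for illustration, not a RedisVL API:

```typescript
// Illustrative helper showing how a storage key is formed from the
// schema prefix and a document's id field. Not part of the library.
function makeKey(prefix: string, id: string): string {
  return `${prefix}${id}`;
}

// With the schema above (prefix: 'product:'), document id '1'
// maps to the Redis key 'product:1'.
console.log(makeKey('product:', '1')); // product:1
```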
Perform semantic similarity search:
```typescript
import { VectorQuery } from 'redisvl';

// Create query
const query = new VectorQuery({
  vector: embedding,
  vectorField: 'embedding',
  filter: '@category:{electronics}',
  numResults: 10,
});

// Execute search
const results = await index.search(query);
results.documents.forEach((doc) => {
  console.log(`${doc.value.title} (score: ${doc.score})`);
});
```

Learn more about vector search →
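Because the schema above declares `distance_metric: 'cosine'`, each result's score reflects cosine distance, i.e. 1 - cosine similarity (0 for vectors pointing the same way, 1 for orthogonal vectors). A quick sketch of that math in plain TypeScript, not library code:

```typescript
// Cosine distance between two vectors, matching the semantics of a
// vector field declared with distance_metric: 'cosine'. Plain math,
// not a RedisVL API.
function cosineDistance(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineDistance([1, 0], [1, 0])); // 0 (same direction)
console.log(cosineDistance([1, 0], [0, 1])); // 1 (orthogonal)
```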
Generate embeddings for semantic search:
```typescript
import { HuggingFaceVectorizer } from 'redisvl';

const vectorizer = new HuggingFaceVectorizer({
  model: 'Xenova/all-MiniLM-L6-v2',
});

const embedding = await vectorizer.embed('Hello world');

// Use with data loading
await index.load(documents, {
  preprocess: async (doc) => ({
    ...doc,
    embedding: await vectorizer.embed(doc.content),
  }),
});
```

Learn more about vectorizers →
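Embedding caching (listed among the dev utilities above) exists because embedding the same text twice wastes a model call: conceptually it is memoization keyed on the input text. A toy sketch of the idea, using our own names and a stubbed embedder rather than the library's API:

```typescript
// Toy embedding cache: memoize an async embed function on its input
// text. Illustrative only; not RedisVL's implementation.
type EmbedFn = (text: string) => Promise<number[]>;

function withCache(embed: EmbedFn): { embed: EmbedFn; calls: () => number } {
  const cache = new Map<string, number[]>();
  let calls = 0;
  return {
    embed: async (text: string) => {
      const hit = cache.get(text);
      if (hit) return hit;
      calls++; // only count real embedder invocations
      const vec = await embed(text);
      cache.set(text, vec);
      return vec;
    },
    calls: () => calls,
  };
}

// Stub embedder: deterministic vector derived from character codes.
const fakeEmbed: EmbedFn = async (text) =>
  [...text].map((c) => c.charCodeAt(0) / 255);

const cached = withCache(fakeEmbed);
await cached.embed('hello'); // misses the cache, calls the embedder
await cached.embed('hello'); // served from the cache
console.log(cached.calls()); // 1
```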
- Hybrid Search - Combine vector, text, and numeric filters
- Range Queries - Vector search within distance range
- Semantic Caching - Cache LLM responses by similarity
- LLM Memory - Context management for AI agents
- Semantic Routing - Intent-based query classification
- More Vectorizers - OpenAI, Cohere, Azure, VertexAI
- Rerankers - Improve search result relevancy
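To make the semantic-caching idea above concrete: a cache hit is not an exact string match but a nearest-neighbor lookup over query embeddings, accepted when the distance falls under a threshold. A self-contained sketch in plain TypeScript; the class name and the 0.2 threshold are ours for illustration, not the library's API:

```typescript
// Minimal semantic-cache sketch: store (embedding, response) pairs and
// answer a new query from cache when its embedding is within a cosine
// distance threshold of a stored one. Illustrative only.
type Entry = { vec: number[]; response: string };

function cosineDistance(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(na) * Math.sqrt(nb));
}

class ToySemanticCache {
  private entries: Entry[] = [];
  constructor(private threshold = 0.2) {}

  set(vec: number[], response: string): void {
    this.entries.push({ vec, response });
  }

  // Return the cached response of the nearest entry, if close enough.
  get(vec: number[]): string | undefined {
    let best: Entry | undefined;
    let bestDist = Infinity;
    for (const e of this.entries) {
      const d = cosineDistance(vec, e.vec);
      if (d < bestDist) { bestDist = d; best = e; }
    }
    return bestDist <= this.threshold ? best?.response : undefined;
  }
}

const cache = new ToySemanticCache(0.2);
cache.set([1, 0, 0], 'cached answer');
console.log(cache.get([0.95, 0.05, 0])); // near-duplicate query: hit
console.log(cache.get([0, 1, 0]));       // unrelated query: miss (undefined)
```

The library's semantic cache applies the same principle, but persists entries in Redis and uses its vector index for the nearest-neighbor lookup instead of a linear scan.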
For additional help, check out the following resources: