
The most popular vector database — now serverless

Build remarkable GenAI applications fast, with lower cost, better performance, and greater ease of use at any scale.

Fast, cost-efficient performance at any scale

51ms query latency (p95)*
96% recall*
up to 50x lower cost
*Performance with MSMarco V2 dataset of 138M embeddings (1536 dimensions)
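A recall figure like the one above is conventionally measured as recall@k: the fraction of the true top-k nearest neighbors (from an exact, brute-force search) that the approximate index actually returns. A minimal illustration, with hypothetical IDs and a helper name of our choosing:

```python
# Illustrative only: how a recall@k metric is typically computed,
# given exact nearest neighbors as ground truth. The function name
# and IDs are hypothetical, not part of Pinecone's API.

def recall_at_k(retrieved_ids, ground_truth_ids, k):
    """Fraction of the true top-k neighbors present in the retrieved top-k."""
    retrieved = set(retrieved_ids[:k])
    truth = set(ground_truth_ids[:k])
    return len(retrieved & truth) / k

# Example: 9 of the true top-10 neighbors were returned -> 0.9 recall.
print(recall_at_k(["a", "b", "c", "d", "e", "f", "g", "h", "i", "x"],
                  ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j"], 10))
```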

Bring AI products to market faster

Start building in minutes. Forget about configuring or scaling your index.

SDKs

A growing number of SDKs, including Python and Node.js, makes working with Pinecone a breeze for developers.

Streamlined API

Manage control and data plane requests across environments with a single API.

Any AI Model

Compatible with embeddings from any AI model or LLM, including those from OpenAI, Anthropic, Cohere, Hugging Face, PaLM, and more.

Integrations

Supercharge your AI stack with integrations for popular data sources, frameworks, models, and more.

Get just the results you want

Always fresh, relevant results as your data changes and grows.

Hybrid search

Combine vector search with keyword boosting for the best of both worlds.
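The blend of semantic and keyword relevance is commonly expressed as a weighted combination of a dense (vector) similarity and a sparse (keyword) score. The sketch below shows that idea with a toy alpha weighting; it is illustrative of the approach, not Pinecone's internal scoring code:

```python
# Minimal sketch of hybrid scoring: blend a dense (semantic) similarity
# with a sparse (keyword) score using a weight alpha. All data and
# names here are toy assumptions, not Pinecone internals.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def hybrid_score(dense_sim, keyword_score, alpha=0.7):
    # alpha=1.0 -> pure vector search; alpha=0.0 -> pure keyword search
    return alpha * dense_sim + (1 - alpha) * keyword_score

docs = {
    "doc1": {"dense": [0.9, 0.1], "kw": 0.2},  # semantically close to the query
    "doc2": {"dense": [0.1, 0.9], "kw": 1.0},  # strong keyword match only
}
query = [1.0, 0.0]
ranked = sorted(
    docs,
    key=lambda d: hybrid_score(cosine(query, docs[d]["dense"]), docs[d]["kw"]),
    reverse=True,
)
print(ranked)  # ['doc1', 'doc2']
```

Tuning alpha toward 1.0 favors semantic similarity; toward 0.0, exact keyword matches.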

Namespaces

Partition your workload with namespaces to minimize the latency and compute needed for each query.
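The effect of partitioning can be seen in a toy in-memory model: a query scoped to one namespace only ever touches that partition's vectors, so its cost scales with the partition, not the whole index. This is a sketch of the idea, not Pinecone's implementation:

```python
# Toy model of namespace partitioning (assumed structure, not
# Pinecone internals): each namespace holds its own vectors, and a
# query scans only the requested namespace.
index = {
    "tenant-a": {"v1": [1.0, 0.0], "v2": [0.9, 0.1]},
    "tenant-b": {"v3": [0.0, 1.0]},
}

def query(namespace, vector, top_k=1):
    space = index[namespace]  # only this partition is ever considered
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return sorted(space, key=lambda vid: dot(space[vid], vector), reverse=True)[:top_k]

print(query("tenant-a", [1.0, 0.0]))  # ['v1'] -- never touches tenant-b
```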

Metadata Filtering

Combine vector search with familiar metadata filters to get just the results you want.
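Pinecone's metadata filters use a MongoDB-style operator syntax (e.g. `$eq`, `$gte`) passed alongside the query. The toy evaluator below applies such a filter locally just to show the semantics; in a real query the filter is handed to the index, and the evaluator itself is our own illustrative code:

```python
# Toy evaluator for a MongoDB-style metadata filter, written only to
# illustrate the filter semantics; it is not Pinecone code.

def matches(metadata, flt):
    ops = {
        "$eq": lambda v, t: v == t,
        "$gte": lambda v, t: v >= t,
        "$lt": lambda v, t: v < t,
    }
    for field, cond in flt.items():
        for op, target in cond.items():
            if not ops[op](metadata.get(field), target):
                return False
    return True

records = [
    {"id": "a", "genre": "drama", "year": 2021},
    {"id": "b", "genre": "comedy", "year": 2021},
    {"id": "c", "genre": "drama", "year": 2015},
]
flt = {"genre": {"$eq": "drama"}, "year": {"$gte": 2020}}
print([r["id"] for r in records if matches(r, flt)])  # ['a']
```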

Live index updates

As your data changes, the Pinecone index is updated in real time to provide the freshest results.
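The write operation behind this is an upsert: writing an existing ID overwrites it in place, and the next query sees the new value. A toy in-memory model of that behavior (assumed structure, not Pinecone internals):

```python
# Toy model of upsert semantics: inserting an existing id overwrites
# it, and queries immediately reflect the change. Not Pinecone code.
store = {}

def upsert(vec_id, vector):
    store[vec_id] = vector  # insert or overwrite in place

def nearest(query_vec):
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return max(store, key=lambda vid: dot(store[vid], query_vec))

upsert("v1", [1.0, 0.0])
upsert("v2", [0.0, 1.0])
print(nearest([0.0, 1.0]))  # 'v2'
upsert("v2", [1.0, 0.0])    # live update: v2 moves away from the query
upsert("v3", [0.0, 1.0])
print(nearest([0.0, 1.0]))  # 'v3' -- results reflect the update at once
```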

The vector database reimagined

Build your next great GenAI app with our industry-first architecture.

Efficient query-planning

Built-in logic to scan the optimal number of semantically similar clusters needed for each query, not the entire index.
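The general idea of centroid-based query planning can be sketched as: compare the query to each cluster's centroid, pick only the closest clusters to probe, and scan just their vectors. The example below illustrates that idea with toy data; it is not Pinecone's planner:

```python
# Sketch of centroid-based query planning: choose the n_probe clusters
# whose centroids are nearest the query, then scan only those vectors.
# Clusters, names, and parameters here are toy assumptions.
import math

clusters = {
    "c0": {"centroid": [1.0, 0.0], "vectors": {"a": [0.9, 0.1], "b": [1.0, 0.2]}},
    "c1": {"centroid": [0.0, 1.0], "vectors": {"c": [0.1, 0.9]}},
    "c2": {"centroid": [-1.0, 0.0], "vectors": {"d": [-0.9, 0.0]}},
}

def plan_and_query(query_vec, n_probe=1, top_k=1):
    dist = lambda a, b: math.dist(a, b)
    # Query planning: scan only the n_probe clusters nearest the query.
    chosen = sorted(clusters, key=lambda c: dist(clusters[c]["centroid"], query_vec))[:n_probe]
    candidates = {vid: v for c in chosen for vid, v in clusters[c]["vectors"].items()}
    return sorted(candidates, key=lambda vid: dist(candidates[vid], query_vec))[:top_k]

print(plan_and_query([1.0, 0.0]))  # ['a'] -- only cluster c0 was scanned
```

Raising `n_probe` trades extra scanning for higher recall.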

Durable writes

Write requests are committed to a write-ahead log in blob storage for guaranteed durability and strong ordering.

Adaptive clustering

Indexes automatically adapt as data grows to maintain low latency and O(s) freshness.

Multi-tenant layer

Built to efficiently manage thousands of tenants without performance degradation.

Intelligent retrieval

Only the most frequently used clusters are cached in memory, rather than loaded from blob storage, for quick, memory-efficient retrieval.
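Keeping only hot clusters in memory is the classic cache-tiering pattern; one common policy is least-recently-used eviction. The toy model below sketches that pattern (an LRU cache in front of a slow blob tier) purely to illustrate the idea, not Pinecone's implementation:

```python
# Toy LRU cache in front of a slow storage tier, illustrating the
# hot-cluster caching idea. Capacity, names, and policy are assumptions.
from collections import OrderedDict

blob_storage = {f"cluster-{i}": f"data-{i}" for i in range(5)}  # slow tier
cache = OrderedDict()  # fast tier, holds at most CAPACITY clusters
CAPACITY = 2

def get_cluster(cid):
    if cid in cache:
        cache.move_to_end(cid)      # hit: mark as most recently used
        return cache[cid]
    data = blob_storage[cid]        # miss: fetch from blob storage
    cache[cid] = data
    if len(cache) > CAPACITY:
        cache.popitem(last=False)   # evict the least recently used cluster
    return data

get_cluster("cluster-0"); get_cluster("cluster-1"); get_cluster("cluster-0")
get_cluster("cluster-2")            # evicts cluster-1, the coldest entry
print(list(cache))  # ['cluster-0', 'cluster-2']
```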

Reimagining the vector database to enable knowledgeable AI

Learn more about the architecture and performance in our technical deep dive.

View Post

Ready to build with your favorite tools

Learn how to build with Pinecone and the GenAI stack.

Vercel

Pulumi

Langchain

Cohere

Confluent

Anyscale

View all integrations

Secure by design

Pinecone is GDPR-ready, SOC 2 Type II certified, and HIPAA-compliant. Easily control and manage access within the console with organizations and SSO. Data is encrypted at rest and in transit.

Integrations to connect to your AI stack

Use Pinecone with your favorite cloud provider, data sources, models, frameworks, and more.