Build Retrieval-Augmented Generation (RAG) with Databricks & Pinecone
Hosted by:
Nitin Wagh, Head of Product Growth, Machine Learning at Databricks
Roie Schwaber-Cohen, Developer Advocate, Pinecone
Retrieval Augmented Generation (RAG) is commonly used to build conversational interfaces that improve the quality of responses from Large Language Models (LLMs). RAG works by supplementing the LLM prompt with context drawn from external knowledge sources, increasing the accuracy of responses. In this workshop, we will explore the lifecycle of building a RAG GenAI application, using the Databricks Lakehouse AI platform for data preparation, embedding generation, and LLM-optimized Model Serving, backed by Pinecone's scalable vector database, to build powerful conversational AI applications. We will also walk through an end-to-end demo of building a RAG application with Databricks and Pinecone.
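As a rough sketch of how these pieces fit together, the snippet below shows the core retrieve-then-generate loop against a Pinecone index. The `embed` and `generate` helpers, the `rag-demo` index name, and the sample documents are illustrative placeholders, not the workshop's actual code; in practice they would call the embedding model and LLM you serve from Databricks Model Serving.

```python
# Minimal RAG sketch (illustrative only): index document chunks in Pinecone,
# retrieve the most relevant ones for a question, and ground the LLM prompt.
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")  # placeholder credential
index = pc.Index("rag-demo")                    # hypothetical index name

def embed(text: str) -> list[float]:
    """Placeholder: call your embedding model here (e.g. a Databricks
    Model Serving endpoint). Must return a vector matching the index dimension."""
    raise NotImplementedError

def generate(prompt: str) -> str:
    """Placeholder: call your LLM here (e.g. an LLM served on Databricks)."""
    raise NotImplementedError

# 1. Data preparation + embedding generation: upsert document chunks.
docs = {
    "doc-1": "Databricks Lakehouse AI prepares and governs data for GenAI apps.",
    "doc-2": "Pinecone is a managed vector database for similarity search.",
}
index.upsert(vectors=[
    {"id": doc_id, "values": embed(text), "metadata": {"text": text}}
    for doc_id, text in docs.items()
])

# 2. Retrieval: find the chunks most similar to the user question.
question = "How do Databricks and Pinecone fit together in a RAG app?"
results = index.query(vector=embed(question), top_k=3, include_metadata=True)
context = "\n".join(m.metadata["text"] for m in results.matches)

# 3. Augmented generation: ground the LLM prompt in the retrieved context.
prompt = (
    "Answer using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)
print(generate(prompt))
```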
