From the course: LLM Foundations: Vector Databases for Caching and Retrieval Augmented Generation (RAG)


Introduction to retrieval augmented generation

Retrieval-augmented generation, or RAG for short, is arguably the most popular use case for LLMs in a business context. What is RAG? It is a framework that combines knowledge from a curated knowledge base with the language capabilities of an LLM to provide accurate and well-structured answers. In RAG, we use a knowledge base for context-specific knowledge and an LLM for language generation, giving us the best of both worlds. When a user asks a question through a prompt, the knowledge base provides contextual knowledge and the LLM produces a well-structured answer.

What are some key features and advantages of RAG? With RAG, we can use enterprise and confidential data sources to answer questions. This is not possible when using third-party LLMs on their own. RAG also allows us to combine multiple data sources in different formats into a single knowledge base: product manuals in PDF format, support tickets from a ticketing system, and content from web pages can all live together in one knowledge base. The…
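The flow described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not a production implementation: the sample knowledge-base entries are invented, and a toy word-overlap score stands in for the vector-database similarity search a real RAG system would use. The shape of the pipeline, retrieve context, then build an augmented prompt for the LLM, is the point.

```python
# Hypothetical knowledge base combining entries from different sources,
# as the transcript describes (manuals, tickets, web pages).
KNOWLEDGE_BASE = [
    "The X100 printer supports duplex printing via the rear tray.",   # from a product manual PDF
    "Ticket #4521: paper jams resolved by firmware update 2.3.",      # from a ticketing system
    "Our returns policy allows refunds within 30 days of purchase.",  # from a web page
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by word overlap with the question.

    A real system would embed the question and run a similarity
    search against a vector database instead.
    """
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Combine retrieved context with the user's question.

    The augmented prompt is what gets sent to the LLM, which then
    generates a well-structured answer grounded in the context.
    """
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Does the X100 printer support duplex printing?")
```

The LLM call itself is omitted here; the retrieved context would be passed along with the question so the model answers from the knowledge base rather than from its training data alone.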
