RAG-Driven Generative AI

By: Denis Rothman
4.3 (18)
Overview of this book

RAG-Driven Generative AI provides a roadmap for building effective LLM, computer vision, and generative AI systems that balance performance and costs. This book offers a detailed exploration of RAG and how to design, manage, and control multimodal AI pipelines. By connecting outputs to traceable source documents, RAG improves output accuracy and contextual relevance, offering a dynamic approach to managing large volumes of information. This AI book shows you how to build a RAG framework, providing practical knowledge on vector stores, chunking, indexing, and ranking. You’ll discover techniques to optimize your project’s performance and better understand your data, including using adaptive RAG and human feedback to refine retrieval accuracy, balancing RAG with fine-tuning, implementing dynamic RAG to enhance real-time decision-making, and visualizing complex data with knowledge graphs. You’ll be exposed to a hands-on blend of frameworks like LlamaIndex and Deep Lake, vector databases such as Pinecone and Chroma, and models from Hugging Face and OpenAI. By the end of this book, you will have acquired the skills to implement intelligent solutions, keeping you competitive in fields from production to customer service across any project.
Table of Contents (14 chapters)

Pipeline 2: Scaling a Pinecone index (vector store)

The goal of this section is to build a Pinecone index with our dataset and scale it from 10,000 records up to 1,000,000 records. Although we are building on the knowledge acquired in the previous chapters, the essence of scaling is different from managing sample datasets.

Each process in this pipeline is deceptively simple: data preparation, embedding, uploading to a vector store, and querying to retrieve documents. We have already worked through each of these processes in Chapters 2 and 3.
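The chunking step that feeds the rest of the pipeline can be sketched as follows. This is a minimal, illustrative version assuming simple fixed-size character chunks with overlap; the chunk size and overlap values here are placeholders, not the settings used in the book.

```python
# Hypothetical sketch: fixed-size character chunking with overlap.
# chunk_size and overlap are illustrative values, not the book's settings.

def chunk_text(text: str, chunk_size: int = 200, overlap: int = 20) -> list[str]:
    """Split text into overlapping character chunks."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping some overlap
    return chunks

sample = "word " * 100          # 500-character toy document
chunks = chunk_text(sample)
print(len(chunks), len(chunks[0]))  # 3 200
```

Overlap preserves a little context across chunk boundaries, which tends to help retrieval quality at the cost of slightly more records to embed and store.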

Furthermore, beyond substituting Pinecone for Deep Lake and using OpenAI models in a slightly different way, we are performing the same functions as in Chapters 2, 3, and 4 for the vector store phase:

  1. Data preparation: We will start by preparing our dataset in Python so that it is ready for chunking.
  2. Chunking and embedding: We will chunk the prepared data and then embed the resulting chunks.
  3. Creating the Pinecone...
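When uploading at the scale targeted in this section, records are sent to the index in batches rather than one at a time, since upsert requests are limited in size. The helper below is a hypothetical sketch of that batching step; the batch size of 100 is illustrative, and in practice each yielded batch would be passed to the Pinecone client's upsert call.

```python
# Hypothetical sketch: batching embedded records before upload to a vector store.
# The batch size of 100 is illustrative, not a Pinecone requirement.

from typing import Iterable, Iterator

def batch_vectors(
    vectors: Iterable[tuple[str, list[float]]], batch_size: int = 100
) -> Iterator[list[tuple[str, list[float]]]]:
    """Yield successive batches of (id, embedding) pairs."""
    batch = []
    for item in vectors:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:           # emit the final, possibly smaller, batch
        yield batch

# Each batch would then be upserted to the index in a single request.
records = ((f"id-{i}", [0.0, 0.0]) for i in range(250))
batches = list(batch_vectors(records))
print(len(batches), len(batches[-1]))  # 3 50
```

Batching keeps memory use flat when moving from 10,000 to 1,000,000 records, because the generator never holds the full dataset at once.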