Unlocking Data with Generative AI and RAG

By Keith Bourne

Overview of this book

Generative AI is helping organizations tap into their data in new ways: retrieval-augmented generation (RAG) combines the strengths of large language models (LLMs) with internal data to build more intelligent and relevant AI applications. Drawing on a decade of machine learning experience, the author equips you with the strategic insights and technical expertise needed to use RAG to drive transformative outcomes. The book explores RAG's role in enhancing organizational operations by blending theoretical foundations with practical techniques. You'll work through detailed coding examples using tools such as LangChain and Chroma's vector database to gain hands-on experience integrating RAG into AI systems. The chapters contain real-world case studies and sample applications that highlight RAG's diverse use cases, from search engines to chatbots. You'll learn proven methods for managing vector databases, optimizing data retrieval, engineering effective prompts, and quantitatively evaluating performance. The book also takes you through advanced integrations of RAG with cutting-edge AI agents and emerging non-LLM technologies. By the end of this book, you'll be able to deploy RAG successfully in business settings, address common challenges, and push the boundaries of what's possible with this transformative AI technique.
Table of Contents (20 chapters)

  • Part 1 – Introduction to Retrieval-Augmented Generation (RAG)
  • Part 2 – Components of RAG
  • Part 3 – Implementing Advanced RAG

Summary

This chapter explored the key technical components of RAG systems in the context of LangChain: vector stores, retrievers, and LLMs. It provided an in-depth look at the various options available for each component and discussed their strengths, weaknesses, and scenarios in which one option might be better than another.
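To make the relationship between the three components concrete, here is a minimal sketch of how a vector store, a retriever, and an LLM fit together in a RAG pipeline. All class and function names here are illustrative toy stand-ins, not the actual LangChain APIs; the "LLM" is a stub, and relevance is scored by naive word overlap rather than real embeddings.

```python
import re
from dataclasses import dataclass, field

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, used for toy relevance scoring."""
    return set(re.findall(r"\w+", text.lower()))

@dataclass
class ToyVectorStore:
    """Stands in for a vector store: holds documents and answers similarity queries."""
    docs: list[str] = field(default_factory=list)

    def add(self, text: str) -> None:
        self.docs.append(text)

    def search(self, query: str, k: int = 2) -> list[str]:
        # Naive relevance: number of words shared with the query.
        ranked = sorted(self.docs,
                        key=lambda d: len(tokens(d) & tokens(query)),
                        reverse=True)
        return ranked[:k]

class ToyRetriever:
    """Stands in for a retriever: wraps the store behind a query interface."""
    def __init__(self, store: ToyVectorStore):
        self.store = store

    def get_relevant(self, query: str) -> list[str]:
        return self.store.search(query)

def stub_llm(prompt: str) -> str:
    """Stands in for an LLM: just echoes the prompt it was given."""
    return f"Answering from context: {prompt}"

store = ToyVectorStore()
store.add("RAG combines retrieval with generation.")
store.add("Vector stores index embeddings.")
retriever = ToyRetriever(store)
context = " ".join(retriever.get_relevant("What is RAG?"))
answer = stub_llm(context)
print(answer)
```

In a real LangChain application each stand-in is replaced by a production component — a vector store backend, a retriever built from it, and an actual LLM — but the data flow (query → retriever → relevant documents → LLM prompt) is the same.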

The chapter started by examining vector stores, which play a crucial role in efficiently storing and indexing vector representations of knowledge base documents. LangChain integrates with various vector store implementations, such as Pinecone, Weaviate, FAISS, and PostgreSQL with vector extensions. The choice of vector store depends on factors such as scalability, search performance, and deployment requirements. The chapter then moved on to discuss retrievers, which are responsible for querying the vector store and retrieving the most relevant documents based on the input query. LangChain offers a range of retriever implementations, including dense retrievers...
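The dense-retrieval idea behind these vector stores can be sketched in a few lines: documents are stored as embedding vectors, and a query vector is ranked against them by cosine similarity. This is a toy illustration with hand-made three-dimensional vectors and brute-force search; real systems use learned embeddings with hundreds of dimensions and an approximate nearest-neighbor index (as in FAISS or Pinecone). The document names and vectors below are invented for the example.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "index": document IDs mapped to hand-made embedding vectors.
index = {
    "doc_pinecone":  [0.9, 0.1, 0.0],
    "doc_faiss":     [0.8, 0.2, 0.1],
    "doc_unrelated": [0.0, 0.1, 0.9],
}

def retrieve(query_vec: list[float], k: int = 2) -> list[str]:
    """Return the k document IDs most similar to the query vector."""
    ranked = sorted(index,
                    key=lambda doc_id: cosine(index[doc_id], query_vec),
                    reverse=True)
    return ranked[:k]

results = retrieve([1.0, 0.0, 0.0])
print(results)
```

Brute-force scoring like this is linear in the number of documents; the vector stores discussed above exist precisely to make this lookup fast at scale.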
