Unlocking Data with Generative AI and RAG

By: Keith Bourne
Overview of this book

Generative AI is helping organizations tap into their data in new ways, with retrieval-augmented generation (RAG) combining the strengths of large language models (LLMs) with internal data for more intelligent and relevant AI applications. The author harnesses his decade of ML experience in this book to equip you with the strategic insights and technical expertise needed when using RAG to drive transformative outcomes. The book explores RAG’s role in enhancing organizational operations by blending theoretical foundations with practical techniques. You’ll work with detailed coding examples using tools such as LangChain and Chroma’s vector database to gain hands-on experience in integrating RAG into AI systems. The chapters contain real-world case studies and sample applications that highlight RAG’s diverse use cases, from search engines to chatbots. You’ll learn proven methods for managing vector databases, optimizing data retrieval, effective prompt engineering, and quantitatively evaluating performance. The book also takes you through advanced integrations of RAG with cutting-edge AI agents and emerging non-LLM technologies. By the end of this book, you’ll be able to successfully deploy RAG in business settings, address common challenges, and push the boundaries of what’s possible with this revolutionary AI technique.
Table of Contents (20 chapters)

Part 1 – Introduction to Retrieval-Augmented Generation (RAG)
Part 2 – Components of RAG
Part 3 – Implementing Advanced RAG

Code lab 14.3 – MM-RAG

The code for this lab can be found in the CHAPTER14-3_MM_RAG.ipynb file in the CHAPTER14 directory of the GitHub repository.

This is a good example of when an acronym can really help us talk faster. Try to say multi-modal retrieval-augmented generation out loud once, and you will likely want to use MM-RAG from now on! But I digress. This is a groundbreaking approach that will likely gain a lot of traction in the near future. It better represents how we as humans process information, so it must be amazing, right? Let’s start by revisiting the concept of using multiple modes.

Multi-modal

Up to this point, everything we have discussed has been focused on text: taking the text as input, retrieving text based on that input, and passing that retrieved text to an LLM that then generates a final text output. But what about non-text? As the companies building these LLMs have started to offer powerful multi-modal capabilities, how can we incorporate...
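Before turning to non-text data, here is a minimal sketch of the text-only flow just described, using the Chroma client directly rather than the chapter's full LangChain setup; the collection name, sample documents, and query string are illustrative assumptions, not code from the lab notebook.

# A minimal text-only RAG retrieval step, sketched with the Chroma client.
# The collection name, sample documents, and query below are illustrative only.
import chromadb

client = chromadb.Client()  # ephemeral, in-memory Chroma instance
collection = client.create_collection(name="text_only_demo")

# Index a few text documents; Chroma embeds them with its default embedding function.
collection.add(
    ids=["doc1", "doc2"],
    documents=[
        "RAG retrieves relevant text and passes it to an LLM as context.",
        "Chroma is a vector database often paired with LangChain for retrieval.",
    ],
)

# Retrieve the text most relevant to a text query.
query = "How does RAG use a vector database?"
results = collection.query(query_texts=[query], n_results=1)
retrieved_text = results["documents"][0][0]

# The retrieved text would then be placed into an LLM prompt (generation not shown).
prompt = f"Answer using this context:\n{retrieved_text}\n\nQuestion: {query}"
print(prompt)

Every step in this sketch operates purely on text, which is exactly the limitation that MM-RAG is meant to address.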
