Unlocking Data with Generative AI and RAG

By: Keith Bourne
Overview of this book

Generative AI is helping organizations tap into their data in new ways, with retrieval-augmented generation (RAG) combining the strengths of large language models (LLMs) with internal data for more intelligent and relevant AI applications. The author harnesses his decade of ML experience in this book to equip you with the strategic insights and technical expertise needed when using RAG to drive transformative outcomes. The book explores RAG’s role in enhancing organizational operations by blending theoretical foundations with practical techniques. You’ll work with detailed coding examples using tools such as LangChain and Chroma’s vector database to gain hands-on experience in integrating RAG into AI systems. The chapters contain real-world case studies and sample applications that highlight RAG’s diverse use cases, from search engines to chatbots. You’ll learn proven methods for managing vector databases, optimizing data retrieval, effective prompt engineering, and quantitatively evaluating performance. The book also takes you through advanced integrations of RAG with cutting-edge AI agents and emerging non-LLM technologies. By the end of this book, you’ll be able to successfully deploy RAG in business settings, address common challenges, and push the boundaries of what’s possible with this revolutionary AI technique.
Table of Contents (20 chapters)

  • Part 1 – Introduction to Retrieval-Augmented Generation (RAG)
  • Part 2 – Components of RAG
  • Part 3 – Implementing Advanced RAG

Not all semantics are created equal!

A common mistake in RAG applications is to choose the first vectorization algorithm you implement and assume it provides the best results. These algorithms take the semantic meaning of text and represent it mathematically. However, they are generally large NLP models themselves, and they can vary in capability and quality as much as LLMs do. Just as humans often find it challenging to comprehend the intricacies and nuances of text, these models vary in their ability to grasp the complexities inherent in written language. For example, older models could not decipher the difference between bark (a dog noise) and bark (the outer layer of most trees), but newer models can distinguish the two based on the surrounding text and the context in which the word is used. This area of the field is adapting and evolving just as fast as any other.
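
To make this concrete, here is a minimal sketch of how a contextual embedding model separates the two senses of bark. It assumes the sentence-transformers package and the publicly available all-MiniLM-L6-v2 model, neither of which is prescribed by this chapter; any embedding model supported by your RAG stack would behave analogously.

```python
# Minimal sketch: comparing how a contextual embedding model handles
# two senses of the word "bark". Assumes the sentence-transformers
# package is installed; the model name is an example, not the one
# used elsewhere in this book.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "The dog let out a loud bark when the doorbell rang.",   # bark = dog noise
    "The bark of the old oak tree was rough and cracked.",   # bark = tree covering
    "The puppy's bark woke the whole neighborhood.",         # bark = dog noise
]

# Encode each sentence into a dense vector.
embeddings = model.encode(sentences)

# Cosine similarity: sentences using "bark" in the same sense should
# score higher than the cross-sense pair.
print("dog bark vs. tree bark:", util.cos_sim(embeddings[0], embeddings[1]).item())
print("dog bark vs. dog bark: ", util.cos_sim(embeddings[0], embeddings[2]).item())
```

A model that captures context should give the two dog-noise sentences a noticeably higher similarity score than the dog/tree pair, which is exactly the behavior you want from the vectorization step of a RAG pipeline.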

In some cases, it is possible that...
