Generative AI Foundations in Python

By: Carlos Rodriguez
4.8 (5)

Overview of this book

The intricacies and breadth of generative AI (GenAI) and large language models can sometimes eclipse their practical application. It is pivotal to understand the foundational concepts needed to implement generative AI. This guide explains the core concepts behind state-of-the-art generative models by combining theory and hands-on application. Generative AI Foundations in Python begins by laying a foundational understanding, presenting the fundamentals of generative LLMs and their historical evolution, while also setting the stage for deeper exploration. You’ll also understand how to apply generative LLMs in real-world applications. The book cuts through the complexity and offers actionable guidance on deploying and fine-tuning pre-trained language models with Python. Later, you’ll delve into topics such as task-specific fine-tuning, domain adaptation, prompt engineering, quantitative evaluation, and responsible AI, focusing on how to effectively and responsibly use generative LLMs. By the end of this book, you’ll be well-versed in applying generative AI capabilities to real-world problems, confidently navigating its enormous potential ethically and responsibly.
Table of Contents (13 chapters)
Part 1: Foundations of Generative AI and the Evolution of Large Language Models
Part 2: Practical Applications of Generative AI

Practice project: Implementing RAG with LlamaIndex using Python

For our practice project, we will shift from LangChain to another library that facilitates the RAG approach. LlamaIndex is an open-source library designed specifically for RAG-based applications; it simplifies ingestion and indexing across various data sources. Before we dive into the implementation, however, we will explain the underlying methods and approach behind RAG.
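
As a rough illustration of how compact a basic LlamaIndex pipeline can be, the sketch below ingests a local directory of documents, builds an in-memory vector index, and answers a query grounded in that data. This is a minimal sketch, not the book's own project code: the `data/` folder name and the query text are assumptions, the `llama_index.core` import path reflects recent library versions (older releases used `llama_index` directly), and the default embedding model and LLM are assumed to be configured (for example, via an OpenAI API key in the environment).

```python
# Minimal RAG sketch with LlamaIndex (assumptions: llama_index.core import path,
# a local "data/" folder of documents, and credentials for the default
# embedding model and LLM already set in the environment).
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Ingest: load every supported file in the data directory into Document objects.
documents = SimpleDirectoryReader("data").load_data()

# Index: embed the documents and store them in an in-memory vector index.
index = VectorStoreIndex.from_documents(documents)

# Retrieve + generate: the query engine fetches the most relevant chunks and
# passes them to the LLM as grounding context for the final answer.
query_engine = index.as_query_engine()
response = query_engine.query("What does the onboarding guide say about access requests?")
print(response)
```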

As discussed, the key premise of RAG is to enhance LLM outputs by supplying relevant context from external data sources. These sources should provide specific and verified information to ground model outputs. Moreover, RAG can optionally leverage the few-shot approach by retrieving few-shot examples at inference time to guide generation. This alleviates the need to store examples in the prompt chain, since relevant examples are retrieved only when they are needed. In essence, the RAG approach is a culmination of many of the prompt engineering...
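
To make the few-shot retrieval idea concrete, here is a hedged sketch that pulls the most similar stored examples at inference time and splices them into the prompt, rather than hard-coding examples into every request. The prompt template, the `similarity_top_k` value, and the assumption that `index` is a LlamaIndex vector index whose documents are solved input/output examples are all illustrative choices, not the book's prescribed setup.

```python
# Hedged sketch: retrieve few-shot examples at inference time instead of
# keeping them permanently in the prompt chain. Assumes `index` is a
# LlamaIndex VectorStoreIndex built over worked examples (input/output pairs).
user_query = "Classify the sentiment of: 'The update broke my workflow.'"

# Fetch only the top-k stored examples most relevant to this particular query.
retriever = index.as_retriever(similarity_top_k=3)
example_nodes = retriever.retrieve(user_query)

# Assemble the prompt: retrieved examples first, then the actual task.
few_shot_block = "\n\n".join(node.get_content() for node in example_nodes)
prompt = (
    "Use the following solved examples as guidance.\n\n"
    f"{few_shot_block}\n\n"
    f"Now answer the new request:\n{user_query}"
)
print(prompt)  # This assembled prompt would then be sent to the LLM of choice.
```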
