Sign In Start Free Trial
Account

Add to playlist

Create a Playlist

Modal Close icon
You need to login to use this feature.
  • Book Overview & Buying Generative AI Foundations in Python
  • Table Of Contents Toc
  • Feedback & Rating feedback
Generative AI Foundations in Python

Generative AI Foundations in Python

By : Carlos Rodriguez
4.8 (5)
close
close
Generative AI Foundations in Python

Generative AI Foundations in Python

4.8 (5)
By: Carlos Rodriguez

Overview of this book

The intricacies and breadth of generative AI (GenAI) and large language models can sometimes eclipse their practical application. It is pivotal to understand the foundational concepts needed to implement generative AI. This guide explains the core concepts behind -of-the-art generative models by combining theory and hands-on application. Generative AI Foundations in Python begins by laying a foundational understanding, presenting the fundamentals of generative LLMs and their historical evolution, while also setting the stage for deeper exploration. You’ll also understand how to apply generative LLMs in real-world applications. The book cuts through the complexity and offers actionable guidance on deploying and fine-tuning pre-trained language models with Python. Later, you’ll delve into topics such as task-specific fine-tuning, domain adaptation, prompt engineering, quantitative evaluation, and responsible AI, focusing on how to effectively and responsibly use generative LLMs. By the end of this book, you’ll be well-versed in applying generative AI capabilities to real-world problems, confidently navigating its enormous potential ethically and responsibly.
Table of Contents (13 chapters)
close
close
Free Chapter
1
Part 1: Foundations of Generative AI and the Evolution of Large Language Models
6
Part 2: Practical Applications of Generative AI

Understanding jailbreaking and harmful behaviors

In the context of generative LLMs, the term jailbreaking describes techniques and strategies that intend to manipulate models to override any ethical safeguards or content restrictions, thereby enabling the generation of restricted or harmful content. Jailbreaking exploits models through sophisticated adversarial prompting that can induce unexpected or harmful responses. For example, an attacker might try to instruct an LLM to explain how to generate explicit content or express discriminatory views. Understanding this susceptibility is crucial for developers and stakeholders to safeguard applied generative AI against misuse and minimize potential harm.

These jailbreaking attacks exploit the fact that LLMs are trained to interpret and respond to instructions. Despite sophisticated efforts to defend against misuse, attackers can take advantage of the complex and expansive knowledge embedded in LLMs to find gaps in their safety precautions...

Unlock full access

Continue reading for free

A Packt free trial gives you instant online access to our library of over 7000 practical eBooks and videos, constantly updated with the latest in tech
bookmark search playlist download font-size

Change the font size

margin-width

Change margin width

day-mode

Change background colour

Close icon Search
Country selected

Close icon Your notes and bookmarks

Delete Bookmark

Modal Close icon
Are you sure you want to delete it?
Cancel
Yes, Delete

Confirmation

Modal Close icon
claim successful

Buy this book with your credits?

Modal Close icon
Are you sure you want to buy this book with one of your credits?
Close
YES, BUY