Generative AI Foundations in Python

By: Carlos Rodriguez
4.8 (5)
Overview of this book

The intricacies and breadth of generative AI (GenAI) and large language models can sometimes eclipse their practical application. It is pivotal to understand the foundational concepts needed to implement generative AI. This guide explains the core concepts behind state-of-the-art generative models by combining theory and hands-on application. Generative AI Foundations in Python begins by presenting the fundamentals of generative LLMs and their historical evolution, setting the stage for deeper exploration. You’ll also learn how to apply generative LLMs in real-world applications. The book cuts through the complexity and offers actionable guidance on deploying and fine-tuning pre-trained language models with Python. Later, you’ll delve into topics such as task-specific fine-tuning, domain adaptation, prompt engineering, quantitative evaluation, and responsible AI, focusing on how to use generative LLMs effectively and responsibly. By the end of this book, you’ll be well-versed in applying generative AI capabilities to real-world problems, confidently navigating its enormous potential ethically and responsibly.
Table of Contents (13 chapters)
  • Part 1: Foundations of Generative AI and the Evolution of Large Language Models
  • Part 2: Practical Applications of Generative AI

Summary

The advent of the transformer significantly propelled the field of NLP forward, serving as the foundation for today’s cutting-edge generative language models. This chapter delineated the progression of NLP that paved the way for this pivotal innovation. Initial statistical techniques such as count vectors and TF-IDF were adept at extracting rudimentary word patterns, yet they fell short in grasping semantic nuances.
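
To make these early techniques concrete, the following minimal sketch builds count vectors and TF-IDF features for a toy corpus. The use of scikit-learn and the example sentences are assumptions for illustration, not the chapter’s own code:

```python
# A minimal sketch of count vectors and TF-IDF using scikit-learn
# (library choice and corpus are illustrative assumptions).
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are animals",
]

# Count vectors: raw term frequencies per document.
count_vec = CountVectorizer()
counts = count_vec.fit_transform(corpus)
print(count_vec.get_feature_names_out())
print(counts.toarray())

# TF-IDF: term frequencies reweighted by inverse document frequency,
# down-weighting words that appear in many documents (e.g., "the").
tfidf_vec = TfidfVectorizer()
tfidf = tfidf_vec.fit_transform(corpus)
print(tfidf.toarray().round(2))
```

Both representations treat documents as bags of words, which is why they capture surface-level word patterns but not semantic relationships between terms.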

Neural language models marked a stride toward richer representations through word embeddings. Nevertheless, recurrent networks encountered hurdles in handling longer sequences. This inspired the emergence of CNNs, which introduced computational efficiency via parallelism, albeit at the expense of global contextual awareness.
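
At its core, a word embedding is a dense, trainable lookup table rather than a sparse count. The following PyTorch sketch illustrates the idea; the framework, vocabulary, and dimensions are illustrative assumptions, not the book’s own example:

```python
# A minimal sketch of learned word embeddings with PyTorch
# (framework, vocabulary, and dimensions are illustrative assumptions).
import torch
import torch.nn as nn

vocab = {"cat": 0, "dog": 1, "mat": 2, "sat": 3}

# Each token maps to a dense, trainable vector instead of a sparse count.
embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=8)

token_ids = torch.tensor([vocab["cat"], vocab["dog"]])
vectors = embedding(token_ids)  # shape: (2, 8)

# After training on a language-modeling objective, semantically related
# words tend to end up close together; cosine similarity measures this.
sim = torch.cosine_similarity(vectors[0], vectors[1], dim=0)
print(vectors.shape, sim.item())
```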

Attention mechanisms emerged as a cornerstone. In 2017, Vaswani et al. built on these advancements, unveiling the transformer architecture. The hallmark self-attention mechanism of the transformer...
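
The self-attention at the heart of the transformer can be sketched as scaled dot-product attention, following the formulation in Vaswani et al. (2017). In this minimal single-head version, the function name, shapes, and parameter names are illustrative assumptions:

```python
# A minimal sketch of scaled dot-product self-attention
# ("Attention Is All You Need", Vaswani et al., 2017).
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.size(-1)
    # Every position attends to every other position in one step,
    # giving global context without recurrence.
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    weights = F.softmax(scores, dim=-1)
    return weights @ v

seq_len, d_model, d_k = 5, 16, 8
x = torch.randn(seq_len, d_model)
w_q, w_k, w_v = (torch.randn(d_model, d_k) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # torch.Size([5, 8])
```

Because the attention weights are computed for all positions simultaneously, this mechanism combines the parallelism of CNNs with the global contextual awareness that recurrent networks struggled to achieve.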
