Building LLM Powered Applications

By: Valentina Alto

Overview of this book

Building LLM Powered Applications delves into the fundamental concepts, cutting-edge technologies, and practical applications that LLMs offer, ultimately paving the way for the emergence of large foundation models (LFMs) that extend the boundaries of AI capabilities. The book begins with an in-depth introduction to LLMs. We then explore various mainstream architectural frameworks, including both proprietary models (GPT-3.5/4) and open-source models (Falcon LLM), and analyze their unique strengths and differences. Moving ahead, with a focus on the Python-based, lightweight framework called LangChain, we guide you through the process of creating intelligent agents capable of retrieving information from unstructured data and engaging with structured data using LLMs and powerful toolkits. Furthermore, the book ventures into the realm of LFMs, which transcend language modeling to encompass various AI tasks and modalities, such as vision and audio. Whether you are a seasoned AI expert or a newcomer to the field, this book is your roadmap to unlocking the full potential of LLMs and forging a new era of intelligent machines.

To get the most out of this book

This book aims to provide a solid theoretical foundation covering what LLMs are, how they are architected, and why they are revolutionizing the field of AI. It adopts a hands-on approach, offering a step-by-step guide to implementing LLM-powered apps for specific tasks using powerful frameworks such as LangChain. Furthermore, each example showcases a different LLM, so that you can appreciate their distinguishing features and know when to use the right model for a given task.

Overall, the book combines theoretical concepts with practical applications, making it an ideal resource for anyone who wants to gain a solid foundation in LLMs and their applications in NLP. The following prerequisites will help you get the most out of this book:

  • A basic understanding of the math behind neural networks (linear algebra, neurons and parameters, and loss functions)
  • A basic understanding of ML concepts, such as training and test sets, evaluation metrics, and NLP
  • A basic understanding of Python

Download the example code files

The code bundle for the book is hosted on GitHub at https://github.com/PacktPublishing/Building-LLM-Powered-Applications. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
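
If you prefer to work from the command line rather than downloading a ZIP archive, you can clone the repository directly with Git (assuming Git is installed on your machine):

git clone https://github.com/PacktPublishing/Building-LLM-Powered-Applications.git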

Download the color images

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://packt.link/gbp/9781835462317.

Conventions used

There are a number of text conventions used throughout this book.

CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. For example: “I set the two variables system_message and instructions.”

A block of code is set as follows:

# Requires the pre-1.0 OpenAI Python SDK: pip install openai==0.28
import os
import openai

openai.api_key = os.environ.get('OPENAI_API_KEY')
response = openai.ChatCompletion.create(
    model="gpt-35-turbo",  # engine = "deployment_name" when using Azure OpenAI
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": instructions},
    ]
)
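
The system_message and instructions variables referenced above are placeholders for your own prompts. A minimal sketch of how they might be set before the call, and how the reply can be read back from the openai==0.28 response object afterward (the values shown are purely illustrative), looks like this:

# Hypothetical prompt values, defined before the ChatCompletion.create call above
system_message = "You are a helpful assistant that answers concisely."
instructions = "Explain in one sentence what a large language model is."

# With openai==0.28, the assistant's reply is nested under choices/message/content
print(response['choices'][0]['message']['content'])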

Any command-line input or output is written as follows:

{'text': "Terrible movie. Nuff Said.[…]
 'label': 0}

Bold: Indicates a new term, an important word, or words that you see on the screen. For instance, words in menus or dialog boxes appear in the text like this. For example: “[…] he found that repeating the main instruction at the end of the prompt can help the model to overcome its inner recency bias.”

Warnings or important notes appear like this.

Tips and tricks appear like this.
