Building AI Intensive Python Applications


By: Rachelle Palmer, Ben Perlmutter, Ashwin Gangadhar, Nicholas Larew, Sigfrido Narváez, Thomas Rueckstiess, Henry Weller, Richmond Alake, Shubham Ranjan

Overview of this book

The era of generative AI is upon us, and this book serves as a roadmap to harness its full potential. With its help, you’ll learn the core components of the AI stack: large language models (LLMs), vector databases, and Python frameworks, and see how these technologies work together to create intelligent applications. The chapters will help you discover best practices for data preparation, model selection, and fine-tuning, and teach you advanced techniques such as retrieval-augmented generation (RAG) to overcome common challenges, such as hallucinations and data leakage. You’ll get a solid understanding of vector databases, implement effective vector search strategies, refine models for accuracy, and optimize performance to achieve impactful results. You’ll also identify and address AI failures to ensure your applications deliver reliable and valuable results. By evaluating and improving the output of LLMs, you’ll be able to enhance their performance and relevance. By the end of this book, you’ll be well-equipped to build sophisticated AI applications that deliver real-world value.
Table of Contents (18 chapters)

Part 1: Foundations of AI: LLMs, Embedding Models, Vector Databases, and Application Design
Part 2: Building Your Python Application: Frameworks, Libraries, APIs, and Vector Search
Part 3: Optimizing AI Applications: Scaling, Fine-Tuning, Troubleshooting, Monitoring, and Analytics
Appendix: Further Reading
Index

The generative AI stack

A stack is the combined set of tools, libraries, software, and services that work together as a unified, integrated whole. The GenAI stack includes programming languages, LLM providers, frameworks, databases, and deployment solutions. Though the GenAI stack is relatively new, it already offers engineers many variations and options to choose from.

Let’s discuss what you need to build a functional GenAI application. The bare minimum requirements are the following, as also shown in Figure 1.2:

  • An operating system: Usually, this is Unix/Linux-based.
  • A storage layer: An SQL or NoSQL database. This book uses MongoDB.
  • A vector database capable of storing embeddings: This book uses MongoDB, which stores embeddings alongside the data they describe rather than in a separate database.
  • A web server: Apache and Nginx are popular choices.
  • A development environment: This could be Node.js/JavaScript, .NET, Java, or Python. This book uses Python throughout the examples, with a bit of JavaScript where needed.

Figure 1.2: A basic GenAI stack

If you want to learn more about the AI stack, you can find detailed information at www.mongodb.com/resources/basics/ai-stack.
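
The layers above can be summarized in a small configuration sketch. The component choices below simply mirror the stack used in this book; the dictionary name and values are illustrative, not prescriptive:

```python
# Illustrative mapping of the stack layers from Figure 1.2 to the
# concrete choices used in this book; names and values are examples only.
GENAI_STACK = {
    "operating_system": "Linux",
    "storage_layer": "MongoDB",            # SQL or NoSQL database
    "vector_database": "MongoDB",          # embeddings stored alongside the data
    "web_server": "Nginx",                 # Apache is another popular choice
    "development_environment": "Python",   # plus a bit of JavaScript where needed
}

for layer, choice in GENAI_STACK.items():
    print(f"{layer}: {choice}")
```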

Python and GenAI

Python was conceived in the late 1980s by Guido van Rossum and officially released in 1991. Over the decades, it has evolved into a versatile language, beloved by developers for its robust functionality and clean, easy-to-understand syntax, which also makes it an ideal choice for beginner developers.

Although it is not entirely clear why, fairly early on, the Python ecosystem began introducing libraries and frameworks tailored to ML and data science. Libraries and frameworks such as TensorFlow, Keras, PyTorch, and scikit-learn provided powerful tools for developers in these fields, while less technical analysts could still get started with Python with relative ease. Python also interoperates smoothly with other programming languages and technologies, making it easy to plug into data pipelines and web applications.

GenAI, with its demands for high computational power and sophisticated algorithms, finds a perfect partner in Python. Here are some examples that readily come to mind:

  • Libraries such as Pandas and NumPy allow efficient manipulation and analysis of large datasets, a fundamental step in training generative models
  • Frameworks such as TensorFlow and PyTorch offer pre-built components to design and train complex neural networks
  • Tools such as Matplotlib and Seaborn enable detailed visualization of data and model outputs, aiding in understanding and refining AI models
  • Frameworks such as Flask and FastAPI make deploying your GenAI models as scalable web services straightforward
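
As a small illustration of the first point, the sketch below uses NumPy to normalize a made-up batch of vectors to unit length, a common preprocessing step before comparing vectors by similarity (the data is invented for the example):

```python
import numpy as np

# Toy dataset: three made-up 4-dimensional feature vectors.
data = np.array([
    [1.0, 2.0, 3.0, 4.0],
    [0.5, 0.5, 0.5, 0.5],
    [4.0, 3.0, 2.0, 1.0],
])

# Scale each row to unit length so that later similarity comparisons
# depend on direction, not magnitude.
norms = np.linalg.norm(data, axis=1, keepdims=True)
unit_vectors = data / norms

print(np.round(np.linalg.norm(unit_vectors, axis=1), 6))  # each row now has length 1.0
```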

Python has a rich ecosystem that is easy to use and allows you to quickly get started, making it an ideal programming language for GenAI projects. Now, let’s talk more about the other pieces of technology you’ll be using throughout the rest of the book.

OpenAI API

The first, and most important, tool of this book is the OpenAI API. In the following chapters, you’ll learn more about each component of the GenAI stack—and the most critical to be familiar with is OpenAI. While we’ll cover other LLM providers, the one used in our examples and code repository will be OpenAI.

The OpenAI API, launched in mid-2020, provides developers with access to OpenAI's powerful models, allowing them to integrate advanced NLP capabilities into their applications. Through this API, developers gain access to some of the most advanced AI models in existence, such as GPT-4. These models are trained on vast datasets and possess unparalleled capabilities in natural language understanding and response generation.

Moreover, OpenAI’s infrastructure is built to scale. As your project grows and demands more computational power, OpenAI ensures that you can scale effortlessly without worrying about the underlying hardware or system architecture. OpenAI’s models excel at NLP tasks, including text generation, summarization, translation, and sentiment analysis. This can be invaluable for creating content, chatbots, virtual assistants, and more.

Much of the data from the internet and internal conversations and documentation is unstructured. OpenAI, as a company, has used that data to train an LLM, and then offered that LLM as a service, making it possible for you to create interactive GenAI applications without hosting or training your own LLM. You’ll learn more about LLMs in Chapter 3, Large Language Models.
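
As a minimal sketch, the snippet below assembles the kind of messages payload the OpenAI chat completions API expects. The helper function name and the prompts are invented for illustration; the commented-out call follows the v1 openai Python client and requires an OPENAI_API_KEY to actually run:

```python
def build_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    """Assemble a chat completions messages payload (illustrative helper)."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "You are a helpful assistant.",
    "Summarize what a vector database does in one sentence.",
)
print(messages[0]["role"], messages[1]["role"])  # system user

# The actual request would look roughly like this (network call, not run here):
# from openai import OpenAI
# client = OpenAI()  # reads OPENAI_API_KEY from the environment
# response = client.chat.completions.create(model="gpt-4", messages=messages)
# print(response.choices[0].message.content)
```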

MongoDB with Vector Search

Much has been said about how MongoDB serves the use case of unstructured data but that the world’s data is fundamentally relational. It can be argued that no data is meaningful until humans deem it so, and that the relationships and structure of that data are determined by humans as well. For example, several years ago, a researcher at a leading space exploration company made this memorable comment in a meeting:

“We scraped text content from websites and PDF documents primarily, and we realized it didn’t really make sense to try and cram that data into a table.”

MongoDB thrives with the messy, unstructured content that characterizes the real world: .txt files, Markdown, PDFs, HTML, and so on. It is flexible enough to take on whatever structure engineers deem best suited to the purpose, and that flexibility makes it a great fit for GenAI use cases.

For that reason, it is much easier to use a document database for GenAI than it is to use a SQL database.

Another reason to use MongoDB is its vector search capability. When you store a phrase for vector search, an embedding model first converts that data into an array of numbers. This array is called a vector. Vectors are numerical representations of data and their context, as shown in Figure 1.3. Such a vector representation is called an embedding, and each number in it is one dimension; embeddings with more dimensions can capture finer shades of meaning, at the cost of more storage and computation.

Figure 1.3: Example of a vector

After you’ve created embeddings for your data, a mathematical process identifies which vectors are closest to each other, and you can then infer that the corresponding data is related. This allows you to return related results instead of only exact matches. For instance, if you search for pets, you could find cats, dogs, parakeets, and hamsters, even though none of those terms is the exact word pets. Vectors are what allow you to receive results that are related in meaning or context without being an exact match.
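
The "closeness" of two vectors is typically measured with a metric such as cosine similarity. Here is a small, self-contained sketch using made-up 4-dimensional embeddings; real embeddings have hundreds or thousands of dimensions, but the idea is the same:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented toy embeddings; a real embedding model would produce these.
pets = [0.9, 0.1, 0.3, 0.5]
cats = [0.8, 0.2, 0.35, 0.45]
invoices = [0.1, 0.9, 0.7, 0.05]

# "cats" points in nearly the same direction as "pets"; "invoices" does not.
print(cosine_similarity(pets, cats) > cosine_similarity(pets, invoices))  # True
```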

MongoDB stores your data embeddings alongside the data itself, which makes subsequent queries faster. Vector search is easiest to understand through an example, with explanations of how it works along the way. You will learn more about vector search in Chapter 8, Implementing Vector Search in AI Applications.
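
As a hedged preview of what Chapter 8 covers in depth, the snippet below builds a MongoDB Atlas $vectorSearch aggregation pipeline. The index name, field names, and query vector are placeholders; the commented-out lines show roughly how it would be executed with pymongo against a live cluster:

```python
# Placeholder query vector; in practice, an embedding model produces this.
query_vector = [0.12, -0.03, 0.45]

pipeline = [
    {
        "$vectorSearch": {
            "index": "vector_index",     # assumed Atlas Vector Search index name
            "path": "embedding",         # assumed field holding the stored vector
            "queryVector": query_vector,
            "numCandidates": 100,        # candidates considered before ranking
            "limit": 5,                  # top matches to return
        }
    },
    {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
]
print(list(pipeline[0].keys()))

# Against a live Atlas cluster (not runnable here):
# import os
# from pymongo import MongoClient
# coll = MongoClient(os.environ["MONGODB_URI"])["mydb"]["docs"]
# for doc in coll.aggregate(pipeline):
#     print(doc["text"], doc["score"])
```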
