Building AI Applications with Microsoft Semantic Kernel

By: Lucas A. Meyer
3.9 (9)
Overview of this book

In the fast-paced world of AI, developers are constantly seeking efficient ways to integrate AI capabilities into their apps. Microsoft Semantic Kernel simplifies this process by using the GenAI features from Microsoft and OpenAI. Written by Lucas A. Meyer, a Principal Research Scientist in Microsoft’s AI for Good Lab, this book helps you get hands-on with Semantic Kernel. It begins by introducing you to different generative AI services such as GPT-3.5 and GPT-4, demonstrating their integration with Semantic Kernel. You’ll then learn to craft prompt templates with variables so they can be reused across various AI services. Next, you’ll learn how to add functionality to Semantic Kernel by creating your own plugins. The second part of the book shows you how to combine multiple plugins to execute complex actions, and how to let Semantic Kernel use its own AI to solve complex problems by calling plugins, including the ones you made. The book concludes by teaching you how to use vector databases to expand the memory of your AI services and how to help AI remember the context of earlier requests. You’ll also be guided through several real-world examples of applications, such as RAG and custom GPT agents. By the end of this book, you'll have gained the knowledge you need to start using Semantic Kernel to add AI capabilities to your applications.
Table of Contents (14 chapters)

  • Part 1: Introduction to Generative AI and Microsoft Semantic Kernel
  • Part 2: Creating AI Applications with Semantic Kernel
  • Part 3: Real-World Use Cases

Chapter 8: Real-World Use Case – Making Your Application Available on ChatGPT

Multistage prompts

One way to improve the accuracy of LLMs on math problems is to use multistage prompts. In this technique, the answer from the first prompt is passed to the second prompt as a parameter. We’re going to illustrate this with the Chain-of-Thought (CoT) technique.

CoT – “Let’s think step by step”

In the paper Large Language Models are Zero-Shot Reasoners [1], the authors found that simply appending “Let’s think step by step” right after the question can substantially improve the accuracy of LLM answers. Their proposed process works as follows:

  1. Ask the intended question, but instead of asking the LLM to answer it, simply append “Let’s think step by step” at the end.
  2. The LLM will respond with the reasoning steps needed to answer the question rather than a final answer.
  3. Combine the question from step 1 with the reasoning from step 2 in a new prompt, and finish with “Therefore, the answer is…” (a code sketch follows the figure below).
Figure 2.1 – The Zero-shot-CoT method
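
To make the two stages concrete, here is a minimal sketch of the Zero-shot-CoT flow in Python. It calls the OpenAI chat completions API directly rather than going through Semantic Kernel; the model name, the sample question, and the ask() helper are illustrative assumptions, and an OPENAI_API_KEY environment variable is assumed to be set.

# Minimal sketch of the two-stage Zero-shot-CoT flow (not the book's own code).
# Assumes the openai Python package (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

question = (
    "A juggler has 16 balls. Half of the balls are golf balls, "
    "and half of the golf balls are blue. How many blue golf balls are there?"
)

# Stage 1: ask the question, but append the CoT trigger instead of
# asking for the answer directly.
reasoning = ask(f"Q: {question}\nA: Let's think step by step.")

# Stage 2: pass the question plus the generated reasoning back in as a
# parameter, and finish with the answer-extraction phrase.
final_answer = ask(
    f"Q: {question}\nA: Let's think step by step. {reasoning}\n"
    "Therefore, the answer is"
)

print(final_answer)

The key point is that the second prompt receives the first response as a parameter, which is exactly the multistage pattern described at the start of this section.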
...
