Practical Generative AI with ChatGPT
In the previous chapters, we saw that LLMs typically come pre-trained: they have been trained on a huge amount of data, and their (billions of) parameters have been set accordingly.
However, this doesn’t mean that those LLMs can’t learn anymore. In Chapter 2, we introduced the concept of fine-tuning, and in the Appendix we will see that fine-tuning is one way to customize an OpenAI model and make it more capable of addressing specific tasks.
Fine-tuning is a full training process that requires a training dataset, compute power, and training time (which depends on the amount of data and the compute instances used).
That is why it is worth exploring another way to make our LLMs more skilled at specific tasks: shot learning.
Definition
In the context of LLMs, shot learning refers to the model’s ability to perform tasks with varying amounts of task-specific examples...
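To make the idea concrete, here is a minimal sketch of shot learning as prompt construction: the same prompt template can carry zero, one, or several task-specific examples before the actual query. The sentiment-classification task, the labels, and the example reviews below are illustrative assumptions, not taken from the book.

```python
# Sketch of "shot learning" as prompt construction (no API call).
# The task, labels, and examples are illustrative assumptions.

def build_prompt(examples, query):
    """Assemble a prompt with zero, one, or more in-context examples."""
    lines = ["Classify the sentiment of each review as Positive or Negative.", ""]
    for text, label in examples:  # each example is a (review, label) pair
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The actual query the model should complete
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

shots = [
    ("I loved this product, it works perfectly.", "Positive"),
    ("It broke after two days, very disappointing.", "Negative"),
]

zero_shot = build_prompt([], "The battery life is amazing.")    # zero-shot: no examples
few_shot = build_prompt(shots, "The battery life is amazing.")  # few-shot: two examples
print(few_shot)
```

The only difference between zero-shot, one-shot, and few-shot prompting is how many example pairs are included; no parameters are updated, so no training dataset or compute time is needed.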