In-context learning refers to providing a general-purpose Large Language Model with examples of problems and appropriate solutions directly in the prompt. This approach is often used for classification tasks in domains where good training data is scarce, since a small number of labeled examples is sufficient.
A number of alternative names exist, depending on the number of examples in the prompt. During zero-shot, one-shot, or few-shot prompting, the model is provided with zero, one, or a few examples in the prompt, respectively.
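A few-shot classification prompt of this kind can be sketched as follows; the sentiment task, example reviews, and labels here are hypothetical, chosen only to illustrate how labeled examples are placed directly in the prompt:

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot classification prompt from labeled examples plus a query."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The final, unlabeled query: the model is expected to continue the pattern.
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("Great battery life and a sharp screen.", "positive"),
    ("Stopped working after two days.", "negative"),
]
prompt = build_few_shot_prompt(examples, "Absolutely love this keyboard.")
print(prompt)
```

With an empty `examples` list this same function produces a zero-shot prompt; with one entry, a one-shot prompt.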
Alternatives for providing more extensive domain knowledge to an LLM include Retrieval-Augmented Generation and Cache-Augmented Generation.