
Prompt Engineering

The order of few-shot examples in a prompt can shift accuracy by over 40 percentage points. That is not luck; it is craft. Prompt engineering controls what an AI model outputs purely through how you ask. No code, no training, and it is immediately deployable.

The techniques that count

Zero-shot: a single instruction without examples; works for simple tasks. Few-shot: 2-8 examples in the prompt; typically improves accuracy by 25-50%. Chain-of-thought: the model reasons step by step, boosting results on reasoning tasks by up to 58%. One variant, self-consistency, raised math accuracy from 18% to 91%.
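The three techniques above can be sketched as plain prompt strings. This is a minimal illustration; the sentiment task, the example reviews, and the exact wording are assumptions for demonstration, not a fixed recipe.

```python
# Minimal sketches of the three prompt styles as plain strings.
# The classification task and wording are illustrative assumptions.

def zero_shot(review: str) -> str:
    # A single instruction, no examples.
    return (
        "Classify the sentiment of this review as positive or negative.\n"
        f"Review: {review}\nSentiment:"
    )

def few_shot(review: str, examples: list[tuple[str, str]]) -> str:
    # Prepend a handful of labeled examples; note that their order
    # alone can shift accuracy noticeably.
    shots = "\n".join(
        f"Review: {text}\nSentiment: {label}" for text, label in examples
    )
    return f"{shots}\nReview: {review}\nSentiment:"

def chain_of_thought(question: str) -> str:
    # Ask the model to reason step by step before answering.
    return f"{question}\nLet's think step by step."

examples = [
    ("Great battery life.", "positive"),
    ("Screen broke in a week.", "negative"),
]
print(few_shot("Fast shipping, works as described.", examples))
```

Any of these strings would then be sent to the model of your choice; the functions only build the input.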

System prompts define role, constraints, and format. They set the frame in which the model responds.
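One common way to attach a system prompt is the chat-message format used by several LLM APIs. The sketch below assumes that format; the role description, constraint, and output format are illustrative, not a prescribed template.

```python
# Hedged sketch: a system prompt in the widely used chat-message format.
# The content (role, constraint, format) is an illustrative assumption.
messages = [
    {
        "role": "system",
        "content": (
            "You are a contract-law assistant. "        # role
            "Answer only from the provided excerpt. "   # constraint
            "Respond as a bulleted list in English."    # output format
        ),
    },
    {"role": "user", "content": "Summarize the termination clause."},
]
```

The system message sets the frame once; every user turn is then interpreted inside it.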

Prompt, RAG, or fine-tuning?

Prompt engineering changes the input. Fine-tuning changes the model. RAG extends the knowledge. We combine all three, but always start with the prompt: 80% of the results come from the right instruction.
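The distinction can be made concrete in code: RAG extends the knowledge by putting retrieved text into the input, while the model's weights stay untouched. In this sketch, `retrieve` is a hypothetical stand-in for a vector-store lookup, and the tiny corpus is invented for illustration.

```python
# Sketch of RAG-style prompt assembly: retrieved passages extend the
# model's knowledge via the input; no weights change.
# `retrieve` is a placeholder, not a real library API.

def retrieve(query: str) -> list[str]:
    # Placeholder lookup; a real system would query a vector store.
    corpus = {"returns": "Items may be returned within 30 days of delivery."}
    return [text for key, text in corpus.items() if key in query.lower()]

def build_prompt(question: str) -> str:
    # Prompt engineering and RAG combined: instruction plus retrieved context.
    context = "\n".join(retrieve(question))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("What is the returns policy?"))
```

Fine-tuning, by contrast, would not appear in this code at all: it happens once, offline, by training on labeled data.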

Prompt engineering is the most cost-effective method to adapt an LLM to a task. No weight changes, no training data needed. That is why it is the first lever we apply in every client project.

Questions about a term?

We are happy to explain what this means for your business.

Schedule a consultation