By: Amir Tadrisi

Published on: 6/2/2025

Last updated on: 6/2/2025

What is Prompt Engineering

Like a shepherd guiding his flock, so doth prompt engineering steer the model’s speech—gentle yet sure, leading every thought aright.

Introduction

Imagine you’re driving a car through fog. Your steering wheel is all you have to stay on course. In the world of large language models (LLMs), prompts are the steering wheel—every word you write guides the model’s response. Prompt engineering is the art of crafting those instructions so your AI assistant consistently delivers accurate, engaging, and cost-effective outputs. In this guide, we’ll walk through the building blocks of powerful prompts, explore advanced techniques like Chain-of-Thought (CoT), delve into context-construction strategies, and show you how to monitor performance over time. 

Prompts

At its core, a prompt is text you feed into an LLM to shape both *what* it says and *how* it says it—tone, length, format, even domain focus. A vague prompt like “Tell me about climate change” can yield anything from a scientific treatise to a poetic lament. But a precise prompt—“You are an environmental policy expert. In three bullet points, summarize the top legislative actions on climate change in 2023.”—anchors the model to your needs, cuts down on hallucinations, and saves you time and money. 

Core Prompt Components

A robust prompt typically weaves in four key elements:   

1. Role/System Message: Sets global tone (“You are a data analyst who cites sources.”)

2. User Instruction: The task itself (“Compare Trello vs. Asana in a 5-row table.”)

3. Examples (Few-Shot): Input→output pairs illustrating style or structure

4. Constraints: Limits on length, format, tone, or content (“100 words max,” “no speculation.”)

Example:

You are a children’s storyteller. Setting: a hidden valley where rivers glow at night. Example 1 – Input: “A young fox finds a lost lantern.” Output: “In the valley of lights…” Now write a brand-new tale about a curious mouse. 150 words max, two paragraphs, end with a lesson about friendship.
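Here is how those same four components can be assembled as chat messages in code. This is a minimal sketch using the OpenAI Python SDK; the model name is illustrative, and any chat API with system/user/assistant roles would work the same way.

```python
# A minimal sketch of the storyteller prompt above, expressed as chat messages.
# The model name is illustrative -- substitute whichever model you actually use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    # 1. Role / system message: sets the global tone
    {"role": "system", "content": "You are a children's storyteller. Setting: a hidden valley where rivers glow at night."},
    # 3. Few-shot example: one input/output pair showing the desired style
    {"role": "user", "content": "A young fox finds a lost lantern."},
    {"role": "assistant", "content": "In the valley of lights..."},
    # 2. User instruction + 4. constraints
    {"role": "user", "content": (
        "Now write a brand-new tale about a curious mouse. "
        "150 words max, two paragraphs, end with a lesson about friendship."
    )},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```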

Writing Effective Prompts

1. Define Your Objective: Exactly what do you want—list, summary, code, analysis?

2. Choose a Persona: “You are an expert security auditor…”

3. Supply Context: Bullet facts or prior conversation history.

4. Add Examples: Show 1–3 ideal input→output pairs.

5. Set Constraints: Word limits, format (bullet points, table), tone (formal, friendly).

6. Iterate & Test: Run your prompt, review output, tweak and version-control every change.
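To make step 6 concrete, many teams keep each prompt as a versioned template so every change can be tracked and compared. The sketch below is one possible shape, not a prescribed format; the template text, persona, and version label are all illustrative.

```python
# Keep each prompt as a versioned template so iteration is traceable.
# The version label and template wording here are hypothetical.
PROMPT_VERSION = "compare-tools-v3"

PROMPT_TEMPLATE = """You are an expert {persona}.
Context:
{context}

Task: {instruction}
Constraints: {constraints}"""

def build_prompt(persona: str, context: str, instruction: str, constraints: str) -> str:
    """Fill the template; log PROMPT_VERSION alongside every model call."""
    return PROMPT_TEMPLATE.format(
        persona=persona,
        context=context,
        instruction=instruction,
        constraints=constraints,
    )

prompt = build_prompt(
    persona="security auditor",
    context="- The app is a Django 5 web service\n- It stores payment data",
    instruction="List the top 5 risks to review before launch.",
    constraints="Bullet points, 100 words max, no speculation.",
)
print(prompt)
```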

System Prompt (the Director)

The system prompt defines global guardrails for the model before any user input arrives. In our storyteller example, “You are a children’s storyteller. Setting: a hidden valley where rivers glow at night.” is the system prompt: it shapes every response that follows.

User Prompt (the Actor)

The user prompt is the user’s request or question: in our example, “Now write a brand-new tale about a curious mouse.”

Context (the stage)

The context is the prior messages, documents, or data that inform the model’s response. For example, when you chat with a PDF document, the PDF’s text is the context; your earlier conversation with the bot is also part of the context.
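For instance, “chatting with your PDF” can be as simple as extracting the document’s text and placing it in the prompt ahead of the question. The sketch below assumes the pypdf package for extraction; the file name, question, and model name are placeholders.

```python
# Minimal "chat with your PDF": extract the text and pass it in as context.
# Assumes pypdf is installed; file name and question are placeholders.
from pypdf import PdfReader
from openai import OpenAI

reader = PdfReader("report.pdf")
pdf_text = "\n".join(page.extract_text() or "" for page in reader.pages)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "Answer using only the provided document."},
        # The extracted PDF text is the "stage" the answer rests on.
        {"role": "user", "content": f"Document:\n{pdf_text}\n\nQuestion: What are the key findings?"},
    ],
)
print(response.choices[0].message.content)
```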

Prompt Engineering

The art of crafting a prompt to get the desired outcome from the model is called prompt engineering. To write an effective prompt, you first need to ask yourself, "What exactly do I want the model to produce?" Then you can define:

  • The hat the model should wear: is it a "Data Analyst", a "Security Auditor", a "Digital Marketing Specialist", etc.?
  • The stage you prepare with context: it can be as simple as a few bullet-point facts or as large as 1,000 Wikipedia pages.
  • The examples you include: studies show that adding examples improves the model's performance. Prompts without examples are called zero-shot prompts; each example is one "shot" (see the sketch after this list).
  • The task, stated with directive verbs like "list", "write", "explain", or "generate".
  • The tone and style.
  • How you iterate and test: models change and improve quickly, and a prompt that works today isn't guaranteed to work in the future.
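As a quick illustration of shots, here is the same sentiment-classification task written as a zero-shot prompt and as a few-shot prompt with two example pairs. The reviews and labels are made up for the example.

```python
# Zero-shot vs. few-shot: each worked input/output pair is one "shot".
zero_shot = (
    "Classify the sentiment of this review as positive or negative:\n"
    "'Battery died after two days.'"
)

few_shot = """Classify the sentiment of each review as positive or negative.

Review: 'Arrived quickly and works perfectly.'
Sentiment: positive

Review: 'The screen cracked within a week.'
Sentiment: negative

Review: 'Battery died after two days.'
Sentiment:"""

print(few_shot)
```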

Chain of Thought (CoT)

Complex, multi-step tasks can stump an LLM—but CoT lets it “think out loud.” Simply append “Let’s think step by step” or “Explain your reasoning before answering,” and you’ll often see more accurate math, logic puzzles, or nuanced analyses. Log whether CoT improves your *accuracy* metric to decide if it’s right for your use case.
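A minimal way to try this is to send the same question twice, once plainly and once with the step-by-step nudge, and compare the answers. The sketch below uses the OpenAI Python SDK; the model name and word problem are illustrative.

```python
# Compare a direct answer with a Chain-of-Thought answer to the same question.
# Model name and question are illustrative.
from openai import OpenAI

client = OpenAI()
question = (
    "A bakery sells muffins in boxes of 6. "
    "If 4 friends each buy 3 boxes, how many muffins do they have in total?"
)

direct = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
)

cot = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question + "\nLet's think step by step, then give the final answer."}],
)

print("Direct:", direct.choices[0].message.content)
print("CoT:", cot.choices[0].message.content)
```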

Context Construction

As we said, context is the background “stage” you give the model so it can answer accurately. Good context reduces hallucinations, tailors responses to your domain, and keeps answers accurate and up to date. There are several ways to provide context for your application:

  1. Providing static context in the prompt, like bullet points, some facts, or a long text
  2. Using a web agent to search the internet and find relevant content
  3. Using retrieval-augmented generation (RAG) so the model can query large document collections and use the results as context
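The third option can be sketched in a few lines: retrieve only the chunks relevant to the question, then pass them in as context. The in-memory document list and keyword-overlap scoring below are toy stand-ins for a real vector store or search API; the model name is illustrative.

```python
# Compressed RAG sketch: retrieve relevant chunks, then answer from them only.
# DOCUMENTS and the keyword-overlap retriever are toy stand-ins for a vector store.
from openai import OpenAI

DOCUMENTS = [
    "Refunds: annual plans can be refunded within 30 days of purchase.",
    "Support hours are 9am-5pm UTC on weekdays.",
    "Monthly plans renew automatically unless cancelled.",
]

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Toy retriever: rank documents by how many query words they share."""
    words = set(query.lower().split())
    scored = sorted(DOCUMENTS, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return scored[:top_k]

question = "What is the refund policy for annual plans?"
context = "\n".join(retrieve(question))

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "Answer only from the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```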

Prompt Monitoring

Your work isn’t done once the prompt *looks* good—it must *perform* reliably. Track these metrics on every call:

– Quality Score (1–5 human review or auto-classifier)

– Latency (ms) & Cost (tokens, USD per call)

– Error Rate (malformed, off-topic, blank responses)

Log entries in JSON, then visualize in Grafana, Kibana, or even Google Sheets. Set alerts for cost spikes or quality drops, A/B test prompt variants, and iterate based on real data.
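One lightweight way to start is to emit one JSON line per call with these fields. The sketch below uses the OpenAI Python SDK; the prompt-version label, field names, and per-token price are assumptions you would replace with your own values.

```python
# Per-call prompt logging: one JSON line per request, ready for Grafana,
# Kibana, or a spreadsheet. Version label, field names, and pricing are assumed.
import json
import time
from openai import OpenAI

client = OpenAI()
PROMPT_VERSION = "storyteller-v2"  # hypothetical version label

start = time.time()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Summarize Trello vs. Asana in 3 bullets."}],
)
latency_ms = (time.time() - start) * 1000

content = response.choices[0].message.content or ""
entry = {
    "prompt_version": PROMPT_VERSION,
    "latency_ms": round(latency_ms),
    "tokens": response.usage.total_tokens,
    "cost_usd": response.usage.total_tokens * 0.15 / 1_000_000,  # assumed per-token price
    "error": content.strip() == "",          # blank response counts as an error
    "quality_score": None,                   # fill in from human review or an auto-classifier
}
print(json.dumps(entry))
```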