Harwell — Prompt Engineering for Developers

2-day practical course

Course aim

This two-day course is designed for developers who want to use AI effectively for coding: prompting, AI-assisted Java, tooling choices, RAG, and the Model Context Protocol (MCP). We focus on practical techniques you can apply immediately, with demos, sample code, and runnable POCs in the course repository.

Day 1 covers introduction to GenAI, core prompt engineering (the 3 Cs, few-shot, debugging), AI-assisted Java (generation, explanation, refactoring, testing), and tooling strategies (sidecar vs integrated, chat vs inline). Day 2 covers RAG, MCP, AI APIs (stateful/stateless, temperature, streaming, cost), and the future of AI with clear next steps.

Course materials

All slides, demo prompts, sample code, and runnable RAG/MCP demos are in the GitHub repository. Handouts and resources are below and can be downloaded from the links in this section.

Repository

Handouts

Quick-reference handouts:

Slides (PDF)

Module slides are available as PDFs for offline reference. Download by module:

To regenerate PDFs from the reveal.js sources, run npm run export-harwell-slides (requires Puppeteer). Slide sources are in the GitHub repo.

Module 1: Introduction to GenAI

LLM landscape, choosing the right model and tool, public vs. enterprise AI, MS Copilot in context, capabilities and limitations (hallucinations, knowledge cut-offs, bias), and when to trust or verify AI output.

Module 2: Core Prompt Engineering

The three Cs (context, clarity, constraints), zero-shot and few-shot prompting, chain-of-thought, iterative refinement, and debugging prompts.
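Few-shot prompting can be sketched in code: collect example input/output pairs and append the real task last, so the model infers the pattern from the examples. The class and the examples below are illustrative only, not part of the course repo.

```java
import java.util.List;

// Minimal sketch of a few-shot prompt builder: each example pairs an input
// with the desired output, and the real task is appended last so the model
// completes the pattern.
public class FewShotPrompt {
    record Example(String input, String output) {}

    static String build(String instruction, List<Example> examples, String task) {
        StringBuilder sb = new StringBuilder(instruction).append("\n\n");
        for (Example e : examples) {
            sb.append("Input: ").append(e.input).append("\n")
              .append("Output: ").append(e.output).append("\n\n");
        }
        sb.append("Input: ").append(task).append("\nOutput:");
        return sb.toString();
    }

    public static void main(String[] args) {
        String prompt = build(
                "Convert Java field names to snake_case column names.",
                List.of(new Example("firstName", "first_name"),
                        new Example("createdAt", "created_at")),
                "lastLoginTime");
        System.out.println(prompt);
    }
}
```

Two or three well-chosen examples usually beat a long description of the format you want.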

Module 3: AI-Assisted Java

Code generation, explanation, refactoring, and unit testing with AI; when to generate vs. write manually; review-before-paste and evaluation checklists.

Module 4: Tooling Strategies

Sidecar vs. integrated workflows, context headers for sidecar AI, chat vs. inline completion, and best practices for mixed environments.

Module 5: RAG

Retrieval-augmented generation: why LLMs don’t know your private or recent knowledge, how RAG works (retrieve, augment, generate), RAG vs. fine-tuning vs. long context, and when RAG helps.
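The retrieve–augment–generate loop can be sketched in a few lines of plain Java. The keyword-overlap scoring below is a stand-in for real embedding search, and the documents and names are invented for illustration.

```java
import java.util.*;

// Minimal RAG sketch: score documents by keyword overlap with the question
// (a stand-in for vector similarity), then splice the best match into the
// prompt so the model answers from your knowledge, not its training data.
public class RagSketch {
    static String retrieve(String question, List<String> docs) {
        Set<String> q = new HashSet<>(Arrays.asList(question.toLowerCase().split("\\W+")));
        return docs.stream()
                .max(Comparator.comparingLong(d ->
                        Arrays.stream(d.toLowerCase().split("\\W+"))
                              .filter(q::contains).count()))
                .orElse("");
    }

    static String augment(String question, String context) {
        return "Answer using only this context:\n" + context
             + "\n\nQuestion: " + question;
    }

    public static void main(String[] args) {
        List<String> docs = List.of(
                "Refunds are processed within 14 days of the return.",
                "Shipping is free on orders over 50 pounds.");
        String context = retrieve("How long do refunds take?", docs);
        System.out.println(augment("How long do refunds take?", context));
    }
}
```

The augmented prompt is then sent to the model as usual; the retrieval step is what changes, not the generation step.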

Module 6: MCP

Model Context Protocol: the silo problem, how MCP works, and when to consider it in your tooling.
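MCP messages are JSON-RPC 2.0, so a tool invocation can be sketched by hand to show its shape. The tool name and arguments below are invented for illustration, and a real client would use a JSON library and an MCP SDK rather than string concatenation.

```java
// Rough sketch of an MCP tools/call request (JSON-RPC 2.0 over a transport
// such as stdio). The tool name and arguments are invented examples; a real
// client would build this with a JSON library, not string concatenation.
public class McpRequestSketch {
    static String toolsCall(int id, String tool, String argJson) {
        return "{\"jsonrpc\":\"2.0\",\"id\":" + id
             + ",\"method\":\"tools/call\""
             + ",\"params\":{\"name\":\"" + tool + "\""
             + ",\"arguments\":" + argJson + "}}";
    }

    public static void main(String[] args) {
        System.out.println(toolsCall(1, "search_tickets", "{\"query\":\"login bug\"}"));
    }
}
```

The point of the protocol is that any MCP-aware client can discover and call any server's tools through this one message shape, which is what breaks the silo problem.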

Module 7: AI APIs

Stateful vs. stateless APIs, non-determinism and temperature, streaming, token economics and cost, and when to use what.
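Token economics comes down to simple arithmetic: most APIs price input and output tokens separately, per million tokens. A minimal sketch, with placeholder rates rather than any provider’s real prices:

```java
// Token economics sketch: pricing is typically per million tokens, with input
// and output priced separately (output usually costs more). The rates below
// are placeholders, not any provider's real prices.
public class TokenCost {
    static double costUsd(long inputTokens, long outputTokens,
                          double inPerMillion, double outPerMillion) {
        return inputTokens / 1_000_000.0 * inPerMillion
             + outputTokens / 1_000_000.0 * outPerMillion;
    }

    public static void main(String[] args) {
        // e.g. 2,000 prompt tokens and 500 completion tokens per call:
        // 0.006 + 0.0075 = $0.0135 per call
        double perCall = costUsd(2_000, 500, 3.00, 15.00);
        System.out.println("Per call: " + perCall
                + ", per 10k calls: " + perCall * 10_000);
    }
}
```

Cost scales with prompt size, so trimming context and caching repeated prefixes matter as much as choosing a cheaper model.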

Module 8: Future of AI

Recent developments, what’s coming next, what to try now vs. watch, course recap, and next steps so Monday feels different.

Handouts (inline)

Prompt template

Use this structure to get consistent, useful answers from AI when writing or reviewing code.

  1. Context (role + environment) — Role, stack, style (e.g. constructor injection, no @Autowired, immutable DTOs).
  2. Clarity (one clear ask) — One-sentence task, input, and desired output.
  3. Constraints (what to avoid) — No deprecated APIs, no new dependencies, same method signature, etc.
Example
Context: Senior Java developer. Stack: Spring Boot 3, Java 17, JUnit 5, Mockito. Style: Constructor injection, no @Autowired.
Task: Generate a JPA repository interface and a service method that returns all books by author. Return ResponseEntity<List<Book>>.
Constraints: Use the existing Book entity. Do not add new dependencies. Include error handling for empty results.

Context header for sidecar AI

When using a browser-based AI (e.g. ChatGPT, Copilot) alongside an IDE without built-in AI, paste this header above your code so the model knows your environment.

Copy-paste template
Context: Framework: Spring Boot 3. Java version: 17. Style: Constructor injection (no @Autowired). Testing: JUnit 5, Mockito. Database: JPA/Hibernate.
Code: [Paste your code here]
Question: [Your question]

Customise framework, Java version, style rules, and testing stack for your project. Without context, the AI gives generic answers; with this header, answers respect your stack and reduce back-and-forth.
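If you paste the header often, it can help to generate it once from your project facts. A small illustrative helper (the field names and values are examples, not part of the course repo):

```java
// Small helper that assembles the sidecar context header from project facts,
// so the same header is pasted consistently. All values are examples.
public class ContextHeader {
    static String build(String framework, String javaVersion,
                        String style, String testing, String database) {
        return "Context: Framework: " + framework
             + ". Java version: " + javaVersion
             + ". Style: " + style
             + ". Testing: " + testing
             + ". Database: " + database + ".";
    }

    public static void main(String[] args) {
        System.out.println(build("Spring Boot 3", "17",
                "Constructor injection (no @Autowired)",
                "JUnit 5, Mockito", "JPA/Hibernate"));
    }
}
```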

What we covered

Day 1: Introduction to GenAI, core prompt engineering (3 Cs, few-shot, debugging), AI-assisted Java (generation, explanation, refactoring, testing), tooling strategies (sidecar vs integrated, chat vs inline).

Day 2: RAG (retrieve–augment–generate), Model Context Protocol (MCP), AI APIs (stateful/stateless, temperature, streaming, cost), future of AI and next steps.

Additional resources

Links to documentation and tools referenced in the course:

Learning outcomes

By the end of the course you will be able to:

  • Choose the right model and tool for coding tasks and policy
  • Write prompts using context, clarity, and constraints
  • Use few-shot and iterative refinement effectively
  • Apply AI for Java code generation, explanation, refactoring, and testing
  • Use sidecar and integrated workflows and context headers
  • Explain when and how RAG and MCP apply
  • Reason about API design: stateful vs stateless, temperature, streaming, cost
  • Plan next steps to use AI confidently and safely

Prerequisites

  • Programming experience (Java/Spring helpful but not required)
  • Basic familiarity with using an IDE and running code
  • No prior AI or prompt-engineering experience required

Course delivery

  • Format: Live, instructor-led training (2 days)
  • Hands-on: Demos, sample code, and runnable POCs in the repo
  • Materials: Slides (reveal.js), handouts, and GitHub repo

Getting started

  1. Clone the repo — All code, demos, and slide sources are in doingandlearning/harwell-ai-course.
  2. Use the handouts — Apply the prompt template and the context header for sidecar AI when working with AI.
  3. Export slide PDFs — Open each module’s index.html in a browser and use Print → Save as PDF for offline reference.