Module 3: Agentic Coding
This module transforms you from a coder writing syntax to a manager orchestrating intelligence.
You’ll learn:
- The fundamental shift from copilots (next-token predictors) to agents (autonomous task completers) and why this changes your role from writer to architect.
- How to communicate effectively with LLMs through structured prompt engineering using instruction, data, format, persona, and context.
- The ReAct loop (Reason + Act) that transforms a passive language model into an autonomous agent capable of task completion.
- Context engineering techniques to manage the LLM’s working memory across its lifecycle through scratchpads, retrieval, summarization, and multi-agent architectures.
The Journey
Let’s talk about where this module takes you. We start with hands-on experience building with agents, then unpack the three operational layers that power agentic AI: the interface, the engine, and the operating system.
Hands-on: Building with Agents is practice-first. You’ll use Google Antigravity to build a functional game and refactor a codebase entirely through natural language instructions. This experience grounds the theory that follows.
Prompt Tuning: The Interface Layer teaches you how to communicate effectively with LLMs by understanding them as stateless pattern matchers sampling from probability distributions. You’ll structure prompts using instruction, data, format, persona, and context to reliably activate desired patterns.
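As a preview, here is a minimal sketch of what that five-part structure can look like in code. The field contents are purely illustrative, and the assembled string could be sent to any chat-completion endpoint:

```python
# A minimal sketch of the five-part prompt structure: persona, instruction,
# context, data, and format. All field contents below are illustrative.
persona = "You are a careful data analyst."
instruction = "Summarize the key trend in the data below in two sentences."
context = "The data covers monthly sign-ups for a small SaaS product."
data = "Jan: 120, Feb: 135, Mar: 180, Apr: 260"
fmt = "Respond as a single plain-text paragraph, no bullet points."  # 'fmt' avoids shadowing the built-in 'format'

prompt = f"{persona}\n\n{instruction}\n\nContext: {context}\n\nData: {data}\n\nFormat: {fmt}"
print(prompt)  # send this string to any chat-completion endpoint
```

Separating the components this way makes each one easy to swap and test independently, which is exactly the iteration loop the chapter develops.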
Agentic AI: The Engine Layer explains the ReAct loop (Reason + Act) that transforms a passive language model into an autonomous agent. You’ll build a working agent using LangGraph that can query and analyze datasets without human intervention.
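To give you a taste before the chapter, here is a minimal sketch using LangGraph’s prebuilt ReAct helper. It assumes `langgraph` is installed alongside an OpenAI integration with an API key configured; the model string, the stub tool, and the dataset numbers are all illustrative:

```python
# A minimal ReAct-style agent sketch using LangGraph's prebuilt helper.
# Assumes `langgraph` and an OpenAI chat-model integration are installed
# and OPENAI_API_KEY is set; the tool and its data are stand-ins.
from langgraph.prebuilt import create_react_agent

def row_count(table: str) -> int:
    """Return the number of rows in the named dataset (stubbed here)."""
    # A real agent would query the dataset; a lookup stands in for the sketch.
    return {"sales": 1_200, "users": 350}.get(table, 0)

agent = create_react_agent(
    model="openai:gpt-4o-mini",  # any chat model LangGraph can initialize
    tools=[row_count],           # plain functions with docstrings become tools
)

# The ReAct loop in action: the model reasons, decides to call row_count,
# observes the returned value, and then produces its final answer.
result = agent.invoke(
    {"messages": [{"role": "user", "content": "How many rows are in sales?"}]}
)
print(result["messages"][-1].content)
```

The loop you see here, reason, act, observe, repeat, is the same one you’ll build out with real dataset tools in the chapter.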
Context Engineering: The Operating System Layer solves the context window problem. LLMs are brilliant but bounded: their limited working memory degrades as it fills. You’ll learn to manage context across its lifecycle: write through scratchpads and memories, select using MCP and just-in-time retrieval, compress through summarization, and isolate using multi-agent architectures.
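To make that lifecycle concrete, here is a toy sketch of the write, select, and compress ideas in plain Python. The class name, word-count budget, and truncation-based “summarizer” are stand-ins for the LLM-backed versions you’ll build in the chapter:

```python
# A toy "write / select / compress" context manager: persist durable notes,
# and fold old conversation turns into a summary once the budget is exceeded.
# The word budget and truncation summarizer are crude illustrative stand-ins.
from dataclasses import dataclass, field

@dataclass
class Scratchpad:
    notes: list[str] = field(default_factory=list)    # "write": durable working notes
    history: list[str] = field(default_factory=list)  # raw conversation turns
    budget: int = 200                                 # crude word budget for the window

    def write(self, note: str) -> None:
        self.notes.append(note)

    def add_turn(self, turn: str) -> None:
        self.history.append(turn)
        if len(self.history) > 2 and self._words() > self.budget:
            self._compress()

    def _words(self) -> int:
        return sum(len(t.split()) for t in self.history)

    def _compress(self) -> None:
        # "compress": fold older turns into a one-line summary. In practice
        # this would be an LLM summarization call; truncation stands in here.
        old, self.history = self.history[:-2], self.history[-2:]
        self.write(f"Summary of {len(old)} earlier turns: {old[0][:60]}...")

    def context(self) -> str:
        # "select": the string the model actually sees on the next call.
        return "\n".join(self.notes + self.history)

pad = Scratchpad(budget=20)
for turn in ["user: load the sales data", "agent: loaded 1200 rows",
             "user: plot monthly totals", "agent: saved chart.png"]:
    pad.add_turn(turn)
pad.write("Dataset: sales.csv, 1200 rows")
print(pad.context())
```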
Why This Matters
Have you ever wished your coding assistant could actually finish the task instead of just suggesting the next line? The shift from copilot to agent is not just an upgrade in model size; it is a fundamental change in how you interact with AI. It moves your role from writer of syntax to manager-architect. Your job is no longer to know the exact syntax of a matplotlib plot. Instead, you know what plot you need, how to specify that requirement clearly, and how to verify that the agent built it correctly.
This transformation matters not only for productivity but for how you think about software development. Understanding agentic systems means understanding feedback loops, context management, and task decomposition. These concepts extend far beyond coding assistants to any system that makes decisions autonomously.
Prerequisites
You should be comfortable with basic Python programming and familiar with API usage. Prior exposure to language models helps but isn’t required (we’ll introduce LLM concepts as needed). If you completed Module 4 on text and transformers, you’ll have deeper insight into what’s happening under the hood, but this module stands alone.
What You’ll Build
By the end of this module, you’ll build a working agent using LangGraph, design effective prompts that reliably produce desired outputs, and implement context management strategies like scratchpads and retrieval systems. You’ll gain practical skills in crafting clear instructions, structuring agent workflows, debugging when agents fail, and managing computational costs. Most importantly, you’ll develop the mindset shift from “How do I code this?” to “How do I specify this clearly enough for an agent to execute?”
Let’s begin by getting your hands dirty with agentic systems.