Software Engineer, powered by Devin
Go from 'I want to build software' to shipping real PRs — with a real autonomous engineer pair-programming with you.
A 12-week curriculum that teaches modern software engineering by pairing you with Devin. You'll learn Git, code reading, specs, debugging, system design, and shipping — and at every capstone you'll actually ship a PR that Devin opens for you. Inspired by Curriki's open educational resources ethos and the 'Programs → Playlists → Activities' model from CurrikiStudio.
At a glance
- Modules: 5
- Lessons: 17
- Duration: 12 weeks
- AI engine: Devin
Module 1
Git & GitHub
The muscle memory every software engineer needs: branches, commits, pull requests, and code review.
Module 2
Reading code
Engineers spend far more time reading code than writing it. Learn to read strategically — and to ask Devin the right questions.
1. Reading is the job
Why senior engineers read 10x more code than they write — and how to do it well.
10 min
2. Ask Devin to explain an unfamiliar repo
Turn a cold codebase warm in 15 minutes by pairing with Devin.
20 min
3. Reading a PR diff effectively
Diffs are a dense, specialized form of code — and they're most of what you'll read.
15 min
Module 3
Pair-programming with AI
Using Claude Code, Devin, and subagents at the keyboard — where each tool shines, how to delegate without losing the thread, and how to stay the one driving.
1. The new inner loop
Write → run → debug is now write → delegate → review. What changes when an agent types alongside you.
12 min
2. Your keyboard-level agent
Setting up a terminal-native AI pair, the three commands you'll run 90% of the time, and what it can't do.
15 min
3. Delegation as a skill
The anatomy of a good task description, the four kinds of context to hand over, and how to avoid the 'almost right' trap.
15 min
4. Subagents: decomposing a task
When one agent isn't the right shape — and how spawning researcher/writer/critic subagents changes the work.
12 min
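The researcher/writer/critic decomposition in lesson 4 can be sketched as plain function composition. This is an illustration of the pattern only, not any framework's API: `call_agent` is a stand-in for whatever LLM or Devin session call you actually make, and the role prompts are invented for the example.

```python
# Sketch of a researcher -> writer -> critic subagent pipeline.
# call_agent is a stub standing in for a real LLM/agent call.

def call_agent(role: str, task: str) -> str:
    # In real use, this would send a role-specific prompt to an agent
    # and return its response. Here it just labels the task.
    return f"[{role}] {task}"

def run_subagents(task: str) -> str:
    # Each stage consumes the previous stage's output as context.
    notes = call_agent("researcher", f"Gather context for: {task}")
    draft = call_agent("writer", f"Draft a solution using: {notes}")
    review = call_agent("critic", f"Review the draft: {draft}")
    return review
```

The point of the shape is that each subagent gets a narrow task and a bounded context, instead of one agent juggling all three jobs at once.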
Module 4
Building with AI APIs
Wiring a real LLM into your own software — the request shape, streaming, tool use, cost and error handling, and the shape of a production AI endpoint.
1. Anatomy of an LLM request
What's actually on the wire when you call Claude / GPT / Gemini, and why every field matters.
12 min
2. Streaming, chunks, and the TextDecoder flush bug
Why your chat UI feels snappy, how SSE actually works, and the one-line bug that drops emoji at chunk boundaries.
14 min
3. Tool use: letting the model call your code
How tool calling works under the hood, what to expose, what not to expose, and the auth check every agent endpoint needs.
16 min
4. Cost, errors, and the shape of a production AI endpoint
Counting tokens before you cry, retry policy, user-visible error messages, and the one-page checklist every AI route should pass.
14 min
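The chunk-boundary bug from lesson 2 is easy to reproduce outside the browser. The lesson names `TextDecoder`, where the fix is passing `{stream: true}`; the Python sketch below shows the same failure and the same fix using the standard library's incremental decoder. The SSE payload is a made-up example.

```python
import codecs

payload = "data: hi 👋\n\n".encode("utf-8")
# Simulate network chunks that split the 4-byte emoji mid-character.
chunks = [payload[:11], payload[11:]]

# Naive: decoding each chunk independently mangles the emoji, because
# neither chunk contains its complete UTF-8 byte sequence.
naive = "".join(c.decode("utf-8", errors="replace") for c in chunks)

# Correct: an incremental decoder buffers the partial bytes until the
# rest of the character arrives (what TextDecoder's {stream: true} does).
decoder = codecs.getincrementaldecoder("utf-8")()
streamed = "".join(decoder.decode(c) for c in chunks)
streamed += decoder.decode(b"", final=True)  # flush any trailing partial
```

The final flush matters too: if the stream ends mid-character, the buffered bytes are only surfaced when you signal end-of-input.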
Module 5
Model Context Protocol
The open protocol that lets any agent discover and call any tool. How MCP servers work, how to write one, and how to plug it into your own code.
1. Why a protocol?
Every agent calling every tool in a custom way doesn't scale. MCP solves the N×M problem.
10 min
2. Your first MCP server
A 30-line Python server that exposes one tool. Then another. Then 'connect to my Postgres.' Build up from the minimal shape.
18 min
3. MCP beyond tools: resources, prompts, and composition
The rest of the protocol. How to expose data (not just actions), ship prompt templates, and stitch multiple servers together.
14 min
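The minimal server from lesson 2 boils down to a small JSON-RPC surface. The sketch below shows only the protocol shape: the `tools/list` and `tools/call` method names come from the MCP spec, while stdio framing, initialization, and the real SDK are omitted, and the `add` tool is invented for the example.

```python
# Minimal sketch of the JSON-RPC dispatch at the heart of an MCP server.
# A hypothetical one-tool server: "add" takes two integers.

TOOLS = [{
    "name": "add",
    "description": "Add two integers.",
    "inputSchema": {
        "type": "object",
        "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
        "required": ["a", "b"],
    },
}]

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request and build its response."""
    method = request["method"]
    if method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call" and request["params"]["name"] == "add":
        args = request["params"]["arguments"]
        result = {"content": [{"type": "text", "text": str(args["a"] + args["b"])}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}
```

A real server would read such requests off stdin one message at a time and write responses back; an MCP SDK handles that framing so your code only declares tools and implements them.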