
A Professional Introduction to Building with AI-Assisted Development
LunarTech · SilverAI Productions — _Version 1.0, February 2026_
_"I have never enjoyed coding as much as I do today — because I no longer have to deal with the minutia."_
— Boris Cherny, Head of Claude Code, Anthropic
This handbook was written for anyone who intends to work with Claude Code seriously — whether you have been writing software for two decades or have never opened a development environment.
The tone here is direct and considered. The explanations are complete. Nothing is glossed over, and nothing is inflated. When a concept requires depth, it receives it. When simplicity is sufficient, nothing more is offered.
You will find no motivational padding here. What you will find is a thorough account of what Claude Code is, how it functions, and how to use it well.
Software development has historically been access-restricted. To build a working application — a web service, a data tool, a user-facing product — required either years of technical training or a funded team of engineers. The knowledge barrier was steep, the required time investment was significant, and the population of people who could build was correspondingly small.
This constraint began to erode with the emergence of large language models capable of generating functional code from natural-language descriptions. What started as an augmentation of individual developers has, within the span of a few years, become a structural transformation of how software is built.
As of early 2026, the scale of this shift is measurable and substantial. Independent analysis by SemiAnalysis found that 4% of all GitHub commits globally are authored by Claude Code. That figure is projected to reach 20% by year-end. Spotify disclosed that its highest-performing engineers have not written code manually since December. Boris Cherny, who leads Claude Code at Anthropic, ships between 10 and 30 pull requests daily — every one generated by Claude, none edited by hand.
This is not a speculative future. It is the current state.
The significance of this moment extends beyond productivity metrics. The ability to build software is becoming accessible to a broader class of people — not because the underlying complexity has been eliminated, but because much of the mechanical translation between intent and implementation can now be delegated to an AI agent. What this unlocks, in human terms, is closer to what the printing press unlocked for written communication: a dramatic expansion of who can participate.
Boris Cherny frames it precisely: "I imagine a world where everyone is able to program. Anyone can just build software anytime." He draws the parallel to the printing press explicitly — a technology that transferred a capability previously held by a small, specialized group to the general population, and that preceded an explosion of human creative and intellectual output.
This handbook exists to help you become a capable participant in that transition.
Claude Code is a product of Anthropic. To understand the product fully, it is useful to understand the organization that built it.
Anthropic was founded in 2021 by Dario Amodei, Daniela Amodei, and several colleagues who had previously worked at OpenAI. The founding motivation was not competitive positioning. It was a principled disagreement about how AI development should proceed.
The founders believed — and continue to believe — that building powerful AI systems without a rigorous, primary commitment to safety constitutes one of the most consequential risks humanity has introduced. Their response was to establish an organization whose central purpose is to develop AI capability and AI safety in parallel, treating the latter not as a constraint on the former but as an equal and inseparable objective.
Three of Anthropic's co-founders co-authored the original scaling laws paper — one of the foundational documents of modern AI research, which described mathematically how model capability scales with size and compute. These are people who understood the trajectory of AI capability before most of the industry had internalized it. Their choice to build an organization focused on safety reflects informed conviction, not caution.
At Anthropic, safety research manifests across multiple layers. The deepest is mechanistic interpretability — the scientific effort to understand what is actually happening inside a model at the level of individual computational components. This is not an abstract exercise. As Boris Cherny describes it: "We can identify a neuron related to deception. We are starting to get to the point where we can monitor it and understand that it's activating."
This work informs how models are trained, how they are evaluated, and how they are deployed. It also shapes Claude Code directly. Before public release, Claude Code ran internally at Anthropic for four to five months, with behavior studied carefully before any external release. This was not a formality. It reflected genuine uncertainty about how an agentic AI system would behave in conditions that training-time evaluations cannot fully anticipate.
By early 2026, Anthropic had reached a valuation of over $350 billion. Claude Code is reported to generate over $2 billion in annual revenue and continues to accelerate — daily active users doubled in the month prior to this writing. The company's models, particularly Claude Sonnet 4.6 and Claude Opus 4.6, are the current standard for serious AI-assisted software development across organizations from early-stage startups to the largest technology companies in the world.
Claude Code is powered by Anthropic's Claude models. The models are the intelligence underlying the system. Claude Code provides the environment — the tools, the interface, the scaffolding — but the models determine the quality of reasoning, planning, and execution.
Claude Sonnet 4.6 is Anthropic's mid-tier model. It delivers strong performance across coding tasks — planning, implementation, debugging, documentation — at a meaningfully lower cost per token than Opus.
Sonnet 4.6 represents the inflection point at which Claude Code became broadly useful. Prior to this generation, models were capable but insufficiently reliable for production workflows. Sonnet 4.6 changed that, providing the reasoning depth required for real engineering work at a price accessible to individual developers and small teams.
For most development tasks of moderate complexity, Sonnet 4.6 is adequate. It handles single-feature implementations, debugging sessions, and documentation generation well. Where it reaches its limits — extended autonomous sessions, deeply architectural decisions, complex multi-step reasoning — Opus 4.6 becomes the appropriate choice.
Claude Opus 4.6 is Anthropic's most capable model. Research measuring its performance on real software engineering tasks found that it achieves a time horizon of approximately 14.5 hours at 50% task completion rate — meaning it can handle unattended work that would occupy a skilled engineer for most of a working day.
Boris Cherny uses Opus 4.6 exclusively, with maximum effort enabled, and never reduces capability to save tokens. His reasoning is precise: "Because a less capable model is less intelligent, it requires more tokens to do the same task. It is not obvious that using a cheaper model is actually cheaper. Often, the most capable model is cheaper and less token-intensive because it completes the task faster with less correction."
Opus 4.6 is Anthropic's first ASL-3 class model — a designation in their safety classification framework applied to models of sufficient power that the most rigorous safety protocols are warranted before and after release.
Claude Haiku is Anthropic's lightest model — fast, inexpensive, and suited to simple tasks: summarization, brief lookups, lightweight generation. For Claude Code work, Haiku is rarely the right choice. It lacks the reasoning depth required for meaningful software development.
The practical guidance is straightforward: do not select a model primarily on cost. A less capable model that requires more correction cycles, more context clarification, and more tokens to reach an acceptable output frequently costs more in total than a more capable model that completes the task in fewer passes.
Claude Code is an AI agent for software development. That definition requires unpacking.
A conversational AI system — ChatGPT, Claude.ai in basic form, most early AI products — produces text in response to text. It can explain, summarize, translate, draft. It generates output that a human then acts upon. The AI does not itself act in the world.
Claude Code is categorically different. It is an agent — an AI system equipped with tools that allow it to act. In a Claude Code session, the model does not merely generate a code snippet and return it to you. It reads the files in your project. It writes and modifies those files. It executes terminal commands. It installs packages. It runs tests. It searches the web. It commits to version control. It opens pull requests.
The distinction matters. When you engage Claude Code on a development task, you are not prompting a text generator. You are directing an autonomous agent that can execute sequences of actions, make decisions about which tools to employ, and produce material results in your codebase.
Most AI-powered development tools are built by constraining the model — defining rigid workflows, controlling what the model can see, specifying precisely which tools it can use in which sequence. This creates predictability at the cost of flexibility and capability.
Claude Code's architecture inverts this. As Boris Cherny describes it: "The product is the model." The approach is to expose the model as directly as possible, with a minimal set of tools and minimal scaffolding, and allow the model to determine the best approach for a given task. The model decides which tools to use, in what order, and how to combine them.
This approach trusts the model's judgment. With Claude Opus 4.6, that trust is warranted. The model can assess a complex problem, formulate a strategy, execute it using available tools, and adapt when it encounters unexpected conditions — without constant human intervention.
Claude Code is available across multiple surfaces, including the terminal, the VS Code extension, and other environments where developers already work.
The underlying agent is identical across all surfaces. The interface differs; the capability does not.
This breadth is a deliberate expression of product philosophy: bring the tool to wherever people already work, rather than requiring people to adapt to a new environment. Boris describes this as "latent demand" — a principle that shaped both Claude Code's original terminal deployment and its subsequent expansion.
Boris describes the growth trajectory as still accelerating: "It's not just going up — it's going up faster and faster."
Claude Code did not emerge from a product strategy. It emerged from a genuine need in how software is built — and from an observation about where friction in that process actually lives.
Professional software development involves significant mechanical work that has nothing to do with the intellectual challenge of building good systems. In a typical codebase, this overhead consumes a large portion of a developer's day.
None of this is where engineering expertise is exercised. It is the overhead of translation — converting understanding into syntax, correct syntax, in the right files. Boris describes it as "the minutia" and "the tedious parts" — things that consumed time without demanding the faculties that matter.
Claude Code eliminates this overhead. The mechanical translation is handled by the agent. What remains for the engineer is the part that requires judgment: what to build, how to architect it, whether it is correct, whether it serves its purpose.
Beyond the tedium experienced by professional developers, there is the structural barrier facing everyone else. Building functional software has required years of technical training. The ideas exist broadly. The capacity to execute them has been concentrated in a small technical population.
Claude Code redistributes that capacity. It does not eliminate the need for understanding and judgment — this handbook will return to that point repeatedly — but it dramatically lowers the threshold at which someone can produce working software. A product manager, a scientist, a business owner, a domain expert with a clear idea can now begin building in a way that was not previously available to them.
Senior engineers frequently know exactly what they want but find it difficult to convey that precisely enough to produce it consistently. The distance between a specification and its implementation — what engineers call the specification gap — is one of the chronic sources of friction in engineering teams.
With Claude Code, that gap narrows substantially. The developer remains in the loop throughout, reviewing plans before they are executed, inspecting output as it is produced, redirecting when necessary. The feedback cycle collapses from days to minutes. Misalignments are caught early, when they are cheap to fix.
This chapter covers everything required to have Claude Code running on your machine from a position of zero prior configuration.
Before installing Claude Code, you will need an account on Claude.ai.
This account authenticates you across all Anthropic products, including Claude Code.
Claude Code requires a paid subscription. The available tiers as of 2026:
Claude Pro — $20 per month
The Pro plan provides access to Claude's models with a daily usage limit. When you reach that limit, you wait until the following day to continue.
For someone beginning with Claude Code — building small projects, learning the system, running sessions of moderate intensity — the Pro plan is sufficient. It is not designed for sustained professional use, but it is an appropriate entry point.
Claude Max — $100 or $200 per month
The Max plan substantially increases or effectively removes usage limits. At the $200 tier, Anthropic's own team describes never encountering the usage ceiling under normal working conditions.
If Claude Code becomes a primary instrument in your workflow — which it will, if you use it consistently — the Pro plan will constrain you. Upgrade to Max when you begin hitting limits regularly.
On the cost question: Claude Code at the $20 tier represents a low threshold for access to something that materially changes what one person can build. The question is not whether the tool is worth the cost. The question is whether you will use it consistently enough to benefit from it. Begin at $20. The answer will be evident within a few weeks.
Navigate to code.claude.ai. This is Anthropic's official installation hub for Claude Code, with installation instructions for each supported platform.
If you are comfortable in a terminal, the single-line npm installation command is sufficient. If not, proceed to the VS Code method described in the following chapter.
For those without a prior terminal workflow, Visual Studio Code provides the most accessible entry point into Claude Code. This chapter covers that setup completely.
Visual Studio Code is a free code editor distributed by Microsoft and built on an open-source core. It holds over 70% market share among developers and is the environment in which the majority of professional software development occurs today — not because it is technically superior to all alternatives, but because it is well-designed, extensible, and broadly supported.
The Claude Code extension for VS Code provides a graphical interface to the same underlying agent you would access through a terminal.
This environment is appropriate for any skill level. Experienced developers benefit from the tight integration. Those new to development benefit from the visibility — it is always clear what Claude is doing and why.
When installation completes, a Claude Code icon will appear in the VS Code toolbar. This opens the Claude Code panel.
Claude Code operates within a directory — a folder containing your project files. Before beginning, create a folder for your project and open it in VS Code via File → Open Folder.
VS Code will display the (currently empty) folder in its sidebar. Claude Code will read from and write to this directory throughout your session.
Click the Claude Code icon. The Claude Code panel opens. You are ready to begin.
The first time you open Claude Code, you will be asked to authenticate with your Claude.ai account.
A working understanding of token economics will inform better decisions about how you use Claude Code.
Every interaction with a Claude model consumes tokens. A token corresponds roughly to three-quarters of a word — so a prompt of 100 words and a response of 300 words represents approximately 530 tokens total.
Token consumption accumulates across a session. Each message you send, each response Claude generates, each file Claude reads — all of this is tokenized and counted. On subscription plans, this accumulated usage is measured against your plan's daily allowance.
Brief queries consume trivially few tokens. But the kind of work Claude Code is built for — reading through an existing codebase, planning a feature, implementing it, correcting course, running verification — can consume tens of thousands of tokens in a single session.
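That rule of thumb can be turned into a quick estimator. This is a rough approximation only; real tokenizers split text differently, and the 0.75 ratio is simply the handbook's own heuristic:

```javascript
// Rough token estimate using the handbook's rule of thumb:
// one token corresponds to about three-quarters of a word,
// so token count is approximately words / 0.75.
function estimateTokens(text) {
  const words = text.trim().split(/\s+/).filter(Boolean).length;
  return Math.ceil(words / 0.75);
}

// A 100-word prompt plus a 300-word response: 400 words, ~534 tokens,
// matching the "approximately 530" figure quoted above.
const prompt = "word ".repeat(100);
const response = "word ".repeat(300);
console.log(estimateTokens(prompt) + estimateTokens(response)); // prints 534
```

Estimates like this are useful for budgeting sessions, not for billing-level accuracy.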
Advanced users running multiple parallel agents consume far more. Boris Cherny notes that some engineers at Anthropic spend hundreds of thousands of dollars monthly in token costs. For those individuals, Claude Code has replaced what would otherwise require entire engineering teams.
For someone beginning, usage will be modest. Token costs at the Pro tier are not a practical concern during the learning phase.
Boris Cherny's advice on model selection addresses a counterintuitive point: using a less capable model to reduce token costs often increases total token consumption. A model with weaker reasoning requires more correction passes, generates more context clarification, and takes longer to converge on an acceptable result. The less capable model costs less per token and often costs more per task.
His recommendation: use the most capable model available. Currently, that is Opus 4.6. Reduce model capability only when performance requirements are demonstrably lower and the trade-off is understood.
The broader principle he articulates: "Don't try to optimize too early. Give engineers as many tokens as they need. At the point where something is proven and scaling, then optimize. Not before."
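The arithmetic behind this point is easy to check with hypothetical numbers. The prices, token counts, and correction-pass counts below are invented for illustration, not Anthropic's actual figures:

```javascript
// Illustrative only: per-token price is not per-task cost.
// All numbers here are hypothetical.
function taskCost(pricePerMillionTokens, tokensPerAttempt, attempts) {
  return (tokensPerAttempt * attempts * pricePerMillionTokens) / 1e6;
}

// A "cheap" model at $3/M tokens that needs five correction passes
// of 60k tokens each, versus a capable model at $15/M tokens that
// completes the task in a single 40k-token pass.
const cheap = taskCost(3, 60_000, 5);    // 300k tokens total: $0.90
const capable = taskCost(15, 40_000, 1); // 40k tokens total: $0.60
console.log(cheap > capable); // prints true
```

Under these assumed numbers, the model that is five times cheaper per token is fifty percent more expensive per task, which is exactly the trap the quote describes.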
The rule is this: begin at the plan that does not actively frustrate your usage. If you reach limits and want to continue, that is information — it means you have integrated the tool into your work. Upgrade at that signal.
With Claude Code installed and a project directory open, the first session is an exercise in learning how the system communicates and responds.
The appropriate starting point is a project with immediately visible output. For those new to software development, web projects are optimal: you write files, open a browser, and see the result directly. The feedback loop is fast and unambiguous.
A minimal first task might look like this:
Build a personal homepage. Include a name, a short biographical description,
and links to LinkedIn and Twitter. The design should be clean and dark-themed.
Use HTML and CSS only — no JavaScript frameworks.
Claude Code will create the necessary files in your project directory and summarize what it has built.
Open the resulting index.html file in a browser. Assess whether it meets your intent. Where it does not, state precisely what needs to change. This cycle — prompt, output, assessment, refinement — is the fundamental working method.
Proficiency with Claude Code develops through use, not through reading. Each session — each cycle of prompt, output, and refinement — builds judgment about how to specify intent clearly, how to evaluate Claude's output, and how to correct course efficiently.
This is not unique to AI tools. It is how expertise in any instrument develops. Begin with modest scope, observe closely, and adjust. The speed at which fluency develops is proportional to how actively you engage with each result rather than accepting or rejecting it without analysis.
The relationship between prompt quality and output quality is direct and consistent. This is not a limitation of Claude Code — it is a property of any system that must translate intent into action. The more precisely intent is expressed, the more accurately it can be executed.
Poor results from Claude Code are almost never attributable to model incapability. They are attributable to underspecified prompts. When Claude produces something that does not match what you wanted, the correct diagnostic question is: _was my specification sufficient to produce what I wanted?_
In most cases, it was not.
Claude operates on what it receives. If you provide an abstract description of a product, Claude fills the gaps with its own reasonable assumptions. Where those assumptions deviate from your expectations — and they will — the output disappoints. The gap is not between what Claude can do and what you need. It is between what Claude knew and what it needed to know.
Effective prompts are specific, contextual, and feature-oriented. Consider the difference:
Underspecified:
Build a task management app.
Adequately specified:
Build a task management application for a three-person team. Requirements:
1. Tasks have a title, optional description, due date, and priority level (low, medium, high)
2. Each task can be assigned to one of three hardcoded users: Alice, Bob, or Carol
3. Tasks can be marked complete, with the completion timestamp recorded
4. The task list can be filtered by assignee or by priority
5. All data persists in localStorage — no backend required
6. Interface: clean, light theme; no external CSS frameworks
Technology: HTML, CSS, vanilla JavaScript only.
The second version closes all the significant decision points Claude would otherwise resolve by assumption. It produces a result substantially closer to intent on the first pass.
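As an illustration of how one of those closed decision points might be implemented, here is a minimal sketch of requirement 5, persistence in localStorage. The storage key and helper names are illustrative choices, not a fixed API, and the in-memory fallback exists only so the sketch also runs outside a browser:

```javascript
// Minimal sketch of the spec's localStorage persistence (requirement 5).
// In a browser, `localStorage` is used directly; the fallback below is
// an in-memory stand-in so the sketch runs in other environments too.
const storage = (typeof localStorage !== "undefined") ? localStorage : (() => {
  const m = new Map();
  return {
    getItem: (k) => (m.has(k) ? m.get(k) : null),
    setItem: (k, v) => m.set(k, String(v)),
  };
})();

const STORAGE_KEY = "tasks-v1"; // illustrative key name

function saveTasks(tasks) {
  storage.setItem(STORAGE_KEY, JSON.stringify(tasks));
}

function loadTasks() {
  const raw = storage.getItem(STORAGE_KEY);
  return raw ? JSON.parse(raw) : [];
}

function completeTask(tasks, title) {
  // Record the completion timestamp, as requirement 3 specifies.
  return tasks.map((t) =>
    t.title === title ? { ...t, completed: true, completedAt: Date.now() } : t
  );
}

// Usage: create, persist, reload, complete, persist again.
saveTasks([{ title: "Draft PRD", assignee: "Alice", priority: "high", completed: false }]);
let tasks = loadTasks();
tasks = completeTask(tasks, "Draft PRD");
saveTasks(tasks);
```

The point is not the code itself but its shape: every behavior here maps directly to a numbered requirement in the prompt, which is what makes the output verifiable.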
State what you want Claude to do even when it seems obvious. If you want Claude to examine documentation before writing code — say so. If you want specific library versions — specify them. If you want no changes to existing files — make that explicit. If you want a particular file structure — describe it.
Implicit expectations are expectations that are frequently not met. Explicit instructions are instructions that are consistently followed.
A practical test: if a competent engineer read your prompt with no other context, could they build exactly what you have in mind? If not, the prompt is not yet specific enough.
This standard is useful because it surfaces the actual gaps — the decisions you have not yet made, the constraints you have not yet articulated, the behavior you have not yet defined. Filling those gaps before Claude begins building is far more efficient than discovering them in the output.
Planning is the highest-leverage activity in Claude Code development. It is also the most commonly underinvested.
A well-specified plan, reviewed and approved before any code is written, produces three effects: it replaces Claude's assumptions with your explicit decisions, it gives you a point of intervention before any implementation cost has been incurred, and it substantially compresses the build that follows.
Without adequate planning, Claude makes architectural and implementation decisions autonomously, based on what it infers from limited input. Some of those decisions will be correct. Some will not. Discovering which ones were wrong after the codebase has been built around them is expensive — it requires reading, understanding, and correcting code that was generated at significant token cost.
The ratio of planning time to development time that produces optimal outcomes is higher than most people initially expect. Thirty minutes of structured planning frequently reduces a ten-hour build to three hours. The mathematics are not subtle.
Claude Code includes a dedicated Plan Mode. In this mode, Claude reasons through the task and produces a structured plan — which files will be affected, what the implementation sequence will be, how data will flow, what edge cases need to be handled — without writing a single line of code.
You review the plan. You can question it, modify it, reject portions of it, or add constraints. Only when the plan reflects your actual intent do you release Claude to begin implementation.
Boris Cherny uses Plan Mode for approximately 80% of his sessions. The mechanism itself is disarmingly simple — a single sentence injected into the model's context: _"Please do not write any code yet."_ That single instruction changes Claude's behavior from execution to structured reasoning.
To activate Plan Mode in the terminal interface, press Shift+Tab to cycle through modes until Plan Mode is active; alternatively, simply instruct Claude to produce a plan before writing any code.
The discipline here is important: actually read the plan Claude produces. Do not approve it reflexively. The plan is the point at which you can intervene at minimum cost.
The formal output of a planning session is a Product Requirements Document (PRD). For individual projects, this need not be elaborate. It should contain, at minimum, what the product does, the feature specifications, the data model, the significant UI decisions, and the technical constraints.
A PRD.md file at the project root, readable by Claude Code at the start of each session, provides consistent context that persists across sessions. The quality of this document directly determines the quality of every subsequent build session.
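As a concrete sketch, the task-manager specification discussed earlier could be captured in a PRD.md along these lines. The section names are one reasonable convention, not a required format:

```markdown
# PRD — Task Manager

## Overview
Task management for a three-person team (Alice, Bob, Carol).

## Features
1. Tasks: title, optional description, due date, priority (low/medium/high)
2. Assignment to one of three hardcoded users
3. Completion, with the completion timestamp recorded
4. Filtering by assignee or by priority

## Data and persistence
All data in localStorage; no backend.

## Technology and constraints
HTML, CSS, vanilla JavaScript; light theme; no external CSS frameworks.
```

A document of this size takes minutes to write and gives every subsequent session the same starting context.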
A useful technique for generating a complete PRD is to instruct Claude to interview you about the project before planning anything:
I want to build [project description]. Before writing any plan or code,
please use the Ask User Question tool to interview me systematically —
covering technical requirements, feature specifications, UI decisions,
data model, and any trade-offs I should consider.
Do not proceed to planning until the interview is complete.
This surfaces decisions you may not have consciously formulated. Some questions will have obvious answers. Others will reveal gaps in your own thinking — gaps you would prefer to close before they manifest as incorrect implementation decisions.
A planned project is built incrementally. The instinct to attempt complete implementation in a single pass should be resisted. Feature-by-feature development is not slower — it is more reliable, more verifiable, and ultimately faster.
Each feature implemented is a unit of behavior that can be verified independently. If Feature 2 is built on top of Feature 1 without verifying Feature 1, defects compound. A flaw in the foundation propagates upward, embedding itself in everything built above it. Discovering that flaw late multiplies the cost of correcting it.
Each verified feature is also a stable platform from which the next feature can be built with confidence. The accumulation of verified, working behavior is what a production system is.
There is also the matter of understanding. A developer who has watched — and reviewed — Claude build each feature, one at a time, understands how the system works. That understanding is necessary for directing effective corrections, making informed architectural decisions, and explaining the system to others.
For each feature in the PRD:
1. Specify the feature in detail. Provide the feature description and any supplemental context — files that will be involved, constraints on implementation, acceptance criteria.
2. Enter Plan Mode and review the plan. Read Claude's proposed approach. Is it consistent with your design intent? Will it affect files it should not touch? Is the data flow correct? Revise the plan if necessary.
3. Approve and release to implementation. Once the plan is sound, release Claude to implement. Review the diff view for each file change.
4. Test the feature. Manually exercise the feature's intended behavior. Deliberately test boundary conditions. Does it behave correctly when data is missing? When values are at their limits? When the user does something unexpected?
5. Address any discrepancies. State precisely what is incorrect and ask Claude to correct it. Specificity in correction is as important as specificity in the original specification.
6. Confirm and advance. When the feature works correctly, move to the next one.
For those new to software development, the best initial projects are small, web-based, and produce immediately visible output — a personal homepage, for example, or a small single-page tool.
Web projects require no server configuration, no deployment setup, and no build pipeline. Open a browser, load a file, observe the result. The feedback cycle is as short as possible — which makes the learning cycle as short as possible.
Claude Code's operational modes govern how it behaves when executing a task. Understanding these modes prevents common errors and allows you to calibrate the level of oversight appropriate to your context.
Described in detail in Chapter 11. In Plan Mode, Claude reasons through the task but writes no code. Use it to begin any task of non-trivial complexity. The cost of a few minutes in Plan Mode is invariably lower than the cost of correcting a misaligned implementation.
This is the default mode. Before modifying any existing file or creating new ones, Claude presents the proposed change and requests explicit approval.
The presentation takes the form of a diff view — proposed additions displayed in green, proposed deletions in red. You can approve or reject each change individually.
This mode is appropriate whenever you are still learning the system, working in an unfamiliar or critical codebase, or making changes you are not yet confident reviewing after the fact.
The overhead of reviewing and approving each change is not waste. It is learning. You understand what is being built because you have reviewed every element of it.
In this mode, Claude writes files without asking for approval between each change. This is appropriate only after Plan Mode has been used and the plan has been reviewed and approved. Once you have confirmed that Claude's approach is correct, there is no additional value in approving each individual file write — the strategy has already been established.
Boris describes the transition: "Once the plan looks good, I just let the model execute. I auto-accept edits after that. With Opus 4.6, it oneshots it correctly almost every time."
Do not default to this mode. Earn it by developing confidence in your own plan review. Automatic edits without plan review is the condition in which errors most readily compound.
Every Claude session operates within a context window — the total volume of information the model can hold simultaneously. When that window fills, older information is compressed or lost. Understanding this constraint is necessary for managing long sessions and multi-session projects effectively.
For Claude Opus 4.6, the context limit is 200,000 tokens — equivalent to roughly 150,000 words, or approximately 200–300 pages of text.
In a long development session, this fills faster than it appears to. Reading a large codebase consumes tokens. A lengthy conversation accumulates them. Plans, implementations, test outputs, corrections — all tokenized, all counting toward the window.
The symptom of context saturation is drift: Claude begins making decisions inconsistent with constraints it established early in the session. It forgets architectural decisions. It reverts to default assumptions. If you notice this, the session has likely consumed most of its available context.
A working convention among experienced Claude Code users: when context usage reaches 40–50%, begin a new session. This is conservative enough to avoid the degradation zone and preserves clarity throughout the active session.
Claude Code displays context usage as a percentage. Monitor it during long sessions.
Beginning a new session does not mean beginning from zero. All files on disk are preserved. Claude can read them fresh.
The key is documentation. A well-maintained PRD.md, a current README.md with implementation status, and a CLAUDE.md with project conventions allow a new session to reach productive state rapidly. The briefing document replaces the accumulated conversation history as the source of context.
Some developers maintain an explicit progress.md that tracks, feature by feature, what has been completed, what is in progress, and what remains. A new session begins with: "Read PRD.md, README.md, and progress.md, then describe your understanding of where we are."
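A minimal sketch of such a file follows; the feature names, statuses, and date are placeholders for your own project:

```markdown
# progress.md

## Completed
- Feature 1: task creation and display (verified 2026-02-10)

## In progress
- Feature 2: filtering by assignee

## Remaining
- Feature 3: localStorage persistence
- Feature 4: completion timestamps
```

Kept current, this file lets a fresh session orient itself in one read instead of reconstructing state from the codebase.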
MCP — the Model Context Protocol — is Anthropic's open protocol for connecting AI agents to external tools, services, and systems. It was developed by the same team that built Claude Code and released as an open standard.
Out of the box, Claude Code can read and write files, execute terminal commands, and search the web. MCP extends this to include virtually any application or service that exposes a compatible interface.
When an MCP server is installed and connected, Claude Code gains the ability to act on that service directly — reading data from it, modifying data within it, triggering actions in it — without requiring any manual data transfer between Claude and the external system.
This is the difference between Claude handing you a LinkedIn post to publish manually and Claude publishing it directly, including scheduling and image attachment.
GitHub (included by default): reading repositories, opening issues, creating and reviewing pull requests.
Notion: reading and updating pages and databases in a connected workspace.
Google Workspace: working with documents, spreadsheets, calendar events, and email.
Playwright (browser automation): driving a real browser — navigating pages, filling forms, verifying rendered output.
Airtable / database integrations: reading and writing structured records directly.
Boris uses Claude to pay parking fines, cancel subscriptions, send Slack messages, maintain project tracking spreadsheets, and send reminders to team members — all via plain English instructions to an agent that executes these tasks through connected MCP servers.
Within a Claude Code session:
Add the following MCP server at user scope so it is available across
all my projects: [paste the server installation command or configuration]
After installation, restart the Claude Code session. The MCP server will be active in all subsequent sessions.
The user scope specification ensures the server is available globally, not only for the current project directory.
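For servers that should be shared with everyone working on a project, Claude Code also reads a project-scoped configuration file, .mcp.json, at the project root. One common shape is shown below — the server name and package are placeholders, and the exact command comes from the documentation of the MCP server you are installing:

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"]
    }
  }
}
```

Because this file lives in the repository, checking it in gives every collaborator the same connected tools without individual setup.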
Connect MCP servers that correspond to tools you already use. There is no benefit to connecting services you do not actively work with. The value of MCP is in reducing friction in existing workflows, not in creating new ones.
As competency with Claude Code develops, the natural extension is parallel operation — multiple Claude sessions running simultaneously, each handling a distinct workstream.
A software project typically involves multiple independent workstreams that do not require sequential execution. Frontend implementation, backend logic, test coverage, documentation — these can often proceed in parallel without introducing conflicts, provided they are working on different files.
Multiple Claude Code sessions, each assigned a specific scope, replicate the parallel capacity of a small team. One session builds the authentication system. Another builds the data visualization layer. A third writes tests for features already completed. Each operates independently; all write to the same disk.
Boris Cherny operates with five or more parallel sessions routinely. His description of the workflow: "I kick off one task, then something else, then something else, and go get a coffee while they run."
Before running multiple sessions, ensure each session has a clearly bounded scope. Ambiguous boundaries between sessions create conflicts — both sessions attempting to modify the same file, or making incompatible assumptions about shared components.
Begin with two parallel sessions. Assign each a scope that is clearly distinct in terms of files affected. Verify that both produce coherent output before increasing parallelism.
Claude Code can spawn sub-agents internally — additional Claude instances dedicated to specific components of a larger task. This happens automatically when Claude determines that parallel execution of sub-tasks is appropriate.
For example: given the instruction "audit the entire codebase for security vulnerabilities," Claude may spawn sub-agents for different modules, each producing a findings report, which Claude then aggregates into a single document. You submit one instruction and receive a single coherent result.
This capacity scales with the model. Opus 4.6 operates autonomous sessions for 10 to 30 minutes reliably. Extended sessions of hours or more are reported in advanced deployments.
For production codebases, Git worktrees provide the cleanest mechanism for parallel agent development. Each agent operates in an isolated branch with its own working directory, preventing file conflicts entirely. When a branch's work is complete and verified, it is merged into the main branch through the standard review process.
This requires familiarity with Git workflow. It is not necessary at the beginning, but it becomes important as the scale and complexity of parallel workstreams increases.
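The git commands themselves are standard. A sketch of the setup — branch and directory names are illustrative, and the first three lines only create a throwaway repository so the example is self-contained:

```shell
# Throwaway demo repository (in practice you already have one).
set -e
git init -q demo && cd demo
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "init"
git branch -M main

# One worktree per agent, each on its own new branch in its own directory.
git worktree add ../demo-auth -b feature/auth
git worktree add ../demo-dashboard -b feature/dashboard

# Each Claude Code session is started inside one worktree directory.
# When a branch's work is verified, merge it from the main checkout:
git merge -q feature/auth

# Remove a worktree once its branch is merged; the branch itself remains.
git worktree remove ../demo-auth
```

Each worktree is a full working directory on its own branch, so two agents can never overwrite each other's uncommitted files.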
Claude Code can be given project-specific instructions that persist across sessions — applied consistently without requiring re-specification in each new conversation. These are called Skills (in Anthropic's terminology) or, more simply, rules.
Every project has standards: how files are named, what libraries are used and which are prohibited, what the testing coverage requirements are, how authentication must be implemented, what the database access patterns are. These standards exist to ensure that the codebase remains coherent across contributions, sessions, and time.
Without persistent instructions, you re-specify these standards in each session. With them, Claude knows the project's conventions from the moment a session begins. The quality of output aligns with your standards without requiring constant specification.
The primary mechanism for persistent instructions is a file named CLAUDE.md at the project root. Claude Code reads this file at the start of every session.
A well-written CLAUDE.md contains:
# Project: [Name] — Claude Context
## Architecture
[Technology stack, infrastructure, auth mechanism, external services]
## Conventions
- [Naming conventions]
- [File organization]
- [Async patterns — e.g., always async/await]
- [Data access patterns — e.g., all DB calls through service layer]
## Libraries
[What is used, what is prohibited]
## Testing
[Framework, coverage requirements, testing patterns]
## Security
[Specific security requirements — credential handling, input sanitization specifics]
## Definition of Done
[What constitutes a completed feature before it can be marked complete]
With this file in place, every Claude session for this project begins with full knowledge of the codebase's standards. You do not repeat yourself. The codebase does not drift from its own conventions.
Organizations and platforms are beginning to publish standardized Skills — pre-written instruction sets that encode best practices for building on their platforms. Vercel has launched this initiative for their hosting and deployment platform, enabling Claude Code to make correct deployment decisions without explicit guidance.
This represents a direction in which the ecosystem will develop further: a library of verified, platform-specific instruction sets that any developer can include in their project, encoding decades of accumulated engineering judgment into Claude's available context.
Autonomous loop operation — where Claude Code executes a task sequence without human approval between steps — is frequently discussed and frequently misapplied. This chapter describes the conditions under which it is appropriate and the conditions under which it is not.
In autonomous operation, Claude Code receives a task list and works through it sequentially without pausing for approval. It makes decisions, encounters obstacles, adapts, and continues. The human observer reviews the aggregate output, not individual steps.
The efficiency gain is real. A well-constructed loop running against a well-specified task list can accomplish hours of repetitive work — documentation, test generation, systematic refactoring — without human supervision.
A loop amplifies both the quality of its instructions and the defects within them. If a misunderstanding exists in the task specification, the loop will execute all subsequent tasks consistently with that misunderstanding. The error does not self-correct. It accumulates.
This is why loops are inappropriate for ambiguous, underspecified, or creatively demanding tasks. The absence of human review between steps removes the checkpoints that catch drift early.
The practical advice from practitioners who have learned this: build without loops first. Develop a repertoire of projects in which you have reviewed each plan, approved each edit, and understood each output. Build the judgment needed to distinguish a well-specified task from an underspecified one. Only then introduce autonomous loops, and only for the class of tasks — well-bounded, repetitive, clearly defined — that they serve well.
The loop is a tool for known problems. It is not a substitute for understanding or oversight.
The capabilities of Claude Code do not eliminate the requirement for verification. AI-generated code is held to the same standards as any other code — and those standards require review.
Claude Code generates code based on patterns learned during training. Across a wide range of tasks, this produces correct, functional, secure output. In a defined set of conditions, it does not.
API hallucination: Claude may reference functions, parameters, or library versions that do not exist or have changed since its training data was collected. This is most common in libraries that evolve rapidly.
Edge case omission: Claude generates implementations that handle the primary flow correctly but may not fully address boundary conditions — empty inputs, null values, network failures, malformed data.
Security vulnerability introduction: Common vulnerability classes — SQL injection, inadequate input sanitization, insecure random number generation, improper credential handling — can be present in generated code that passes visual inspection. These require deliberate security review to detect.
Confident incorrectness: Claude presents output with consistent confidence regardless of its correctness. The tone of a response is not a reliable indicator of its accuracy.
Read the code. Not exhaustively, but substantively. Understand what each significant section does. Could you explain it? Is the logic consistent with your stated requirements?
Test the behavior. Manually exercise the functionality. Test the primary flow. Test the edges. Does it behave correctly when inputs are missing? When values are at their extremes? When dependencies are unavailable?
Use automated verification. Request that Claude generate tests for the code it writes. Ask for coverage that includes edge cases explicitly. Automated tests are not a substitute for code review, but they catch regressions systematically.
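Concretely, a test file requested from Claude for a hypothetical validate_email helper might look like this — both the function and the cases are illustrative, but note how the edge cases (empty string, None, malformed values) are named explicitly rather than left to chance:

```python
# Illustrative: a hypothetical validator and the edge-case assertions
# you would ask Claude to generate alongside it.
import re

def validate_email(value):
    """Return True if value looks like a plausible email address."""
    if not isinstance(value, str) or not value:
        return False
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value) is not None

# Primary flow
assert validate_email("user@example.com") is True

# Edge cases: empty input, wrong type, malformed values
assert validate_email("") is False
assert validate_email(None) is False
assert validate_email("not-an-email") is False
assert validate_email("a@b") is False            # missing top-level domain
assert validate_email("two@@example.com") is False
```

When you ask for tests, name the edge cases you care about in the prompt; Claude covers the cases it is told to cover far more reliably than the cases it must infer.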
Apply heightened scrutiny to sensitive domains. Authentication, authorization, payment processing, medical data handling, privacy-related data storage — these areas require security expertise and careful review beyond what automated checks provide.
Anthropic has released Claude Code Security — a capability in preview as of 2026 that scans codebases for known vulnerability patterns and generates proposed corrections. This represents the direction of security tooling: integrated, automated, and AI-assisted. For production systems, treat it as an additional layer, not a replacement for expert review.
Experience across the Claude Code community consistently confirms: developers who write some code themselves, rather than delegating all implementation, maintain significantly better understanding of their systems.
This understanding is not incidental. It is what allows correct review of Claude's output. It is what surfaces subtle errors that are invisible to anyone without domain knowledge. It is what produces systems that remain maintainable when the context of their creation is no longer fresh.
Use Claude Code to eliminate mechanical overhead — the boilerplate, the repetitive patterns, the documentation that takes time but requires no judgment. Do not use it to replace engagement with the system you are building. That engagement is where your expertise lives, and where the quality of the system is ultimately determined.
The pace at which Claude Code's capabilities are evolving makes any description of "current" features provisional. What follows is an account of the frontier as of February 2026.
Claude Code is beginning to generate actionable suggestions based on observed project signals — not merely executing tasks specified by the developer, but identifying tasks that warrant attention.
Boris describes the current state: Claude reads Slack feedback threads, examines GitHub issue trackers, reviews telemetry data, and surfaces suggestions with associated pull requests: "Here are a few things I can do. I've put up a couple PRs. Want to take a look?"
The developer reviews, approves, and directs. But the initiative no longer flows exclusively in one direction. The agent is beginning to participate in the question of what should be built, not only in the execution of what has been decided.
Boris Cherny's assessment of the current state: "Coding is largely solved — at least the kind of coding I do." The frontier, accordingly, is expanding into adjacent domains.
The co-work product extends Claude Code's agentic capabilities — acting on tools, executing multi-step tasks, operating in browser environments — to general knowledge work. The target population is not only developers but anyone who works with digital tools: analysts, product managers, administrators, researchers.
For the development community, this expands the scope of what Claude Code can assist with beyond implementation into the full lifecycle of building a product: user research synthesis, product specification, market analysis, customer communication, project coordination.
Boris Cherny's most actionable strategic guidance for those building on Claude Code: "Build for the model six months from now, not for the model of today."
Tool use capability, session duration, autonomous operation reliability — these dimensions improve on a predictable trajectory. A workflow designed for today's capability limits will be underpowered six months from now. A workflow designed at the edge of near-future capability will be effective exactly when it matters.
This requires accepting that the current product experience may be slightly ahead of the current model's reliable range. That gap closes. The developers and organizations that have already built the workflows when the model catches up will have a structural advantage.
This chapter addresses the question that underlies every chapter in this handbook: what does it mean to practice software engineering as an AI-assisted developer?
Programming is the translation of logic into code. It requires knowledge of syntax, libraries, patterns. It is the mechanical layer.
Software engineering is the design and construction of reliable, maintainable systems. It requires judgment about architecture, testing strategy, operational concerns, and the long-term consequences of current decisions. Code is the medium; the system is the product.
Software architecture is the discipline of making structural decisions that determine what a system can become. It requires the ability to translate complex requirements into design, and to anticipate how a system will need to evolve.
Claude Code handles a significant portion of the programming layer. The engineering and architecture layers remain human responsibilities — and their importance, if anything, increases when the mechanical translation is delegated to an agent. With Claude generating implementation rapidly, the quality of what gets built is almost entirely a function of the quality of the design guiding it.
How systems work. Understanding why a query index matters, what a foreign key constraint enforces, how a session token is validated, what a memory leak actually is — this knowledge is what allows you to direct Claude Code correctly and evaluate its output accurately. You do not need to implement these things by hand. You need to understand them.
Why design decisions exist. Separation of concerns, dependency inversion, input validation, layered access control — these are not bureaucratic conventions. They are solutions to recurring engineering problems. When you understand why a pattern exists, you can direct Claude to implement it correctly and catch deviations.
What quality looks like. Readable code. Consistent behavior. Graceful error handling. Clear interfaces. Adequate test coverage. These evaluations require judgment that is developed through experience — and that judgment is what separates software that works from software that lasts.
Boris speaks of "audacity and taste" as properties he looks for in the products his team builds — software that stops users in place, that solves a problem with an elegance that makes the solution feel inevitable. This standard cannot be delegated.
Taste is developed through sustained engagement with excellent work. Reading good software. Using well-designed products with a critical eye. Understanding what produces the feeling of quality, and being able to articulate that understanding precisely enough to direct Claude toward it.
The developers who produce remarkable software with Claude Code are not using it as a replacement for their own judgment. They are using it as an execution instrument for a vision that is entirely theirs.
Reproducing existing products — building another version of something that already exists — is a reasonable learning exercise. It is a poor use of what Claude Code actually makes possible.
The barrier to building novel software has changed. A working prototype of a genuinely original idea can be produced in a weekend. The cost of exploring an unusual approach is dramatically lower than it was. The ability to try something that has never been tried, and to have a working version of it in hours, is new.
This creates an obligation to think more ambitiously about what to build, not to settle for the comfort of known ground. The question worth asking is not "what already exists that I could rebuild?" but "what should exist that does not?"
This final chapter provides a concrete sequence for applying everything in this handbook.
The learning begins with a real project — not an exercise, not a tutorial reproduction, but something you intend to exist and potentially to use.
The size should be minimal. A personal homepage. A simple tool for a specific personal workflow. A landing page for an idea you have been carrying. The constraint is that it be real — that its completion produces something with actual value to you.
A structured first week:
Day 1: Install VS Code and the Claude Code extension. Configure your account. Ask Claude Code what it can do. Understand the interface.
Day 2: Define your first project. Write down five specific features. Make them as concrete as you can. Do not begin building yet.
Day 3: Use Plan Mode to discuss the project with Claude. Refine the plan until it reflects your intent accurately. Implement Feature 1.
Day 4: Test Feature 1 until you are confident it works correctly. Implement Feature 2.
Day 5: Refine the project. Ask Claude to review it against the original specification. Close any gaps.
Day 6: Share the project with someone who can give you honest feedback. Note the gap between what you built and what would have been better.
Day 7: Assess the week. What was harder than expected? Where did Claude's output surprise you? What would you do differently? What do you want to build next?
This cycle — one week, one small project, one complete iteration — produces more learning than any amount of reading about Claude Code.
Competency with Claude Code develops in stages. The sequence is:
Stage 1 — Orientation (Weeks 1–4) Web-based projects. Single-feature scope. Emphasis on learning to communicate with Claude Code precisely and to evaluate its output critically.
Stage 2 — Construction (Months 1–3) Multi-feature projects. Introduction to databases, APIs, and multi-file architecture. Emphasis on planning discipline and feature-by-feature verification.
Stage 3 — Professional Practice (Months 3–6) Full-stack applications. Deployment and production considerations. Multi-session workflows. MCP integrations. Emphasis on reliability, security, and maintainability.
Stage 4 — Advanced Operation (Months 6 and beyond) Parallel agent workflows. Autonomous loops for defined tasks. Custom Skills and project-level configuration. Open-source contribution and team-level Claude Code integration.
Each stage depends on the previous one. The temptation to skip ahead produces gaps that become expensive later.
Analyze every unexpected output. When Claude produces something different from what you expected — better or worse — understand why. This is how your model of Claude's behavior becomes accurate and your prompting becomes precise.
Read and understand the code Claude writes. Not line by line, but substantively. Passive acceptance of output that you do not understand produces a codebase you cannot maintain, direct, or explain.
Study software that you consider excellent. The standard against which you direct Claude is the standard you can articulate. Develop that standard through sustained exposure to well-built systems.
Write some code yourself. Maintain active engagement with implementation. The judgment that comes from building directly is what makes your direction of Claude effective.
Boris Cherny's assessment of the professional landscape is direct: the role boundaries between software engineers, product managers, and designers are becoming less distinct. The engineers, product managers, and designers on his team all write code. The value contributed by each is shifting from specialized technical execution toward the cross-disciplinary judgment that only comes from understanding the full system — user needs, technical constraints, business context, design quality.
His recommendation: cultivate breadth. The ability to reason across domains — to hold technical, product, and design considerations simultaneously — is what produces the most coherent decisions. AI handles the mechanical execution. The human provides the understanding.
The accumulation that matters is not the code you have written. It is the judgment you have developed through building real things, studying what others have built, and understanding the standards that separate work that endures from work that does not. That accumulation is not replicable by an AI agent. It is yours.
Understanding the mechanics of Claude Code at a technical level changes how you use it. When you know what is happening inside a session, you write better prompts, diagnose problems faster, and make better decisions about when to intervene and when to let the agent proceed.
At its core, Claude Code operates as a reasoning loop. Every session, at every step, follows the same underlying cycle: observe the current state, reason about what it implies, act through a tool call, evaluate the result — and repeat.
This cycle is not visible in the interface. You see messages and file edits. Internally, Claude is running through this loop many times per task — reading a file, thinking about what it implies, reading another file, forming a plan, writing a change, running a command to verify it, reading the output, deciding whether to adjust.
The model is the reasoning engine. The tools provide the hands. The loop provides the structure.
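In pseudocode terms, the loop can be sketched as follows. This is a simplified mental model, not Claude Code's actual implementation — the stub model, the tool names, and the stopping condition are all illustrative:

```python
# A simplified model of an agentic reasoning loop, for intuition only.

def run_task(task, model, tools):
    context = [task]                       # everything the model has seen so far
    while True:
        decision = model(context)          # reason over the full context
        if decision["action"] == "done":
            return decision["result"]
        tool = tools[decision["action"]]   # e.g. read_file, run_command
        output = tool(**decision["args"])  # act via a tool call
        context.append(output)             # observe the result, then loop

# Toy demonstration with a stub "model" and one stub tool:
def stub_model(context):
    if "FILE:ok" in context:
        return {"action": "done", "result": "verified"}
    return {"action": "read_file", "args": {"path": "status.txt"}}

stub_tools = {"read_file": lambda path: "FILE:ok"}
print(run_task("check status", stub_model, stub_tools))  # verified
```

Everything interesting happens inside `model(context)`: the tools and the loop are fixed plumbing, and the quality of each decision is the quality of the model.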
Claude Code has access to a specific set of tools. From the model's perspective, these are callable functions — each with a name, a set of parameters, and a return value. When Claude decides to read a file, it does not "see" the file. It generates a structured tool call: read_file(path="src/auth.js"). The tool executes and returns the file's contents. Claude receives that content and continues reasoning.
The tools available to Claude Code include, in simplified form: reading files (read_file), editing and writing files (edit_file), listing directories (list_directory), executing terminal commands (run_command), and searching the web. The exact internal names may differ; the shape is what matters — each tool is a narrow, well-defined capability.
When Claude Code runs a test suite, it issues run_command("npm test"), receives the output, reads which tests failed, then uses edit_file to apply corrections — then runs the command again. When it is exploring an unfamiliar codebase, it issues list_directory to understand the structure before opening specific files.
This is why Plan Mode is so powerful: it lets Claude perform the reasoning portion of the loop — deciding which tools it would use, in what sequence, for what purpose — without actually executing those tools. You see the proposed sequence before anything happens.
When you open a project in Claude Code, Claude does not automatically read all your files. It reads selectively, on demand, as the loop requires.
A typical codebase exploration sequence: read CLAUDE.md for project conventions, list the directory structure to understand the layout, open the entry point and the files most relevant to the task, and follow import statements outward only as far as the task requires.
This selective, on-demand reading is efficient but has implications you should understand:
Claude's view of your codebase is always partial. At any given point in a session, Claude has read some of your files and not others. If something relevant exists in a file Claude has not read, Claude does not know about it. This is why being explicit in your prompts matters — if a constraint or convention lives in a file Claude has not been asked to read, it will not apply that constraint unless you tell it to or point it to the file.
The CLAUDE.md file is read first, always. Because of this, your CLAUDE.md is the most reliable place to encode conventions. It is the one piece of context Claude has before it reads anything else.
You can direct Claude's attention. In your prompts, you can name specific files: "Before implementing this feature, read src/auth/middleware.js and src/db/schema.js." Claude will read those files first, incorporate their content into its understanding, and apply that understanding to the task.
Each step of the loop constructs what is called a context — the full set of information the model receives before generating its next response. This context includes the system instructions, the contents of CLAUDE.md, the conversation so far, every file Claude has read, and the output of every command it has run.
This entire assembled context is what the model "sees." There is no memory outside it. Nothing persists from one session to the next except what exists in files on disk.
This is why context management matters. A session with a long conversation history, multiple large file reads, and extensive tool output reaches 40–50% of the window limit quickly. The model reasons over everything in context simultaneously — when the window is that full, it begins to weight recent inputs more heavily than earlier ones, and coherence with decisions made early in the session can degrade.
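A rough way to build intuition for how quickly the window fills — the token counts here use a crude approximation of about four characters per token, not the model's real tokenizer, and the 40% threshold is the working convention described earlier:

```python
# Back-of-envelope context accounting. Assumes ~4 characters per token
# (a crude approximation) and the 200,000-token window described above.

WINDOW_TOKENS = 200_000

def approx_tokens(text):
    return len(text) // 4

def usage_percent(items):
    """items: strings already in context (messages, file reads, tool output)."""
    used = sum(approx_tokens(t) for t in items)
    return 100 * used / WINDOW_TOKENS

session = [
    "x" * 120_000,   # one large file read  (~30k tokens)
    "y" * 200_000,   # conversation + tool output so far (~50k tokens)
]
pct = usage_percent(session)
print(f"{pct:.0f}% of window used")
if pct >= 40:
    print("Consider a fresh session, briefed from PRD.md and progress.md.")
```

Two modest file reads and a working conversation are already at the restart threshold — which is why long sessions saturate sooner than their message count suggests.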
Because Claude Code's architecture exposes the model directly — without tight scripted workflows constraining what it can decide — the model must exercise genuine judgment at each step of the loop. It decides which file to read next, whether to gather more context or begin editing, which command will verify a change, whether an output indicates success or calls for another pass, and when the task is actually complete.
This is the practical meaning of "the product is the model." Claude Code's quality is the model's quality. The tools are consistent. The loop is consistent. What varies from one model generation to the next is the quality of reasoning at each decision point — and that is everything.
Writing a prompt that produces a feature is one skill. Designing an application that Claude Code can build reliably, maintain coherently, and extend over time is another, deeper skill. This chapter covers the structural principles that make Claude Code development efficient and the codebases it produces maintainable.
There is a direct relationship between how a codebase is organized and how well Claude Code can work within it. A well-structured codebase is one in which responsibilities are clearly separated, each file is responsible for one thing, and dependencies between layers are explicit rather than tangled.
When a codebase has this kind of structure, Claude can read a small number of files to understand a given component, make targeted changes, and produce output that fits coherently with what already exists.
When a codebase is disorganized — logic mixed with presentation, files responsible for multiple unrelated things, dependencies tangled across layers — every change Claude makes requires reading more context to understand the system, and the probability of an unintended side effect increases. The prompts required become longer and more prescriptive. The review burden increases.
Good structure is not organization for its own sake. It is a productivity investment. A codebase that Claude Code can navigate efficiently is one that produces higher-quality output in fewer tokens.
The most important structural principle for Claude Code projects is also one of the most important in software engineering generally: each module should do one thing.
In practice, for a web application:
src/
components/ — UI components: each component renders one thing
services/ — Business logic: each service owns one domain
api/ — HTTP handlers: each handler manages one route group
db/ — Data access: each module owns one entity
utils/ — Pure utility functions: no side effects, no state
config/ — All configuration: no configuration scattered elsewhere
When you ask Claude to "add email validation to the signup form," Claude reads components/SignupForm.jsx (the component), services/auth.js (the auth logic), and possibly utils/validation.js (shared validators). Three files. Focused change. Clean result.
If signup logic were embedded in a single large app.js file with all other logic, Claude would need to read everything, reason about what to touch and what not to, and work in a context where a single misplaced edit can affect unrelated behavior. The change is the same — the cost of making it is not.
The pattern that consistently produces the best outcomes with Claude Code is a strict separation between layers of an application:
Presentation layer — what the user sees and interacts with. Components, pages, templates. No business logic. No data fetching beyond what the component directly needs.
Business logic layer — what the application does. Services, use cases. No knowledge of the database interface. No UI-specific concerns.
Data access layer — how data is stored and retrieved. Repository pattern or service layer specific to database interaction. No business logic. No presentation concerns.
Integration layer — connections to external services (APIs, email providers, payment processors). Strictly isolated so that the rest of the system does not depend on the implementation details of external services.
When these layers are enforced — in the file structure and in the CLAUDE.md conventions — Claude Code respects them automatically. It knows that a feature touching the UI does not require changes to the data access layer. It knows that an email integration lives in one place and is called through a defined interface. Changes are targeted. Side effects are minimal. Reviews are coherent.
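A minimal sketch of that separation in code — the domain, class names, and in-memory "database" are illustrative; the point is the direction of the dependencies, with each layer knowing only the layer directly below it:

```python
# Illustrative layering: presentation -> business logic -> data access.

class UserRepository:                      # data access layer
    def __init__(self):
        self._rows = {}                    # stand-in for a real database
    def save(self, email, record):
        self._rows[email] = record
    def find(self, email):
        return self._rows.get(email)

class SignupService:                       # business logic layer
    def __init__(self, repo):
        self.repo = repo                   # depends on an interface, not on SQL
    def signup(self, email):
        if self.repo.find(email) is not None:
            raise ValueError("email already registered")
        self.repo.save(email, {"email": email})
        return {"email": email}

def signup_handler(service, request):      # presentation layer (HTTP handler)
    try:
        user = service.signup(request["email"])
        return {"status": 201, "body": user}
    except ValueError as err:
        return {"status": 409, "body": {"error": str(err)}}

service = SignupService(UserRepository())
print(signup_handler(service, {"email": "a@example.com"}))  # status 201
print(signup_handler(service, {"email": "a@example.com"}))  # status 409
```

Swapping the repository for a real database, or the handler for a different framework, touches exactly one layer — which is precisely the property that keeps Claude's changes targeted.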
When beginning a new project, spend time on the directory structure before asking Claude to write any code. Establish the structure, write the CLAUDE.md, and then begin feature implementation.
A practical sequence:
1. Create the directory structure manually (or ask Claude to create it from a spec)
2. Write CLAUDE.md with: stack, conventions, layer rules, library choices
3. Create a PRD.md with the full feature list
4. Ask Claude to implement a skeleton — empty files in the right places,
with the right imports and class/function signatures, but no real logic yet
5. Review the skeleton — is the structure right? — before building on it
6. Implement features one at a time into this established structure
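The skeleton in step 4 might look like this for a single service — the names are illustrative, and there are signatures only, no logic:

```python
# services/tasks.py — skeleton only: right place, right signatures, no logic.
# NotImplementedError makes any accidental use of an unfinished function loud.

class TaskService:
    """Owns the task domain. Called by API handlers; calls the data layer."""

    def __init__(self, repo):
        self.repo = repo

    def create_task(self, title, due_date=None):
        raise NotImplementedError

    def complete_task(self, task_id):
        raise NotImplementedError

    def list_open_tasks(self):
        raise NotImplementedError
```

Reviewing a file like this takes seconds, and a misplaced responsibility is obvious before any implementation exists.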
Step 4 is particularly valuable: a skeleton gives you a complete view of the application's shape before any logic exists. Structural mistakes — a component in the wrong layer, a misplaced dependency, a missing interface — are visible and cheap to fix. Once logic is written into the wrong structure, correcting the structure requires rewriting the logic too.
Pay attention to which files Claude touches when implementing a feature. Specifically:
If Claude is modifying more than 3–5 files for a single feature, the feature may not be coherently scoped, or the application structure may have too many dependencies between components.
If Claude is modifying the same files repeatedly across different features, those files are taking on too much responsibility.
If Claude asks clarifying questions before beginning, the specification was not complete enough or the existing structure is ambiguous about where the feature should live.
Each of these is a signal. Read it as information about the design, not as criticism of the instruction. Well-designed systems produce focused changes. Tangled systems produce sprawling changes.
Claude Code navigates your codebase by reading file and directory names, examining import statements, and searching for patterns. Names are therefore not cosmetic. They are structural.
A file named utils.js tells Claude almost nothing about what is inside it. A file named validation.js, dateFormatters.js, or currencyUtils.js tells Claude exactly where to look when it needs that functionality.
Enforce these naming standards in your CLAUDE.md:
## Naming Conventions
- Components: PascalCase, descriptive noun (UserProfileCard, TaskListItem)
- Services: camelCase, domain noun (authService, notificationService)
- Utilities: camelCase, descriptive of the concern (dateFormatters, currencyUtils)
- API handlers: camelCase, resource-oriented (userHandlers, taskHandlers)
- Test files: same name as the file being tested, with .test.js suffix
With these rules in CLAUDE.md, Claude will follow them consistently. Your code will be navigable — by Claude in a session, and by developers (including yourself) returning to the project later.
One of the highest-value additions to any CLAUDE.md is a clear definition of what "done" means for a feature. Developers use different standards. Claude Code should use yours:
## Definition of Done
A feature is complete when:
1. The specified behavior works correctly across all described scenarios
2. Edge cases identified in the specification are handled
3. All new functions have JSDoc comments
4. A corresponding unit test exists and passes
5. No linting errors are introduced
6. The feature works correctly at desktop and mobile viewport widths
With this definition present, Claude will not declare a feature finished merely because the happy path works while edge cases remain untested. It will complete every step in the definition before reporting the task as done.
The following six project blueprints provide complete specifications — technology choices, directory structure, feature list, and implementation sequence — ready to hand directly to Claude Code. Each is designed for a specific stage of development competency and a specific class of use case.
These are not toy examples. They are real projects that produce genuinely useful software, chosen because they introduce important patterns in a controlled scope.
Appropriate for: Absolute beginners. First session with Claude Code.
What it teaches: HTML/CSS file structure, dark-themed UI, responsive layout, link components.
Technology: HTML5, CSS3, no JavaScript required.
Directory structure:
my-homepage/
index.html
style.css
assets/
avatar.jpg (add your own photo)
Prompt to give Claude Code:
Build a personal homepage. Specifications:
1. Single-page HTML/CSS site — no JavaScript, no frameworks
2. Sections: hero with name and one-line description, short bio (2–3 sentences),
links section with icons for LinkedIn, GitHub, and Twitter/X, footer with year
3. Design: dark background (#0f0f13), light body text (#e2e2e2),
accent color (#6366f1 — indigo), sans-serif typography (Inter via Google Fonts)
4. Responsive: readable and clean on both desktop and mobile
5. File structure: index.html and style.css only
Placeholder text is fine for bio — I will replace it. Use placeholder links (#)
for the social links — I will update them.
Enter Plan Mode first. Show me the plan before writing any files.
What to verify after it builds:
- All sections are present: hero, bio, links (LinkedIn, GitHub, Twitter/X), footer with year
- Colors match the spec: #0f0f13 background, #e2e2e2 body text, #6366f1 accent
- The layout stays readable at a narrow mobile viewport
- Only index.html and style.css exist — no JavaScript was introduced
Appropriate for: Early intermediate. First application with state and interactivity.
What it teaches: JavaScript DOM manipulation, localStorage persistence, CRUD patterns, event handling, filtering.
Technology: HTML5, CSS3, vanilla JavaScript. No dependencies. No build step.
Directory structure:
task-manager/
index.html
style.css
app.js
components/
TaskList.js
TaskForm.js
TaskFilter.js
utils/
storage.js (localStorage read/write)
dateUtils.js (formatting helpers)
Prompt to give Claude Code:
Build a task manager application. Full specification:
FEATURES:
1. Create task: title (required), description (optional), due date,
priority (Low / Medium / High)
2. Display tasks as cards in a list, sorted by due date ascending
3. Mark task complete — completed tasks display with strikethrough and 50% opacity
4. Delete task with confirmation
5. Filter tasks: by status (All / Active / Completed), by priority (All / Low / Medium / High)
6. All data persists in localStorage — survives page refresh
TECHNICAL REQUIREMENTS:
- Vanilla JavaScript, ES6 modules
- File structure as specified: index.html, style.css, app.js,
components folder with TaskList.js / TaskForm.js / TaskFilter.js,
utils folder with storage.js and dateUtils.js
- No external libraries, no frameworks, no build step
- storage.js must abstract all localStorage access — no other file reads/writes localStorage directly
DESIGN:
- Clean light theme, comfortable whitespace
- Cards with subtle shadow and hover state
- Priority levels: low = blue, medium = amber, high = red (use colored left border on card)
- Responsive for mobile and desktop
CLAUDE.md content to follow: all localStorage access through storage.js only.
No inline styles — all styling through style.css.
Enter Plan Mode. Show me the complete file tree and implementation plan
before writing anything.
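The convention that storage.js is the only file touching localStorage might be sketched as below. The optional `backend` parameter is my addition, not part of the spec — it keeps the module testable outside a browser; in the real project these functions would be ES6 module exports per the technical requirements:

```javascript
// utils/storage.js — sketch of the localStorage abstraction the spec requires.
// The `backend` parameter is an assumption for testability; in the browser it
// defaults to localStorage, and no other file reads or writes storage directly.
const STORAGE_KEY = 'tasks';

function loadTasks(backend = globalThis.localStorage) {
  const raw = backend.getItem(STORAGE_KEY);
  return raw ? JSON.parse(raw) : [];
}

function saveTasks(tasks, backend = globalThis.localStorage) {
  backend.setItem(STORAGE_KEY, JSON.stringify(tasks));
}
```

Because every read and write flows through these two functions, swapping localStorage for a server API later means changing one file, not every component.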
What to verify after it builds:
- Tasks survive a page refresh (localStorage persistence)
- Completed tasks render with strikethrough and 50% opacity
- Status and priority filters work, individually and in combination
- Deleting a task asks for confirmation first
- storage.js is the only file that reads or writes localStorage
Appropriate for: Intermediate. First project involving an external API and dynamic data display.
What it teaches: Fetch API, async/await, loading states, error handling, structured data display.
Technology: HTML5, CSS3, vanilla JavaScript. Uses a free public API with no authentication required.
The API used: Open-Meteo weather API (free, no key required).
Directory structure:
weather-dashboard/
index.html
style.css
app.js
services/
weatherApi.js (all API calls isolated here)
components/
CurrentWeather.js
ForecastCard.js
LocationSearch.js
utils/
formatters.js (unit conversion, date formatting)
Prompt to give Claude Code:
Build a weather dashboard using the Open-Meteo API (https://open-meteo.com).
No API key required. Full specification:
FEATURES:
1. Location search: user types a city name, app geocodes it using
Open-Meteo's geocoding API and retrieves weather data
2. Current conditions display: temperature (Celsius), feels-like, humidity,
wind speed, weather description with an icon (use Unicode weather emoji)
3. 7-day forecast: one card per day showing high/low temps and condition
4. Loading state: visible spinner while data is fetching
5. Error state: clear message if location not found or API fails
6. Last searched location persists in localStorage on refresh
TECHNICAL REQUIREMENTS:
- All API calls must go through services/weatherApi.js — no fetch() calls elsewhere
- Async/await throughout — no .then() chaining
- formatters.js handles all unit formatting and date display
- Graceful error handling: network failures and invalid locations display
user-readable messages (never raw error objects)
API REFERENCE:
- Geocoding: https://geocoding-api.open-meteo.com/v1/search?name={city}&count=1
- Weather: https://api.open-meteo.com/v1/forecast?latitude={lat}&longitude={lon}
&current=temperature_2m,relative_humidity_2m,wind_speed_10m,weather_code
&daily=temperature_2m_max,temperature_2m_min,weather_code
&timezone=auto&forecast_days=7
DESIGN: clean card-based layout, dark theme, readable type hierarchy.
Enter Plan Mode. Describe the complete data flow before writing code:
how a user search triggers the API chain and populates each component.
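The API chain described in the prompt — geocode the city, then fetch the forecast — can be sketched like this. The URL shapes come directly from the API REFERENCE above; the function names are illustrative, not prescribed by the spec:

```javascript
// services/weatherApi.js — sketch. All fetch() calls stay in this file,
// per the spec's conventions. Function names are illustrative assumptions.
const GEO_BASE = 'https://geocoding-api.open-meteo.com/v1/search';
const FORECAST_BASE = 'https://api.open-meteo.com/v1/forecast';

function buildGeocodeUrl(city) {
  return `${GEO_BASE}?name=${encodeURIComponent(city)}&count=1`;
}

function buildForecastUrl(lat, lon) {
  return `${FORECAST_BASE}?latitude=${lat}&longitude=${lon}` +
    '&current=temperature_2m,relative_humidity_2m,wind_speed_10m,weather_code' +
    '&daily=temperature_2m_max,temperature_2m_min,weather_code' +
    '&timezone=auto&forecast_days=7';
}

// Geocode the city name, then fetch the forecast for the first match.
async function fetchForecastForCity(city) {
  const geoRes = await fetch(buildGeocodeUrl(city));
  if (!geoRes.ok) throw new Error('Geocoding request failed');
  const geo = await geoRes.json();
  const hit = geo.results && geo.results[0];
  if (!hit) throw new Error(`Location not found: ${city}`);
  const res = await fetch(buildForecastUrl(hit.latitude, hit.longitude));
  if (!res.ok) throw new Error('Weather request failed');
  return res.json();
}
```

Keeping the URL construction in pure functions separates the part that is easy to test from the part that depends on the network, and the thrown errors give the UI layer something human-readable to display.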
What to verify after it builds:
- Searching a valid city shows current conditions and a 7-day forecast
- An invalid city produces a readable error message, not a raw error object
- A loading spinner is visible while data is fetching
- Refreshing the page restores the last searched location
- Every fetch() call lives in services/weatherApi.js
Appropriate for: Intermediate-advanced. First project with a real backend and database.
What it teaches: Node.js/Express server, SQLite database, REST API design, client-server separation, CRUD at every layer.
Technology: Node.js, Express, better-sqlite3 (synchronous SQLite binding), HTML/CSS/vanilla JS frontend served statically.
Directory structure:
notes-app/
server/
index.js (Express app setup)
db/
database.js (SQLite connection and migrations)
notesRepo.js (all database queries for notes)
routes/
notes.js (REST routes for /api/notes)
middleware/
errorHandler.js
client/
index.html
style.css
app.js
services/
notesApi.js (all fetch calls to the backend)
components/
NoteEditor.js
NoteList.js
package.json
.env
Prompt to give Claude Code:
Build a full-stack notes application. Complete specification:
BACKEND (Node.js + Express + SQLite):
1. Express server running on port 3001
2. SQLite database via better-sqlite3 package
3. Notes table: id (integer primary key autoincrement), title (text),
content (text), created_at (datetime), updated_at (datetime)
4. REST API:
- GET /api/notes — return all notes, ordered by updated_at descending
- GET /api/notes/:id — return single note
- POST /api/notes — create note, return created note
- PUT /api/notes/:id — update note, return updated note
- DELETE /api/notes/:id — delete note, return 204
5. Database initialization: create table if not exists on server start
6. Error handling middleware: catch all unhandled errors, return JSON error response
FRONTEND (HTML/CSS/vanilla JS):
1. Three-panel layout: sidebar (note list), editor (active note), and an empty state shown when no note is selected
2. Click a note in the sidebar to open it in the editor
3. New Note button creates an empty note and opens it immediately
4. Auto-save: debounce saves to the API 1 second after the user stops typing
5. Delete button on active note with confirmation
6. Note list shows title and first line of content as preview, plus updated date
CONVENTIONS (enforce in CLAUDE.md):
- All database access through notesRepo.js — no SQL in route files
- All API calls through client/services/notesApi.js — no fetch() elsewhere in frontend
- All routes return JSON — no HTML from the API
Enter Plan Mode. Show me the complete architecture: how data flows
from the database through the API to the UI and back on save.
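The auto-save requirement above hinges on a debounce: the save callback should run once, a fixed interval after the last keystroke. A minimal sketch, with illustrative names (the wiring to `notesApi` in the comment is hypothetical):

```javascript
// Debounce helper for the frontend's auto-save: fn runs once, delayMs after
// the most recent call — repeated keystrokes keep pushing the timer back.
function debounce(fn, delayMs) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delayMs);
  };
}

// Hypothetical wiring in the editor component:
// const autoSave = debounce(() => notesApi.updateNote(activeNote), 1000);
// editorEl.addEventListener('input', autoSave);
```

With a 1-second delay, a user typing continuously generates zero API calls until they pause, which is exactly the behavior the spec asks for.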
What to verify after it builds:
- Creating, editing, and deleting notes works end to end
- Auto-save fires roughly 1 second after typing stops (watch the network tab)
- Notes survive a server restart (SQLite persistence)
- No SQL appears outside notesRepo.js, and no fetch() outside notesApi.js
Appropriate for: Developers with command-line comfort. Introduction to scriptable tools.
What it teaches: Command-line argument parsing, file system automation, structured output, practical tooling.
The tool built: A project scaffolding tool — given a project type argument, it generates a directory structure with starter files.
Technology: Node.js, commander (CLI argument library), fs-extra.
Directory structure:
scaffold-tool/
src/
index.js (entry point, argument definitions)
commands/
create.js (scaffold a new project)
list.js (list available templates)
templates/
web-basic/ (template directory structure)
node-api/
react-app/
utils/
fileSystem.js (file/directory operations)
logger.js (colored console output)
package.json
README.md
Prompt to give Claude Code:
Build a Node.js command-line scaffolding tool. Full specification:
PURPOSE: Running scaffold create <project-name> <template> generates
a new project directory with a starter file structure.
COMMANDS:
1. scaffold create <project-name> <template>
- Creates a new directory named <project-name> in the current working directory
- Copies the corresponding template into it
- Replaces the placeholder {{PROJECT_NAME}} in all template files with <project-name>
- Prints a success message with next steps
2. scaffold list
- Lists all available templates with a one-line description of each
TEMPLATES TO INCLUDE (create each as an actual template directory with starter files):
- web-basic: index.html, style.css, app.js, README.md
- node-api: server.js, routes/index.js, package.json with express, README.md
- react-app: package.json with react/vite, src/App.jsx, src/main.jsx, index.html
TECHNICAL REQUIREMENTS:
- commander package for argument parsing
- fs-extra for file system operations (not native fs)
- All file/directory operations through utils/fileSystem.js
- Colored console output (green = success, red = error, blue = info)
- If the target directory already exists, exit with a clear error — do not overwrite
- Executable via npx scaffold (set up package.json bin entry)
Enter Plan Mode. Show me how the create command flow works
end-to-end before writing any files.
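The placeholder-substitution step of the create command reduces to a small pure function. This is a sketch under the assumption that the tool reads each copied file as a string; the helper name is illustrative, not a prescribed API:

```javascript
// Template-variable substitution for the create command: every occurrence of
// the {{PROJECT_NAME}} placeholder is replaced with the user-supplied name.
// In the real tool this would run over each file fs-extra copies in.
function replacePlaceholders(content, projectName) {
  return content.replaceAll('{{PROJECT_NAME}}', projectName);
}
```

Using `replaceAll` (rather than `replace`, which substitutes only the first match) matters here, since a template file typically mentions the project name several times.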
What to verify after it builds:
- scaffold list shows all three templates with their descriptions
- scaffold create produces the expected files, with {{PROJECT_NAME}} replaced throughout
- Running create against an existing directory exits with an error and overwrites nothing
- Console output is colored as specified (green success, red error, blue info)
Appropriate for: Advanced intermediate. First project meant for real use by other people.
What it teaches: User-facing product thinking, data validation, shared state, production-readiness concerns.
The tool built: A team standup tracker — team members log daily standups (what they did, what they are doing, any blockers), and the tool displays a historical view per person.
Technology: Node.js, Express, SQLite, vanilla JS frontend.
Directory structure:
standup-tracker/
server/
index.js
db/
schema.sql (initial table definitions)
database.js
repos/
standupsRepo.js
usersRepo.js
routes/
standups.js
users.js
middleware/
validate.js (input validation)
errorHandler.js
client/
index.html
style.css
app.js
services/
api.js
components/
StandupForm.js
TeamView.js
PersonHistory.js
package.json
Prompt to give Claude Code:
Build a team standup tracker. Complete specification:
CONTEXT: A small team (3–10 people) uses this tool to log and view
daily standups. Each standup has three fields: yesterday, today, blockers.
DATABASE SCHEMA:
- users: id, name, email (unique), created_at
- standups: id, user_id (FK), date (date type, one per user per day),
yesterday (text), today (text), blockers (text, nullable), created_at
BACKEND API:
- GET /api/users — list all users
- POST /api/users — create user (name + email, validate uniqueness)
- GET /api/standups?date=YYYY-MM-DD — all standups for a given date, with user data joined
- GET /api/standups/user/:userId — last 14 days of standups for one user
- POST /api/standups — submit standup (userId, date, yesterday, today, blockers)
— if standup for that user+date already exists, update it (upsert)
FRONTEND:
- Default view: today's standups for the whole team, one card per person
- Each card: user name, their standup fields, time submitted (or "Not submitted" if absent)
- Date navigation: prev/next day buttons to browse historical dates
- Submit standup: user selects their name from a dropdown, fills three fields, submits
- If the user already submitted today, the form pre-fills their existing standup for editing
VALIDATION (enforce server-side via validate.js middleware):
- yesterday and today: required, non-empty, max 1000 characters
- blockers: optional, max 1000 characters
- date: must be a valid date, not in the future
- userId: must reference an existing user
CONVENTIONS (write into CLAUDE.md before starting):
- All DB access through repos/ — no SQL in route files
- All validation through validate.js middleware — no validation logic in route handlers
- Routes return consistent JSON: { data: ... } on success, { error: ... } on failure
Enter Plan Mode. This is a multi-file, multi-layer project. Show me
the complete plan — architecture, data flow, file list, implementation sequence —
before writing a single file.
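The VALIDATION rules above can be expressed as a pure function, which is roughly what the validate.js middleware would wrap. This is a sketch: the function name and error strings are illustrative, and the userId existence check is omitted because it requires a database lookup:

```javascript
// Sketch of the server-side standup validation rules as a pure function.
// Returns an array of error messages; an empty array means the input is valid.
function validateStandup({ yesterday, today, blockers, date }) {
  const errors = [];
  // yesterday and today: required, non-empty
  if (!yesterday || !yesterday.trim()) errors.push('yesterday is required');
  if (!today || !today.trim()) errors.push('today is required');
  // all three text fields: max 1000 characters (blockers is optional)
  for (const [field, value] of [['yesterday', yesterday], ['today', today], ['blockers', blockers]]) {
    if (value && value.length > 1000) errors.push(`${field} exceeds 1000 characters`);
  }
  // date: must parse, and must not be in the future
  const parsed = new Date(date);
  if (Number.isNaN(parsed.getTime())) errors.push('date is invalid');
  else if (parsed.getTime() > Date.now()) errors.push('date is in the future');
  return errors;
}
```

Keeping the rules in one pure function makes them trivially unit-testable and leaves the middleware itself as a thin wrapper that returns a 400 response when the array is non-empty.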
What to verify after it builds:
- Submitting twice for the same user and date updates the existing standup rather than duplicating it
- Server-side validation rejects empty required fields, over-length text, and future dates
- Date navigation shows the correct standups for past days
- Users who have not submitted appear as "Not submitted" on the team view
These six projects form a deliberate progression.
Each blueprint uses the patterns from the previous ones and adds a new dimension. Building them in sequence produces a developer who has encountered and solved the fundamental problems in each tier of application architecture — which is the foundation from which Claude Code can be directed most effectively.
Beginning a new feature:
I want to implement the following feature:
[Feature title]
[Behavioral specification from PRD]
Enter Plan Mode and show me your proposed approach before writing any code.
Include: files to be affected, implementation sequence, data flow, edge cases.
Diagnosing a defect:
A defect exists with the following characteristics:
Expected behavior: [description]
Observed behavior: [description]
Steps to reproduce: [sequence]
Relevant files: [if known]
Diagnose the cause and propose a correction. Do not implement until I have
reviewed your diagnosis.
Code review:
Review the implementation you produced for [feature] against the following
criteria:
1. Are there edge cases not handled by the current implementation?
2. Are there security considerations that require attention?
3. Is the code readable and maintainable for someone encountering it without context?
4. Is error handling adequate for the failure modes this code may encounter?
Provide your assessment before I approve this feature as complete.
Context loading for a new session:
New session. Begin by reading the following files in order:
1. PRD.md
2. CLAUDE.md
3. README.md
Confirm your understanding of the project's current state and identify
where we left off, so we can proceed without reconstructing context manually.
Anthropic Documentation
Foundational Reading
For Those New to Web Development
These resources provide sufficient foundation to understand what Claude Code is building and to evaluate its output critically.
This handbook was written in February 2026. The specifics of Claude Code — its features, its model capabilities, its interface — will continue to evolve. Some of what is written here will require updating within months.
The underlying principles will not.
The quality of what you build with Claude Code is determined by the quality of your planning, the precision of your communication, the rigor of your verification, and the clarity of your standards. These are properties of practice, not of tooling. They compound. They do not expire.
Author: Vahe Aslanyan
