
The Socratic Method With Claude Code

April 1, 2026 · 28 min read

I tracked my debugging sessions over six weeks on a NestJS monorepo. The ones where I let Claude just fix the problem averaged 4 minutes. The ones where I used the Socratic approach — Claude questioning me, or me questioning Claude — averaged 7 minutes. Three minutes slower per session. But by week four, the repeat-bug rate had dropped by roughly 60%. I was hitting the same class of issue (cookie mismatches, missing validation, partial-write consistency) far less often, because I’d actually understood the underlying pattern the first time around.

Most developers use Claude Code the same way: paste a problem, get a solution, move on. It works. But it also turns you into a consumer of answers instead of a builder of understanding. Six months later, when a similar bug appears in a different shape, you’re starting from scratch.

There’s an alternative. Instead of telling Claude Code what to do, you structure your interaction so that Claude asks you questions that guide you toward the answer. You become the learner; Claude becomes the questioner. Or you flip it: you ask Claude probing questions instead of requesting solutions directly.

Both directions work. The key is replacing “do this for me” with a dialogue that builds understanding.

Why This Matters

The difference between a developer who asks AI to fix their bugs and one who uses AI to understand their bugs shows up over time:

  • You learn the codebase instead of outsourcing comprehension
  • You catch flawed assumptions before they become flawed code
  • You retain the knowledge for next time
  • You make better architectural decisions because you reasoned through them

The fix might take 2 minutes either way. But the understanding compounds.

The Five Methods

Method 1: Instruct Claude to Question You

Add Socratic instructions to your CLAUDE.md or give them inline.

Persistent (in CLAUDE.md):

## Interaction Style

When I ask for help with a bug or feature, do not jump to a solution. Instead:
1. Ask me what I think the root cause is
2. Ask me what I've already tried
3. Ask clarifying questions about my assumptions
4. Guide me toward the answer with progressively more specific questions
5. Only provide the direct answer if I explicitly ask you to stop questioning

When I say "just tell me", switch to direct mode.

Per-conversation:

I have a bug where the auth token isn't being set. Don't fix it for me.
Instead, ask me questions that help me find the root cause myself.

Method 2: You Drive the Questions

Instead of asking Claude to do things, ask it to explain things. Then act on what you learn.

| Instead of this | Try this |
| --- | --- |
| "Fix the failing test" | "What is this test actually asserting? Walk me through it line by line." |
| "Add a new endpoint" | "What existing endpoint is most similar to what I need? What would I need to change?" |
| "Why is this slow?" | "What SQL query does this endpoint execute? What indexes exist on that table?" |
| "Refactor this" | "What are the code smells here? Which one causes the most concrete harm?" |

The pattern: ask Claude to explain or analyze, then use that understanding to act yourself.

Method 3: The Rubber Duck Escalation

A structured approach that starts with self-reliance and escalates only as needed:

  1. State the problem out loud to Claude: “I think X is happening because Y.”
  2. Ask Claude to poke holes: “What am I missing? What assumptions am I making?”
  3. Ask for a hint, not a solution: “Point me to the right file or function, but don’t tell me the fix.”
  4. Ask for the smallest useful nudge: “Is the issue in the query or the mapper?”
  5. Only then, if stuck: “OK, show me the fix and explain the reasoning.”

Each step gives you a chance to solve it yourself with progressively less effort.

Method 4: Hypothesis-Driven Debugging

Frame every debugging session as a hypothesis to test:

My hypothesis: the 401 error happens because the cookie isn't being sent
on cross-origin requests.

What evidence would confirm or refute this? What should I check first?

Claude responds with diagnostic questions and checks. You run them. You update your hypothesis. Repeat.

This mirrors how experienced developers actually debug — you’re just making the internal monologue explicit.
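The evidence-gathering step is easier when you write the relevant browser rules down explicitly. The sketch below is a deliberately simplified model of when a cookie travels on a cross-origin request (the names are illustrative, and real SameSite behavior has more nuance, e.g. Lax cookies on top-level navigations); it works as a checklist for the 401 hypothesis:

```typescript
// Simplified model: a cookie is sent cross-origin only if the request opts in
// (credentials: 'include') AND the cookie allows cross-site use
// (SameSite=None, which in turn requires Secure).
// All names here are illustrative, not from any real codebase.

interface CookieAttrs {
  sameSite: 'Strict' | 'Lax' | 'None';
  secure: boolean;
}

interface RequestConfig {
  credentials: 'omit' | 'same-origin' | 'include';
}

function cookieSentCrossOrigin(cookie: CookieAttrs, req: RequestConfig): boolean {
  if (req.credentials !== 'include') return false; // fetch strips cookies by default
  if (cookie.sameSite !== 'None') return false;    // cookie refuses cross-site sends
  return cookie.secure;                            // SameSite=None is only valid with Secure
}

// Each call checks one branch of the hypothesis:
console.log(cookieSentCrossOrigin({ sameSite: 'Lax', secure: true }, { credentials: 'include' }));  // false
console.log(cookieSentCrossOrigin({ sameSite: 'None', secure: true }, { credentials: 'include' })); // true
```

Each `false` branch is a distinct piece of evidence to collect: check the frontend's fetch config first, then the `Set-Cookie` header's attributes.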

Method 5: Architecture by Inquiry

Before building a feature, have Claude question your design:

I'm planning to add a caching layer in front of the job-ad-formats endpoint.
Play devil's advocate. What questions should I be able to answer before I
start writing code?

Claude might ask:

  • What’s the cache invalidation strategy?
  • How stale can the data be before it causes problems?
  • Is the bottleneck actually the database query, or is it somewhere else?
  • Have you measured the current response time?

If you can’t answer one of these, that’s the thing to investigate first.
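Those questions map directly onto code. Here is a minimal in-memory sketch (illustrative names, not the real endpoint's implementation) where two of the answers become explicit parameters: the TTL encodes how stale the data may be, and `invalidate` is the hook your invalidation strategy has to call:

```typescript
// Minimal TTL cache sketch. TtlCache is a hypothetical name; the point is
// that "how stale is acceptable?" becomes the ttlMs parameter, and
// "what's the invalidation strategy?" becomes "who calls invalidate()?".

class TtlCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  get(key: string, now: number = Date.now()): V | undefined {
    const hit = this.entries.get(key);
    if (!hit || hit.expiresAt <= now) return undefined; // stale entries read as misses
    return hit.value;
  }

  set(key: string, value: V, now: number = Date.now()): void {
    this.entries.set(key, { value, expiresAt: now + this.ttlMs });
  }

  invalidate(key: string): void {
    // The open design question: which write paths must call this, and when?
    this.entries.delete(key);
  }
}

const cache = new TtlCache<string>(5000); // answers "data may be up to 5s stale"
cache.set('job-ad-formats', 'payload', 0);
console.log(cache.get('job-ad-formats', 1000)); // 'payload'
console.log(cache.get('job-ad-formats', 6000)); // undefined (expired)
```

If you can't state what `ttlMs` should be or who calls `invalidate`, you haven't answered Claude's questions yet, and the code isn't ready to write.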

Setting a Difficulty Level

Tell Claude how much help you want:

  • Hard mode: “Only ask questions. Never show code.”
  • Medium mode: “Ask questions first. Show code snippets only as hints.”
  • Easy mode: “Ask one clarifying question, then show the solution with explanation.”

When to Use It (And When Not To)

The Socratic method is a learning tool, not a productivity tool. Use it when:

  • You’re in an unfamiliar part of the codebase
  • You’re making an architectural decision you’ll live with for months
  • You keep hitting the same class of bug

Use direct mode when:

  • You understand the problem and just need implementation speed
  • The task is mechanical (rename, format, migrate)
  • You’re under time pressure and the learning can wait

Examples in Practice

The following example shows the Socratic method applied to a real development scenario, demonstrating how questioning leads to deeper understanding than a direct answer would.

Debugging an Auth Bug

Users are getting logged out randomly after deploying a new feature. Instead of asking Claude to fix it, you use the Socratic method to find the root cause yourself.

Turn 1 — State the problem
You

Users are getting logged out randomly after we deployed the org switcher. Don't fix it -- help me find it.

Claude

Let's narrow it down. Two questions: (1) Does this happen to all users, or only users with specific characteristics? (2) What does "randomly" mean -- is it after a specific action, or truly at unpredictable times?

Turn 2 — Narrow the scope
You

It's users with multiple organizations. It happens when they switch orgs.

Claude

OK, so it's tied to the switch action. When a user switches orgs, what does the backend do with their auth state? Walk me through the endpoint.

Turn 3 — Trace the code
You

POST /auth/switch-organization calls clearCookie('auth_token') then setCookie('auth_token', newToken).

Claude

When you clear a cookie and set it again, the browser matches cookies by name, path, and domain. If those attributes differ between the clear and the set, the old cookie survives. Are the cookie options identical in every place this cookie is set or cleared?

Turn 4 — Discovery
You

...no. The login flow sets the cookie with path: '/', but the switch endpoint's clearCookie and setCookie both use path: '/api'. So the root-path cookie never gets cleared, and the browser keeps sending the stale one on non-API routes.

Claude

Exactly. Before you fix it, one more question: where are the cookie options defined? Is there a shared config, or are they hardcoded in each call?

Turn 5 — Fix with full context
You

They're hardcoded in two different places. I'll extract them into AUTH_COOKIE_OPTIONS in the config and use it everywhere.
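The mechanic discovered in Turn 4 can be simulated in a few lines. This is an illustrative sketch of a cookie jar keyed by name and path (real browsers also match on domain; `CookieJar` and `AUTH_COOKIE_OPTIONS` are hypothetical names, not this codebase's API):

```typescript
// Illustrative model: a cookie is identified by name + path, so a clear call
// with a different path silently fails to remove the original cookie.

type CookieOptions = { path: string };

class CookieJar {
  private store = new Map<string, string>();

  private key(name: string, opts: CookieOptions): string {
    return `${name};path=${opts.path}`;
  }

  set(name: string, value: string, opts: CookieOptions): void {
    this.store.set(this.key(name, opts), value);
  }

  clear(name: string, opts: CookieOptions): void {
    // Deletion only matches a cookie set with the same name AND path.
    this.store.delete(this.key(name, opts));
  }

  count(name: string): number {
    let n = 0;
    for (const k of this.store.keys()) {
      if (k.startsWith(`${name};`)) n++;
    }
    return n;
  }
}

const jar = new CookieJar();
jar.set('auth_token', 'stale-token', { path: '/' });    // set earlier, at login
jar.clear('auth_token', { path: '/api' });              // mismatched path: a no-op
jar.set('auth_token', 'fresh-token', { path: '/api' });
console.log(jar.count('auth_token')); // 2 -- the stale root-path cookie survived

// The fix: one shared options object used by every cookie call.
const AUTH_COOKIE_OPTIONS: CookieOptions = { path: '/' };
jar.clear('auth_token', AUTH_COOKIE_OPTIONS); // now the stale cookie is actually removed
console.log(jar.count('auth_token')); // 1
```

The two `console.log` lines are the whole bug in miniature: a mismatched path turns a delete into a no-op, and a shared options constant makes the mismatch impossible.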

Conversation Flow
flowchart TD
    A["You: Users are getting logged out randomly after the last deploy"] --> B{"Claude asks:\nWhat changed in the last deploy?\nDoes it affect all users or a subset?"}
    B --> C["You: We added the organization switcher.\nIt seems to affect users with multiple orgs."]
    C --> D{"Claude asks:\nWhat happens to the auth token\nwhen a user switches orgs?\nDoes the cookie get rewritten?"}
    D --> E["You: Let me check...\nThe switchOrganization endpoint\ncalls res.clearCookie() then res.setCookie()"]
    E --> F{"Claude asks:\nAre the cookie options identical in both calls?\nWhat if clearCookie uses different options?"}
    F --> G["You: Found it! Login sets the cookie at '/'\nbut the switch endpoint clears and sets at '/api'.\nThe root-path cookie persists and conflicts."]
    G --> H["You fix it yourself with full understanding"]
    style A fill:#1a1a2e,stroke:#e94560,color:#eee
    style B fill:#16213e,stroke:#0f3460,color:#eee
    style C fill:#1a1a2e,stroke:#e94560,color:#eee
    style D fill:#16213e,stroke:#0f3460,color:#eee
    style E fill:#1a1a2e,stroke:#e94560,color:#eee
    style F fill:#16213e,stroke:#0f3460,color:#eee
    style G fill:#1a1a2e,stroke:#53d769,color:#eee
    style H fill:#0d7377,stroke:#14ffec,color:#eee
What You Learned
mindmap
  root((Auth Bug))
    Cookie mechanics
      Browser matches by name + path + domain
      Mismatched options = zombie cookies
    Codebase insight
      Cookie options were hardcoded in 2 places
      No shared config for auth cookie settings
    Prevention
      Extract shared cookie config
      One source of truth for cookie options
    Debugging pattern
      Trace the exact HTTP headers
      Check attribute consistency

Contrast — direct mode:

If you'd said "fix the logout bug," Claude would have found and fixed the path mismatch. You'd have working code but no understanding of why cookie path matching matters. Next time a cookie bug appears in a different form, you'd start from scratch.

The Underlying Principle

Every example above follows the same pattern: understanding before action. The Socratic method doesn’t make you slower — it makes the first solution you try more likely to be the right one. And it leaves you better equipped for the next problem that looks like this one.

The fix might take 2 minutes either way. But next time a similar issue appears, you won’t need to ask.

AI Claude Workflow Productivity Learning Debugging