The Essential Claude Code Python Setup That Actually Makes You Productive

Why Most Developers Waste Claude Code’s Potential

I’ve watched too many developers fire up Claude Code, ask it to build something, and then spend hours fixing the mess it creates. The problem isn’t the AI—it’s that you’re using default settings optimized for general assistance, not your specific development workflow.

Claude Code Rules: Configuration instructions stored in CLAUDE.md that override default AI behavior, teaching Claude Code your project’s conventions, constraints, and patterns before it writes a single line of code.

Claude Code reads a CLAUDE.md file in your home directory (~/.claude/CLAUDE.md) and treats those instructions as gospel. This isn’t prompt engineering—it’s setting foundational rules that apply to every interaction. Get this right once, and you never have to correct the same mistakes again.

The Rules That Actually Matter

Here’s what I put in my global CLAUDE.md file. Every rule exists because I got burned by not having it.

Type System Enforcement

use python 3.11+ style types like int, str, list, dict, etc
use dataclass/typeddict instead of pydantic

Modern Python type hints are built-in and fast. Claude Code loves to reach for Pydantic because it’s popular, but unless you’re building an API that needs validation, you’re adding dependency weight for no reason. TypedDict and dataclass give you 90% of the value with zero external dependencies.
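
Here’s a minimal sketch of the style this rule produces, using built-in generics; the User and UserRow names are invented for illustration:

from dataclasses import dataclass
from typing import TypedDict

@dataclass
class User:
    name: str
    email: str
    tags: list[str]  # built-in generic, no typing.List import needed

class UserRow(TypedDict):
    # a plain dict shape with static typing, zero pydantic dependency
    name: str
    email: str

def to_row(user: User) -> UserRow:
    return {"name": user.name, "email": user.email}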

Import Hygiene

Use absolute imports. Avoid relative imports except within the same module.

Relative imports break the moment you refactor or move files. Absolute imports are explicit and searchable. This rule has saved me countless debugging sessions tracking down import errors.
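
As a quick illustration (myproject.billing is a hypothetical package path):

# absolute: explicit, searchable, and survives file moves
from myproject.billing.invoices import calculate_total

# relative: breaks as soon as this file is relocated
# from ..billing.invoices import calculate_total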

Testing Structure That Scales

### Testing Approach

- Use table-driven tests when possible
- Use from inline_snapshot import snapshot, assert something() == snapshot(28620972) where possible
- Always test error paths and edge cases
- Use test structure:
tests/
  __init__.py
  integration/
  system/
  unit/

Table-Driven Tests: A testing pattern where test cases are defined as data structures (usually lists or dicts) and iterated through a single test function, reducing duplication and making test coverage obvious at a glance.
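
Here’s a minimal pytest sketch of the pattern; slugify is a stand-in function invented for the example:

import pytest

def slugify(text: str) -> str:
    return "-".join(text.lower().split())

@pytest.mark.parametrize(
    ("text", "expected"),
    [
        ("Hello World", "hello-world"),          # happy path
        ("  padded   input ", "padded-input"),   # whitespace edge case
        ("", ""),                                # empty-string edge case
    ],
)
def test_slugify(text: str, expected: str) -> None:
    assert slugify(text) == expected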

The test structure isn’t just organization—it’s about making it obvious what kind of test you’re writing. Unit tests for functions, integration tests for components working together, system tests for the full stack. Claude Code understands this hierarchy and places new tests correctly.

The inline_snapshot instruction is my secret weapon. Instead of manually writing expected values, you write the assertion with an empty snapshot, run the test, and inline_snapshot captures the actual output. You review the snapshot once, and it becomes your regression test. Massive time saver.

The Anti-Patterns

don't use argparse
Never use emojis.
Never add Claude as a commit author.
Never create .md files unless explicitly instructed.
minimize docstrings

These are the “stop doing that” rules:

  • argparse: It’s verbose and a pain to test. I prefer simpler CLI parsing or just environment variables (see the sketch after this list).
  • emojis: They break in logs, terminals, and CI/CD output. Hard pass.
  • Claude as co-author: It’s an AI assistant, not a contributor. I wrote the code, even if Claude generated the first draft.
  • .md files: Claude loves to create documentation for everything. I only want docs when I explicitly ask.
  • docstrings: Most functions are self-documenting with good names and type hints. Skip the boilerplate.
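
For what “simpler CLI parsing” can look like, here’s a sketch that leans on environment variables and sys.argv; the APP_VERBOSE name is made up for the example:

import os
import sys

def main() -> None:
    # configuration comes from environment variables, not flags
    verbose = os.environ.get("APP_VERBOSE", "0") == "1"
    # positional arguments come straight from sys.argv
    targets = sys.argv[1:]
    if verbose:
        print(f"processing {len(targets)} target(s)")
    for target in targets:
        print(target)

if __name__ == "__main__":
    main()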

The One Rule That Changes Everything

simple solution for a complex problem

This single line does more than any other rule in my config. Without it, Claude Code will over-engineer solutions with plugins, factories, dependency injection, and abstractions you don’t need. With it, you get straightforward code that solves the actual problem.

AI models are trained on open source code, which tends toward the over-engineered side. Explicitly telling Claude to keep it simple counteracts that bias.
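
To make the contrast concrete, here’s the kind of difference this rule produces; read_config is a contrived example, not code from my projects:

import json
from pathlib import Path

# without the rule: ConfigLoaderFactory, AbstractConfigSource,
# and a plugin registry just to read one file

# with the rule: a plain function that solves the actual problem
def read_config(path: Path) -> dict[str, str]:
    return json.loads(path.read_text())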

How to Structure Your CLAUDE.md

Set Up Claude Code Configuration

Create a global configuration file that applies your development standards to every Claude Code session

Create the Global Config Directory

Claude Code looks for configuration in ~/.claude/ by default:

mkdir -p ~/.claude
touch ~/.claude/CLAUDE.md

This file applies to all projects unless you override it with a project-specific CLAUDE.md in the project root.

Define Your Core Rules

Start with language-specific type and style rules. These have the highest ROI because they prevent constant corrections:

# Python Development Standards

## Type System
use python 3.11+ style types like int, str, list, dict, etc
use dataclass/typeddict instead of pydantic

## Code Style
Use absolute imports. Avoid relative imports except within the same module.
Never use emojis.
minimize docstrings
simple solution for a complex problem

Adapt these to your language and preferences. The key is being specific—“clean code” is too vague, “use absolute imports” is actionable.

Add Testing Patterns

Define your test structure and preferred testing libraries:

### Testing Approach

- Use table-driven tests when possible
- Use from inline_snapshot import snapshot for regression tests
- Always test error paths and edge cases
- Use test structure:
tests/
  __init__.py
  integration/
  system/
  unit/

This prevents Claude from generating test files in random locations or using testing patterns you don’t want.

Set Project-Specific Overrides

For project-specific rules, create a CLAUDE.md in the project root:

# Project-Specific Rules

don't refactor legacy/database.py
use PostgreSQL for all database operations
API responses must include request_id field

Project-specific rules override global rules where they conflict.

The Skills You Actually Need

Beyond rules, Claude Code supports custom skills—basically reusable workflows triggered by slash commands. I keep mine minimal:

  • /commit: Stages changes and creates a commit with a conventional commit message
  • /test: Runs the test suite and reports failures
  • /deploy: Project-specific deployment workflow

The key is not to go overboard. Skills are useful for common workflows you repeat daily, not one-off tasks. If you find yourself creating a skill you’ll use once, you’re over-automating.
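
As a sketch of how a workflow like /test can be wired up, assuming the custom slash-command layout where Claude Code reads markdown files from ~/.claude/commands/:

mkdir -p ~/.claude/commands
cat > ~/.claude/commands/test.md << 'EOF'
Run the full test suite with pytest. Report failures with the
failing test name and a one-line summary of each assertion error.
EOF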

When Rules Break Down

This configuration philosophy works great for greenfield projects and codebases you control. It breaks down when:

Working with established codebases: If the project already uses Pydantic everywhere, fighting that in CLAUDE.md just creates inconsistency. Adapt your rules to match existing patterns.

Polyglot projects: Global rules optimized for Python will confuse Claude when you’re working in JavaScript. Use project-specific CLAUDE.md files to handle multi-language repos.

Team projects with different standards: Your personal CLAUDE.md shouldn’t override team conventions. Either align your rules with the team’s style guide or keep conflicting rules minimal.

The Testing Pattern That Changed My Workflow

The inline_snapshot approach deserves special attention because it’s not obvious how powerful it is until you use it:

# User and serialize_user come from the code under test
from inline_snapshot import snapshot

def test_user_serialization():
    user = User(name="Alice", email="alice@example.com")
    assert serialize_user(user) == snapshot()

First run generates:

def test_user_serialization():
    user = User(name="Alice", email="alice@example.com")
    assert serialize_user(user) == snapshot({"name": "Alice", "email": "alice@example.com"})

You review the snapshot, and if it’s correct, the test is done. Future changes that break serialization get caught immediately.

This works brilliantly with Claude Code because it can generate the test skeleton, you run it once, and the snapshot becomes the contract. No more manually constructing expected dictionaries or JSON strings.
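
One practical note: with an empty snapshot(), the value gets written in when you run pytest in inline-snapshot’s create mode (assuming a recent inline-snapshot release with the pytest plugin):

pytest --inline-snapshot=create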

FAQ

Can I use different rules for different programming languages?

Yes. Either use project-specific CLAUDE.md files for each language’s projects, or structure your global CLAUDE.md with clear sections like “## Python Rules” and “## JavaScript Rules”. Claude Code is smart enough to apply the relevant section based on the files you’re working with.
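
For example, a sectioned global CLAUDE.md might look like this (rules abbreviated; the JavaScript lines are illustrative, not from my config):

# Global Rules

## Python Rules
use python 3.11+ style types like int, str, list, dict, etc
use dataclass/typeddict instead of pydantic

## JavaScript Rules
prefer const over let
use named exports instead of default exports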

How do I know if my rules are too strict or too loose?

If Claude Code frequently produces code you have to rewrite, your rules are too loose or missing key patterns. If you find yourself constantly fighting the rules with “ignore CLAUDE.md for this”, they’re too strict. The sweet spot is when you rarely think about the rules after the initial setup.

Should I commit CLAUDE.md to version control?

Project-specific CLAUDE.md files? Absolutely—they’re part of your development standards. Global ~/.claude/CLAUDE.md? No, those are your personal preferences. Different team members can have different global configs as long as project-specific rules enforce team standards.

What about skills versus rules?

Rules are constraints and patterns that apply passively to all Claude Code interactions. Skills are active workflows you invoke deliberately. Use rules for “how to write code” and skills for “how to execute common tasks”. Most developers need lots of rules and just a few skills.

Why This Configuration Philosophy Works

The core insight is that AI coding assistants work best with explicit constraints. Without rules, Claude Code optimizes for generality—it tries to write code that works for everyone, which means it’s not optimized for you.

With the right CLAUDE.md setup, you’re teaching Claude your development philosophy once. Every interaction after that builds on those foundations. The AI stops suggesting patterns you don’t use and starts reinforcing the patterns you do.

This isn’t about replacing your judgment with rules. It’s about eliminating the repetitive corrections that waste time and break your flow. The rules handle the obvious stuff—type hints, import style, test structure—so you can focus on the actual problem you’re solving.

Conclusion

Key Takeaways

  • Claude Code’s power comes from configuration, not just prompting—set rules once in CLAUDE.md instead of repeating corrections
  • Type system enforcement and import rules prevent the most common AI coding mistakes in Python projects
  • Table-driven tests and inline_snapshot dramatically reduce test writing time while improving coverage
  • The “simple solution for a complex problem” rule counteracts AI’s tendency toward over-engineering
  • Project-specific CLAUDE.md files override global rules for codebase-specific patterns and constraints
  • Skills work best for daily workflows, not one-off tasks—keep the skill library minimal
  • Rules should match your existing codebase patterns, not fight against established conventions

The best Claude Code setup is one that disappears. You shouldn’t be thinking about configuration or fighting with the AI’s suggestions. Get your CLAUDE.md file right, and the tool becomes an extension of your development process instead of something you have to constantly correct. That’s when the productivity gains actually show up.
