I adapted Christopher Wiles' Claude Code config for Django - 9 skills covering models, forms, testing, HTMX, DRF, Celery, and more

Hey Django devs,

I’ve been using Claude Code (Anthropic’s CLI for AI-assisted coding) and came across ChrisWiles/claude-code-showcase, a comprehensive Claude Code project configuration example with hooks, skills, agents, commands, and GitHub Actions workflows for TypeScript projects. It was so well-structured that I forked it and adapted the whole thing for the Django stack.

What this is: A ready-to-use Claude Code configuration for Django/PostgreSQL/HTMX projects.

GitHub: kjnez/claude-code-django - a comprehensive Claude Code project configuration example for Django with hooks, skills, agents, commands, and GitHub Actions workflows

Forked from: ChrisWiles/claude-code-showcase (the TypeScript version)

What’s included

9 Django-specific skills:

  • pytest-django-patterns - Factory Boy, fixtures, TDD workflow
  • django-models - QuerySet optimization, managers, signals, migrations
  • django-forms - ModelForm, clean methods, HTMX form submission
  • django-templates - inheritance, partials, template tags
  • htmx-alpine-patterns - hx-* attributes, partial templates, dynamic UI
  • drf-patterns - serializers, viewsets, permissions, pagination
  • celery-patterns - task definition, retry strategies, periodic tasks
  • django-channels - WebSocket consumers, real-time features
  • systematic-debugging - four-phase debugging methodology

Slash commands:

  • /ticket PROJ-123 - reads ticket from JIRA/Linear, explores codebase, implements feature, creates PR, updates ticket status
  • /pr-review - code review using project standards
  • /code-quality - runs ruff, pyright, pytest

Automated workflows:

  • Code review agent that checks against Django best practices
  • GitHub Actions for automated PR review, weekly code quality sweeps, monthly docs sync

Skill evaluation hooks - automatically suggests which skills Claude should activate based on what you’re working on.

Why this matters

When I ask Claude to “add a form for user registration,” it knows to:

  • Use ModelForm
  • Validate in clean() and clean_&lt;field&gt;() methods
  • Create a partial template for HTMX responses
  • Write a Factory Boy factory and pytest tests first (TDD)
  • Use select_related() if the form touches related models

It’s not guessing at patterns—it’s following the Django conventions I’ve documented.

Stack assumptions

  • Django with PostgreSQL
  • HTMX for dynamic UI
  • uv for package management
  • ruff + pyright for linting/types
  • pytest-django + Factory Boy for testing
  • Celery for background tasks (optional)
  • DRF for APIs (optional)

The configuration is modular—remove what you don’t need.
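
If you’re matching the stack, the tool configuration can live in a single pyproject.toml. A hedged sketch of the relevant sections (the table names follow each tool’s documented config; the exact options and the settings module path are illustrative):

```toml
[tool.ruff]
line-length = 100

[tool.ruff.lint]
select = ["E", "F", "DJ"]   # DJ = ruff's flake8-django rules

[tool.pyright]
typeCheckingMode = "basic"

[tool.pytest.ini_options]
DJANGO_SETTINGS_MODULE = "config.settings"   # hypothetical settings path
```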

Getting started

  1. Clone or copy the .claude/ directory and CLAUDE.md to your project
  2. Install Claude Code if you haven’t: npm install -g @anthropic-ai/claude-code
  3. Customize CLAUDE.md with your project specifics
  4. Start using it

Big thanks to Christopher Wiles for the original structure—check out his TypeScript version if that’s your stack.

Happy to answer questions or take feedback.
