Contributing with AI

I’ve been experimenting with AI agents (Claude Code, etc.). I’ve found they get me started on tasks I wouldn’t normally attempt.

A Django example
I tested this with Django’s collectstatic issues, specifically the CSS/JS file parsing challenges. The current implementation uses regex to replace static file references, which works in many cases but has edge cases: CSS comments break it, and JS import/export statements proved so tricky that support for them is only experimental.
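
To make that edge case concrete, here’s a rough illustration of the problem (my own simplification, not Django’s actual HashedFilesMixin code): a plain regex over url() references has no idea what is inside a /* … */ comment.

```python
# Simplified sketch, not Django's real implementation: collect url() references
# with a regex and see that it also picks up the one inside a comment.
import re

css = """
/* TODO: switch back to url("img/old-logo.png") after the redesign */
.header { background: url("img/logo.png"); }
"""

url_pattern = re.compile(r"""url\((['"]?)(?P<url>.*?)\1\)""")

for match in url_pattern.finditer(css):
    # collectstatic would try to look each of these up in its manifest;
    # "img/old-logo.png" exists only inside a comment, so that lookup fails.
    print(match.group("url"))
```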

The proper solution is parsing CSS/JS files rather than regex substitution, but that comes with performance and complexity costs. I’ve done a little parser work before, just enough to know it would take me significant time and be outside my comfort zone. I could do it if required but wouldn’t have the energy for a volunteer effort. I doubt I’m the only one who felt like this in the 12 years the issue has been open.

AI-Assisted Approach
I researched the tickets and learned, from Adam, about Django’s existing JS lexer for gettext. I asked the agent to show me how this would work for the import/export cases, and it provided working code that I then copy/pasted, tested, and fixed. A nice start, but I could have done this without AI. I submitted PR #19574. [1]
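
Roughly, the idea (my own sketch here, not the code from the PR, and assuming JsLexer’s lex() interface as used by prepare_js_for_gettext) is to walk the token stream and pick out the string that follows an import or from:

```python
# Hypothetical sketch of finding module specifiers with Django's JS lexer.
from django.utils.jslex import JsLexer

js = 'import { Thing } from "./thing.js";\nexport * from "./other.js";'

prev_token = None
for name, tok in JsLexer().lex(js):
    if name == "string" and prev_token in {"import", "from"}:
        # tok still carries its quotes, e.g. '"./thing.js"'; this is the
        # reference collectstatic would rewrite to the hashed file name.
        print(tok)
    if name != "ws":
        prev_token = tok
```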

For CSS, I needed to create a lexer from scratch. I started with the prompt:

“I have this Python code for JavaScript lexical analysis [included JsLexer class]. Could we build a class for doing the same with CSS files?”

The agent provided a solid starting point that became a working lexer with minimal prompting. Django’s test suite let me prove that the lexer still made all the needed substitutions and could now handle CSS comments. #19561
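
To give a flavour of the idea (my own minimal sketch, not the code in #19561): tokenize comments as whole units, so that url() references inside them are never candidates for rewriting.

```python
# Minimal, comment-aware tokenization sketch -- not the lexer from #19561.
import re

CSS_TOKENS = re.compile(
    r"(?P<comment>/\*.*?\*/)"                            # whole /* ... */ comments
    r"|(?P<url>url\((?P<quote>['\"]?).*?(?P=quote)\))"   # url(...) references
    r"|(?P<other>.)",                                    # anything else, one char at a time
    re.DOTALL,
)

def find_references(css):
    """Yield url() tokens that appear outside comments."""
    for match in CSS_TOKENS.finditer(css):
        if match.group("url"):
            yield match.group("url")

css = """
/* old: url("img/old-logo.png") */
.header { background: url("img/logo.png"); }
"""
print(list(find_references(css)))  # ['url("img/logo.png")'] -- the commented one is skipped
```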

Full Agent Mode
I then tried “full agent” mode: giving it codebase access, showing it how to run tests, and having it write commit messages. I tried this workflow on three different collectstatic issues (#26583, #28200, #27929).

Results
In one afternoon, I had 5 PRs across 4 issues: work I wouldn’t have attempted before. This highlights the potential to create more reviewer workload. I held back from submitting all the PRs simultaneously out of respect for maintainers’ time, fully reviewing and submitting only the assisted work rather than the solo agent work. I did later review the simplest of the solo agent’s work and submitted it with only minor modifications of my own. #19574

I think these AI agents will increase the number of contributions. I looked for community discussion on this topic but found little that was relevant, so I was glad to see the PR template discussion starting:
#19594. I think it will be worth considering how to manage the effects of these coding agents, while at the same time I do see how they can help improve the framework, as hopefully shown by the CSS lexer example.


  1. When Shai did a review and suggested adding support for more modern JS features, like await/async, I went back to the AI for suggestions on that too. Again, I could have done it myself, but it was several times faster this way. ↩︎

Like anything else, AI is a tool. IMO there is no problem with using a tool. A tool allows you to do things faster, but, as with anything else, the more you know yourself, the more efficiently you will be able to use the tool. Where it becomes a problem (at least as I see it) is when you cannot handle the tool anymore (in the context of AI, that probably means when you are not in a position to review its code since you don’t have the domain knowledge).

Like when you say:

I’ve done a little parser work before, just enough to know it would take me significant time and be outside my comfort zone. I could do it if required but wouldn’t have the energy for a volunteer effort.

and then you give it to an AI and your conclusion is:

The agent provided a solid starting point that became a working lexer with minimal prompting. Django’s test suite let me prove that the lexer still made all the needed substitutions and could now handle CSS comments.

How can you be sure that this is a solid starting point? Please don’t get me wrong, I am not saying that you didn’t review the code or that the tests don’t pass. But how sure are you that the code is actually correct and reasonably good? If you are not sure about that, then personally I wouldn’t submit the code as a PR, or I would at the very least clearly label it as such to set the expectations straight (whether that means less or more review or outright dismissal, I do not know).

I don’t take it the wrong way; I think you’ve got it right. I’m not sure, and that’s why I went looking for discussions on AI and what the best practices were, and why I was happy to see the PR guidelines discussion started. As a starting point, expecting that we mark our AI use seems like a sensible step. That can at least allow the reviewer to judge the code with that eye, and may lead to follow-up questions or sometimes outright dismissal based on contribution quality.
