With every new feature release, Django increases the PBKDF2 iteration count (PBKDF2 is the default password hasher). For Django 4.2 it is 600,000, and for the upcoming Django 5.0 it will be 720,000.
I think the Django 5.0 iteration count is becoming too much. On commodity AWS instances, one password hash takes ~0.7s at 600,000 iterations, and the count keeps increasing with every release. I don’t think server hardware gets faster at this rate.
It is quite easy to bring a set of servers to their knees with a simple DDoS on the login endpoint. Note that even if the username doesn’t exist, the password hasher still runs, as a mitigation against timing attacks. It is difficult to protect against this attack without resorting to CAPTCHAs or similarly annoying measures. A very high iteration count makes the attack cheaper for the attacker, since each request ties up more server CPU.
Of course it’s possible to override the default (which I do), but it’s another thing to do and most people have no clue about it - the default matters. I also like that someone else worries about it for me.
My suggestion is to change the policy to match the OWASP recommendation at the relevant time, which is handily 600,000 right now, the same as Django 4.2. Django 5.0 will exceed it.
I am torn. On one hand, boundless increases like we currently have are bad – no argument there. But imo OWASP is relatively conservative. The main problem is that our hash has to hold up against a leak of the database, i.e. offline cracking. With the rise of free hosting, where you get relatively weak CPUs, you see more problems with a slow password hash than on proper server-grade hardware. And your attacker has none of those problems: as soon as they have your hash, they can throw the fastest hardware at it…
I think that it is currently too hard to override the iteration/etc. counts for password hashers. One could argue for a setting there (yes, I know…), or one could argue that people having problems with the iteration count are operating at a scale where writing their own subclass is a no-brainer.
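For reference, the subclass route is short. A configuration sketch, assuming Django’s documented hasher API (the `myproject.hashers` module path is hypothetical):

```python
# myproject/hashers.py (hypothetical path)
from django.contrib.auth.hashers import PBKDF2PasswordHasher


class PinnedPBKDF2PasswordHasher(PBKDF2PasswordHasher):
    # Pin the iteration count instead of tracking Django's moving default.
    iterations = 600_000


# settings.py -- list the custom hasher first; keep the stock hasher so
# existing hashes still verify (and get re-hashed on next login):
PASSWORD_HASHERS = [
    "myproject.hashers.PinnedPBKDF2PasswordHasher",
    "django.contrib.auth.hashers.PBKDF2PasswordHasher",
]
```

The mechanism itself is simple; the complaint above is more that you have to know it exists.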
Just as an example: on my three-year-old laptop, with the CPU in powersave mode, a PBKDF2 hash with SHA-256 takes roughly 130 ms (see below, I hope I didn’t make any mistakes). This is not really slow anymore…
```
In : %timeit hashlib.pbkdf2_hmac("sha256", b"abcde", b"abcde", 600_000)
131 ms ± 359 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
```
Digging into the OWASP link a bit more, their recommendation is based on this analysis (last updated on December 29th, 2022). Their criterion is:
Minimum good password settings for authentication cause an attacker to get <10 kH/s/GPU. A “GPU” is a current high-end but not super high-end GPU due to diminishing returns in performance per cost. Basically a GPU with an MSRP of about $700 in 2015 USD (which is about $900 in 2022). Currently a “GPU” is one of the following: an RTX 4070 Ti, 2/3 speed of an RTX 4090, or an RX 7900 XTX.
I am not an expert but it sounds reasonable to me.
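To make the criterion concrete, here is the back-of-the-envelope arithmetic. The SHA-256 throughput figure is my own assumption for illustration, not a benchmark:

```python
# Rough arithmetic behind OWASP's "<10 kH/s per GPU" criterion.
# ASSUMPTION: a current high-end GPU does ~10 billion SHA-256
# compressions per second (illustrative; real figures vary by card).
sha256_per_sec = 10_000_000_000
target_hps = 10_000  # OWASP: attacker should get under 10 kH/s per GPU

# Each PBKDF2-HMAC-SHA256 iteration costs one HMAC, i.e. roughly
# two SHA-256 compressions (the key pads can be precomputed).
compressions_per_iteration = 2

min_iterations = sha256_per_sec // (compressions_per_iteration * target_hps)
print(min_iterations)  # 500_000 under these assumptions
```

Under that assumed throughput, the threshold lands at 500,000 iterations, which is at least in the same ballpark as the 600,000 recommendation.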
I measured again, and got ~490ms. I understand now that my measurements are flawed because I am running them on a busy server. On the other hand, most servers are busy so one can argue it is actually more realistic.
I feel like this thread has provided evidence for keeping the current policy, rather than changing it.
Django adopted a policy of increasing the iterations 10 years ago in this PR, and set the 20% rate 9 years ago in this PR. Now this thread reports that Django 4.2, released earlier this year, used the recommended 600,000 iterations from OWASP, as of Dec 2022. It sounds like Django’s formula has ended up working remarkably well, even if by estimate and accident.
I also have concerns with deciding to track the OWASP page:
- It’s a “cheat sheet”, not a particularly official-looking document. Maybe there’s a better source.
- I couldn’t find an advertised update schedule. What if they don’t update it for five or ten years?
To me, it seems safer to continue with the current formula. Maybe we can schedule a review, say every .0 release.
(Also, I’m not sold that the proposed 20% reduction is meaningful for preventing DDoS. Attackers can probably send ~20% extra traffic to make up the difference. Rate-limiting of password-hashing pages provides much more reliable DDoS protection.)
It seems there is no consensus on PBKDF2 so I’m dropping the proposal to change the policy for it.
Vulnerability to DDoS
How to rate-limit password-hashing pages against a DDoS? You can’t rate-limit by IP, since the attack is distributed. You can’t rate-limit by username, since the attacker can use different usernames. You can use CAPTCHAs, but they worsen the user experience. There are PAKEs (password-authenticated key exchange protocols) and client proof-of-work schemes, but these are complicated and not supported by Django anyway.
What really prevents rate-limiting from working is that Django also runs password hashing on non-existent usernames, to prevent an attacker from distinguishing between an existent and non-existent username based on the request timing. This is a worthy goal.
Is there an alternative solution for the timing issue? It is tempting to do a sleep (with some random delta) instead, but there’s no reasonable way to determine the appropriate sleep duration in a generic way that I can think of.
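To illustrate why the dummy hash is needed, here is a minimal sketch of the pattern using `hashlib` directly (not Django’s actual implementation; storage format and iteration count are illustrative):

```python
import hashlib
import hmac
import secrets
from typing import Optional

ITERATIONS = 600_000
DUMMY_SALT = b"\x00" * 16  # fixed salt used for the dummy run


def make_stored(password: bytes) -> bytes:
    """Store as salt (16 bytes) + PBKDF2-SHA256 digest (32 bytes)."""
    salt = secrets.token_bytes(16)
    return salt + hashlib.pbkdf2_hmac("sha256", password, salt, ITERATIONS)


def check_password(supplied: bytes, stored: Optional[bytes]) -> bool:
    # Run the expensive hash even when the user doesn't exist (stored is
    # None), so response time doesn't reveal whether the username is valid.
    salt = stored[:16] if stored is not None else DUMMY_SALT
    digest = hashlib.pbkdf2_hmac("sha256", supplied, salt, ITERATIONS)
    if stored is None:
        return False
    return hmac.compare_digest(digest, stored[16:])
```

Skipping the hash in the `stored is None` branch would make unknown-username requests return noticeably faster, which is exactly the signal the mitigation exists to hide.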
The other password-based KDF available in the stdlib (i.e. not requiring an extra dependency) is scrypt, and Django provides ScryptPasswordHasher. AFAIK scrypt is preferred over PBKDF2 by experts because it is memory-hard, not just CPU-bound.
However, it seems that the default parameters used by Django (n=2**14, r=8, p=1) are lower than the OWASP recommendation, which uses p=5 for the same n and r. The parameter p is “parallelism”; current OpenSSL doesn’t actually run it in parallel (though this might change), so the OWASP recommendation is effectively 5x the cost of the Django default.
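The cost difference is easy to see with the stdlib directly. A small timing sketch (absolute numbers will vary by machine; the explicit `maxmem` just gives headroom above the ~16 MiB this parameter set needs):

```python
import hashlib
import secrets
import time

pw = b"correct horse battery staple"
salt = secrets.token_bytes(16)

for p in (1, 5):  # p=1 is Django's default; p=5 is the OWASP recommendation
    start = time.perf_counter()
    digest = hashlib.scrypt(pw, salt=salt, n=2**14, r=8, p=p,
                            maxmem=64 * 1024 * 1024, dklen=64)
    elapsed = time.perf_counter() - start
    print(f"p={p}: {elapsed:.3f}s, {len(digest)}-byte digest")
```

Since OpenSSL computes the p blocks sequentially, the p=5 run should take roughly five times as long as p=1 for the same memory footprint.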