Caching in auth hashers?

Context: I’m using DRF with basic auth on a very slow machine.

(I should probably just lower the number of iterations :slight_smile: and not care much.)

The default password hashing takes about 900 ms per request, so if I have 5 requests to fire, boom, that’s roughly 4.5 s, almost all of it spent in password hashing.
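You can reproduce that kind of number with something like this in a Django shell (the password is just a throwaway placeholder):

from time import perf_counter
from django.contrib.auth.hashers import check_password, make_password

# Rough timing of one verification with the default hasher.
encoded = make_password("not-my-real-password")
start = perf_counter()
check_password("not-my-real-password", encoded)
print(f"verify took {perf_counter() - start:.3f}s")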

Idea: what if hashers.verify had a cache for successes?

Yeah, messing around with password verification may not look like a good idea…

I’m thinking of something like this:

from django.contrib.auth.hashers import PBKDF2PasswordHasher


class CachedPBKDF2PasswordHasher(PBKDF2PasswordHasher):
    def __init__(self, *args, **kwargs):
        # In-memory, per-process cache of already-verified (password, encoded) pairs.
        self.success_cache = {}
        super().__init__(*args, **kwargs)

    def verify(self, password, encoded):
        # Skip the expensive PBKDF2 computation if this exact pair already verified.
        if (password, encoded) in self.success_cache:
            return True
        result = super().verify(password, encoded)
        if result:
            self.success_cache[(password, encoded)] = True
        return result
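To actually use it, I suppose I’d point PASSWORD_HASHERS at it, something like this (the myproject.hashers path is just a placeholder for wherever the class lives):

# settings.py
PASSWORD_HASHERS = [
    # Inherits the same "pbkdf2_sha256" algorithm as the default hasher,
    # so existing hashes keep verifying.
    "myproject.hashers.CachedPBKDF2PasswordHasher",
]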

what do you think?

Password hashing is deliberately slow, as a security measure to prevent DDoS attacks. See this cheatsheet:

Defenders can slow down offline attacks by selecting hash algorithms that are as resource intensive as possible.

Any kind of caching would be counter to that goal. It would also open up new attacks like the ability to check if anyone in the system is using common passwords by trying them out on the login form and seeing if the request returns instantly.

If you only need to speed up tests, or requests on your local development machine, you can use a faster, weaker hasher, as noted here: Writing and running tests | Django documentation | Django
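For example, in your test settings you can switch to the MD5 hasher, which is the approach that docs page describes:

# Test-only settings: MD5 is weak but fast, which is fine for throwaway test passwords.
PASSWORD_HASHERS = [
    "django.contrib.auth.hashers.MD5PasswordHasher",
]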

No, it’s a security measure to prevent brute-forcing password hashes in leaked databases, which is a nice thing and one I want to keep.

It’s not a way to prevent brute-forcing passwords over HTTP; that could have been implemented with short-lived account locking based on the number of failures, like “you tried 3 times in a row, wait 30 s before your next try”, “you failed your password 5 times in a row, wait 5 min before your next try”…
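Something along these lines with the cache framework would do it (just a rough sketch, all names and thresholds are invented; a real implementation would also need per-IP handling and atomic counters):

from django.core.cache import cache

# Escalating lock-out per username: failures in a row -> seconds to wait.
LOCKOUT_DELAYS = {3: 30, 5: 5 * 60}

def record_failure(username):
    failures = cache.get(f"login-failures:{username}", 0) + 1
    cache.set(f"login-failures:{username}", failures, timeout=60 * 60)
    for threshold in sorted(LOCKOUT_DELAYS, reverse=True):
        if failures >= threshold:
            cache.set(f"login-locked:{username}", True, timeout=LOCKOUT_DELAYS[threshold])
            break

def record_success(username):
    cache.delete(f"login-failures:{username}")

def is_locked(username):
    return bool(cache.get(f"login-locked:{username}"))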

And it’s clearly not a way to prevent DDoS; on the contrary, it can be used to saturate the server CPU just by repeatedly sending the same wrong password from many clients.

Any kind of caching would be counter to that goal.

No, caching would have strictly no impact on a leaked database (as long as we don’t cache in the database?).

It would also open up new attacks like the ability to check if anyone in the system is using common passwords by trying them out on the login form and seeing if the request returns instantly

No, because the cache key contains the encoded password, which is unique to the user thanks to the salt, so trying the same password against a different account would be a cache miss.

The only way to get a cache hit is to have the right pair of (password, encoded), which can only happen for the right pair of (username, password), because the encoded value is stored alongside the username in the database and is unique to that user thanks to the salt. In other words: only the password’s owner should be able to get a cache hit (and only after a first cache miss).
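To make the salt point concrete: hashing the same password twice never gives the same encoded value, so a (password, encoded) pair is effectively tied to a single user. A quick check in a Django shell:

from django.contrib.auth.hashers import make_password

# A new random salt is generated on each call, so the encoded strings differ.
# Encoded format: "pbkdf2_sha256$<iterations>$<salt>$<hash>".
first = make_password("correct horse battery staple")
second = make_password("correct horse battery staple")
assert first != second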