Password hashing is deliberately slow, as a security measure to prevent DDoS attacks. See this cheatsheet:
Defenders can slow down offline attacks by selecting hash algorithms that are as resource intensive as possible.
Any kind of caching would be counter to that goal. It would also open up new attacks like the ability to check if anyone in the system is using common passwords by trying them out on the login form and seeing if the request returns instantly.
No, it’s a security measure to prevent brute-forcing password hashes in leaked databases, which is a nice thing and I want to keep it.
It’s not a way to prevent brute-forcing passwords over HTTP; that could be implemented with short account lockouts that scale with the number of failures, like “you tried 3 times in a row, wait 30 s before your next try”, “you failed your password 5 times in a row, wait 5 min before your next try”…
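The escalating-lockout idea above could be sketched roughly like this (a minimal in-memory version; the thresholds, names, and storage are all illustrative, a real deployment would persist this server-side and per account):

```python
import time

# Hypothetical policy: after N consecutive failures, lock the account
# for an escalating delay (failure count -> lock duration in seconds).
LOCKOUT_STEPS = {3: 30, 5: 300}

failures = {}      # username -> consecutive failure count
locked_until = {}  # username -> unix timestamp until which login is refused

def record_failure(username, now=None):
    now = now if now is not None else time.time()
    count = failures.get(username, 0) + 1
    failures[username] = count
    # Apply the longest lock whose failure threshold has been reached.
    delay = max((d for n, d in LOCKOUT_STEPS.items() if count >= n), default=0)
    if delay:
        locked_until[username] = now + delay

def is_locked(username, now=None):
    now = now if now is not None else time.time()
    return locked_until.get(username, 0) > now

def record_success(username):
    # A correct login resets the counter and any pending lock.
    failures.pop(username, None)
    locked_until.pop(username, None)
```

The point being: this kind of throttling defends the online path (the login form) on its own, independently of how slow the hash function is.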
And it’s clearly not a way to prevent DDoS; on the contrary, it can be used to saturate the server CPU just by repeatedly sending the same wrong password from many clients.
> Any kind of caching would be counter to that goal.
No: caching would have strictly no impact on a leaked database in any way (as long as we don’t store the cache in the database itself?).
> It would also open up new attacks like the ability to check if anyone in the system is using common passwords by trying them out on the login form and seeing if the request returns instantly.
No, because the cache key contains the encoded password, which is unique to the user thanks to the salt, so trying the same password against a different account would be a cache miss.
The only way to get a cache hit is to present the right (password, encoded) pair, which can only happen for the right (username, password) pair, because the encoded value is stored alongside the username in the database and is unique to that user thanks to the salt. In other words: only the password owner can get a cache hit (and only after an initial cache miss).
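A minimal sketch of that caching scheme, assuming PBKDF2 as the slow hash (the iteration count, stored format, and in-memory cache are all illustrative): the cache key is a fast hash of the (password, encoded) pair, so a hit is only possible for the exact pair that already verified once, since each encoded value embeds a per-user random salt.

```python
import base64
import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow key derivation

def encode_password(password, salt=None):
    # Stored format (illustrative): base64(salt) + "$" + base64(derived key)
    salt = salt or os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return base64.b64encode(salt).decode() + "$" + base64.b64encode(dk).decode()

# Cache of fast hashes of (password, encoded) pairs that already verified.
_verify_cache = set()

def check_password(password, encoded):
    key = hashlib.sha256((password + "\x00" + encoded).encode()).digest()
    if key in _verify_cache:
        # Cache hit: only the real (password, encoded) pair can get here.
        return True
    salt_b64, dk_b64 = encoded.split("$")
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(),
                             base64.b64decode(salt_b64), ITERATIONS)
    if hmac.compare_digest(dk, base64.b64decode(dk_b64)):
        _verify_cache.add(key)  # cache only successful verifications
        return True
    return False
```

Note that a wrong guess never populates the cache, and the same password checked against another user’s encoded value is a different cache key, so every attacker attempt still pays the full slow-hash cost.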