Is it a bad idea to rely on Django keeping model object instances alive in the QuerySet's `_result_cache` after a first iteration over the QuerySet?
I have a teammate who wrote some code to the effect of:
```python
from foo.models import Foo
from foo.utils import patch_foo_with_name, do_something_with_foo_names

foos = Foo.objects.filter(bar="bla")
name = "Luke"

for foo in foos:
    patch_foo_with_name(foo, name)

do_something_with_foo_names(foos)
```
Note that this example is extremely simplified: the data that is actually patched into each `Foo` object is the result of some really complex logic that cannot be expressed as an annotation on the QuerySet. `do_something_with_foo_names()` will then iterate over `foos` and expects the patched data to be there.
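For concreteness, here is a hypothetical sketch of what the two helpers might look like (the attribute `patched_name` and the bodies are made up for illustration; as noted, the real logic is far more complex):

```python
def patch_foo_with_name(foo, name):
    # Hypothetical: attach derived data to the in-memory model instance.
    foo.patched_name = "{}:{}".format(name, foo.pk)

def do_something_with_foo_names(foos):
    # Iterates over the same QuerySet a second time; it only sees the
    # patched data if the QuerySet hands back the same cached instances
    # that were mutated in the first loop.
    for foo in foos:
        print(foo.patched_name)
```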
Is this dangerous? I feel that the QuerySet's cache is an implementation detail of Django that cannot be relied upon. As far as I can tell, there is no explicit guarantee in the documentation that the cache (`QuerySet._result_cache`) will never drop objects from RAM, although that is how it works in the Django 1.11 source code. A future version of Django might decide to limit the size of the cache and drop objects from it, right?
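The caching behaviour I am relying on can be observed like this (a minimal sketch; it assumes `DEBUG=True` so that `connection.queries` records the executed SQL):

```python
from django.db import connection, reset_queries

from foo.models import Foo

reset_queries()
foos = Foo.objects.filter(bar="bla")

list(foos)  # first iteration: executes the query and fills _result_cache
count = len(connection.queries)

list(foos)  # second iteration: served entirely from _result_cache
assert len(connection.queries) == count  # no additional SQL was issued
```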
I already know that this approach breaks if the QuerySet is paginated or further refined, e.g. `foos = foos.filter(age__gt=3)`, before calling `do_something_with_foo_names` (see the sketch below), but let's ignore that case for the moment.
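Just to illustrate that known-broken case: `filter()` clones the QuerySet, and the clone starts with an empty `_result_cache`, so the refined QuerySet loads fresh instances from the database without the patched attribute (`patched_name` is again the hypothetical attribute from above):

```python
foos = Foo.objects.filter(bar="bla")
for foo in foos:
    foo.patched_name = "Luke"  # patch the cached in-memory instances

refined = foos.filter(age__gt=3)  # clone: starts with an empty _result_cache
for foo in refined:
    # Fresh instances loaded from the database; the patch is gone.
    assert not hasattr(foo, "patched_name")
```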