Updated to Django 5, now getting "<object> matching query does not exist" on my unmanaged model

I’m using Django as an ORM to make tests easier to read in my non-Django project. The schema is created and managed by me, and I write my own migrations with a homebrew migration tool, so my models are all declared as unmanaged. Here’s an example:

create table if not exists bar (
    id bigserial primary key,
    internal_type text
);

create table if not exists foo (
    id bigserial primary key,
    bar bigint references bar(id)
);

class Bar(models.Model):
    internal_type = models.TextField()

    class Meta:
        managed = False
        db_table = 'bar'


class Foo(models.Model):
    bar = models.ForeignKey('Bar', models.CASCADE, db_column='bar')

    class Meta:
        managed = False
        db_table = 'foo'

def test_foo_bar(self):
    from django_app import models

    self.cursor.execute("INSERT INTO bar(internal_type) VALUES ('test') RETURNING id;")
    self.cursor.execute("INSERT INTO foo(bar) VALUES (%s)", (self.cursor.fetchone()[0],))
    self.cursor.connection.commit()

    a = models.Foo.objects.all()
    b = models.Bar.objects.all()

    assert a[0].bar_id == b[0].id # this succeeds
    
    print(a[0].bar) # this raises an ObjectDoesNotExist error

This worked just fine in Django 3.2 and 4.2, but breaks in 5+. It does work if I change a to

...
a = models.Foo.objects.all().select_related('bar')
...

I dug through the Django release notes and didn’t see anything about changes to how related models are loaded. Why is this happening? Is there something I can do to patch over this codebase-wide? I have hundreds of tests, and going through each one to add the appropriate select_related calls would be prohibitively time-consuming.

I did try creating a special model base class that overrides get_queryset() to always select_related on foreign key fields, but that doesn’t seem to work for chains of relations, e.g. foo.bar.other_fk.other_field.

In any case, I’d like to understand what’s going on here a little better.

Some relevant details:

  • Postgres 17
  • Python 3.11
  • Django 5 (also tested on 5.2 – works fine < 5.0)
  • NOT using Django’s test framework; these are standard-issue unittest.TestCase classes (with some magic to load Django in setUpClass)
  • Using psycopg2 with my unit tests, and I’ve uninstalled psycopg so that Django is also using psycopg2
  • This does work as-is on Django < 5. I could pin our version at 4.2 for now, but I’d rather get all the way up to date; I’m migrating all the way from 3.2.

Edit: It’s probably notable that my traceback includes this:

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/dist-packages/django/db/models/fields/related_descriptors.py", line 235, in __get__
    rel_obj = self.field.get_cached_value(instance)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/django/db/models/fields/mixins.py", line 15, in get_cached_value
    return instance._state.fields_cache[cache_name]
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
KeyError: 'bar'

So it seems like Django thinks this object should be cached, but isn’t?
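To make the traceback concrete: that KeyError is the normal cache-miss path, not the error itself. Here is a toy reproduction of the lookup logic (simplified stand-ins for Django’s internals, not Django source) — select_related() is what fills fields_cache at query time, and on a miss Django falls through to a fresh lazy query; it’s that second query coming back empty that raises ObjectDoesNotExist:

```python
# Toy reproduction of the descriptor cache lookup in the traceback.
# _State and get_related are simplified stand-ins, not Django code.

class _State:
    def __init__(self):
        self.fields_cache = {}  # populated by select_related()


class ObjectDoesNotExist(Exception):
    pass


def get_related(instance, name, fetch):
    """Mimics ForwardManyToOneDescriptor.__get__, much simplified."""
    try:
        return instance._state.fields_cache[name]  # cache hit
    except KeyError:
        obj = fetch()  # cache miss -> lazy query on the ORM connection
        if obj is None:
            raise ObjectDoesNotExist(f"{name} matching query does not exist")
        instance._state.fields_cache[name] = obj  # cache for next access
        return obj


class Foo:
    def __init__(self):
        self._state = _State()


foo = Foo()
# Without select_related, the first access falls through to fetch():
bar = get_related(foo, "bar", fetch=lambda: {"id": 1})

# A lazy fetch that finds no row is what produces your error:
empty = Foo()
try:
    get_related(empty, "bar", fetch=lambda: None)
except ObjectDoesNotExist as exc:
    print(exc)  # bar matching query does not exist
```

So the KeyError just says the relation wasn’t pre-fetched; the real question is why the follow-up query on Django’s own connection sees no matching row.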

I had the exact same problem. In our Postgres database the id column was bigint, but Django’s configuration for auto-created primary keys was:

DEFAULT_AUTO_FIELD = 'django.db.models.AutoField'

So just add the field explicitly to the model:

class YourModel(models.Model):
    id = models.BigAutoField(primary_key=True)

This was the cause in our case; it may not apply in general.
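Alternatively, instead of declaring the field on every model, you can change the project-wide default in settings so every auto-created primary key matches a bigserial column (this is the standard Django setting, shown as a settings-file fragment):

```python
# settings.py
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
```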