Basically, if an object already exists in the list we’re trying to move it to, an IntegrityError is thrown. I’d like to keep the efficient “bulk update” (not have to loop through one by one), ignore those conflicts/errors and skip the offending rows, but still commit the updates that don’t violate the constraint.
I think we can do this using raw SQL via something like ON CONFLICT DO NOTHING (Postgres), but I'm wondering if this can be done through the ORM while, again, still being efficient.
After actually reading some of the Postgres docs, and not just relying on hallucinations from Gemini, it turns out Postgres does not support UPDATE ... ON CONFLICT ...; that’s only supported for INSERT. The recommendation seems to be to filter to the rows that don’t already exist via a correlated subquery or a pre-query.
Not sure if there is a way via the ORM to basically mimic that or not.
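For concreteness, here’s a rough sketch of what I’m picturing, using `~Exists` with a correlated subquery so the conflicting rows are skipped. The `Item` model, the `target_list` / `items_to_move` variables, and the unique constraint on `(list, name)` are just placeholders standing in for my actual models:

```python
from django.db.models import Exists, OuterRef

# Hypothetical setup: Item has a ForeignKey "list" and a CharField "name",
# with a unique constraint on (list, name). target_list and items_to_move
# are assumed to already exist in the calling code.

# Correlated subquery: is there already an item with the same name in the
# target list?
conflicting = Item.objects.filter(
    list=target_list,
    name=OuterRef("name"),
)

# Only update the rows that would NOT violate the constraint.
Item.objects.filter(
    pk__in=[item.pk for item in items_to_move],
).filter(
    ~Exists(conflicting),
).update(list=target_list)
```

If I understand the ORM right, that `.filter(~Exists(...)).update(...)` should compile down to a single `UPDATE ... WHERE ... AND NOT EXISTS (...)` statement, which is the pattern the Postgres docs seem to point at.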
As pointed out in the SO post, this won’t prevent you from running into integrity errors if a concurrent transaction inserts rows, as the queries won’t be serialized. In other words, a transaction winning the commit race might insert a conflicting row in between the moment the ~Exists subquery is evaluated and the moment the update proceeds.
The only thorough solution here would be to run the above in a transaction and retry on any IntegrityError against your unique constraint. All the NOT EXISTS correlated subquery does is make it less likely that a conflict occurs.
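A minimal sketch of that retry loop, assuming the filtered update above is wrapped in a helper; the helper name and the retry count are arbitrary:

```python
from django.db import IntegrityError, transaction

# Retry the whole move in a transaction; the ~Exists filter only narrows
# the race window, so a concurrent insert can still trigger a conflict.
for attempt in range(3):  # arbitrary bound on retries
    try:
        with transaction.atomic():
            # hypothetical helper performing the filtered update shown earlier
            move_items_to(target_list, items_to_move)
        break
    except IntegrityError:
        if attempt == 2:
            raise  # give up after the final attempt
```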
Amazing, thank you @charettes, works like a charm. It seems to have included a BEGIN; without the select_for_update(), but I added it anyway. Appreciate it!