Speeding up Postgres bulk_create by using unnest

Yes, psycopg3 brings much better copy support, which is ~2 times faster than my current, still psycopg2-compatible approach - but only when fully relying on the binary transport and the auto adaptation for values (that's what psycopg calls the Python-to-Postgres type conversions). The latter is tricky: to really benefit from it, late value adjustments in methods like prep_save should be avoided, or the speed benefit goes away.
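For illustration, a minimal sketch of that binary COPY path in psycopg3 (the table, columns and connection string are made up):

```python
import psycopg

with psycopg.connect("dbname=test") as conn:
    with conn.cursor() as cur:
        with cur.copy("COPY items (id, name) FROM STDIN (FORMAT BINARY)") as copy:
            # Declaring the target types up front lets psycopg pick the
            # binary dumpers once instead of guessing per value.
            copy.set_types(["int4", "text"])
            for row in [(1, "foo"), (2, "bar")]:
                copy.write_row(row)  # auto adaptation: Python -> Postgres
```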

For copy_update I had started to port the custom Python-to-Postgres encoding to psycopg3's auto adaptation, but got kinda stuck on the question of whether things should be abstracted closer to Django internals or not (e.g. make use of prep_save and such to some degree).
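A hedged sketch of what such a port could look like on the psycopg3 side - a custom dumper registered with the adaptation layer (Point and PointDumper are purely illustrative, not part of copy_update):

```python
import psycopg
from psycopg.adapt import Dumper

class Point:
    # illustrative stand-in for a custom Python value
    def __init__(self, x, y):
        self.x, self.y = x, y

class PointDumper(Dumper):
    # text dumper targeting the Postgres "point" type
    oid = psycopg.postgres.types["point"].oid

    def dump(self, obj):
        return f"({obj.x},{obj.y})".encode()

with psycopg.connect("dbname=test") as conn:
    # from here on, Point instances passed as query parameters
    # get encoded by PointDumper automatically
    conn.adapters.register_dumper(Point, PointDumper)
```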

Last but not least, I found plain model instantiation to be quite expensive itself (that's the reason why I came up with that from_dict=True switch) and wonder if this could be made faster somehow. I have not investigated further yet - no clue if it is mainly CPU bound (lots of ctor logic to get through?), RAM/allocation pressure (RAM usage is a lot higher), or both.
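A rough way to separate those two suspicions - a micro-benchmark sketch assuming a configured Django project and a placeholder model MyModel with id/name fields:

```python
import timeit
import tracemalloc

from myapp.models import MyModel  # placeholder model with id/name fields

rows = [(i, f"name-{i}") for i in range(10_000)]

def via_model():
    # full Model.__init__: per-field setattr, pre/post_init signals, _state
    return [MyModel(id=i, name=n) for i, n in rows]

def via_dict():
    # bare data holders as a baseline
    return [{"id": i, "name": n} for i, n in rows]

for fn in (via_dict, via_model):
    tracemalloc.start()
    seconds = timeit.timeit(fn, number=10)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"{fn.__name__}: {seconds:.2f}s, peak {peak / 2**20:.1f} MiB")
```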

Edit: As for MTI, I am pretty sure I have missed several important details there. While MTI brings so much convenience to modelling, I try to avoid it like the plague for anything that needs performance. Imho better bulk support for those would be at least a partial relief. The fact that MySQL is not able to propagate newly created pks in a reliable way is - idk, planned dumbness of the DBMS? I mean c'mon MySQL folks, even SQLite is miles ahead in this regard, and MariaDB also got it right by now…
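For context, on Postgres that pk propagation is a single round trip thanks to RETURNING (which SQLite 3.35+ and MariaDB 10.5+ also support, while MySQL does not) - a sketch with an illustrative table, tying back to the unnest approach from the thread title:

```python
import psycopg

with psycopg.connect("dbname=test") as conn:
    with conn.cursor() as cur:
        # "items" and its columns are placeholders; the list parameter
        # binds as a Postgres array and unnest expands it into rows
        cur.execute(
            "INSERT INTO items (name) SELECT unnest(%s::text[]) RETURNING id",
            (["foo", "bar"],),
        )
        new_pks = [row[0] for row in cur.fetchall()]
```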