default_bounds argument in postgres range fields

The non-discrete range fields (DecimalRangeField, DateTimeRangeField) support changing the bounds that are automatically applied to tuple or list values on save via the default_bounds argument. The discrete ones (IntegerRangeField, DateRangeField) do not support that, although Postgres itself is perfectly happy with explicit bounds on discrete ranges:

```sql
SELECT '[3,7]'::int4range;
 int4range
-----------
 [3,8)
```
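To illustrate what default_bounds does on the Django side, here is a minimal pure-Python sketch of the tuple-wrapping behavior. The function name and triple representation are illustrative only, not Django's actual internals; the point is just that a bare `(lower, upper)` value gets the configured bounds attached instead of the Postgres default `[)`.

```python
# Hypothetical sketch (not Django internals): a bare (lower, upper)
# tuple/list is wrapped using the field's configured default_bounds
# instead of the implicit '[)'.

def apply_default_bounds(value, default_bounds="[)"):
    """Wrap a (lower, upper) pair into a (lower, upper, bounds) triple."""
    if isinstance(value, (tuple, list)) and len(value) == 2:
        return (value[0], value[1], default_bounds)
    return value  # already a range object or None: pass through

# A field configured with default_bounds="[]" would save (1, 5)
# as the inclusive-inclusive range [1,5]:
print(apply_default_bounds((1, 5), "[]"))  # (1, 5, '[]')
print(apply_default_bounds([2, 9]))        # (2, 9, '[)')
```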

The big difference for the discrete ones is that Postgres does not return those range definitions 1:1; instead it internally increases an inclusive upper bound by one and resets the bounds to the global default of `[)` (and likewise for an exclusive lower bound). But that is just a representation detail: the ranges still have the same meaning when evaluated by Postgres.
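That canonicalization can be sketched in a few lines of Python. This is an illustrative model of what Postgres does for discrete range types (with `step=1` mimicking `int4range`), not code from either project:

```python
# Sketch of Postgres' canonical form for discrete ranges: every range
# is rewritten to '[)' by shifting an exclusive lower bound and an
# inclusive upper bound up by one step. Illustrative only.

def canonicalize(lower, upper, bounds, step=1):
    """Return the '[)' canonical form of a discrete range."""
    if bounds[0] == "(":   # exclusive lower -> shift up, becomes inclusive
        lower += step
    if bounds[1] == "]":   # inclusive upper -> shift up, becomes exclusive
        upper += step
    return lower, upper, "[)"

print(canonicalize(3, 7, "[]"))  # (3, 8, '[)') -- matches SELECT '[3,7]'::int4range
print(canonicalize(3, 7, "(]"))  # (4, 8, '[)')
print(canonicalize(3, 7, "[)"))  # (3, 7, '[)') -- already canonical
```

All four bounds variants of the same discrete range collapse to the same canonical form, which is why Postgres treats them as equal.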

My question is - was there a specific reason for not supporting default_bounds for discrete range fields?

To me it seems that defining the default bounds would be a great convenience for discrete range fields as well, but maybe I have overlooked some crucial detail here.

I had a look at the history, and it seems the ticket (#27147, "Add support for defining bounds in postgres range fields") proposed adding the feature to all range fields, but on the PR ("Fixed #27147 -- Allowed specifying bounds of tuple inputs for non-discrete range fields" by gmcrocetti, django/django#14538), the author wrote:

> We can add a default_bounds as an argument to RangeField class, and thus all subclasses. The problem of this approach is that we may add some clutter for discrete ranges. For example, IMO it makes no sense to set a default_bounds value for an IntegerRangeField as it'll always be converted to [), cluttering user's experience. Please, let me know if I'm missing something.

…and folks went with this.

I don’t have a strong opinion here myself. I can see how it would be useful to set the bounds for how you define ranges, but given that Postgres will normalize the definition to `[)`, maybe that could lead to inconsistencies, at least in a UI…

@adamchainz Thanks for the pointers and sorry for sending you into the ticket history, for some reason I wasn’t able to find the crucial one myself.

Yeah, I understand the UI hassles and have no good idea either for preventing further user confusion. The initially proposed mimicking of the Postgres normalization would definitely create confusion at the UI stage, if bounds and values magically flip. (Edit: that could be avoided by always undoing the normalization in the UI, but is that any good? It would further complicate the code.)

(I stumbled over this for a totally different reason: value adaptation for Postgres on create/update. The values from `get_db_prep_save` for integer range fields are slightly off for COPY with binary format, so I have to reconstruct them…)