DRF: premature business logic validations

We are using DRF and got a new requirement: bundle all input validation errors and all possible business logic validation errors into one single response when the client submits a form.
That conflicts with the default DRF workflow, which returns a response as soon as the input has syntax errors and skips further validations (fail fast).
So our idea is to write a custom_exception_handler that catches a raised ValidationError, runs all (possible) business validations, and enriches the response. But this handler will grow with every endpoint that needs business logic validation.
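A minimal, framework-free sketch of the "run everything, bundle all errors" idea such a handler would implement (the check functions and field names are invented for illustration; a real handler would wrap DRF's default `exception_handler` and merge this dict into the response data):

```python
# Framework-free sketch of the enrichment step a custom exception handler
# could run: execute every business check instead of stopping at the first
# failure, and collect the results into one error dict.
def validate_all(data, checks):
    """Run every check; return a dict of all failures instead of failing fast."""
    errors = {}
    for name, check in checks.items():
        message = check(data)  # each check returns an error message or None
        if message:
            errors[name] = message
    return errors

# Hypothetical business checks for a product form.
checks = {
    "price": lambda d: "must be positive" if d.get("price", 0) <= 0 else None,
    "sku": lambda d: "unknown SKU" if d.get("sku") not in {"A1", "B2"} else None,
}

validate_all({"price": -3, "sku": "C9"}, checks)
# -> {"price": "must be positive", "sku": "unknown SKU"}
```

The upside is that all failures reach the client in one round trip; the downside, as noted above, is that the registry of checks keeps growing inside one central handler.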

Are there further disadvantages besides having so much logic in one single handler? Or is there a better way?

Thanks for answering.

If you’ve got a fundamental syntax error in the submitted JSON, how can you validate anything else?

If you’re missing a comma between elements, or a terminating quote for a string, or a close bracket for a list, etc, in what sense can you appropriately interpret anything else?

You’re just as likely to trigger an erroneous error as an appropriate one.

It doesn’t seem to me that doing this provides any real value.

(And, as a general issue of security, providing any sort of feedback for truly invalid data creates additional surface area as the target for an attack. Different types of invalid data can be fired at the endpoint in attempts to acquire information about internal data structures, leading to targeted attacks on those structures. In general, it’s why Django produces vastly different responses for 500 errors in development and production.)

Yes, possible validations. There are several business logic validations running, and many of them need only a subset of the input values.
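Since many checks only need a subset of the fields, one pattern is to have each check declare which fields it requires and skip checks whose inputs are absent. A sketch under that assumption (the `needs` decorator, check functions, and field names are all hypothetical):

```python
# Illustrative sketch: each business check declares the fields it needs,
# so checks whose inputs are missing are simply skipped.
def needs(*fields):
    def decorator(check):
        check.required_fields = fields
        return check
    return decorator

@needs("start_date", "end_date")
def check_date_order(data):
    if data["start_date"] > data["end_date"]:
        return {"end_date": "must not be before start_date"}

@needs("discount")
def check_discount(data):
    if data["discount"] > 50:
        return {"discount": "exceeds the allowed maximum"}

def run_business_checks(data, checks):
    """Run every check whose required fields are present; bundle all errors."""
    errors = {}
    for check in checks:
        if all(field in data for field in check.required_fields):
            result = check(data)
            if result:
                errors.update(result)
    return errors
```

For example, `run_business_checks({"discount": 80}, [check_date_order, check_discount])` skips the date check (its fields are missing) and reports only the discount error.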

Product management doesn't want the user to have to submit the form multiple times. They want to provide the user with as much validation info as possible.

And also yes, security. But it is an internal enterprise application only provided to colleagues. (Until we build a mobile version, I guess.) Good point.

Would you still try to reject this requirement?

Except, in the case of syntactically invalid JSON, you don’t know what you’re validating. For example, if you’re missing a close-brace ( } ) that is supposed to terminate a dict, you have no way to determine whether the following key should be validated on the parent object instead of the current dict. Any conclusions you come to based on that information are not valid.
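A concrete illustration of that ambiguity, using Python’s stdlib `json` module (the payload is invented): with the closing brace missing, the parser cannot tell whether `"name"` was meant to live inside `"address"` or on the parent object, so it can only reject the document.

```python
import json

# A dict missing its closing brace: did the author intend "name" to be a
# key of "address", or of the parent object? The parser cannot know.
broken = '{"address": {"city": "Berlin", "name": "Alice"}'

try:
    json.loads(broken)
except json.JSONDecodeError as err:
    # Parsing stops here; no field-level validation is possible.
    print("invalid JSON:", err.msg)
```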

There are very good reasons why syntactically invalid JSON is, and should be, summarily rejected without any further processing.

If they’re manually creating JSON data for submission to a site, you’ve got a different problem…

Depending upon how you categorize the data, somewhere between 20% and 50% of all security breaches are the result of internal actions. To say that an application is safe because it’s available only to employees ignores that fact. (Of course, if the data being leaked isn’t sensitive or particularly valuable, then it may not matter - but only an appropriate risk assessment can determine that.)

As a general principle, absolutely. Without qualification or reservation. As highlighted above, it potentially provides as much misleading information as information of value, which can only create more confusion once the original problem is fixed and an error then shows up in an area previously thought safe because no problem was reported at that location in the data.

Thank you. I’ll discuss that.