I’m working on a Django project using Django Rest Framework (DRF) and PostgreSQL, with a Scan model that tracks the different phases of a label scan. I’ve set a unique constraint on the combination of the label_id and phase fields to prevent duplicate scans for the same label in the same phase.

Here’s my Scan model:
from django.db import models

class Scan(models.Model):
    user = models.ForeignKey("accounts.User", on_delete=models.CASCADE)
    label = models.ForeignKey("labels.Label", on_delete=models.CASCADE)
    phase = models.IntegerField(
        choices=ScanPhaseChoice.choices,
        default=ScanPhaseChoice.unknown.value,
    )
    distribution = models.ForeignKey("factories.Distribution", null=True, blank=True, on_delete=models.CASCADE)
    store = models.ForeignKey("factories.Store", null=True, blank=True, on_delete=models.CASCADE)
    created_date = models.DateTimeField(auto_now_add=True)
    updated_date = models.DateTimeField(auto_now=True)

    class Meta:
        unique_together = ["label", "phase"]
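For reference, ScanPhaseChoice is an IntegerChoices enum along these lines (unknown is the real member used as the default above; the other member names are placeholders I’m using for this post):

from django.db import models

class ScanPhaseChoice(models.IntegerChoices):
    unknown = 0, "Unknown"
    production = 1, "Production"      # placeholder name; 1 is the phase in the error below
    distribution = 2, "Distribution"  # placeholder name
    store = 3, "Store"                # placeholder name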
When I try to create a new scan, I get this error:
django.db.utils.IntegrityError: duplicate key value violates unique constraint "labels_scan_label_id_phase_81ec6788_uniq"
DETAIL: Key (label_id, phase)=(413, 1) already exists.
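For context, the failing create boils down to something like this inside a DRF view (simplified; the real values come from the serializer, and the literals here just mirror the error detail):

# Hypothetical call site, simplified from the actual DRF view.
scan = Scan.objects.create(
    user=request.user,
    label=label,  # the Label with pk=413 in the failing request
    phase=1,      # the phase value reported in the error detail
)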
I’ve verified that the combination of label_id and phase already exists in the database, but I’m not sure why this error is raised when I call full_clean() in the save() method to validate the data.
What I’ve Tried

- Ensuring the constraint is defined in both the database and the model’s Meta class.
- Adding validate_unique in the clean and save methods, but it didn’t prevent the error.
- Using full_clean() in the save method, expecting it to check for unique constraints (roughly as sketched below).
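Concretely, the clean()/save() overrides from the last two points look roughly like this (simplified from my actual code):

from django.db import models

class Scan(models.Model):
    # ... fields and Meta exactly as in the model above ...

    def clean(self):
        # Explicit uniqueness re-check, per the second bullet;
        # raises ValidationError on a duplicate (label, phase) pair.
        self.validate_unique()

    def save(self, *args, **kwargs):
        # full_clean() runs clean_fields(), clean(), and validate_unique()
        # before the INSERT/UPDATE ever reaches PostgreSQL.
        self.full_clean()
        super().save(*args, **kwargs)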
Questions

- Why doesn’t full_clean() catch the unique constraint violation in the save() method? I thought full_clean() would validate uniqueness, but that doesn’t seem to be happening here.
- Is there a best practice for handling unique constraint violations in Django when using DRF and PostgreSQL, for cases where a combination may already exist?
- Should I check for duplicates manually in the save() method (see the sketch after this list), or is there a more efficient way to prevent this error?
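By “checking manually” in the last question, I mean something along these lines (a sketch of the idea, not code I’m committed to — and I’m aware the pre-check is racy without a lock or transaction, which is part of why I’m asking):

from django.core.exceptions import ValidationError
from django.db import models

class Scan(models.Model):
    # ... fields and Meta as in the model above ...

    def save(self, *args, **kwargs):
        # Manual pre-check: is there already a row with this (label, phase)
        # pair? Exclude self so saving an existing scan still works.
        duplicate = (
            Scan.objects.filter(label=self.label, phase=self.phase)
            .exclude(pk=self.pk)
            .exists()
        )
        if duplicate:
            raise ValidationError("A scan for this label and phase already exists.")
        super().save(*args, **kwargs)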
Any insights on why this is happening and suggestions for how to handle it would be greatly appreciated. Thank you!