This is going to be less of a technical question and more of a software engineering / Django-idiomatically-specific question. I’d like some feedback as to how I organized the code pertaining to a feature of my Django app, as I’m starting to second guess the way I’ve designed it.
I’m making a school application and the part that will be discussed here is how answers to questions (from now on called exercises) are graded.
The what
Let’s summarize the problem at hand, then I’ll show the way I implemented it:
- exercises can be of different types, from multiple choice, to open answer, to even coding exercises. Each type of exercise has a different set of criteria for grading a submission
- quizzes can be created in order to present students with a series of exercises. A quiz isn’t directly related to exercises: it uses a mechanism where a rule model determines how to pick the exercises to assign to each student; this allows different students to be assigned different exercises in the same quiz.
- each exercise carries a maximum attainable grade: scoring that grade means the exercise is 100% correctly answered; however, the actual score the student gets for that exercise is determined by a property of the rule that assigned it. If an exercise has a max score of 4, the rule that assigned it carries a weight of 3, and a student scores 2, their grade for that exercise will be 2/4*3 = 1.5 (see the snippet right after this list). This allows teachers to define exercises independently of quizzes and re-use them with different weights.
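To make the weighting arithmetic concrete, here it is as code (just an illustrative snippet, not something from the app):

```python
from decimal import Decimal

def weighted_score(raw_score, max_score, rule_weight):
    # scale the raw exercise score by the weight of the rule that assigned it
    return Decimal(raw_score) / Decimal(max_score) * Decimal(rule_weight)

print(weighted_score(2, 4, 3))  # 1.5
```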
The how
When a student participates in a quiz, a `Participation` is created. For each rule defined in the quiz by the teacher, a `ParticipationSlot` is created: this has a foreign key to the assigned exercise and is the only interface through which the student can interact with it. This model also encapsulates the fields necessary for submitting an answer (for example, an m2m of selected choices, or a text field for open answer exercises). It also has a `score` property that computes the obtained score/grade. This is what I’d like feedback on.
Here’s the `ParticipationSlot` model:
class EventParticipationSlot(models.Model):
    participation = models.ForeignKey(
        EventParticipation,
        related_name="slots",
        on_delete=models.CASCADE,
    )
    exercise = models.ForeignKey(
        Exercise,
        related_name="in_slots",
        on_delete=models.CASCADE,
    )
    populating_rule = models.ForeignKey(  # <-- this is where the slot's weight comes from
        Rule,
        on_delete=models.CASCADE,  # assuming CASCADE, like the other fks
    )
    selected_choices = models.ManyToManyField(ExerciseChoice)
    answer_text = models.TextField(blank=True)
    _score = models.DecimalField(
        max_digits=5,
        decimal_places=2,
        null=True,
        blank=True,
    )

    @property
    def score(self):
        if self._score is None:
            return get_assessor_class()(self).assess()
        return self._score

    @score.setter
    def score(self, value):
        self._score = value
As you can see, whatever the automatically computed grade is, it can be manually overridden by assigning to the property. This is vital for exercise types like open answer, where a grade cannot really be assigned automatically.
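For example, a teacher grading an open answer submission would do something like this (illustrative; the pk is arbitrary):

```python
from decimal import Decimal

slot = EventParticipationSlot.objects.get(pk=42)  # arbitrary pk for illustration
slot.score = Decimal("3.50")  # goes through the setter and writes _score
slot.save()
```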
Now, in a separate module called `logic`, here’s the definition of `get_assessor_class`:
def get_assessor_class():
    return SubmissionAssessor
and here’s the class:
from decimal import Decimal


class SubmissionAssessor:
    def __init__(self, participation_slot):
        self.participation_slot = participation_slot
        self.slot_weight = self.participation_slot.populating_rule.weight

    def get_multiple_choice_submission_correctness(self, slot):
        selected_choices = slot.selected_choices.all()
        return sum([c.correctness for c in selected_choices])

    def get_programming_submission_correctness(self, slot):
        passed_testcases = len([t for t in slot.execution_results["tests"] if t["passed"]])
        return passed_testcases

    def get_manual_submission_correctness(self, slot):
        return None

    def get_submission_correctness(self, slot):
        from courses.models import Exercise

        exercise_type = slot.exercise.exercise_type
        if exercise_type in [Exercise.OPEN_ANSWER, Exercise.ATTACHMENT]:
            return self.get_manual_submission_correctness(slot)
        if exercise_type in [
            Exercise.MULTIPLE_CHOICE_SINGLE_POSSIBLE,
            Exercise.MULTIPLE_CHOICE_MULTIPLE_POSSIBLE,
        ]:
            return self.get_multiple_choice_submission_correctness(slot)
        if exercise_type in [Exercise.JS, Exercise.C]:
            return self.get_programming_submission_correctness(slot)

    def assess(self):
        exercise_max_score = self.participation_slot.exercise.get_max_score()
        if exercise_max_score is None or exercise_max_score == 0:
            return 0
        submission_correctness = self.get_submission_correctness(self.participation_slot)
        if submission_correctness is None:
            return None
        return (
            Decimal(submission_correctness)
            / Decimal(exercise_max_score)
            * Decimal(self.slot_weight or 0)
        )
As you can see, accessing the `score` property on a `ParticipationSlot` instantiates a `SubmissionAssessor` with the slot and calls `assess`, which in turn calls a specialized assessment method depending on the type of the exercise in the slot. The reason I have a getter function instead of instantiating the class directly is to allow for future extensions where I might want different behaviors depending on the type of quiz.
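For instance, I could later imagine something along these lines (purely hypothetical; the `event_type` attribute, the `"exam"` value, and `ExamSubmissionAssessor` are made up for illustration, and the function would need to start taking the slot as an argument):

```python
class ExamSubmissionAssessor(SubmissionAssessor):
    # hypothetical subclass with stricter grading behavior
    ...


def get_assessor_class(participation_slot):
    # hypothetical: pick the assessor based on the kind of event the slot belongs to
    if participation_slot.participation.event.event_type == "exam":
        return ExamSubmissionAssessor
    return SubmissionAssessor
```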
A “correctness” value is computed for the submission, then it’s divided by the maximum attainable score of the exercise (more on this in a second), and ultimately scaled by the weight of the slot/rule.
(As an aside, if you’re wondering why I’m passing `slot` to all of the assessment methods instead of using `self.participation_slot`, it’s because slots are actually recursive, although that’s omitted here, and to grade some exercises you have to recursively evaluate their sub-slots.)
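For completeness, the recursive case looks roughly like this (simplified and hypothetical; `sub_slots` stands in for the actual related name, and composite exercises aren’t shown elsewhere in this post):

```python
def get_composite_submission_correctness(self, slot):
    # hypothetical sketch: a composite exercise's correctness is the sum of the
    # correctness of its sub-slots, each graded according to its own exercise type
    return sum(
        self.get_submission_correctness(sub_slot) or 0
        for sub_slot in slot.sub_slots.all()
    )
```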
The last bit is how the maximum possible grade for an exercise is determined. This is a method on the `Exercise` model:
def get_max_score(self):
    if self.exercise_type in [Exercise.OPEN_ANSWER, Exercise.ATTACHMENT]:
        return None
    if self.exercise_type in [Exercise.JS, Exercise.C]:
        return self.testcases.count()
    if self.exercise_type == Exercise.MULTIPLE_CHOICE_SINGLE_POSSIBLE:
        max_score = (self.choices.all().aggregate(Max("correctness")))[
            "correctness__max"
        ]
        return max_score
    if self.exercise_type == Exercise.MULTIPLE_CHOICE_MULTIPLE_POSSIBLE:
        correct_choices = self.choices.filter(correctness__gt=0)
        return sum([c.correctness for c in correct_choices])
So, as you can see, we go from accessing the `score` property, which lives inside a model, to instantiating a `SubmissionAssessor`, which lives in a separate module altogether, to once again tying back into the models by calling `get_max_score` on the related exercise.
I understand that, as per the active record and fat model pattern, most business logic code should live inside the models.
Do you have any feedback as to this architecture? How could it be made clearer and tidier?
I’m thinking the `get_submission_correctness` method in the assessor class might become a method on `ParticipationSlot`, and at that point the `score` property would just involve calling that method, dividing by the max exercise score, and scaling by the slot weight. But then I’d also have to move all the different assessment methods inside the model, possibly bloating it up too much.
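Concretely, I picture that refactor looking something like this (a rough sketch, not code I’ve actually written):

```python
class EventParticipationSlot(models.Model):
    # ...same fields as above...

    @property
    def score(self):
        if self._score is not None:
            return self._score
        max_score = self.exercise.get_max_score()
        if not max_score:
            return 0
        correctness = self.get_submission_correctness()
        if correctness is None:
            return None
        return (
            Decimal(correctness)
            / Decimal(max_score)
            * Decimal(self.populating_rule.weight or 0)
        )

    def get_submission_correctness(self):
        # would dispatch on self.exercise.exercise_type, absorbing all the
        # per-type methods that currently live in SubmissionAssessor
        ...
```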
One more thing I don’t particularly like, although I can’t really think of a different way, is having all this logic “repeated” for every type of exercise: how to evaluate a submission, how to compute the max score depending on the type, etc.