Abstracting repeated unit tests

I’ve been focusing on unit testing in my last project, and I quickly ran into a lot of duplicated code. Thinking about how to reduce the size of my test files, I had an idea: what if I could create a set of predefined tests that every test case should have?

As a minimal example, let’s say that for every model in my apps I want to test that I’ve correctly defined the string representation, __str__. Up until now I’ve been defining a test for that in every test case for every model, like so

def test_str_representation(self):
    self.assertEqual(str(self.book), "Dune")

but my thought was: what if I created a base class with a generic “test string representation” test, and then, every time I want to write a model test case, I just subclass it, pass in some data (like what the string representation should return for that model), and that’s it?

I’ve been experimenting a bit with the idea, and have arrived at a first solution by creating “mixin” test classes that implement generic tests, and then subclassing my actual test cases from those (see the sketch below). However, while fun, I’m left to wonder if this might be a bad practice, and whether there’s an established, better way to accomplish what I’m looking for.
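For illustration, here’s a minimal sketch of what I mean, reusing the Book example from above (the model, field, and import paths are just stand-ins):

from django.test import TestCase

from myapp.models import Book  # hypothetical model from the example above


class StrRepresentationTestMixin:
    # Generic test: concrete test cases set `instance` and
    # `expected_str` in their setUp.
    def test_str_representation(self):
        self.assertEqual(str(self.instance), self.expected_str)


class BookTests(StrRepresentationTestMixin, TestCase):
    def setUp(self):
        self.instance = Book.objects.create(title="Dune")
        self.expected_str = "Dune"

Keeping the mixin off TestCase means the test runner won’t try to collect and run test_str_representation on the mixin itself, only on the concrete subclasses.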

So I guess I have two questions:

  1. Is it a bad practice to create generic tests instead of writing all of them by hand? And if not,
  2. What is a good way to create/abstract generic tests for specific test cases?

There’s a line that comes with experience. On one end, tests that are too “DRY” can be a bad thing. If your tests have complicated setups, logic, and assertions, not only can that be confusing, but you may lose trust in the tests. There’s nothing worse than a bug in test code that means you’re not testing what you think you are. On the other hand, unittest subtests (and pytest parametrization) are great abstractions to use.

The biggest tip I can give you is: don’t fall into the trap of refactoring away incidental duplication.

Your specific example about calling str is hard to comment on. I don’t think I would test it directly; I’d rely on my coverage tool to ensure that it was being run during other tests. Conceptually, though, I don’t think there’s anything wrong with writing a subtest that iterates over a list of models and calls a method. The list of models could even be generated from the Django app registry. The challenge is: is there an actual pattern? How do you initialize the models and know what value to assert against? If there is an obvious pattern, use parametrization. Otherwise it’s just incidental duplication.
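As a rough sketch of that registry-driven shape: it only asserts that __str__ was overridden at all, precisely because knowing how to initialize each model and what value to assert against is the hard part.

from django.apps import apps
from django.db import models
from django.test import SimpleTestCase


class StrOverrideTests(SimpleTestCase):
    def test_models_override_str(self):
        # apps.get_models() returns every registered model, including
        # third-party apps; filter on app label if that's too broad.
        for model in apps.get_models():
            with self.subTest(model=model.__name__):
                # Only checks that __str__ was overridden; asserting
                # actual output would need per-model instance data.
                self.assertIsNot(model.__str__, models.Model.__str__)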

Thanks for the suggestions! I think I’ll look a bit into parametrization, although I’m not too sure it’s what I’m looking for, or at least how to use it to fix my situation.

If it’s of any help, I have more specific examples of what I was unit testing. My project is mostly an API backend built with DRF, and the most important unit tests are the ones testing the ViewSets. For now I’ve been creating a different APITestCase for each viewset/model, so I have several test cases that all contain tests such as test_create_endpoint, which are pretty much the exact same code for all of them except for which model or URL I am using. They look something like this:

    def test_community_process_list_endpoint(self):
        # Make a request to the endpoint
        response = self.client.get(self.list_url)
        self.assertEqual(response.status_code, status.HTTP_200_OK)

        # Assert that the correct viewset is used
        resolver = resolve(self.list_url)
        self.assertEqual(resolver.func.__name__, self.view_name)

        # Assert that the correct data was retrieved
        self.assertEqual(response.data[0]["name"], self.community_process.name)
        # ... more assertions

Every time I create a new ViewSet for a new model that I expose through the API, I end up having to copy-paste 200+ lines of code and simply change a few variables here and there. That’s what got me thinking about whether there’d be a better way to organize all of this.

That sounds like exactly what parametrize or subTest can help with. Here’s an example of pytest code for testing the happy path of a list viewset (the factories and the api_client fixture are hypothetical stand-ins for your project’s own):

import pytest
from django.urls import reverse

# Hypothetical imports: DogFactory & co. stand in for factory_boy
# factories, and api_client is assumed to be a project fixture
# returning rest_framework.test.APIClient.
from myapp.tests.factories import BirdFactory, CatFactory, DogFactory


@pytest.mark.django_db
@pytest.mark.parametrize(
    "url_name,Factory",
    [
        ("dog-list", DogFactory),
        ("cat-list", CatFactory),
        ("bird-list", BirdFactory),
    ],
)
def test_list(url_name, Factory, api_client):
    Factory()  # create one object so the list endpoint has data
    url = reverse(url_name)
    response = api_client.get(url)
    assert response.status_code == 200

An equivalent could be done with subTest.
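A sketch of that, using DRF’s APITestCase (which provides an APIClient as self.client) and the same hypothetical factories:

from django.urls import reverse
from rest_framework import status
from rest_framework.test import APITestCase

# Same hypothetical factory_boy factories as in the pytest example.
from myapp.tests.factories import BirdFactory, CatFactory, DogFactory


class ListEndpointTests(APITestCase):
    def test_list_endpoints(self):
        cases = [
            ("dog-list", DogFactory),
            ("cat-list", CatFactory),
            ("bird-list", BirdFactory),
        ]
        for url_name, Factory in cases:
            # Each case is reported independently if it fails.
            with self.subTest(url_name=url_name):
                Factory()  # create one object so the list isn't empty
                response = self.client.get(reverse(url_name))
                self.assertEqual(response.status_code, status.HTTP_200_OK)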