Convention for blank tests?

I have a project that’s in the pre-release phase, and I’m currently writing tests for critical aspects of the project. I love writing tests; I’m finding lots of little issues that certainly would have come up in the release phase.

As I’m writing these critical tests, I’m identifying lots of tests I’d like to write if the project gets any traction. For those, I’m creating blank tests like this:

def test_noncritical_feature():
    pass

Is there a well-established way to mark these as blank tests? It’s been really satisfying to see my number of tests grow, but now that I have these blank tests, that number doesn’t accurately reflect the number of meaningful tests.

Also, is there a name for these blank tests? I thought that’s what a stub was, but a stub is something different.

If we’re using one of the Django test classes that inherit from SimpleTestCase (which itself inherits from unittest.TestCase in the Python standard library), then we can use unittest’s skip decorator:

from unittest import skip

@skip("Don't want to test")
def test_the_thing():
    # Yummy test code here
    ...
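
Since I mentioned the Django test classes, here’s roughly what that looks like as a method on a SimpleTestCase subclass (the class name is just for illustration):

from unittest import skip

from django.test import SimpleTestCase


class ThingTests(SimpleTestCase):

    @skip("Don't want to test")
    def test_the_thing(self):
        # Yummy test code here
        ...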

Other test frameworks should have something similar available.

-Jorge


Basically what @jlgimeno said is solid advice.

These days I mostly write tests with pytest, and I tend to assert False in the body and mark the test with xfail, which motivates me to come back and fix it.

import pytest

@pytest.mark.xfail(reason="known parser issue")
def test_the_thing():
    assert False

That said, I don’t think there is a right answer. It’s just a preference for how you want the output to look when you run your tests.


Thank you both. I started using @pytest.mark.skip(), and at first I didn’t like seeing the s (skipped) markers sprinkled through the output. But now I’m liking it, because I can easily see how many blank tests I have at any given point.
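
A blank test marked this way looks something like this (the reason text is just an example):

import pytest

@pytest.mark.skip(reason="fill in if the project gets traction")
def test_noncritical_feature():
    pass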


Ooh, I did just discover a drawback of this approach. I was filling in some blank tests just now, and all of my new tests were passing. I’m okay at writing tests, but not that good. Then I realized I couldn’t make my tests fail, even with an assert False.

I forgot to remove the @pytest.mark.skip().

You can use a strict xfail if you’re afraid you’re going to forget to remove the decorator later. With a strict xfail, if the test starts passing unexpectedly, pytest will mark it as a failure (because passing is the opposite of what you told pytest to expect).
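
Something like this (the reason text is just an example):

import pytest

@pytest.mark.xfail(strict=True, reason="not implemented yet")
def test_the_thing():
    assert False

If that test ever passes, pytest reports it as a failure instead of quietly letting it through.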

It looks like there is also a pytest setting that will make xfails behave this way by default.
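
If I remember right, that’s the xfail_strict option, e.g. in pytest.ini:

[pytest]
xfail_strict = true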