Hi there! After writing approximately the same bunch of test functions at two jobs, I’ve extracted a little pytest library for doing expectation testing the way I like to, and published it in the usual places. So, I thought I’d tell y’all about it.

Essentially, it allows lots of variations of this kind of test:

def test_compute(resources):
    # load the test input from a file stored next to this test
    input = resources.load_json("input")
    output = compute(input)
    # compare the result against the stored expectation file
    resources.expect_json(output, "output")

The input data and expected results will be located in predictable places relative to the test function, and in a format that is nice to work with as a developer. No magic pickles or multiple expectations squashed into a single file that you’re afraid to edit.

It works well with text data, JSON, and pydantic models, and support for other formats may appear over time. Test file path construction is completely configurable, but I think the defaults are reasonable.
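To make the idea concrete, here’s a rough sketch of what an expectation helper like this can do under the hood. To be clear, this is my own illustration and not the library’s actual implementation or API: the function name, signature, and file layout are assumptions. The pattern is that a missing expectation file gets written on first run for the developer to review and commit, and later runs load it and compare.

```python
import json
from pathlib import Path


def expect_json(actual, name, resource_dir):
    """Compare `actual` against a stored JSON expectation file.

    If the expectation does not exist yet, write it (pretty-printed,
    with sorted keys, so diffs stay readable) and fail the test so the
    developer reviews the new file before committing it. Otherwise,
    load the stored expectation and assert equality.
    """
    path = Path(resource_dir) / f"{name}.json"
    if not path.exists():
        path.write_text(json.dumps(actual, indent=2, sort_keys=True))
        raise AssertionError(f"wrote new expectation to {path}; review and re-run")
    expected = json.loads(path.read_text())
    assert actual == expected, f"{actual!r} != {expected!r}"
```

Keeping each expectation in its own pretty-printed JSON file is what makes it safe to read and hand-edit, unlike a pickle or a bundle of expectations in one opaque file.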

I’d love for it to see some usage and to get some feedback about how to make it better.

That’s it!

    • logi@piefed.world (OP) · 19 days ago
      That looks very nice and seems to hit most use cases as far as I can see in my just-woken state. If it had been easier to search for, then I might not have written mine.

      I’d have to try it to see how it deals with my common case of rather a lot of numpy arrays in hierarchies of dicts or pydantic models, and the floating-point precision issues. Maybe it’ll do that brilliantly.
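      For that nested-arrays case, a tolerant comparison could look something like the recursive helper below. This is only a sketch built on numpy’s `assert_allclose`; `assert_nested_close` is a hypothetical name, not something either library provides.

```python
import numpy as np


def assert_nested_close(actual, expected, rtol=1e-7, atol=0.0):
    """Recursively compare dicts/lists/tuples whose leaves may be
    numpy arrays or scalars, using tolerant floating-point comparison
    at the leaves instead of exact equality."""
    if isinstance(actual, dict):
        assert set(actual) == set(expected), "dict keys differ"
        for key in actual:
            assert_nested_close(actual[key], expected[key], rtol, atol)
    elif isinstance(actual, (list, tuple)):
        assert len(actual) == len(expected), "sequence lengths differ"
        for a, e in zip(actual, expected):
            assert_nested_close(a, e, rtol, atol)
    else:
        # handles numpy arrays and plain scalars alike
        np.testing.assert_allclose(actual, expected, rtol=rtol, atol=atol)
```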