Shakespeare, The Merry Wives of Windsor
Act III, Scene IV, Line 10
I've tested it but only for the cases that I thought of.
This is the testing dilemma in a nutshell: we know that we need to test our software, but we also know that we cannot test our software for every single state and permutation of input. Therefore, we need our testing utility belt to be as full as possible.
When I first started doing Haskell, I came across a testing framework called QuickCheck. It did not conform to my idea of how a testing framework should work, so I discarded it and found HUnit. It was not until Stuart Halloway's SCNA 2013 presentation on Datomic that I took notice of the idea of Property Testing (or Generative Testing, as I've also heard it called). Fast forward a little bit to when I saw a blog post entitled FsCheck + XUnit = The Bomb on The Morning Brew, and I figured I'd look more into Property Testing.
With Unit Testing you typically have two parts: your Code to be Tested and your Unit Tests, which contain your Test Cases and Test Data.
With Property Testing you decouple your Test Data from your Tests. Instead of coming up with both your Test Cases and Test Data, you simply describe your functionality and the shape of the data to be used to test said functionality. The test runner then takes this description and runs 100 different permutations of Test Data against it.
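FsCheck handles the generation and running for you, but the core loop is simple enough to sketch. Here is a rough Python analogue of the idea (the `fizzbuzz` function and `check_property` runner are my own illustrative stand-ins, not FsCheck's API):

```python
import random

def fizzbuzz(n):
    # A hypothetical system under test, standing in for the C# FizzBuzzer.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

def check_property(prop, gen, runs=100):
    # A toy property runner: generate Test Data, assert the property holds.
    for _ in range(runs):
        value = gen()
        assert prop(value), f"Property failed for {value!r}"
    return f"Ok, passed {runs} tests."

# The test author supplies the shape of the data (any int) and the
# property; the runner owns the 100 concrete values.
result = check_property(
    prop=lambda n: fizzbuzz(n * 15) == "FizzBuzz",
    gen=lambda: random.randint(1, 1000),
)
```

The key shift from Unit Testing is that you never write the 100 concrete values yourself; you only describe how to make them and what must be true of them.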
I believe I hear, "An example would be nice now," so let us look at one.
Using Property Testing on FizzBuzz
Looking at the Gist above we see a Fact that does nothing. This is in place to allow NCrunch to pick up our Property test below it. Simon Dickson goes over this in his blog post on FsCheck.
After that we see a comment about having less restrictive tests. With Property Testing you can do one of two things: you can either be very restrictive about the Test Data that is allowed to run against your Test Description, or you can reshape the data so that it fits your needs. We'll first look at reshaping the data.
Looking at lines 18-22 we see the following:
First you have a test name; F# is great in that you can use backticks (`) to allow for a more readable name. Next we create an instance of our FizzBuzzer class from the C# code.
Now we are ready to use FsCheck. We'll use the ==> operator from FsCheck, which has two parts. The part on the left side of the ==> operator restricts the Test Data. In this example we are telling the Property Test Runner that we want an integer that is not divisible by 5. Why is that? Whatever value the Property Test Runner gives us, we will be multiplying it by 3, so if we allowed a number divisible by 5, we would end up with a number divisible by 15, which should be FizzBuzz, not Fizz.
The part on the right side of the ==> operator asserts that our property holds with the given Test Data. In this example we are calling the Translate method on the FizzBuzzer object, passing the Test Data value multiplied by 3. The output of the Translate method should then be the string Fizz. This property will be run with 100 different Test Data values!
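Since the F# gist itself isn't reproduced here, a rough Python analogue of this reshaping test might look like the following (`fizzbuzz` stands in for the C# FizzBuzzer's Translate method; the names are mine):

```python
import random

def fizzbuzz(n):
    # Stand-in for the C# FizzBuzzer.Translate method.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

passed = 0
for _ in range(100):
    n = random.randint(1, 1000)
    if n % 5 == 0:      # left of ==>: restrict the Test Data a little
        continue        # discarded values don't count as a test
    # Right of ==>: reshape the data (multiply by 3) and assert the property.
    # 3n is always divisible by 3, and never by 15 since n % 5 != 0.
    assert fizzbuzz(n * 3) == "Fizz"
    passed += 1
```

Because the only restriction is "not divisible by 5", roughly four out of every five generated values survive the filter, so very little Test Data gets thrown away.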
Running this Property Test 100 times with very little control over the Test Data values has a way of finding all kinds of edge cases that you have not thought of.
Let us look at a more restrictive example on lines 55-60:
First we see that we have a comment, so we have failed in some sense, but we'll get to why that comment and the MaxTest=50 value are there in a little bit.
The setup for the restrictive test looks very similar to our less restrictive test, except that we are not reshaping the Test Data on the right side of the ==> operator. Instead of reshaping the Test Data, we are being more restrictive about what the Property Test Runner can give us. This is why we need the comment and MaxTest=50: the Property Test Runner is "randomly" generating Test Data, and at some point it will just give up on finding values that are divisible by 15 (think of how rare those are).
I found with my usage that if I set MaxTest=50, the property would pass every time. If I used a value greater than 50, like 75, I would get "Arguments exhausted" on most, but not all, of my test runs. Having inconclusive test results is bad, so I would rather limit the number of test cases if I feel the need to use a more restrictive test set. Personally, I'd rather use the less restrictive test set with the reshaping of the Test Data, but to each their own.
There you have it: another tool to add to your testing utility belt.