I’ve previously described how to run your QUnit tests and produce a coverage report on every commit. This works great if you happen to have chosen QUnit as your unit testing framework.
Being able to get fast feedback from failing unit tests is useful. And if you could easily identify the parts of your code that you still need to write tests for, you would get more comprehensive feedback.
Being able to check the quality of your code and run all unit tests in your project on every commit is useful. If you could do it on every save, you would get even faster feedback.
Update as of April 7, 2013: This blog series has now been extended with Jasmine and Istanbul.
Update as of March 28, 2013: This blog series has now been upgraded to use Grunt 0.4.
A client of mine asked me this question today:
One of our developers wants to write her BDD features using SpecFlow instead of our existing Cucumber JVM approach. Isn’t it a bad idea to have two separate approaches? What are your thoughts on this?
I think it is important that you constantly ask yourself how you . . . → Read More: Tool Improvement Spikes
If you are like most test driven developers, you write automated tests for your software to get fast feedback about potential problems. Most of the tests you write will verify the functional behaviour of the software: When we call this function or press this button, the expected result is that value or that message.
But . . . → Read More: Automated Performance Testing
Have you ever faced the problem of writing unit tests that rely on textual test data? This is a classic issue where you usually end up putting the test data in a string variable or in an external file, depending on the amount of text. Neither of these options is particularly elegant. In this article . . . → Read More: CommentReader – Place your test data next to the test code
This week I gave a presentation for a client, introducing Test and Behavior Driven Development.