We develop new software, implement packaged solutions, or build on top of packaged solutions – for a reason. Most of the time – or at least until I started my entrepreneurial journey – I never thought about the value of the code I was writing or the solution I was implementing.
It was only when I started selling that I began to ask: what am I selling? What problem am I solving for my client? What ROI was I giving the client through my or my team's
development and services? During this journey, I came across automated testing.
Manual software testing is performed by a human sitting in front of a computer, carefully going through application screens, trying various usage and input
combinations, comparing the results to the expected behaviour and recording their observations.
An automated testing tool can play back pre-recorded, predefined actions, compare the results to the expected behaviour and report success or failure to a test engineer. Once automated tests are created, they can easily be repeated, and they can be extended to perform tasks
that are impossible with manual testing.
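To make the idea concrete, here is a minimal Python sketch of what an automated test does: it exercises the code and compares the actual result to the expected behaviour, just as a manual tester would, but it can be re-run in seconds. The function under test, `apply_discount`, is a hypothetical example, not part of any system mentioned in this post.

```python
def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_ten_percent_off():
    # The automated check: compare the actual result to the expected behaviour.
    assert apply_discount(100.0, 10) == 90.0

def test_no_discount():
    assert apply_discount(59.99, 0) == 59.99

# Once written, the whole suite re-runs in milliseconds, any number of times.
test_ten_percent_off()
test_no_discount()
```

The point is not the arithmetic but the repeatability: the same checks run unchanged after every build, which is exactly what a human tester cannot do at scale.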
Most automated testing tools rely on pre-recording: they require users to enter and record the test cases, often on a system that is not their own. The issue I noticed with this approach is that test cases are not maintained at
the speed at which the system is being built. Hence the automated
tests become irrelevant over a period of time.
This is when I came across Robot Framework, which uses a "keyword" approach to testing. Users create test cases from higher-level keywords: they write a test case in a plain text file, using keywords predefined for the target system, and Robot Framework then runs those test cases automatically.
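To illustrate the keyword idea, here is a minimal Python sketch (this is not Robot Framework itself, just the underlying pattern): each keyword is a plain function registered under a human-readable name, and a test case is simply a sequence of those names, with arguments, that could come from a text file. All the keyword names and the login scenario are hypothetical.

```python
# Registry mapping human-readable keyword names to implementation functions.
KEYWORDS = {}

def keyword(func):
    # Register a function under a readable name, e.g. "Open Login Page".
    KEYWORDS[func.__name__.replace("_", " ").title()] = func
    return func

state = {"logged_in": False}

@keyword
def open_login_page():
    state["page"] = "login"

@keyword
def input_credentials(user, password):
    state["user"] = user

@keyword
def submit_form():
    state["logged_in"] = state.get("page") == "login"

@keyword
def user_should_be_logged_in():
    assert state["logged_in"], "login failed"

def run_test_case(lines):
    # Each line: a keyword name, optionally followed by '|'-separated arguments.
    for line in lines:
        name, _, args = line.partition("|")
        KEYWORDS[name.strip()](*[a.strip() for a in args.split("|") if a.strip()])

# A test case written as readable text, close to a business requirement:
run_test_case([
    "Open Login Page",
    "Input Credentials | alice | secret",
    "Submit Form",
    "User Should Be Logged In",
])
```

Because the test case is just readable text, a business user can review or even draft it, while the keywords underneath are maintained by the development team.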
If you simplify such a test case, it is nothing but a business requirement expressed as an example. This example, or requirement, is discovered through discussion with the user, business owner or stakeholder, by asking questions like: "Imagine the system is finished. How would you use it, and what would you expect from it?"
This way of producing executable requirements – where examples and automatable tests are used to specify requirements – is called ATDD (Acceptance Test Driven Development).
As quoted in Practices for Scaling Lean & Agile Development by Craig Larman and Bas Vodde (with reference to the Martin and Melnik IEEE Software article "Tests and Requirements, Requirements and Tests: A Möbius Strip"):
… for early writing of
acceptance tests as a requirements-engineering technique. We believe
that concrete requirements blend with acceptance tests in much the same
way as the two sides of a strip of paper become one side in a Möbius
strip. In other words, requirements and tests become indistinguishable,
so you can specify system behaviour by writing tests and then verify
that behaviour by executing the tests.
Acceptance test-driven development blurs the distinction between testing and requirements analysis – hence the title of this blog: Testing is No Longer Testing!!!