Tuesday, October 5, 2010

Acceptance Test Driven Development: A Testimonial

The following is a testimonial I recently submitted to Ken Pugh, who is working on a book about Acceptance Test Driven Development (ATDD). Though it is far from a cure-all, in my current context I have come to regard ATDD as a foundational development and testing practice...

In 2006, I started an assignment with a group that published service APIs to other parts of our company for the purpose of retrieving data from external vendors. At the time, they were testing most of their services manually, using the GUIs of the calling applications. Their testing was dependent upon the availability of both their own and their clients’ test environments, and disruptions were common. Additionally, their service request and response schemata consisted of thousands of fields, but only dozens of those fields were exposed directly in the client GUIs. The rest were calculated, defaulted or simply ignored. Not surprisingly, the test team felt squeezed by schedule pressure and quality problems.

Their management decided that they were simply out-gunned by the technical challenge before them, so they recruited some programmers with testing skills (and vice versa) to join the team. That’s where I and a couple of other test engineers came onto the scene. One of the first things we did was raise awareness about the low level of coverage for these relatively complex interfaces. (This was somewhat disconcerting for veteran team members; it is a tribute to the maturity of all involved that these conversations were rarely contentious.) We also began surveying other test teams within our company to see if there were any tools already in-house that we could use to circumvent the client GUIs and go directly at our service interfaces. We discovered a group that was using Fit for a similar purpose, and it was love at first sight.

We copied their implementation (Fit and OpenWiki with some customizations) to our environment and within days we were creating and executing tests for some of our larger projects. Within a few weeks we had these tools well integrated into our infrastructure and processes. Tests were now being defined during or shortly after requirements definition, frequently serving to clarify requirements, but we didn’t know then to call it ATDD. Soon developers were asking for our tests to run before check-in, and were helping with fixture design and development.
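Fit drives testing from tables: input columns are fed to a small "fixture" that calls the system under test, and expected-output cells are colored green or red depending on whether the actual result matches. Our fixtures were Java classes wired to the real service interfaces; the sketch below is only a hypothetical Python analogue of the column-fixture idea (the function and field names are invented for illustration, not Fit's actual API).

```python
# Sketch of the Fit "column fixture" idea: each row pairs input columns
# with an expected-output cell, and the runner marks the expected cell
# green (match) or red (mismatch). All names here are hypothetical.

def run_column_fixture(calculate, rows):
    """Feed each row's inputs to `calculate` and compare the result
    against the expected value; return one pass/fail mark per row."""
    marks = []
    for inputs, expected in rows:
        actual = calculate(**inputs)
        marks.append("green" if actual == expected else "red")
    return marks

# Hypothetical service rule under test: a defaulted response field.
def default_currency(country, currency=None):
    return currency or {"US": "USD", "DE": "EUR"}.get(country, "USD")

table = [
    ({"country": "US"}, "USD"),
    ({"country": "DE"}, "EUR"),
    ({"country": "DE", "currency": "CHF"}, "CHF"),
]

print(run_column_fixture(default_currency, table))
# → ['green', 'green', 'green']
```

Because the tables are just readable rows of inputs and expectations, business analysts could review and even author them, which is much of why test definition moved up to requirements time.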

The number of tests for our systems typically increased five-fold as we introduced our implementation of automated ATDD, and we moved from executing a handful of test passes per project to a handful of passes per day. Defects discovered in our QA environment dropped dramatically because we were running the tests in predecessor environments - the tests became informal entry criteria. Test projects were costing about the same, and taking about the same amount of time, but quality was increasing significantly. In fact, the team’s quality ranking within the company, based on production availability of our systems, improved from “worst to first” in about a two-year period.

In addition to the quality improvements, we gained a great deal of confidence in our ability to refactor our systems and move them through environments because coverage had increased substantially and test execution had become relatively effortless. Furthermore, the clarity, usability and credibility of the tests led to more collaborative test failure investigations. It was not uncommon to see developers, testers and business analysts huddled around a screen, or camped in a conference room, discussing the significance of patterns of red cells on a test result table - discovering and resolving issues in minutes where formerly it had taken hours or days of asynchronous communication. While there are many other ways that we have continued to improve our testing, nothing has been as “game changing” as our move to automated ATDD with Fit.
