Monday, September 13, 2010

CAST 2010

Now that I've started the blog, I'm going to reach back into the events of recent history for a few posts. One of those events was CAST 2010, the Conference of the Association for Software Testing.


Overview

CAST is a highly collaborative conference of testing professionals, smaller than the STAR conferences, but densely populated with smart, passionate, vocal people. Among those I met for the first time were Cem Kaner, Harry Robinson, Doug Hoffman, Scott Barber, Matt Heusser, Becky Fiedler, Ben Simo, Tim Coulter, Selena Delesie, Michael Hunter, Cristina Lalley, and Joe Harter. It was also great to renew connections with Michael Bolton, Rob Sabourin, Eric Proegler, Paul Holland, and Michael Bonnar. I know I am leaving somebody out - please feel free to yell at me.

(For you quants, that was 17 people. Conference attendance was about 105. That means that I personally networked with at least 16 percent of the conference attendees. Though the number would be much higher if I had recorded observations more carefully, this is still not bad for an extreme introvert - and a tribute to the nature and quality of this conference.)

CAST is conducted by prominent practitioners, not a corporation or group of vendors. (There are vendors sponsoring the conference, but they do not seem to be the focal point as they are at some other conferences. Actually, I felt sorry for them, at times, due to the lack of attention they received at their booths.) Many of the attendees pay their own way. As such, the level of 'engagement' is very high. The material presented is based on real-world testing. The viewpoints discussed are based on real-world experience. This is not a 'vacation' conference. I came away both energized by new ideas and exhausted from the constant mental stimulation.

My favorite take-away was a set of techniques for large scale testing that I believe will be directly applicable to improving testing in my current context. These techniques were outlined mostly in Harry Robinson's 'Exploratory Test Automation' tutorial and the session on 'Testing Large Scale Scientific Computations: The Short Circuit Method' given by Gaston Gonnet and Monica Wodzislawski, and they were built upon brilliantly in subsequent discussions with several other attendees.


My Presentation – Testability and Technical Skill

Overall, I think my presentation went okay. I rushed a bit, and people were a bit tired since it was the afternoon of the last day. I still have significant room for improvement with my presentation skills, but I walk away from this encouraged to continue improving.

Interestingly, this did not seem to be a controversial topic at all for the audience at this conference. They seem to accept and assume that testers benefit from technical skill. I was hoping to stir up at least a little challenge, but nada. I wonder why it is so controversial in my shop? Is this a localized phenomenon? Does it correlate to the aforementioned level of energy and commitment among the attendees?


Some Highlights
(There were many more, possibly excellent, sessions that I did not attend; these are just some notable points from the sessions I did attend.)

Exploratory Test Automation – Harry Robinson
  • Shared some creative ideas for generating large-scale random inputs for systems.
  • Described two specific approaches (roughly sketched after this list):
    • Production grammar
    • State modeling
  • Shared more creative ideas for creating lightweight dynamic test oracles.
  • Put your machines to work while you are away from the office.
  • You can have crisp handoffs or quality code, but probably not both.
  • We need testers who can design.
  • I was able to spend a significant amount of time talking to Harry after the tutorial, and he helped brainstorm ideas about how we can use these techniques to test our product.
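
To make those two approaches a bit more concrete for myself, here is a minimal sketch of what they might look like in practice. This is my own illustration, not Harry's code: the grammar rules, states, and actions below are invented placeholders, and a real harness would feed the generated inputs to the product under test and check the results with a lightweight oracle.

    import random

    # Production-grammar sketch: each nonterminal maps to a list of alternative
    # expansions. The grammar below is an invented placeholder -- in practice it
    # would describe the input language of the system under test.
    GRAMMAR = {
        "<query>": [["SELECT ", "<field>", " FROM ", "<table>", "<where>"]],
        "<field>": [["id"], ["name"], ["*"]],
        "<table>": [["orders"], ["customers"]],
        "<where>": [[""], [" WHERE ", "<field>", " = ", "<value>"]],
        "<value>": [["42"], ["'smith'"]],
    }

    def generate(symbol="<query>"):
        """Randomly expand a symbol until only terminal strings remain."""
        if symbol not in GRAMMAR:
            return symbol  # terminal: emit as-is
        production = random.choice(GRAMMAR[symbol])
        return "".join(generate(part) for part in production)

    # State-model sketch: a random walk over states and the actions allowed in
    # each state. The states and actions are also invented placeholders.
    MODEL = {
        "logged_out": {"login": "logged_in"},
        "logged_in": {"logout": "logged_out", "add_item": "cart_not_empty"},
        "cart_not_empty": {"checkout": "logged_in", "logout": "logged_out"},
    }

    def random_walk(steps=20, state="logged_out"):
        """Yield (state, action) pairs for a test driver to execute against the system."""
        for _ in range(steps):
            action, next_state = random.choice(list(MODEL[state].items()))
            yield state, action
            state = next_state

    if __name__ == "__main__":
        for _ in range(5):
            print(generate())                 # e.g. SELECT name FROM orders
        print(list(random_walk(steps=6)))     # e.g. [("logged_out", "login"), ...]

The appeal of both approaches is the same: once a generator and a cheap oracle exist, the machines can grind through enormous numbers of cases while you are away from the office.
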
Keynote on Estimating – Tim Lister
  • Covered some common issues with estimating
  • Presented a method for measuring estimates – EQF (see the sketch after this list).
  • Interesting analogy between estimating and hurricane forecasting.
    • I spent some time afterward discussing this analogy with Tim. The hurricane does not provide estimates and the forecasters don’t live in the hurricane. Does this mean we should try to have external parties estimate our projects? Hmmm.
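
As I understand it, the EQF Tim presented is Tom DeMarco's Estimating Quality Factor: record your estimate of the final result over the life of the project, then divide the area under the actual result by the area between the running estimate and the actual. Here is a small sketch of the calculation as I recall it; the formula is my reconstruction from memory and the numbers are invented, so treat it as illustrative only.

    def eqf(samples, actual, end_time):
        """Estimating Quality Factor (my recollection of DeMarco's definition):
        the area under the actual result divided by the area between the running
        estimate and the actual. Higher is better; a perfect estimator's EQF is
        infinite.

        samples:  ordered (time, estimate) pairs, starting at time 0
        actual:   the final actual value (e.g. real cost or duration)
        end_time: the time at which the actual became known
        """
        boundaries = samples + [(end_time, actual)]
        error_area = 0.0
        for (t0, estimate), (t1, _) in zip(boundaries, boundaries[1:]):
            error_area += abs(estimate - actual) * (t1 - t0)  # step-wise integration
        return float("inf") if error_area == 0 else (actual * end_time) / error_area

    # Invented example: estimates of total cost (in $K), revised over a 10-month project.
    snapshots = [(0, 200), (3, 260), (6, 300), (8, 320)]
    print(eqf(snapshots, actual=320, end_time=10))  # roughly 5.5

Computed this way, EQF does not care whether the early estimates were high or low, only how quickly they converged on reality.
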
Technical vs. Non-technical Skills in Test Automation – Dorothy Graham
  • Covered some of the basics of test automation skills.
  • Interesting discussion around whether tool independence is a worthy goal. It depends, of course. This is likely to be an issue we will be discussing in my shop in the near future (a small sketch of what I mean by tool independence follows this list).
  • Others generally agreed with my observation that they have seen programmers learn how to test effectively more frequently than they have seen testers learn how to program automation effectively.
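
For what it is worth, here is the kind of thing I picture when people talk about tool independence: the test logic depends on a thin interface of our own, and the tool-specific details live behind it, so switching tools means rewriting one adapter rather than every test. The interface and the fake driver below are invented placeholders for illustration, not anything from Dorothy's talk.

    from abc import ABC, abstractmethod

    class UiDriver(ABC):
        """Our own thin interface; tests depend on this, never on a vendor's API."""

        @abstractmethod
        def click(self, locator: str) -> None: ...

        @abstractmethod
        def read_text(self, locator: str) -> str: ...

    class FakeDriver(UiDriver):
        """Stand-in implementation; a real adapter would wrap a commercial or
        open-source UI automation tool."""

        def __init__(self):
            self.screen = {"greeting": "Hello, tester"}

        def click(self, locator: str) -> None:
            print(f"clicked {locator}")

        def read_text(self, locator: str) -> str:
            return self.screen.get(locator, "")

    def test_greeting(driver: UiDriver):
        """Test logic written only against UiDriver, so it survives a tool switch."""
        driver.click("login_button")
        assert "Hello" in driver.read_text("greeting")

    test_greeting(FakeDriver())

Whether that extra abstraction layer is worth its maintenance cost is exactly the "it depends" part of the discussion.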

Investment Modeling as Exemplar for Exploratory Test Automation – Cem Kaner
  • As an avid amateur investor, I found this talk interesting, but I never made a clear connection to exploratory test automation. There was a lot of material in the slides; I need to review it again.
  • A controversial point: “GUI level regression testing is thought to be one of the industry’s worst practices.”
  • There was an interesting point raised by an audience member – we testers need to go to the conferences that our customers are going to, not just constantly talk amongst ourselves.

Testing Large Scale Scientific Computations: The Short Circuit Method – Gaston Gonnet and Monica Wodzislawski
  • This presentation was on a higher technical plane than any of the other talks I attended.
  • How to test complex, long-running programs with simple inputs and outputs, e.g., weather modeling programs.
  • Testability suggests where faults can hide from testing, and testability does not need an oracle. (That's deep, man.)
  • They enumerated four techniques for creating dynamic oracles (see the generic example after this list for the flavor of the idea).
  • This was a fantastic complement to Harry Robinson’s talk.
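
They did not hand their techniques out as code, and I won't try to reconstruct them here, but for anyone unfamiliar with the term, here is my own toy illustration of a dynamic oracle: instead of knowing the expected answer in advance, the test checks a property the answer must satisfy. The tiny solver and the tolerances below are invented for illustration.

    import random

    def solve_2x2(a, b, c, d, e, f):
        """The 'system under test': solve a*x + b*y = e, c*x + d*y = f by Cramer's rule."""
        det = a * d - b * c
        return (e * d - b * f) / det, (a * f - c * e) / det

    def check_with_dynamic_oracle(trials=10_000, tol=1e-8):
        """No expected x, y is ever computed; the oracle only checks that substituting
        the solution back into the equations reproduces the right-hand side."""
        for _ in range(trials):
            a, b, c, d, e, f = [random.uniform(-10, 10) for _ in range(6)]
            if abs(a * d - b * c) < 0.5:
                continue  # skip nearly singular systems
            x, y = solve_2x2(a, b, c, d, e, f)
            assert abs(a * x + b * y - e) < tol and abs(c * x + d * y - f) < tol

    check_with_dynamic_oracle()

The real techniques from the session (and from Harry's tutorial) are far more sophisticated, but the shape is the same: cheap checks that can run against millions of machine-generated inputs without a human predicting any individual result.
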
So that's a quick tour of CAST 2010 from my perspective. I thought it was a very positive experience, and will most likely try to attend CAST 2011, which will be chaired by Jonathan Bach in Seattle, Washington, sometime in July. Maybe I will see you there.


1 Comment:

At September 15, 2010 at 8:56 PM, Blogger Unknown said...

Testability and technical skill is controversial in our shop because we work at an insurance company that sells insurance. The often cited conceit of being a technology company that happens to sell insurance does not square with reality. We have some really smart, highly technical employees with exceptional abilities (e.g., you), but our shop is dominated by a significant population of employees who are products of the company's evolution.

 
