Wednesday, August 4, 2010

My take on Adam Goucher's Six Shocking Automation Truths.

Adam Goucher has a wonderful blog entry over at Software Test Professionals relaying Six Shocking Automation Truths.  If you have not yet read the article, I wholeheartedly recommend it, as it provides some high-quality, grade-A discussion material.

According to Adam the six shocking automation truths are:
  1. Truth One: You do not need to automate everything
  2. Truth Two: Going forwards in reverse is still going backwards
  3. Truth Three: The automation language does not have to be the same as the development language
  4. Truth Four: There is not one tool to rule them all
  5. Truth Five: There is no 'right' or 'wrong' way to automate (though there are better and worse)
  6. Truth Six: Your automation environment is not production
These six ideas coalesce around several myths that many in our industry have not yet come to recognize as myths. I find I agree with most of these Truths, or at least the blanket summaries Adam gives them, but I'd like to take a moment to expound upon what these truths mean to me.

The first one, "Truth One: You do not need to automate everything", is easy.  "You can't automate everything" rings very true for me.  Some parts of a particular piece of software may not be feasible to automate, whether as artifacts or side effects of how they are designed, or due to the nature and quality of the tools available.  What's more, if you are like me, with several hundred test plans comprising thousands of steps that cover the modules of an existing application, trying to automate everything could take a very long time, likely much longer than the client is willing to pay for, which leads me into the next Truth.

The second truth, "Truth Two: Going forwards in reverse is still going backwards", follows from the first.  If you have a large number of tests in need of automating, and only limited time to script, record, code, and set up the automated tests, then you have to be judicious about where you actually apply automation.  Now, some may argue that it makes more sense to start by automating the most stable, older portions of a code base.

I can understand the deceptive and seductive nature of this.  Repeating these test scripts by hand every iteration may seem like a waste of time, especially if they often find few if any defects to report.  The argument goes that such a section of the application must always be regression tested, and is somehow of more value despite the lower chance of defects, so automation is desired even if no feature change actually touches it.  With this I find myself in disagreement.

First, old tests used to regress the software do not always remain relevant to the current build and software release.  I have worked on projects where test plans from Release A or C were changed, or completely rebuilt from the ground up.  Should a team continue to run those regression tests and force automation to be built upon test cases that are now obsolete?  My answer is no.  There is nothing more costly than trying to automate tests that are invalid, obsolete, and not an accurate reflection of the current software's behavior.  Therefore, if your definition of "old" actually refers to dated, perhaps obsolete, regression checks, then maybe that's not what you want to automate.

Now, Adam argues that the best place to start automating is in the new sections of an application.  I can understand that thinking, but it is not always possible, and it may also depend on the kinds of tests included in your automation framework.  If you are just starting unit testing, for example, it makes loads of sense to focus on code that is currently in development rather than trying to cover old, dated code.  If, however, you are using a different type of automation, that may not make sense, especially given the pace at which the code of a particular feature may change as it is developed.
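To make the unit-testing case concrete, here is a minimal sketch in Python's standard unittest framework. The parse_price function is a hypothetical bit of brand-new code under development, not anything from Adam's post; the point is simply that automating checks against fresh code like this is cheap and pays off immediately as the code churns.

```python
import unittest

# Hypothetical function under active development -- the kind of new,
# in-progress code that unit-test automation pays off on first.
def parse_price(text):
    """Convert a price string like '$1,234.56' to a float."""
    return float(text.strip().lstrip("$").replace(",", ""))

class ParsePriceTests(unittest.TestCase):
    def test_simple_price(self):
        self.assertEqual(parse_price("$19.99"), 19.99)

    def test_thousands_separator(self):
        self.assertEqual(parse_price("$1,234.56"), 1234.56)

    def test_leading_whitespace(self):
        self.assertEqual(parse_price("  $5.00"), 5.0)

if __name__ == "__main__":
    unittest.main()
```

Because these checks run in seconds, they can be re-run on every change to the feature while it is still in flux, which is exactly when they earn their keep.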

So what then is the middle ground?  I think a better way in many cases is to focus upon the areas of the software that were most recently released.  A recent release may imply stability, and as the newest artifact it is most likely the freshest in the minds of the team in general.  Newer modules may also have a higher probability of being touched again as their functionality is expanded with additional features in subsequent releases.  This of course will not always be the case, but the chief concern of this truth is to remember the Pesticide Paradox.

The Pesticide Paradox, simply stated, is that "defect clusters have a tendency to change over time, so if the same set of tests is conducted repeatedly, they will fail to discover new defects."  Or at least that's the paraphrased definition from an online course I recently completed.  Or, as another tester explained it to me: as bugs are found around particular code segments, reported, fixed, and retested, the old tests will simply prove, each time they are run, that those bugs are still gone.  The problem, though, is that these kinds of old proof tests may give a false sense of confidence about the stability of some sections of a site, leading the team to focus on testing and developing the rawer parts of the application.  This is why we must maintain, and especially update and tweak, even old tests as new releases come out, in order for them to remain relevant.

The third truth, that you need not test using the same language that the code uses, seems a rather obvious one to me, but then I come from a development background before I became a full-time tester.  It should stand to reason that if multiple languages can accomplish the same tasks, then it is not necessary for the tester to be fluent in the development team's language, and in some ways this may help enforce separation between development and testing.
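As a minimal sketch of that idea, here is a Python check that drives an application purely through its command-line interface. The program under test here is just echo, a stand-in I'm using for illustration; in practice it could be a binary built in C, Java, or anything else, and the test code never needs to match its language because it only touches the public interface.

```python
import subprocess

# A Python test harness exercising an application through its CLI.
# The harness language is independent of the application's language:
# we only observe inputs and outputs, never the implementation.
def run_app(*args):
    result = subprocess.run(
        ["echo", *args], capture_output=True, text=True, check=True
    )
    return result.stdout

output = run_app("hello", "world")
assert output == "hello world\n"
print("cross-language check passed")
```

The same black-box pattern applies to HTTP APIs or browser automation: the tester's toolkit is chosen for the tester's productivity, not to mirror the development stack.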

The fourth truth, like the third, also seems obvious to me: there is no one-size-fits-all tool.  I remember when I was a young Boy Scout, another scout showed me his Swiss Army Knife.  That thing had twenty-five or more gadgets and was so wide I couldn't hold it in my hand to cut.  Contrast that with the two pocket knives I used as a boy: a basic five-gadget one complete with can opener, awl, bottle opener, large and small blades, and a corkscrew, and a simple carbon steel knife with three blades of differing lengths.  I got more use out of those two knives, and they provided all the basic functions I needed from a knife at that time.  Today I carry a set of folding pliers, one large and one small, that also have screwdrivers, scissors, and a blade, but I still find myself using that regular knife blade more than anything.  So it doesn't matter if a tool has more functions than its competitors if it's difficult or cumbersome to use, or if it doesn't cooperate with the tools other developers are working with.  (I remember using the Ankh extension for Visual Studio several years ago, and had to uninstall it because my install would crash unexpectedly when it was being used.)  The same is true for testing tools.

Truth five is, in my opinion, the hallmark of good testing, especially for those who subscribe to the context-driven school.  No test exists in a vacuum, and therefore the environment, the parties that will use the application, and the risks involved should all be considered when testing approaches are mapped out.

The last truth, "Truth Six: Your automation environment is not production", is the only one I really take some issue with.  Sometimes it is easier and better to understand a piece of software, especially one that you've only recently been brought into, if you see the actual data, or a good facsimile of it.  I do agree that it does not necessarily make sense to lock down a local networked test instance behind secure HTTPS, but I am not ready to say the software should never be tested on a production-like instance.  If your client or process rules require your test instance to be exactly as it will be in production, then I can see why a team may have no choice but to do things this way.  However, my takeaway from truth six is that doing so should always be approached with caution, keeping in mind the importance of keeping the application as testable as possible.

To conclude, Adam Goucher's Six Shocking Automation Truths are concepts that all automation testers, and the stakeholders planning to leverage automation in their projects, should consider before they have the testers hunkered down in their makeshift bomb-shelter cubes putting the software through its paces.  I think remembering these things will save many headaches for both the tester and the consumers of their testing efforts.
