Wednesday, August 25, 2010
Navigating self learning
I have recently been pondering how much the quest to educate myself about the software field, testing, and other areas of interest has changed. When I was a youth, I remember going to the mall and visiting Waldenbooks, thinking what a great place it was with so many books. Oh, the days of browsing just for fiction, thinking life was good.
Then, moving on to college, I found libraries with thousands of tomes, perhaps a million books on shelves, and bookstores that carried far more than just textbooks. College is also where I learned the evils of the textbook industry. Books you might want to sell back, having found little utility in them, can get stuck in your possession because a new edition came out or because no class is using the text that semester. I learned early in my college career to be wary of where I spent my hard-earned money on technical books.
At a trade show, I once bought a new hard drive for my computer, a book on Access 97 published by Que, and a Java book that was pretty well worthless the moment I bought it. It was a hard lesson, and for a time after graduating I steered clear of books as much as possible, not knowing how to tell a good book from a bad one.
How many volumes can be written about a piece of productivity software? I don't know, but it was frustrating at times to find that Books-A-Million had two shelves full of books on Office, Outlook, and Word, and considerably less devoted to areas that would actually have helped me as a young software developer. We can look at publishers such as Microsoft Press or O'Reilly and hopefully develop a sense for how their books are written: this one is a quick-start guide, that one claims to teach you in 24 hours or 30 days.
The reality is often different. Some books are good, solid references, almost printed documentation of the language you are coding in, which would be useful if there were no quality documentation online (such as MSDN). Others spend a great deal of time on one set of features yet never cover the one area of advanced use you are really looking for. So how do you determine what is a good book and what isn't?
One way is to look at ratings. You can go to Amazon or some other bookseller's site and read the reviews. Do the reviewers mention the parts you were hoping the book would help you understand, and do they praise or pan them? Do they list the table of contents or index to give you a partial idea of the topics covered? Do they describe the book as far too long for how little it covers? Yet ratings games, like search engine optimization, can still be played in the online marketplace, just as they can in a brick-and-mortar store, where a book with a better price may contain significantly less substance than one costing just a few dollars more. So it is not always clear which authors or publishers you can trust.
When I first came on board as a full-time tester on this project, I had not read a significant text related to testing in several years. I dare say the last mention was probably in a software engineering textbook that I still open now and then to refresh my knowledge. I learned to browse the shelves of fellow developers and sometimes ask if they could recommend a good book.
As strange as it may sound, word of mouth was responsible for a number of the quality books I bought when I first moved to Hinton, WV. A lot of information can be found online these days, and that's great for getting you through tricky areas of the work, but sometimes nothing beats a good old-fashioned dead tree to plop in your lap and devour. Sure, I may read faster in an online format than I do in print, but there is something to be said for having those pages a few fingertips away.
However, what if no one you know has a text in the area you wish to explore? That is the hard part. How do you figure out whom to read when there are so many texts, some of which may contain ideas that aren't all that useful for what you or your team wish to accomplish? Recently I've discovered a new and better way to learn about books: through webinars, YouTube videos, blogs, and Twitter.
I can't count the number of interesting blog entries I've read in the last few months, or the thousands of tweets and YouTube videos I have browsed. It has been a treasure trove of information, both about the areas I have been researching and about the individuals themselves. I admit I was a bit of a skeptic about Twitter in the past. I found Facebook to be more trouble than it was usually worth, and I was not sure quite how Twitter would really help. At some point I actually tried following some sports writers and bloggers on Twitter, hoping to get insight into this year's NFL Draft. A lot of the commentary linked to places that required paid subscriptions to read, and over time I realized it was not as valuable as I had thought.
Then I began to watch webinars by several testers and software development 'coaches', as I'll call them, and I realized there was more going on on Twitter than simply 'Facebook status messages, limited edition version 2.1'. I think Lanette Creamer was the first tester I followed on Twitter, and shortly thereafter the Bach brothers, Marlena Compton, Michael Bolton, David Burns, Adam Goucher, and a host of others. I admit I have never had a good understanding of how to network with people in my field, especially given the geographic region I work in. But Twitter has turned out to be a godsend, not just for meeting people and listening in on conversations related to testing, but as a major tool for expanding what I know and learning how to be better at what I do.
Then I found out that some of these folks had actually written, or contributed to, any number of texts. So I began my quest to seek out these tomes, to see what had been written, and over time to read and absorb as much as I possibly can. Like many testers, I've not had any formalized training, barring a few webinars and free web courses. I had been thrown into the fire, reacted the best I could, and tried to be as thorough a tester as I could be.
It's interesting, because word of mouth led me to the first few books I acquired after moving here. Now it is not just the word of mouth of coworkers and friends, but that of authors, editors, and even other readers whose opinions I have come to value highly as I consider which tomes to add to my shelf next.
To conclude, if you find the search for better technical publications a bit of a maze to navigate, I highly recommend seeking out individuals in the field. Look for papers they've written and articles in magazines, or even follow them on Twitter. Getting to know the author is almost like establishing a relationship with a trusted person in the community where you live, and it grants that extra bit of confidence. I've found that makes the learning far more enjoyable. Thanks to Twitter, my desire to expand my knowledge toward new horizons has risen again, and I hope many others will find the Promethean fire of their desire to learn rekindled as well.
Labels: books, self learning
Monday, August 23, 2010
Testing as an inter-disciplinary skill
I know it has been a while since I last wrote a blog entry. Unfortunately, lack of sleep and a cramped schedule have deprived me of much writing time of any kind these last few weeks. Hopefully I can find a way to carve out time for this purpose, or at least pry some loose with a crowbar, before my ideas go stale. (Note to self: start carrying the notebook again.)
With the obligatory apology completed, now to some discussion. This past weekend I had some time to spend with my father. He's a chemical engineering graduate from West Virginia University and has worked in a number of different environments. Without giving you an entire biographical sketch, let me just say that he began working for FMC in South Charleston, WV shortly after graduation, and now, through an ironic quirk of fate, works as a chemical operator at that same plant, which has since been spun off and changed owners a few times.
My dad often reflects on how things are at his plant, a place I had the privilege and honor of interning at for a summer, where I learned a great deal about just how complex an industrial plant can be. That experience gives me a much better idea of the parts of the plant he's referring to; even though they seem to have re-purposed some of the land for other things, I can usually get at least a basic understanding of what he is talking about.
I, of course, am not a chemical engineer. My last chemistry class was Chem 16, and I was glad to be done with it; the likes of organic chemistry was not something I felt I needed to learn, although at one time I had considered that field. So while I may not understand all the complexities of how a plant is laid out and run, I found some interesting parallels to experiences I have had in the software field.
For example, take a particular process, any process you can think of, be it software or industrial. Imagine this process produces a raw or virtual product. The process has rules and procedures that are to be followed by the floor workers, the programmers, the so-called 'QA' group, and so forth. Maybe they have cross-functional teams, or maybe they work in segmented, distinct sections of the 'plant' to produce the product. Whether they are stamping out bolts and panels for a car or controlling the flow of material inputs, you can see how the manufacturing sector and the software sector may seem to have a lot in common, at least on the surface.
What does this have to do with testing? Well, one area of interest to me is how to improve the process by which software is developed on the teams I have worked in, and I have worked in several different situations. The first, and perhaps most frustrating, happened near the end of my college career. I was taking an operating systems class, and the class divided into smaller teams to build and code the projects. Somehow I ended up with a trio of other students, a pair of them grad students. Well, we began by planning and figuring out how to divide up the workload. Things initially seemed to be going well, and then the worst possible thing happened. The one other undergrad dropped the course, which increased our individual workload, and soon after the grad students disappeared from class as well. I was suddenly in a situation I never wanted to be in: alone, trying to solve a massive project. Fortunately I was able to transition to another team later in the semester, but there are so many things I could have learned better if I had just been with them from the start. The team imploded under its own weight as people were pulled in different directions.
The same thing happens in software teams in industry. How many people have I known who wear multiple hats? Lead architect, chief tester, database engineer, technical writer, system configuration admin, and so on. Those are just the generic hats; when you add in technologies like AJAX, jQuery, Flash, and other libraries and techniques, the knowledge in a team can be spread quite thin. This is not necessarily a bad thing, but it can hurt teams when people are retasked to other projects or move on to other opportunities.
On one particular project, I was given responsibility for picking up a section of the site, a search page, and porting it to the latest version of .NET. I had seen the search in action before and thought I had a pretty good idea of what was going on; that was my first mistake. Once I dug under the hood, I realized the classes and data connections behind this page were intricate and very complex. The move to the latest version of .NET meant the manner in which data was passed around had changed, and it was not just a simple matter of pointing to a different data source. In short, maintaining that now-'legacy' piece of code became a headache, one I prayed I'd never have to work through again.
Note that it wasn't that this particular page was prone to error; it was a Swiss Army knife with a multitude of possibilities, and that was before you started saving various search configurations. The drawback was that it was a difficult mechanism to extend, and that ultimately led us to redevelop the module to take advantage of the Reporting Services technology built into the version of SQL Server we were running at the time.
My father described what seemed to be a similar situation: changes in how they ran the plant, in where or how certain inputs were calibrated, and in how those changes ultimately affected the outcome. Most software teams will encounter bumps in the road that they have to learn to overcome, and it seemed this was a similar process for my dad's company. If you encounter a problem in the process, you examine it, try to determine how to fix it, and hopefully develop a procedure to avoid repeating the mishap.
There's just one problem that, in my dad's case, gave me concern: what if an error happens but the process is not at fault? In software we can tweak and tweak until the cows come home, but with each new procedure, each new piece of red tape, how much do we slow our ability to produce and test code as we go?
My father went on to describe some of the different teams that work in the plant. They have their managers, their QA people, their maintenance guys, their operators in various sections of the plant, and so forth. I found myself asking my father what he thought quality was. Truth be told, I blame Jerry Weinberg for the question, as I had recently borrowed an old copy of Quality Software Management: Systems Thinking from a friend at work, thinking it was out of print. (It turns out I was mistaken on this. Thanks to Jerry for pointing out that it actually is still in print, just not in Amazon's roster; it remains available from Dorset House as Quality Software Management: Vol. 1: Systems Thinking by Gerald M. Weinberg. My apologies for listing it as out of print in error.)
I found myself wondering how my dad, a chemical engineer doing operator work, saw these things, and so I described how Jerry, in QSM, defines quality simply as value to some person. Now, I've seen a number of other tester bloggers wrestle with ideas concerning quality, but unfortunately none of those entries were fresh in my head on this particular day. One point I did remember from the chapter I had just finished, though, was that because quality is subject to interpretation by some person, there are going to be different definitions and expectations depending on who that person is. Jerry does an excellent job of describing this phenomenon in the first chapter of that book.
This then prompted another question: is the Quality Assurance group the only set of people who need to be concerned with quality in the plant? I was trying to drive home that perhaps one of the issues at his plant was the belief that quality is something only that group in the labs worries about, while everyone else is just a laborer doing what they are told.
I'll be honest: I've never understood that mentality. I have always pushed myself to do the best I can, and in a team setting to do whatever I can to make that group successful. The situation reminds me of one I've seen on paper in a few webinars. A software organization may have a group of business analysts who try to figure out the requirements. They pass those to an architect or developer group, who tries to implement them in the chosen coding conventions. Then those same requirements are passed, along with the builds for the project, on to the testers, who have to parse them and try to figure out how to determine whether the product 'passes' or 'fails'.
Sound familiar? My dad described basically this situation, where the analysts would get a high-level idea of something they'd like to see happen, but oftentimes they don't know the capabilities of a particular piece of equipment or hardware, or whether it has constraints that limit how it can be used. I'm reminded of the complaints about highway planning where I went to college. So many described the roads as poor and disorganized, as if there had been no plan at all. I often heard people say that, like the roads, the best way to fail was to start by failing to plan.
Having lived through some similar situations, I can attest that it can be very difficult. In my first role as a software developer, I never imagined they'd require me to do quite so much testing. Software engineering classes for BS computer engineers focused more on testing as a separate phase done by a somewhat independent group. Yet there I was, soon after starting my first professional job, finding myself having to test a module. Sometimes all I had was the name of the module: no requirements, no notion of how it should work. Even worse, initially I was given only the code, and had to parse through it by hand to figure out what had changed, plug it into a newer build, and then test it.
It didn't take me long to realize that this was an untenable situation. How much time and effort was being wasted integrating someone's code into our project only to have to back it out again when we discovered the feature was not as mature as we had thought? I prefer not to think about it, but I began to push back and ask for at least some basic requirements or a description of the features, and eventually we began requiring a demonstration build of the project as a proof of concept that I could put through its paces and explore to see if it lived up to what we were expecting.
Honestly, it was a very hard experience. My dad says that engineers go to college to learn how to think and how to teach themselves about the disciplines they encounter, but that you don't really have any clue how you will use what you've learned until you are out working in industry. In short, as a fresh-out-of-college graduate, I didn't know jack about how to do my job well. There were so many things I had never encountered that I had to learn in those first couple of years, and without any real guidance from the more senior developers on the team, I was left to figure things out on my own. Fortunately I am a fast learner, but even so I know I probably made more mistakes that first year than I ever imagined possible.
This is not me being critical. It is like looking back at stories I penned before I finished high school: today I can barely understand what I was writing, or how I formed lines of thought to compose one line of text and weave it into another. Truly those were embarrassing times, but they were learning times. They drove me to work harder, to try to be better at each little thing I did, and I felt I was making progress right up until I was retasked to working the tech room. In truth, it was probably for the best: there were a number of glitches in our process for including and releasing new builds, and I was fortunate that my ability to work with people on the phone enabled me to switch hats while still learning a great deal about our product. It was that connection to the customers that finally made things start to click.
In any event, just as things were in flux in that first assignment, I wonder how such changes affect my dad at his work. How can they expect to keep production levels high when a key component goes down? How would I expect a website to operate if a key database or server went out? This is what I was driving at with my dad: that maybe, just maybe, the ideas of systems engineering for software, of testing and development, do not exist only in their own bubble, but are shades of things that could help improve matters at his plant.
I really enjoyed the chat with my father. I wish I could have more such talks with him, to exchange ideas about how to think through problems as an engineer or a tester. One thing is for sure: the more I read about software development, the more I wonder why more workplaces are not striving to improve their processes instead of staying at the ad hoc level.
PS: Thanks to Gerald Weinberg for writing Quality Software Management: Systems Thinking, and thanks to my friend Craig for letting me borrow it from his bookshelf back when I believed the book was out of print due to its age. I look forward to finishing it, even if some of the ideas presented within may be a bit dated. One of the joys that being a full-time tester has brought back to me is digging for new ideas and knowledge through books, blogs, and the internet. I'm not sure I would have had this wonderful conversation if not for having started that book, so Jerry, thanks a bunch.
Labels: inter-disciplinary skills, process
Wednesday, August 4, 2010
My take on Adam Goucher's Six Shocking Automation Truths.
Adam Goucher has a wonderful blog entry over at Software Test Professionals relaying his Six Shocking Automation Truths. If you have not yet read the article, I wholeheartedly recommend it, as it provides some high-quality, grade-A discussion material.
According to Adam, the six shocking automation truths are:
- Truth One: You do not need to automate everything
- Truth Two: Going forwards in reverse is still going backwards
- Truth Three: The automation language does not have to be the same as the development language
- Truth Four: There is not one tool to rule them all
- Truth Five: There is no 'right' or 'wrong' way to automate (though there are better and worse)
- Truth Six: Your automation environment is not production
The first one, "Truth One: You do not need to automate everything," is easy; "you can't automate everything" rings very true for me. Some parts of a piece of software may not be practical to automate, either because of artifacts or side effects of how they are designed, or because of the nature and quality of the tools available. What's more, if you are like me, with several hundred test plans comprising thousands of steps that cover the modules of an existing product, trying to automate everything could take a very long time, likely much longer than the client is willing to pay for, which leads me into the next truth.
The second truth, "Truth Two: Going forwards in reverse is still going backwards," follows from the first. If you have a large number of tests in need of automating, and only limited time to script, record, code, and set up the automated tests, you have to be judicious about where you actually apply automation. Now, some may argue that it makes more sense to start by automating the most stable, older portions of a code base.
I can understand the seductive nature of this argument. Repeating these test scripts by hand every iteration may seem like a waste of time, especially if they rarely find any defects to report. The claim is that this section of the application must always be regression tested and is somehow of more value, despite the lower chance of defects, and that automation is desired even if no feature change actually touches it. With this I find myself in disagreement.
First, old tests used to regress the software are not always relevant to the current build and release. I have worked on projects where test plans from release A or C were changed or completely rebuilt from the ground up. Should a team continue to run those regression tests and build automation upon test cases that are now obsolete? My answer is no. There is little more costly than trying to automate tests that are invalid, obsolete, and no longer an accurate reflection of the current software's behavior. So if your definition of 'old' really means dated, perhaps obsolete, regression checks, then maybe that is not what you want to automate.
Now, Adam argues that the best place to start automating is the new sections of an application. I can understand that thinking, but it is not always possible, and it may depend on the kinds of tests included in your automation framework. If you are just starting unit testing, for example, it makes loads of sense to focus on code currently in development rather than trying to cover old, dated code. If, however, you are using a different type of automation, that may not make sense, especially given the pace at which code in a feature can change as it is developed.
So what, then, is the middle ground? I think a better approach in many cases is to focus on the areas of the software that were most recently released. A recent release may imply stability, and as a recent artifact it is most likely still fresh in the minds of the team. Newer modules also have a higher probability of being touched again as their functionality is expanded in subsequent releases. This of course will not always be the case, but the chief concern of this truth is to remember the pesticide paradox.
The pesticide paradox, simply stated, is that "defect clusters have a tendency to change over time, so if the same set of tests is conducted repeatedly, they will fail to discover new defects." At least, that is the paraphrased definition from an online course I recently completed. Or, as another tester explained it to me: as bugs are found around particular code segments, reported, fixed, and retested, the old tests will merely prove, each time they run, that those bugs are still gone. The problem is that these kinds of old proof tests may give a false sense of confidence about the stability of some sections of a site, leading the team to focus on testing and developing the rawer parts of the application. This is why we must maintain, and especially update and tweak, even old tests as new releases come out, in order for them to remain relevant.
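To make that middle ground a bit more concrete, here is a minimal sketch in Python of how a team might rank automation candidates and flag stale checks. The module names, dates, and 90-day window are entirely hypothetical assumptions for illustration, not anything Adam prescribes.

```python
from datetime import date

# Hypothetical inventory: when each module last shipped a change,
# and when its automated checks were last updated.
modules = [
    {"name": "search",    "last_changed": date(2010, 7, 30), "checks_updated": date(2009, 11, 2)},
    {"name": "reporting", "last_changed": date(2010, 8, 1),  "checks_updated": date(2010, 8, 1)},
    {"name": "login",     "last_changed": date(2008, 3, 15), "checks_updated": date(2008, 3, 20)},
]

today = date(2010, 8, 4)

for m in modules:
    stale = m["checks_updated"] < m["last_changed"]              # pesticide-paradox risk
    recently_released = (today - m["last_changed"]).days <= 90   # arbitrary 90-day window
    if stale:
        print(f"{m['name']}: checks predate the latest code change -- review and update them first")
    elif recently_released:
        print(f"{m['name']}: recently released and likely to change again -- good automation candidate")
    else:
        print(f"{m['name']}: old and quiet -- lower priority for new automation")
```

Nothing fancy, but it captures the two signals I care about here: a recent change suggests a module worth automating, and checks older than the code they cover deserve suspicion before they are trusted.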
The third truth, that you need not test using the same language the code is written in, seems rather obvious to me, but then I came from a development background before becoming a full-time tester. It stands to reason that if multiple languages can accomplish the same tasks, the tester does not have to be fluent in the product's own coding style, and in some ways this can even help enforce a separation between development and testing concerns.
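As a small illustration, here is a sketch of a check written in Python against a web application that could just as easily be built in C#/.NET, Java, or anything else; the base URL, endpoint, and response shape are hypothetical assumptions, not a real API.

```python
import json
import urllib.request

# Hypothetical system under test; the language it is written in is irrelevant to the check.
BASE_URL = "http://test-server.example.com"

def search(term):
    """Call the (assumed) search API and return the parsed JSON response."""
    with urllib.request.urlopen(f"{BASE_URL}/api/search?q={term}") as resp:
        assert resp.status == 200, f"unexpected HTTP status {resp.status}"
        return json.load(resp)

def test_search_returns_results():
    results = search("widget")
    assert isinstance(results, list), "expected the endpoint to return a JSON list"
    assert len(results) > 0, "expected at least one result for a common term"
```

The check only cares about behavior at the boundary, which is exactly why the automation language and the development language can differ.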
The fourth truth, like the third, also seems obvious to me: there is no one-size-fits-all tool. I remember when I was a young Boy Scout that another scout showed me his Swiss Army knife. That thing had twenty-five or more gadgets and was so wide I couldn't hold it in my hand to cut. Contrast that with the two pocket knives I used as a boy: the basic five-gadget one, complete with can opener, awl, bottle opener, large and small blades, and a corkscrew, and a simple carbon steel knife with three blades of differing lengths. I got more use out of those two knives, and they provided all the basic functions I needed from a knife at that time. Today I carry a set of folding pliers, one large and one small, that also have screwdrivers, scissors, and a blade, but I still find myself using that plain knife blade more than anything. So it doesn't matter if a tool has more functions than its competitors if it's difficult or cumbersome to use, or if it doesn't cooperate with the tools other developers are working with. (I remember using the Ankh extension for Visual Studio several years ago and having to uninstall it because my install would crash unexpectedly while it was in use.) The same is true for testing tools.
Truth five is, in my opinion, the hallmark of good testing, especially for those who consider themselves part of the context-driven school. No test exists in a vacuum, so the environment, the parties who will use the application, and the risks involved should all be considered when testing approaches are mapped out.
The last truth, "Truth Six: Your automation environment is not production," is the only one I really take some issue with. Sometimes it is easier, and better, to understand a piece of software, especially one you've only recently been brought into, if you can see the actual data or a good facsimile of it. I agree that it does not necessarily make sense to lock down a local networked test instance behind secure HTTPS, but I am not ready to say the application should never be tested on an instance configured like production. If your client or your process rules require your test instance to be exactly as it will be in production, I can see why a team may have no choice but to do things that way. However, my takeaway from truth six is that doing so should always be approached with caution, keeping in mind the importance of keeping the application as testable as possible.
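One low-ceremony way to keep that separation explicit is to drive the automation from configuration instead of hard-coding any address. A minimal sketch, with made-up environment names, URLs, and an assumed SUT_ENV variable:

```python
import os

# Hypothetical environments; the checks never assume they are pointed at
# production unless someone explicitly says so.
ENVIRONMENTS = {
    "local": "http://localhost:8080",
    "test":  "http://test-server.example.com",
    "stage": "https://stage.example.com",
}

def target_base_url():
    """Pick the system under test from an environment variable, defaulting to 'test'."""
    env = os.environ.get("SUT_ENV", "test")
    if env not in ENVIRONMENTS:
        raise ValueError(f"unknown environment '{env}', expected one of {sorted(ENVIRONMENTS)}")
    return ENVIRONMENTS[env]

# Example: run the same suite against staging with
#   SUT_ENV=stage python -m pytest
```

The same checks can then run against a test instance that resembles production as closely as the client requires, without the suite ever quietly depending on production itself.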
To conclude, Adam Goucher's Six Shocking Automation Truths are concepts that all automation testers, and the stakeholders planning to leverage automation in their projects, should consider before they have the testers hunkered down in their makeshift bomb-shelter cubes putting the software through its paces. I think remembering these things will save many headaches for both testers and the consumers of their testing efforts.
Labels: Automation