Friday, December 30, 2011
Reflections on 2011, a year of trial, growth, and questions
It has been a while since I've had time to pick up my blogger's pen. October is traditionally a hellish month for me: even when work isn't trying, the Cub Scouts keep me busy every weekend but one that I can recall, and on Sundays our son plays soccer. This year, though, things quickly went off the rails. The whole family came down sick over the span of a month, and for one week we were all sick at the same time. Yikes! Praise God, we are better now, but that isn't the only change.
My duties on my project have shifted back to more coder-oriented tasks and away from testing. While I enjoy both programming and testing, I'll admit I miss the testing aspects of what I was doing before. It is funny, in a way: when I was first approached about an 'automation testing' position in 2009, I was worried about being pigeonholed as a tester and excluded from some elite club of programmers. In hindsight, that was a naive thing to worry about. Testing brought back my love of learning in a way I had not felt since college. It returned to me a part of who I had always been but had kept silent in order to make ends meet. I learned a lot as I ran that course, and I wouldn't trade the decision for the world.
My current role on the project has me pondering, though. I've heard it debated on Twitter whether you can be both a programmer and a tester. I know I can do either, but at some point don't you need to decide which to specialize in? The reality is that there are only so many hours in the day for study and growth, and every new learning investment carries the opportunity cost of not learning something else. This is a reality I have come face to face with in the last two months. I still have the knowledge I gained as a tester, but it has been hard to keep up my learning where testing is concerned, especially when my current responsibilities require me to act in a more code-centric role.
This has left me feeling a bit lost internally, because I know I can succeed at anything I choose to focus my efforts upon; it wouldn't matter if it were testing and programming or some other set of tasks from which I must choose. I have the drive to do what is necessary to succeed. Still, I find myself at this crossroads because I enjoy doing things that provide value to the teams I work with, no matter how small or great the achievement may be. Up until now it hasn't mattered whether I was a tester, a programmer, or performing some other service to the project team. As long as I added value, I was happy, content, and fulfilled inside. Yet I find myself feeling as though I am stuck at a fork in the road.
I feel as if I have paused at a great fork where two rivers meet. One is a possible focused career in testing; the other, a continued focus on programming, its methodology, and my potential as a generalist programmer. Either fork in the river looks potentially enjoyable from a learning standpoint, with its opportunities to pause to fish, relax, or just skip a rock to the other side. Like most rivers, though, I realize that I can only paddle up one stream at a time. The left fork might be easier and the right more fulfilling, or the converse could be true.
For nine years, I have worked professionally to develop, test, and support various software efforts. I have learned something from every experience I have been fortunate to have, and I wouldn't trade those experiences away, as they define a bit of who I am personally and professionally. As I enter my tenth year of service in software development, I find myself looking back over the peaks and valleys behind me, and ahead up the forks in the river, though seeing past the first bend of either is impossible. So I am presently anchored at this fork, pausing to consider and reflect upon my dreams for the next ten years. Where do I want to be? What roads will I need to travel to get there? These are questions I have no answer for currently. Given that the new year is around the corner, I can see myself at least initially focusing on what exactly it is that I most want to do, and on the realities of that choice, which may require not just me but my whole family to adapt.
It may take some time for me to come to some answers. The pot has clearly grown foggy, and it is hard to see how its contents will turn out, but I want to consider things closely, set a plan, and then rush after it. Perhaps it is the nature of how I 'fell into' my current assignment that is at the heart of this muddled mind of mine. At least I know it is something I can do for now, while I sort through my feelings and make what could possibly be the biggest personal and professional decision of my life thus far.
But enough about me. As you read this and other blogs, I imagine you may be reflecting on recent events, just as I have been. Where do you stand? What's your dream? How will you decide what to focus upon this year, and which of the areas that get left behind will you miss when we reach this point a year from now?
Labels:
First Questions,
Reflection,
What's your dream?
Saturday, October 1, 2011
Diary of a Soccer Coach: Week 4
You've been there before: a well-worn meeting room, your team gathered around a table, going over a list of action items related to the project you've been working on. Sometimes they are new requirements, perhaps refinements of existing functionality, or tweaks to the deployment procedures taking into account lessons learned from that first deployment of the software. The first practice after the first game is very much like this. Often there isn't enough time to discuss defensive or offensive tactics with the players before the first game.
It is sometimes a matter of perspective: a coach after the first game of the season, or a manager or lead reviewing the steps the team took on that first critical deployment, the first game, the first actions of substance as far as the customer might see. In hindsight, that first practice, that first meeting, is often a discussion of the aftermath. With our kindergartners, we discuss the issues I noticed during the first game. There are almost always a few areas to correct, and they aren't the same from one year's Game 1 to the next.
Typically a reminder of the rules is necessary: which goal we are attacking and which we defend, not to use hands except for throw-ins, and to stop playing when the whistle blows and quickly bring the ball to the referee when play stops for the ball going out of bounds. While errors can occur in any game, I try to point out the mistake and correct the behavior without singling out any particular team member. The point, after all, is not that someone erred, but that we play the game correctly to reduce stoppages of play.
With the instructional league this is sometimes difficult. Some players have an overarching desire for the ball, and they may indulge it by diving at the ball. This is a behavior we try to discourage. For one thing, falling to the ground is as bad as waiting flat-footed for the ball: they aren't upright and able to move with the ball, and if they are down where the ball is, there's a higher possibility of injury as other players go for the ball around them. Sometimes they fall down and stay on the ground, and even if the ball isn't near them, this isn't behavior we want to encourage. Sometimes it's a sign that the kid is tired, but they will rarely get into shape if they sit down when they are supposed to be in the game, and the ball could come their way at any time.
Ever been on a team where similar behavior happens? Where people get tunnel vision, seeing only the one task before them at the expense of what is going on around them on the field? How can we avoid this behavior? Worse, what if a teammate ends up blocked or stuck in some area and doesn't realize it? As professionals we can encourage and give a second set of eyes to these issues, but in the end it's really up to the individual to get back on track. We can encourage that teammate to come back on board, but honestly, if that person cannot take the initiative, there may be little we can do to fix an issue that is internal to them.
Just as with my soccer players, I take them back to basics, breaking down the fundamentals of the pass, of corner kicks and goal kicks, of throw-ins and kick-offs. Only so much time can be spent correcting the past, as new challenges and new games await. So before the second game we spend a little time talking about defense: reminding the kids that in our league there are no goalies, and therefore no one should go into the goal arc even to go after the ball, but more importantly teaching them what to do to protect their own goal.
The first lesson involves positioning: if a player is following an opponent bringing the ball up, we show them how to move their feet without crossing them, staying on the balls of their feet for better response time while jockeying back and forth. We show them how to encourage the ball handler to dribble in a particular direction, to funnel them away from a straight shot on goal, or toward where we hope additional teammates can cut off their lane of advance. We also try to show them that having everyone cover one person leaves open passing lanes for the opposition, leaves areas of the field uncovered, and opens up easy attacks on their team's goal.
In software development, testers play a part on defense: not keeping bugs from scoring on them, but guarding against threats to the value of the product. If the goal as a team is to release a product with value that's usable by the client, then anything that allows the product to be misused, leaves features less than fully implemented, or leaves them just plain uncovered is a threat that we as testers try to find. The one difference is that unlike in soccer, where we can often see the ball coming before it arrives near our zone of defense, in testing we can't look at the software and say a bug is coming from here or there. We have to visualize it in our minds.
How can we visualize where bugs might be? One way is to be involved early in the process: be in on the conversations with the customer or client, and help determine how the software may be used. We must also consider the negative, the view of what invalid data or improper operations might do to the software. If a consumed file is missing settings, does the software resort to a default and store that in the configuration for next time? If you start typing before the software has fully loaded, will it cause unexpected behavior? We can brainstorm a horde of test ideas to try to cover the entire application, but the reality is that, just as in soccer, one tester can only cover so much ground in eight hours of work time.
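To make that first question concrete, here is a minimal sketch of what a test for the missing-settings case might look like. Everything in it, from the load_config function to the setting names, is a hypothetical stand-in rather than any particular product's API.

```python
import json
import os
import tempfile

# Hypothetical defaults; the function and setting names are invented for
# this sketch and do not come from any real product.
DEFAULTS = {"retry_limit": 3, "timeout_seconds": 30}

def load_config(path):
    """Load settings from a JSON file, falling back to DEFAULTS for any
    keys the file is missing."""
    with open(path) as f:
        data = json.load(f)
    return {**DEFAULTS, **data}

def test_missing_setting_falls_back_to_default():
    # Write a config file that omits 'timeout_seconds' entirely.
    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
        json.dump({"retry_limit": 5}, f)
        path = f.name
    try:
        config = load_config(path)
        assert config["retry_limit"] == 5        # explicit value wins
        assert config["timeout_seconds"] == 30   # default fills the gap
    finally:
        os.remove(path)

test_missing_setting_falls_back_to_default()
```

A follow-up test in the same spirit could check whether the defaulted value actually gets written back to the configuration for next time, which is the other half of the question above.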
What about opposition tendencies? In soccer you may notice that certain players favor attacking the goal from the right or left side. Some players may prefer passing the ball forward, or looping back rather than continuing forward at a bad angle. As testers, we can evaluate the software for tendencies: are there certain areas that seem more bug-prone? Are there areas that are more critical, or more likely to be heavily used, and thus carry more risk? If there is a particular feature set that sets your software apart from another, that is an area I'd be sure to test.
Then a foul may be called. Maybe one player pushed or tripped another; maybe it was a handball. Maybe there's an area of your software that poses particular risk to the customer: they need that feature to work, quickly, to solve a time-critical problem. Whatever the case may be, we try as hard as we can to find every single bug there may be, but the reality is that we can't cover the whole of any non-trivial piece of software. The nature of software, and the myriad systems it may be installed upon, creates such a large volume of possibilities that we cannot test it all, so we use techniques to break the software down into areas that we can cover. We find ways to distill problems to a range of possible outcomes, and we try to think of new ways to test old functionality, because you just never know when a new feature may impact an old one.
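One such distillation technique is equivalence partitioning: rather than trying every input, pick a representative from each range of inputs expected to behave the same way, plus the boundary values between ranges. A minimal sketch, with a pay-raise rule invented purely for illustration:

```python
# Invented pay-raise rule used only to illustrate partitioning the input
# space: (<0 invalid), [0, 5), [5, 15), and [15, ...) behave differently.
def raise_percent(years_of_service):
    if years_of_service < 0:
        raise ValueError("years of service cannot be negative")
    if years_of_service < 5:
        return 1.0
    if years_of_service < 15:
        return 2.0
    return 3.0

# One representative per partition plus each boundary value.
cases = [(-1, "error"), (0, 1.0), (4, 1.0), (5, 2.0), (14, 2.0), (15, 3.0)]
for years, expected in cases:
    try:
        assert raise_percent(years) == expected
    except ValueError:
        assert expected == "error"
print("all partition cases passed")
```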
Labels:
Soccer,
step by step learning
Tuesday, September 27, 2011
If it's not random, how do we decipher the pattern?
Earlier this week, I wrote about the software fault in the Staunton, Virginia, teacher payroll system. I talked at length about the concept of 'random' and the importance of distinguishing something that is truly random from something better described as unexpected, unpredictable, or simply 'having no pattern discernible to me, as far as my senses go'. Using precise language when describing defects in software benefits everyone on the team, including the customer.
Unfortunately, the fault in Staunton, Virginia's payroll system wasn't found by a tester; it was discovered by someone researching the finances of the county's school system. We may not know exactly how this fault was first brought to the school district's attention, but that doesn't preclude speculating about how an investigation of a similar fault in a hypothetically similar system could be conducted.
So imagine a hypothetical payroll system for an organization with multiple locations, accounting for users of diverse pay grades and positions, much like the school system's. The more layers you add to the structure of the system, the greater its complexity. Now let's suppose the vendor of this software received word about an apparent bug: certain persons within the system were receiving an unexpected, and heretofore unnoticed, pay increase. If you, as a software tester for this vendor, received this notice, where would you start?
Reporting and analyzing a defect that a tester stumbled upon through his or her own investigation of the software is one thing. Trying to track down a flaw that someone else found and reported is quite another. If we follow the example of the system we discussed earlier, we can imagine the reports taking the form of output: pay stubs, ledger logs, bank statements, and so on. In short, we possess evidence that the problem occurred, but this evidence may be far enough removed from the system itself that we cannot reproduce the same conditions without a bit more digging.
So how can we reproduce these conditions and figure out where the real defect resides? More information is required, and like a software Sherlock Holmes we must examine the evidence and piece together the story of what happened. In the case of the payroll system, it is likely important to know how many individuals were impacted. Might a search for more information about the affected individuals reveal each user to be part of particular entities or organizations within the system? Did they work at particular locations, or have their data maintained at a particular data center? An exhaustive analysis of whatever data can be culled from the system could help establish a definitive relationship between the affected users.
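As a sketch of what that culling might look like, here is a toy pass over the affected users' records that reports any field where a single value covers every record. The records and field names are invented for illustration, not drawn from any real payroll system.

```python
from collections import Counter

# Illustrative records for the affected employees; every field and value
# here is made up for the sketch.
affected = [
    {"id": 101, "location": "Dixon", "pay_grade": "T2", "data_center": "East"},
    {"id": 102, "location": "Dixon", "pay_grade": "T3", "data_center": "East"},
    {"id": 103, "location": "Dixon", "pay_grade": "T2", "data_center": "West"},
]

def common_attributes(records):
    """Return each field whose single most common value covers every
    record; such a field is a candidate link between affected users."""
    fields = {}
    for record in records:
        for field, value in record.items():
            if field == "id":
                continue  # unique identifiers can't link records
            fields.setdefault(field, Counter())[value] += 1
    return {
        field: counts.most_common(1)[0]
        for field, counts in fields.items()
        if counts.most_common(1)[0][1] == len(records)
    }

print(common_attributes(affected))  # -> {'location': ('Dixon', 3)}
```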
From the headlines, it sounds like the school district performed an analysis just like this. The result seemed to have something to do with individuals who went to a particular school. Now, the age of the defect in the system may not be clear. If a lot of time has gone by, the connection may be more subtle and won't track to any particular organization, or won't be so obvious; in this case, however, we strike pay dirt. One piece of the puzzle is in place.
Given that all those results might track to a particular organization within the software, we arrive at our first hunch. Were all the people assigned to this organization affected by the same bug? This might be where we encounter the first bump in the investigation. Maybe they aren't all affected. That might lead us to believe our initial hunch was wrong, but it could be that there's a reason some turned out to be the exception.
It's at this point in the defect analysis that a history of debugging similar enterprise applications can prove beneficial. From reviewing some of the articles about the defect, a number of ideas come to mind, all based on similar behavior I've encountered in other projects I have worked on. If these employees all worked at a facility that was shuttered, what happens to their accounts when the facility shuts down? Are they transferred to a new facility? Are they suspended outright? Are they removed from the system?
I recall a customer relationship management system where we encountered a bug when a user account was removed from the system: all the records linked to it would cascade-delete, or disappear and never show up when searched. Could a data integrity issue involving the data for these closed locations be responsible for this behavior?
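That cascade behavior is easy to demonstrate with a toy schema; the tables below are illustrative, not the actual CRM's.

```python
import sqlite3

# A toy schema showing how ON DELETE CASCADE silently removes linked
# records when a user account is deleted.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires opting in
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE records (
        id INTEGER PRIMARY KEY,
        user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
        note TEXT
    )
""")
conn.execute("INSERT INTO users VALUES (1, 'closed-location account')")
conn.execute("INSERT INTO records VALUES (10, 1, 'history we still need')")

conn.execute("DELETE FROM users WHERE id = 1")  # account removed...
remaining = conn.execute("SELECT COUNT(*) FROM records").fetchone()[0]
print(remaining)  # 0: every linked record vanished with it
```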
Another possibility that occurs to me: a system that freezes pay for all employees may apply the freeze group by group. Might a group these employees once belonged to have been used to freeze their pay for some period? And might no longer belonging to any group, because the original group was inactivated, have caused the freeze to miss these accounts?
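A minimal sketch of that hypothesis, with all group and employee names invented: if the freeze routine walks only active groups, members of an inactivated group never have the freeze applied.

```python
# Hypothesized flaw, all names invented: the freeze walks groups, but
# skips inactive ones, so employees whose only group was inactivated
# (say, a closed school) never have the freeze applied to their pay.
groups = {
    "Ware Elementary":  {"active": True,  "members": ["alice", "bob"]},
    "Dixon Elementary": {"active": False, "members": ["carol", "dave"]},
}

def apply_pay_freeze(groups):
    frozen = set()
    for name, group in groups.items():
        if not group["active"]:
            continue  # the bug: members of inactive groups are missed
        frozen.update(group["members"])
    return frozen

print(sorted(apply_pay_freeze(groups)))  # ['alice', 'bob']; carol and dave slip through
```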
It may be difficult to see the cause from just the few reports you receive from the user, but a simple, logical, step-by-step examination of the system can help reveal how the issue happened. And if it was a case of the system being used in a manner the software vendor never planned for, it may indicate a fault in the business rules, or a lack of training for the users of the system. Whatever the case, the team is now on its way to finding where this issue occurred. Where would you test next?
Labels:
Defect Analysis
Wednesday, September 21, 2011
Pay Freeze, slightly melted, a random bug? Maybe.
Every now and then I read about a problem in a software system that makes the news. I look at the article, read what is described as the problem, and often wonder how this supposed flaw got into the system. In my experience it can be easy to fault the software for an error. There have certainly been enough cases of odd failures for the general public to believe them, but is it really the software?
This week I heard about the story from Staunton, Virginia. Apparently the school board had frozen pay for all of its employees for some period of time, and as the article stated, the glitch went uncaught by a number of employees who spot-checked it, right up until a news station requested salary records under the Freedom of Information Act. That is when the discrepancy was apparently noticed. Now, this glitch looks like something of a scandal. The political black eye alone could be enough to make anyone nervous about the 'quality' of the application in question.
What concerns me, though, is that this glitch is being described as completely random. First, do we really understand what it means for something to be truly random? According to Dictionary.com, random has four customary definitions. The first means 'proceeding, made, or occurring without definite aim, reason, or pattern'. The second is its use in statistics, specifically the concept of a process of selection whereby each item of a set has an equal likelihood of being selected. The third applies to physical trades, where a part, parcel, piece of land, or item may be non-uniformly shaped. The last is an informal use implying that an occurrence was completely unexpected or unpredictable.
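To illustrate the statistical sense, a quick sketch: selecting with equal likelihood yields roughly uniform counts over many draws.

```python
import random
from collections import Counter

# Statistical randomness: each item has an equal chance of selection,
# so over many draws the counts come out roughly uniform.
items = ["a", "b", "c", "d"]
draws = Counter(random.choice(items) for _ in range(100_000))
print(draws)  # each of the four items appears close to 25,000 times
```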
Let's consider the story against those definitions. The first implies that there is no rhyme or reason, no discernible pattern, to something random. Is that the case here? Reading further, I notice the following:
"the pay increase malfunction was random and included three teachers at Bessie Weller Elementary School, four at McSwain Elementary, four at Ware Elementary and a speech teacher and a secondary special education teacher."
Several of these teachers had one thing in common: they attended one of three elementary schools nearby. Wait, does that mean what I think it means? Could this be the beginning of an actual pattern emerging, enough to discount the perceived randomness? It could be, but as testers in this situation, our job is to determine the nature of the fault, not just give our 'best guesses'. We know a fault happened; therefore we must find a way to duplicate it. If we continued this analysis, we'd likely have a couple of test ideas to start with: we'd look at the data for all of the affected persons and see just what happened. Is the overpayment of salaries the problem itself, or a side effect of some other hidden flaw that only became visible through some quality of the instance we are examining? Fortunately, I did a bit more research and found another article on this on MSNBC's site. I will note that MSNBC's article is dated the sixteenth of September, and the earlier article the second of September; as I read, I find another nugget that seems to confirm my suspicion:
"All the affected teachers had previously worked at Dixon Elementary School and were reassigned to other schools after Dixon closed two years ago."
So it appears this bug affected teachers who had all been assigned to a school that closed two years ago (no doubt around the time the glitch actually began). Would you call this random? No: I see a pattern, so it doesn't hold under the first definition. The second definition doesn't hold up to the story at this point either; given so large a sample, would you really expect just a handful of salaries to be wrong? I don't buy that. The third definition doesn't apply in this context, which leaves us with the remaining informal definition: simply that it was odd or unpredictable.
That fact I do not doubt: no one predicted this would happen. Now, I'm not writing this to criticize the vendor or the county in question. That's not the point of this article. Instead, my hope is to make you think. As a tester, developer, user, or consumer of computing appliances, how often do we encounter behavior that surprises us? How often are we not only surprised but feel the event is unpredictable, with no reason it should be happening?
I imagine this happens more than we might like to admit. How many times do we sit at our computers doing something normal? We're checking our email in our client of choice; we have had no problems with our service and expect a 'no messages found' if nothing is waiting for us. We hit the send/receive button and wait, gleefully hoping to find (or not find) email. Then we get a message that the client was unable to connect to the server. That catches us by surprise; maybe we think it's an aberration, so we click the button again.
What does that second click do? It lets us check whether it was a hiccup, a momentary failure, or perhaps a sign of a longer-term issue. I've had this happen from time to time on web pages I visit frequently. A forum for a football team may load very fast during the week, but on game day, as people check up on their team, it slows to a crawl, and a dependency like a style sheet or images fails to download due to the sudden hit to the bandwidth serving the multitude of simultaneous requests. It might even take minutes before you get that white page with some structure and no formatting. Do we immediately think, wow, that's random, this forum is really buggy? I know from experience that this isn't a fault of the software itself, at least as far as I can tell, but rather a function of high load on a system that may not be able to keep up with a sudden increase in demand.
As testers, simply finding and reporting bugs is wholly insufficient to communicate to the developer the nature and scope of the fault we've encountered. In the case of the forum software, a subsequent refresh might fix the page, and it may load fine for several hours thereafter, with the issue impossible to reproduce. Whatever the issue is, we must dig and see if we can prune down the steps that we followed. We can try to see if the bug happens if we hit another location, try a different path through the software, or perhaps try a different role or persona. The point is that it is our job as testers to imagine how this bug could have occurred. Where would your tester instincts tell you to go to find and prove this error so it could be fixed? Do you have the answer?
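As a sketch of that pruning, here is a toy sweep over location, path, and role. The check_page function is a stand-in for whatever probe you can actually run against the system, and every name in it is invented.

```python
from itertools import product

# Stand-in probe: pretend the bug only bites guests who arrive via search.
# In real life this would be a request against the actual system.
def check_page(location, path, role):
    return not (role == "guest" and path == "via-search")

dimensions = {
    "location": ["mirror-a", "mirror-b"],
    "path": ["direct-link", "via-search"],
    "role": ["guest", "member"],
}

# Sweep every combination and report which ones reproduce the failure.
for combo in product(*dimensions.values()):
    settings = dict(zip(dimensions.keys(), combo))
    if not check_page(**settings):
        print("reproduced under:", settings)
```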
Hold that thought, because I am going to revisit this question later in the week. For now, just remember that just because we can't see the pattern behind a bug doesn't mean there isn't one, and that as testers in particular we should use language carefully, so as not to mislead the public, our developers, our clients, or our managers.
Labels:
Defect Analysis,
Random
Tuesday, September 20, 2011
Diary of a Soccer Coach: Week 3 and First Game!
I'm a bit behind on blogging due to last week's duties, but I'll catch up by covering the third week of practice alongside the first game. In our league the third week of practice heralds two things: the last practice before our first game, and the arrival of our team rosters and uniforms. On this particular day, the 'head coach' of the league, who has been helping out with our kindergartners, had to distribute the uniforms to all the different divisions. This left me to do a lot of work with the kids on my own.
This worked out fine, and since I always try to keep the kids moving, it worked out great. The third practice is where we homed in on basic shooting skills. For most introductory soccer players, the more advanced steps are not easy to pass on. At this practice I focused on keeping their eyes on the ball and following through as they shot. And as with most practices, I got them moving and kept them moving as much as possible.
I started by having them stand next to the ball and shoot it from a standstill. After each player had tried this a few times, I had them try shooting by first running up on the ball and then kicking it into the goal. Afterwards, I made it more difficult by having the players dribble the ball and then kick it into the goal.
On many development projects I've seen a similar step-by-step build-up to completion. A feature might start out very simplistic, or it may just seem that way, so we take our first shot at it, just as my players did in their third practice. Sometimes we may not understand some of the nuances of a requirement. It may appear simple, just like striking that ball, but there are intricacies and unrevealed flavoring that need to be added for the code to really pull off what is intended. So as a team you work up to these features, adding a little more speed, a bit more control, and higher accuracy in the calculations.
I've found similar patterns in testing. The first time through, you may just be poking around in an exploration of the application under test. You may not have a full grasp of the features, how to activate or use them, or their intent, but you build a bit of confidence and then take another test run at the software. Then you might discover that this type of software is documented to have a particular susceptibility to one kind of fault, and begin tailoring your exploratory testing to hit those weaknesses.
The first game of a soccer season is always exciting. It's the first time the kids are in their new uniforms, and you never know how much they have absorbed from the limited practices you've had thus far. Each year is a little different. One year, a team may have a very good grasp of the game and create a lot of goals in that first game. Others might find it difficult to juggle defending against the approaching ball and redirecting it toward the goal they are attacking, or they may get a little winded because they aren't used to moving so much at one time.
The first year I coached, an older coach told me, "You'll see the most improvement between the second and third games." I wasn't sure how to take that, but later I realized what he meant. Suffice it to say, many kids don't really listen early in practice. Until they see how they can apply it in a game situation, they may not recognize the value of the advice you are giving them. I've seen this happen on development teams too. A tester might make a suggestion about how to improve a process or function within the application and be ignored, because it's simply not their job, or because the developer is too much 'in the zone' to stop and hear what is being said. There might even be, as is common in our first soccer game, a lot of stops and starts as you build toward a sustainable pace of development.
Bottom line, though: remember, it's just the first game. A lot can change over the course of a project. Change is inevitable on many projects, and how we handle and respond to it casts a strong light on our teams.
Labels:
Soccer,
step by step learning
Monday, September 12, 2011
Nature vs. Nurture: Do we train the tester out of our kids?
While working through a series of tests on our automation framework today, a thought came to mind. Do we train our kids to lose the very attributes that could make them great testers? Do we risk killing their curiosity, or train them to accept what they are told, because that's how our schools are run? Psychologists and scientists have argued nature versus nurture for a long time now, but I had never framed it this way before.
We have two young children, and I've had the privilege of watching my firstborn grow into a smart young boy. Now our almost-two-year-old daughter is starting to pass 'little milestones' hand over fist. I remarked to my wife today that she looked like she had grown three inches since breakfast. Then later this evening, while putting her to bed, my eyes did another visual inspection of her height, this time against something I knew was constant: the height of the bed rail on her crib.
Earlier this week, we had to take down the pack-and-play yard because she had discovered a way to climb out of it easily. Besides, we knew she had grown too big for it anyway. So we knew she was getting bigger, growing as all kids inevitably do. Then tonight was the kicker: I saw her, gymnast-style as if on uneven bars, nearly pull herself up and over her crib's bed rail. Then, switching to a new tactic, she climbed up one of the spindles of the crib, one foot braced on each side. She was climbing the way I've imagined or seen climbers on TV work an ascent up a mountain wall face, before slinging her leg over the rail, just as I reached out and caught her, in shock and awe at all I had seen.
I love my little girl. She's been a blessing since the day she was born and admitted to the Neonatal Intensive Care Unit (NICU), and she continues to amaze me. Born a couple of weeks early, you'd never know it to look at her now. She's a runner, a climber, a ball player, and a wrestler, not to mention that she sometimes likes to practice tackling her much bigger brother from behind. She's my little explorer, my future Venture Girl, and I wouldn't trade that for the world. Both of my children are special and highly intelligent, but lately, as I grow as a tester and as a parent to them both, my mind ponders how best to raise her.
The natural inclination of a parent is to protect, to keep their little ones safe from as many dangers as possible. Yet we as parents know the security we provide is not complete; as much as we may want to, we cannot protect them from every possible hurt or injury, just as testers realize that many of the security features we test provide only a facade of protection in this day and age.
The process of rearing children has me pondering this fact. Parents set up rules for their kids' behavior and activities in and around the home. Some of these rules may seem unnecessary or excessive at times, but they provide a structure, a framework around which children can begin their learning experience. Later, if you follow the public model, they go off to school or other extracurricular activities that provide additional rules and layers of precepts that try to mold the child into a particular form.
Sometimes I wonder, just what are we trying to achieve? Are we stifling creativity by requiring them to paint within the lines? Are we killing their spirit by requiring them to sit like mindless zombie automatons? As I've watched my children in recent days, I'm amazed at how many times they look afresh at some toy or item in our house and find another unique way to interact or play with it. Many times this is fine and worth encouraging; other times what they are doing is unsafe. Our urge is to jump in, rescue, and shield them from the dangerous situation, but are we doing more harm than good?
I wonder. With our younger child more than our oldest, I hope to hone and focus that curiosity. I'm very cautious about how I deal with her when, in the course of exploring, she is doing something that could bring her harm. It's like walking a tightrope, though. I want to encourage the curiosity, embrace the questions and the goofy ideas that may come. I want to give her the freedom to learn without constraining her to the factory school of thought.
We opted to keep our son home for his first school year. Everything we'd read about child psychology suggested that boys may do better if not thrown into the structured environment of elementary school. Three years later we are still homeschooling. What started as an experiment to give him room to grow has paid high dividends. He has grown as he has learned, and it amazes me how much he can learn in a short span of time. Seeing his progress makes me sad at times, because I know we may cover as much as, if not more than, what a single day of school might cover, and yet he absorbs ever more. Heck, this kid, in second grade, was upset that we hadn't taught him multiplication tables yet.
Our youngest isn't yet of schooling age, but I already see her reaching out and testing the boundaries her environment provides. Some of them are set by us, her parents, and some are inherent in the nature and design of the furniture and artifacts in our home. Yet I'm more cognizant now of the decisions we make to correct her or alert her to dangers in her environment. If you're reading this entry and pondering the same things, I'm curious about your perspective. Do our rules and school structure end up breeding out the curiosity, the intellectual spark, that may draw a child to become a creator or investigator of the world around them?
If, like me, you have thought about this and concluded that these factors do affect the development of a potential tester's mind, do they affect it for the worse or the better? How can we improve them to harness that curiosity and prepare people to test the applications and services of tomorrow? I don't have the answers to these questions, but I may continue to ponder them for some time.
We have two young children, and I've had the privilege of watching my first born grow into a smart young boy. Now our almost two year old daughter is starting to pass 'little milestones' hand over fist. I remarked to my wife today that she looked like she had grown three inches since breakfast. Then later this evening while putting her to bed, my eyes did another visual inspection of her height; this time against something I knew was constant: The height of the bed rail for her baby crib.
Earlier this week, we had to take down the pack and play yard because she had discovered a way to easily climb out of it. Plus we knew she had grown to a size that had become too big for it anyways. So we knew she was getting bigger, growing as all kids inevitably do. Then tonight was the kicker, I saw her gymnast style on an uneven parallel bars nearly pick herself up to climb over her crib's bed rail. Then switching to a new tactic, she climbed up one of the spindles of the crib, one side of one foot on one side, and the other on the opposite. She was climbing as I've imagined or seen climbers on TV working an ascent on many a wall faces of a mountain, before slinging her leg over the rail, just as I reached out and caught her in shock and awe of all I have seen.
I love my little girl. She has been a blessing since the day she was born and admitted to the neonatal intensive care unit (NICU), and she continues to amaze me. Born a couple of weeks early, you'd never know it to look at her now. She's a runner, a climber, a ball player, and a wrestler, not to mention she sometimes likes to practice tackling her much bigger brother from behind. She's my little explorer, my future Venture Girl, and I wouldn't trade that for the world. Both of my children are special and highly intelligent, but lately, as I grow as a tester and as a parent to her and to him, my mind ponders how best to raise her.
The natural inclination of a parent is to want to protect their little ones and keep them safe from as many dangers as possible. Yet we as parents know the security we provide is not complete; as much as we may want to, we cannot protect them from every possible hurt or injury, just as testers realize that many of the security features we test provide only a facade of protection in this day and age.
The process of rearing children has me pondering this fact. Parents set up rules for their kids' behavior and activities in and around the home. Some of these rules may seem unnecessary or excessive at times, but they provide a structure, a framework around which the kids can begin their learning experience. Then later, if you follow the public model, they go off to school or other extracurricular activities that provide additional rules and layers of precepts that try to mold the child into a particular form.
Sometimes I wonder, just what are we trying to achieve? Are we stifling creativity by requiring them to paint within the lines? Are we killing their spirit by requiring them to sit like mindless zombie automatons? As I've watched my children in recent days, I'm amazed at how many times they take a fresh look at some toy or item in our house and find another unique way to interact or play with it. Many times this is fine and worth encouraging; other times what they are doing is unsafe. Our urge is to jump in, rescue them, and shield them from the dangerous situation, but are we doing more harm than good?
I wonder. With our younger child more than our oldest, I hope to hone and focus that curiosity. I'm very cautious about how I deal with her when, in the course of exploring, she is doing something that could bring her harm. It's like walking a tightrope, though. I want to encourage the curiosity, embrace the questions and the goofy ideas that may come. I want to give her the freedom to learn without constraining her to the factory school of thought.
We opted to keep our son home for his first school year. Everything we'd read about child psychology suggested that boys may do better if not thrown straight into the structured environment of elementary school. Three years later, we are still homeschooling. What started as an experiment to give him room to grow has paid high dividends. He has grown as he has learned, and it amazes me how much he can learn in a short span of time. Seeing his progress makes me sad at times, because I know we may cover as much as, if not more than, a single day of school might, and yet he absorbs ever more. Heck, this kid was upset in second grade that we hadn't taught him his multiplication tables yet.
Our youngest isn't yet of schooling age, but I already see her reaching out and testing the boundaries her environment provides. Some of them are set by us, her parents, and some come from the nature and design of the furniture and other artifacts in our home. Yet I'm more cognizant now of the decisions we make to correct her, or alert her to dangers in her environment. If you're reading this entry and pondering the same things, I'm curious about your perspective. Do our rules and school structures end up breeding out the curiosity, the intellectual spark, that may draw a child to become a creator or investigator of the world around them?
If, like me, you have thought about this and concluded that these factors do affect the development of the mind of a potential tester, do they affect it for the worse, or the better? How can we improve them to harness that curiosity and prepare people to test the applications and services of tomorrow? I don't have the answers to these questions, but I may continue to ponder them for some time.
Labels:
instruction,
kids,
Knowledge and Learning,
nature vs nurture,
upbringing
Friday, September 9, 2011
Off on the trail of testing, but wait, I forgot this one thing
How many times in life do we surrender to the habitual nature of our human psyche? Do we capitulate to what seems an established routine, a repetitive task we've got down to an art in our minds, flip the autopilot switch, and cruise on through each step of the process without pausing between steps to evaluate where we are going?
News flash: tester or not, we all do this! In fact, I did it twice today without even realizing it. Oh, I wasn't 'testing' software at the time, but I allowed an unexamined conviction that I understood the day's early chores to lull me into a state of numbness, and in doing so I introduced my own performance speed bump that wasted hours of my time.
What started out today as an early morning jaunt to accomplish two simple chores turned into an exercise reminding me why the assumption and autopilot traps are so dangerous. What began simply enough, just two chores and then I'd be home for the rest of the day, became an afternoon of backtracking: acknowledging a flaw in my own understanding, correcting it, and then executing essentially the same process in a slightly varied way. It started with a trip to the courthouse.
I was rather pleased with myself, because I thought I could complete two tasks in one visit. I might even have boasted internally that I was a genius to take care of both at the same time. The first task was to renew my vehicle registration at the Sheriff's office: bring my license plate current and get the sticker indicating to any law enforcement officer who might check that, yes, I had paid my taxes and fees and was not violating our state's registration laws.
The second task is one that maybe no one else will have experienced, but it required a trip to the County Clerk's office. You see, as a member of our community at large, I stepped up back in 2006 to serve as a poll worker for a number of different election cycles within our county. I do this as a service because, to be honest, the rate they pay for thirteen hours of open-poll service, plus almost two additional hours of setup and teardown, is not a pay rate I'd accept for any of the professional work I do. However, as a concerned member of my community, an Eagle Scout, and a person of faith, I value the integrity of elections and believe that serving is an important part of ensuring our elections are fair. In order to participate after being selected, I have to fill out, sign, and return a form to the clerk's office. It was this task I wanted to complete today; I had delayed sending the form in earlier due to uncertainty with my current job situation, not wanting to commit if, in good conscience, I felt I would not be able to serve.
Those were the first two tasks of the day. I stepped out the door with the letter in hand, and my registration as well, and drove to the courthouse. I was pleased because I found an open parking meter within an easy walk, exited my car, added some time to the meter, and off I went, and that's when the first error dawned on me. I had my insurance statement, the registration, and the signed form for the Clerk, but a nagging thing in the back of my mind came into focus: 'Does the Sheriff's office take check cards for payment?' Why I didn't ask this question before I left for the courthouse, I don't know, but it proved to be the pivotal question, because in fact they did not. I asked the Sheriff's clerks whether they accepted a check card as a form of payment, already suspecting the answer was no, and received confirmation: they accepted only cash, check, or possibly a money order. I didn't have any of those options on my person, and truthfully I don't usually carry a checkbook with me unless I know I need it.
Now, after reflecting, I realize I could have walked over to the post office, paid the fee for a money order, and done it that way, but I'd have had no good record in my register to verify when it was paid. So on the first task of the day, I struck out. I went ahead down to the County Clerk's office, handed the form to one of the workers, asked if that was all they needed (it was), and was happy that I had at least one task done. I would have to return home, find our only checkbook, and come back to complete the first part of the plan.
Have you, as a tester, ever started working through a problem, jumped to some assumptions, maybe ideas that feel good in your own mind's eye about how the thing should work in theory, and proceeded to massage your way through the interface? Ever stop at some point and realize, you know, I wonder what would happen if I had done that previous step differently, only to find that hitting the back button on your browser really isn't a good way to check this new test idea? It happens to the best of us. Sometimes a test that would make more sense to apply first isn't, and has to be run again at the start of another iteration through the process. That can seem frustrating, but it is part of the learning experience we go through as testers.
Well, this did not just happen to me once today; it happened twice. See, I was also looking for information about a repair on my car. I traveled to the mechanic's garage I have grown to trust and inquired about an estimate: how they might do the repair, how long it would take, and at what cost. Surprisingly, they told me they couldn't do this kind of repair. They had an idea of how it could be done, but there was something particular about my engine that required something they lacked, something that did not give them confidence they could complete the repair in a timely fashion.
What a bummer that was. However, I countered with a question. "Okay, if you can't perform the repair, as I understand it, is there another shop you might recommend to perform this fix? I know it's not a critical issue on my car, but I would like to get it fixed as soon as humanly possible." They gave me the name of another shop, and I then asked if they thought simply calling would be enough to get an estimate. They didn't really know, but suggested it might work, and it would save me some money on gas rather than driving out there unnecessarily. I liked that idea and returned home to eat lunch with my family.
After lunch, I began looking for the phone number of the shop, first in a few paper phone books, then on Google and Superpages, but could not find it. I found other shops, but not this one. So later this afternoon, I hopped in my car and drove out to where the shop was (having received directions earlier). In hindsight, I wish I had pressed for a contact number before I left the first shop, but I honestly didn't believe finding it would be that much of a hassle. That was my mistake, and yet another lesson learned.
Ever start testing a piece of software and then at some point just stop, because a question comes to the forefront that makes you think, man, I wish I had asked that before I started? Sure, it happens, maybe more often than we'd like. We are creatures that learn and grow, and as testers we are many times going to develop ruts and habits. Try to break those habits from time to time; maybe you'll discover a new way to flex the software, to bend and contort it to find a brand new class of defects.
Ultimately, we must try our best to avoid jumping to assumptions. Never assume you already know the answer if you've never even asked the question. Never assume the developer obviously must have done something a particular way if you've never had a conversation about it, and never assume you've brought and used the right tool for a type of test. Now, this does not mean we can never make assumptions; if we do, we need to recognize that our testing rests on them. Maybe it's the platform the application runs on, or a particular style of device; that may be an educated enough guess to let us proceed, but we should remember that these are fallible assumptions and present them as part of our test story when the time comes.
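To put that in concrete terms, here is a minimal sketch of what declaring such an assumption can look like in a pytest suite. This is purely my own illustration: the module and function names are made up, and only the skip-marker mechanics come from pytest itself. The point is that the assumption gets recorded and reported, instead of lurking silently behind a pass or a fail.

```python
# A minimal sketch (illustration only) of making a platform assumption
# explicit in a test, so it is recorded rather than silently relied upon.
import sys

import pytest

# Assumption: the behavior under test is only supported on Linux.
# Declaring it here means the test is *reported* as skipped on other
# platforms, instead of passing or failing for the wrong reason.
requires_linux = pytest.mark.skipif(
    sys.platform != "linux",
    reason="assumes a Linux host; untested elsewhere",
)

@requires_linux
def test_log_rotation():
    # 'rotate_logs' is a hypothetical stand-in for whatever
    # platform-dependent behavior is actually under test.
    from myapp.logs import rotate_logs

    assert rotate_logs("/var/log/myapp")
```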
However, today's experience reminds me of something from earlier in my life. It's funny in a way: since college, my first rule of life has always been to avoid making assumptions. This was even before I had a clue what it meant to test anything. Western's Rule #1: "Assume nothing, for when you assume, you are usually wrong!" At least that's how I wrote it as a freshman in college. Today I'd transform that rule to read more like this: "Make no assumption absent evidence, for assumptions are often based on illusions, and when an illusion is removed or proven false, it can result in great embarrassment." Honestly, it isn't that our assumptions are wrong, or that they might be based on inaccurate intelligence about our projects; the risks are the false confidence they can breed and the blindness they can bestow, which limit our ability to test accurately and effectively. It's the shock when an illusion the team held as true is stripped away that can cloud judgment about the value in the product.
To conclude, monitor yourself as you test from day to day. Check to see if you feel yourself itching to turn on the autopilot. Recognize it, take a step back, and find another way forward. Use the Pomodoro technique (thank you, Markus Gärtner), or some other focusing/de-focusing heuristic. Be skeptical of even your own best testing ideas, and always strive for one more thing to learn as you flex your testing muscles. Try testing with a partner (pair testing) so that you can keep each other from falling into a rut, or team up with a developer and show him what you are doing to test the software. Whatever you use to keep from falling into a zombie-like coma while testing, do it as often as necessary; you'll thank yourself later.
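For anyone unfamiliar with it, the mechanics of the Pomodoro technique are simple enough to sketch in a few lines of Python. This toy timer is my own illustration, not any particular tool; the 25/5 minute intervals are just the commonly cited defaults, so adjust to taste.

```python
# A bare-bones Pomodoro timer: alternate heads-down focus with short
# breaks to force the focus/de-focus rhythm described above.
import time

WORK_MINUTES = 25
BREAK_MINUTES = 5

def pomodoro(cycles=4):
    for i in range(1, cycles + 1):
        print(f"Pomodoro {i}: focus for {WORK_MINUTES} minutes.")
        time.sleep(WORK_MINUTES * 60)   # heads-down testing
        print(f"Pomodoro {i} done: step back for {BREAK_MINUTES} minutes.")
        time.sleep(BREAK_MINUTES * 60)  # de-focus, reflect, change tack

if __name__ == "__main__":
    pomodoro()
```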
Labels:
assumptions,
attention blindness,
autopilot,
Testing,
zombie testing
Thursday, September 8, 2011
Oh that Bug? Yeah it happens all the time, don't worry about it.
Tell me if you've heard this one before. A user calls into a help desk saying, "Hey, when I go to do X with my software, instead of doing X, something else totally unexpected happens." At some point a root cause analysis is done, and the team discovers what has happened. The user has done something to the software; perhaps they've configured some optional setting that isn't part of the normal settings, or maybe an incompatibility with another piece of hardware or software leaves it unable to perform the function.
Normally you would expect the team to find and smash this bug, right? Well, what if that wasn't what they wanted? Or what if it was behavior they wanted to leave as it was? Maybe it's a link to some documentation that moved on a website. It might be easy to fix, but shipping a patch might be more expensive than simply telling the user another way to get that data. This situation came to mind as I viewed today's Wizard of Id comic:
If you've been on any software team long enough, odds are you'll eventually come across a defect or bug that you see as a potential loss of value in the product. After discussion with the team, that bug may be marked as deferred, or left "as designed" by the developer, and go unhandled as it is slowly forgotten in the code base. There are times when cosmetic changes, a font size, a color, may not make much difference to the overall user experience, but what if this deferred bug turns out to be something more insidious? What if it is the bug that builds toward a buffer overflow vulnerability that could result in your system being compromised and hacked?
As testers, it's important that we maintain objectivity as we test. Sometimes the development team may not all see eye to eye on what is of value to change for the customer, but we must be ever cautious when a seemingly mundane bug is deferred. Deferred bugs may never get fixed; they simply linger in their unfixed state. Sometimes this is fine, something we accept as we strive to produce the most value for our clients, but we must always be careful that the thing we are putting off isn't something serious that could put our customers, our client data, or even our own companies at serious risk.
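One small, practical guard against that slow rot, sketched here purely as an illustration, is a periodic sweep that resurfaces deferred bugs for re-triage. The tracker fields below ('status', 'labels', 'deferred_on') are assumptions I invented for the example, not any real tool's schema.

```python
# A hypothetical sketch of keeping deferred bugs visible instead of
# letting them be forgotten in the backlog.
from datetime import date

STALE_AFTER_DAYS = 90
RISKY_LABELS = {"security", "data-loss", "memory"}

def stale_deferred(bugs, today=None):
    """Yield deferred bugs that are old or touch risky areas."""
    today = today or date.today()
    for bug in bugs:
        if bug["status"] != "deferred":
            continue
        age = (today - bug["deferred_on"]).days
        if age > STALE_AFTER_DAYS or RISKY_LABELS & set(bug["labels"]):
            yield bug

# Example: this one should be re-triaged, not forgotten.
bugs = [
    {"id": 42, "status": "deferred", "labels": ["memory"],
     "deferred_on": date(2011, 6, 1)},
]
for bug in stale_deferred(bugs):
    print(f"Re-triage bug #{bug['id']}: {bug['labels']}")
```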
Labels:
Deferred Bugs,
Deferred Surprises,
Risk
Wednesday, September 7, 2011
Diary of a Soccer Coach: Week 2 - Inter-team Communication, Noise, and Dealing with the Unexpected
Even before I woke up on the day of our first soccer practice, I found myself glued to the Weather Channel. After seeing the massive thunderstorms and rainfall affecting so many college football games the past weekend, and knowing that the remnants of Tropical Storm Lee were predicted to stall and take time to clear out, I was growing concerned. Some of the weather maps predicted as much as five inches or more of rain locally. Flash flood watches had been issued by the National Weather Service, and there appeared to be a high probability of rain in the forecast for our practice.
Now normally, a little rainfall would not have raised any concern about whether to hold our soccer practice. Typically the only things that have ever affected that decision were thunderstorms or extreme, bitter wind chill, and in truth, only once in the three previous years I have coached have any of those conditions even been approached. So naturally I was a little concerned here. The place where we practice sits in a low-lying flood plain that has flooded in recent memory. Because of this, it was important to know what the weather conditions might be for the week's practice.
As it happened, by midday the rain had mostly moved on, and though it was still overcast, it was also modestly cooler, although the humidity remained high for our practice. In software teams, how often do we plan for contingencies like these that could disrupt development or delay deployment? Sometimes the unexpected happens. A freak ice storm could knock out power to your data center. A Nor'easter could barrel up the coast and cause localized flooding around the facility you were supposed to test against remotely. What if the shipment carrying a key part for your data center crashes, forcing you to wait an additional two weeks for a customized component to be fabricated?
These are natural impediments that could affect your development process. As a tester, a network issue could deprive you of the ability to test in your virtual laboratory. A mistaken deployment could leave a dirty configuration, unlike the clean environment you are expecting, and cause problems as you start encountering bugs and have to track them down. There are perhaps a hundred or more situations that could result in what I call a noise condition within the team. Some of them are internal, within the mind of the tester or team member; some are virtual, on the box or server where testing is to occur; and some are physical or natural noise that can interfere with your ability to test effectively.
Some of these impediments can be considered in your deployment process, and perhaps avoided, or at least have their effect on your testing minimized. However, even the most rigorously documented process is only as good as the people implementing it. As human beings we are prone to error, so you can never avoid all of these. A missed deployment to a test environment could indicate a hole or a missed script in the deployment process. Why did this script get missed? The manager may ask that question, but I find the same thing can happen on the soccer field as well.
For the second week of soccer, I like to home in on two key skills. The first is communication. Whether the players realize it or not, learning to talk to their teammates on the field makes a big difference in how they will play down the line. For soccer, the simplest form of conversation is the pass: a plant of the off foot toward the targeted teammate, then a follow-through with the passing foot, ankle locked, connecting with the center of the ball using the inside of the foot. If done right, the ball will travel straight, following the same line your plant foot was pointing. I demonstrate this once or twice to our players, whom I've saved the trouble of finding a teammate to pass to by pairing them up, and have them start with this basic pass.
So I watch the players as they work on their first few passes. Some kids pick this up very quickly; some quickly get frustrated. Younger, smaller kids may not be able to kick the ball as far or as hard, or might be more focused on kicking the ball than on the technique of the inside pass. I watch for moments like these and let the player try a few times before stepping in to correct them. Sometimes they figure it out by trial and error, but sometimes they keep doing it the same incorrect way, and I can see a bad habit starting to form.
At this point, as the coach, I step in. I remind the kids to plant their foot with the toe pointing toward their teammate, to lock the ankle with the toe slightly raised toward the shin, and to connect just above the midpoint of the ball with a straight swing of the foot. A couple more passes, and a little more encouragement, may be required: keep your eye on the ball (once they get the skill down this may not be as important, but early in the drill process it may help if the player watches the ball as they perform the task). After a few more tries, if one player or another is still having difficulty, I may step in and demonstrate again, showing what to do and then emphasizing the difference between how I performed the pass and how they are doing it.
One of the common early problems I notice is a player trying to kick the ball with the toe of their shoe. Kids seem to think they can get more power passing this way, but it really leads to unpredictable movement of the ball, especially for the younger, inexperienced player. This isn't something you want the kids doing early in their development. The toe is a very small area of the foot, and many shoes are 'V' or 'U' shaped, meaning that if you miss the exact center of the ball you may strike it to the right or left, and the ball will go out in the corresponding direction. I may even demonstrate how wrong this is so the kids can see the difference, but after doing so I get them passing correctly, back and forth, and float between pairs, repeating this process as need be.
As more of the players get the hang of this simple pass, I offer them the option to try the same style of pass with their normal off foot (typically the left), and then give them a demonstration of a more advanced pass: using the outside of the foot, just behind the joint of the littlest toe, and driving the foot to the side, you can actually pass sideways. All the while I continue stressing the basics: getting the teammate's attention, pointing the foot in the proper direction, and following through on the kick.
This may seem like a very repetitive and boring process, and for some of the older kids it might be. It doesn't take but a few minutes before I begin to see the first side effects of noise on the practice field. There are other things vying for the kids' attention: someone brought their dog, a butterfly flies onto the field and draws attention from the drill, or kids on another field are doing a slightly different drill that catches their eye. We have the same kinds of noise in our software teams as we communicate.
An HVAC unit could be louder than normal, or a team member may be mulling over some problem they've encountered while we're describing a test we just ran and the flaw we think it uncovered. Whatever the noise may be, it can impede our ability to communicate the point we are trying to make. So how can we avoid noise? Sometimes it may involve asking another team that is goofing off in the cube next to you to keep it down as their voices start to carry. Maybe it involves interrupting another conversation that has your teammate's attention when what you need to say is more vital. Sometimes we have to wait for the noise to pass, such as when a train goes by blaring its horn and drowning out almost everything else. Ensuring we can communicate our message is key in any context.
Now, these examples are fine if the noise is audible, but what if it is internal noise? This is where noticing nonverbal cues is important. If your teammate is listening but focused on reading something on a wall or their computer screen, it may indicate their attention is elsewhere; that could be internal noise. Another example is someone who has a habit of doing something with their hands. It could be something as simple as scratching the back of their hand, playing with a toy of some kind, or twirling a pen in their fingers. All of these are nonverbal cues that your teammate, try as they might, may not be committed to the conversation.
So how can we avoid these things? In soccer, when passing, I encourage my players to start the passing conversation by calling their teammate's name. Then, as the ability to pass becomes second nature, I instruct them to keep their eyes ahead of them, toward where they are passing the ball. The other player I instruct to keep looking back toward the ball as often as they can while moving around the field, so they are prepared to receive and complete that transmission of the ball across the grass to their feet. Ever wonder why eye contact is so often stressed in verbal communication? If our eyes are turned away from the teammate who is trying to communicate with us, our ears may be turned away too, reducing our ability to hear what they are saying. That is not to say we should stare a hole into the head of the teammate we are trying to communicate with, but we should make enough eye contact to show that we value what they have to say.
What do you do when your comrade's attention is wandering, or they are busy multitasking and can't seem to keep up with the conversation? Ever been in a meeting, in person or virtual, where someone is being told something and the speaker follows with what should be a typical yes-response question? "Does that make sense, John?" The initial reaction may be for the person to say yes, but if their attention had drifted, they might realize they didn't fully absorb the importance of what was being relayed to them, and that cue is John's chance to say, "No, I was having trouble following what you were saying; can you please repeat that?" During any conversation we can show our continued attention not just by eye contact but by other nonverbal and verbal cues. A nod of the head, a quiet "yeah" or "aha," can indicate we are following the thread of the conversation well.
There is one more nonverbal cue I look for when talking with a teammate: when their hands come up to their mouth. You've probably seen someone do this at some point. You mention something, and they begin to cover their mouth with one or more fingers, indicating subconsciously that they are trying to put together a question or response to what was said; those fingers show that a lot of thinking is going on. This is a telltale stop sign. If you see a teammate do this, it is highly likely that they have a different point of view, or something to contribute to the conversation. There are other mannerisms that can indicate this desire to contribute; someone looking like they are trying to reach out and give you a subtle stop sign is another.
There are so many things we may communicate through nonverbal cues. How often do we ignore these cues and keep on rambling through our point, wanting to reach its conclusion, without allowing our colleagues to collaborate and fully commit to the conversation? Regardless of your place on the team, be it tester, developer, manager, or team player, communication is critical. Without it, the noise may increase, and the ball we are trying to pass to our teammate may end up at the wrong feet, or be intercepted by the competition, and then we are backtracking, trying to recover and catch up to what we've lost.
So as you go back to your work spaces, consider these thoughts: Where in your environment does audible noise interfere with communication? What can you do to work around it? What can you do to react better to the nonverbal cues of your colleagues? How can you make sure they are able to contribute to the conversation, and thus collaborate towards a better end?
Labels:
Communication,
Noise,
Soccer
Tuesday, August 30, 2011
Diary of a Soccer Coach: Week 1 - Getting the Players Attention
As I enter my fourth season as a soccer coach, I can't help but reflect back on the previous three years. There have been a lot of kids I've had the opportunity to work with. For the prior three years I coached the 'instructional division,' as it was first described to me; in essence, I've had the honor of coaching kids ranging from pre-K through kindergarten age, and again this year I am stepping up to do the same.
In some ways it's hard to be a coach to the youngest in the league. Some of them are already quite athletic but lacking in coordination; some are squeamish about falling down or running into someone else. Some don't quite know what to expect from this, their first athletic endeavor. It isn't always easy on the coach either. Sometimes we barely get a season to get to know the kids before they begin to bloom and are ready to play up at the next level. Some only get the one year, then are moved up with the other kids their age. In essence, each season is like starting over fresh. There are always a few faces you remember from previous years, and you may recognize the growing boys and girls practicing on a field not far from your own team's, but your focus is now on the next group.
For companies like my current employer, where professionals might work on multiple projects, or bounce between roles and wear different hats within or between teams as needs become evident, it can seem quite a challenge for a team leader to start from scratch with a team. It can also be a challenge to get the new people involved, fully engaged, and invested in the project.
For the first session of the soccer season, as with any team, it is imperative to set the groundwork right. The first practice always starts with a quick introduction and a brief overview of the most basic rules of soccer. Then we quickly transition into a couple of warm-up exercises. They aren't typically long, but they are designed to get the players moving, loosened up, and in the mindset of being at practice. Warm-up exercises can be great for team building, or for helping transition a new team member into the esprit de corps. Having transitioned onto several projects in my career, I feel that many of these warm-up exercises did a lot to smooth my way into the team.
Of course, practice does not end with these warm-ups, nor should this be the end of the team-building process either. As Michael Larsen (@MKLTesthead) so eloquently put it in describing the stages of team development in his emerging topics talk on teams and the EDGE method employed by the Boy Scouts of America: the team is still forming here. It hasn't even begun to play soccer, nor has the new team or hire really progressed to a state of being fully in tune with the team's software development process.
So the next step of practice is the first of many drills. For soccer in the introductory division, the most basic drill typically involves dribbling around a pair or square of cones, emphasizing technique for controlling the motion and the tempo of the dribble. For software teams, tempo and pace are something teams struggle to achieve, and to hold once they have it. The same is true for soccer, and our instructional division is no different. The secret with these kids, and I'm betting many other youth organizations, is to keep the meeting or practice as active as possible.
So we run them through a drill for five, maybe ten minutes, and then may start another. This gets the players used to moving around and focusing on one type of task or another. However, at some point the kids need a break, so we give them a bit of time to run to the restroom and get a drink before having our mid-practice huddle. More on these huddles in a later entry, but suffice it to say, the kids almost without fail come back refreshed and ready to do more. Still, it is only the first practice, and it's important to temper our expectations with that in mind.
With software teams, the same is true. Each member needs down time to assimilate lessons learned, to rest from exertion of the mind, and, for testers, to defocus or refocus on whatever the next task may be. Without this crucial down time, one task begins to run right into the next so quickly that one may lose sight of where one testing objective ends and the next begins. There is a danger here too: as creatures of habit, we can inadvertently establish unhealthy routines that result in inattentional blindness. As the tempo and feel for how the project flows is learned, the risk increases that things move at a certain pace simply because they've always moved at that pace, whether or not the process or project is really achieving and sustaining velocity.
As for my players, in the first practice of the season I see this in the scrimmage we typically run at the end of each practice. They chase the ball without any rhyme or reason, without any strategy or objective. They believe the goal is to get the ball, and that to do that they must run to the ball. There's just one problem: someone else has that ball, and is accelerating in a different direction. So each player changes their angle to try and chase the dribbler down, often from behind.
At times it can look like a swarm of honey bees, buzzing to one corner of the field, then changing course and quickly buzzing to another, trying to catch the ball each time, and inevitably these kids will trip, fall, or bump into one another. We haven't yet taught them how to move laterally, or even backwards when necessary, nor have we shown them the strategy of picking a spot where the ball might go in order to cut it off, rather than chasing it as if chained to the player with the ball like a rail car. The first practice always looks like this. The older kids may not run into this scenario, as many have years of soccer under them by the time the season starts again, but these new boots to the field of play, like new hires to a team in the software world, are practically a blank slate, in need of mentoring and guidance.
The question you have to ask is: how can I influence them for the better, and help them see their role as more than just chasing the ball, or chasing this bug or that bug, when they may be leaving square feet of the field uncovered that could give them an advantage? For me, it is important to remember that this is the beginning of a long journey we take together, and for better or worse we are here as part of a team striving to create what our client needs. Even experienced software professionals can occasionally trip over someone else's toes and fall flat if we aren't careful. So tread carefully in those first days on a team, until you have a good picture of the terrain you must climb.
Labels: Coaching, mentoring, Soccer, Team Buy In, Testing
Monday, August 29, 2011
Today, I Choose Yellow
Sometimes the smallest things can be inspiration for test ideas. Sometimes they make you sit back, pause, and consider. After enjoying a nice lunch with my family, my wife presented me with a colored-in page of Dora the Explorer, with my daughter's name and the date on it. I took it to work and proudly displayed it on part of my cube wall. I examined the picture and considered how she had gone to work coloring it. Now, she's not even two years old yet, so staying within the lines is not something I'd expect from a soon-to-be two-year-old.
However, two things jumped out at me. For this page she had chosen a single color: yellow. It wasn't any particularly stark or bright shade of yellow, just a plain, almost mustard color. So I began to think: why yellow? Dora isn't blonde; she has brown hair. Her shirt is usually pink, and she wears orange pants. Her backpack is a shade of magenta or a light lavender. So there isn't a lot of yellow there to work with as inspiration, as far as I could tell.
When I pondered why she selected that color, my initial thought as a tester was that she had a reason for choosing it. I wasn't present when the picture was colored, but when I think back to other pictures she has colored, the reason becomes clear. My little one, you see, isn't logical yet, her mind not quite fully developed. She's bright and smart, charming even in her own way, but she's not even two yet. More than likely she chose yellow because that was the first crayon her hands came around, and then, instead of changing colors for different parts of the picture, she continued until her mind felt she had colored as much as she dared, and then moved on to something else.
How many of us as testers and software professionals do that? How easily do we grab hold of an emotion or an idea on any particular day and then view and work the entire day as if through that color, whatever hue it may be? Does that emotion or idea affect everything we do that day? Do we let it bleed into our code, our testing, or our conversations with others? When they look at you that morning, will they see the color yellow?
I learned from the book "Anger Is a Choice" that many emotions, anger in particular, are neither negative nor positive; it is how we react when those emotions come up that determines how we are viewed. How many of us as test or software professionals have encountered something, maybe minor at the time, and let it go, but it nagged at us? As much as we tried to ignore or deny this one thing, there it was, the color red, that our mind didn't want to ignore. At some point, do we find our focus so deeply on that color that we begin to see everything as if it were tinted that shade?
There's a danger here for testers. When our reactions and emotional makeup begin to bleed through into our work, into our analysis and observation of systems under test, there is the possibility that they could lead us astray. They could cause us to miss something we'd see if our attention were not partially focused somewhere else. They could cause us to react reflexively to something someone said that was intended in all good fun, or perhaps to help us with something with which we are grappling.
It's important, I think, to remember that testers, like the clients for whom we test software, are emotional beings as well. Maybe our clients will see red when this one really annoying malformed feature crops up again and again. Maybe they've ignored it thirty times through the application, then one day, when their emotional system is already spent from something that happened on the way to the office, or at home, bang: that one annoyance becomes the lit fuse that sets them off. Ever know any people like that? Ever had a client like that?
Now let's turn this on its head. Maybe there's some issue, some flaw, that the team doesn't really classify as a fault in the software, some piece that is not as correct as it could be, but you, or the team in general, have decided it's not important. It's a minor, non-critical issue, and why would a user care if the time and date are displayed in YYYY:MM:DD form vs DD:MM:YYYY form? Maybe on a normal day they won't, but do we ever consider how emotion on the part of the client may play into which bugs they see as critical or important?
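As a small aside on that example, a couple of lines of Python (my own illustration, just using the two formats above) show how easily the mismatch can turn into genuine ambiguity:

    from datetime import date

    # The same day rendered in the two formats from the paragraph above.
    release = date(2011, 8, 5)
    print(release.strftime("%Y:%m:%d"))  # 2011:08:05
    print(release.strftime("%d:%m:%Y"))  # 05:08:2011

    # To a reader expecting the other convention, "05:08:2011" reads as
    # May 8th rather than August 5th. On a day when the client's patience
    # is already spent, that "minor" issue may not feel minor at all.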
Furthermore, do we as testers always try to enter our testing sessions with as blank an emotional slate as possible? Do we try to be like the Vulcans of Star Trek, utterly devoid of emotion and creatures of pure logic for the duration of the test run? If so, why do we do this? Do we think a user will view the software with such a dispassionate view of how it works?
If, like me, you have ever had the chance to work the support lines for a company, then you know what I mean when I say that each customer has a personal way of dealing with his or her own emotions. They may be struggling to get the software to perform; maybe they've had training and used it successfully in the past, but one little nook, one little detail escapes them as they are partially distracted by some emotional turmoil they have endured that week.
Michael Bolton, no, not the singer, or the guy from Office Space, gave a keynote talk at the 2011 conference of the Association for Software Testing (CAST). Michael talked at length about the history of testing, and at one point described how decisions on quality are "both political and emotional." Another tester there, Michael Hunter (@humbugreality), also talked about the 'emotional tester' in one of the emerging topics or lightning talks. I was able to follow some of these talks online, and I can't give enough kudos to the organizers of CAST and the hard-working volunteers who made those talks and keynotes available online for those of us who were unable to attend.
How much of a role does emotion really play in our craft? Do we allow it to simply be, to drive our assessment, or do we harness and control it? Do we inject it intentionally to see how it may color our opinion of an interface, or to see how our perception changes when our mood has changed? Those are questions I'll probably ponder for a while. It's funny how something as simple as a colored picture of a cartoon character by my youngest child could bring me back to contemplate these ideas in testing, but then maybe these ideas have been buzzing in my head since I first heard them, just like the color yellow.
(Edit: Thanks to Justin Hunter (@hexawise) for remembering it was Michael Hunter who gave the emerging topics talk on "Emotional Testing".)
Labels: Attitude, self determination
Wednesday, June 22, 2011
So what's your priority?
In the fast-paced world of software development, sooner or later you or someone on your team will face a situation with multiple tasks and competing priorities. So what does a person do when an urgent list of changes to software under development is received, and the expected time to deliver those features appears rather aggressive?
Let's take a step back for a second to discuss the requirements process. Often requirements are not as detailed as necessary to begin design or coding. Sometimes requirements come in list form; they may be rather obvious, but more likely they are obscure, cryptic, or downright confusing. Because of this, very often there is a necessary feedback loop between the team and the client, just to make sure the team understands the change and the impact it may have on the system.
This back and forth, haggling over specifics and gaining enough understanding to model the process to be implemented, can be very time consuming, and is often the hardest part of the software process. Customers do not always have the computer knowledge to give sufficient feedback about what they write, and as developers it is our job not just to verify that we are building the software correctly, but, even more critically, to validate that what we are building actually meets the needs expressed in the requirements.
Often the understanding of a requirement will morph over time, but for simplicity's sake let's assume for a moment that the haggling over the requirements has already taken place. Your team is called into a rushed meeting to discuss a list of changes. The manager describes the ten items that need to be implemented, and expresses the hope that it can be done in a short amount of time. For the sake of argument, let's say it's a week.
Immediately the issue of time and priority may come to the forefront. If there are ten changes to a system, ten changes that do not necessarily interact with or depend on one another, then how do you prioritize them? Depending upon what model your process is based on, the answer could vary. It may be the project manager who sets these priorities, or, in a more agile environment, it could be the customer, or the person designated as the 'product owner' within the team.
So after a little debate, the team comes to a decision: much as they might like to pick and choose which pieces to do in what order, they need one more layer of feedback about the requirements, namely the order of priority for these changes. Given that the team is taking some measure of risk trying to rush these ten features into the system in a week's time, they believe that risk would be easier to stomach if they could impose some logical order on the implementation, so that if the time proves insufficient, they can at least be sure to have the most critical features done and deployed.
So what do you think the response would be? Hopefully you'd get a list itemized from one to ten, giving a clear order of development and importance to the customer. That would be the ideal, but what if the list comes back with six of the ten items listed as priority number one, and the other four as priority number two? I posed this question on Twitter.
I received a few different responses. Two of the more interesting were from Stephan (@S_2K), "Everything is most important," and from Morgan Ahlström (@Morgsterious), who replied with the understanding words, "That nothing is REALLY important." These were very much my own sentiments when I pondered this question. If someone cannot distinguish a hierarchy between two things, let alone ten, how can you know which items matter most and tackle those areas of the software first? Even if you have multiple team members, it is only natural that some components will have higher precedence.
So what do you do when you have this sort of situation? My hope would be that enough time exists to push back for further definition of the structure of the requirements. However, given that the customer might not be eager to be asked the question a second time, or might not be readily available, the team is then left to its own deductive skills to try and figure out the proper order. This is where knowing a bit about potential dependencies within the requirements can lend aid in determining an order of construction.
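To illustrate that fallback, here is a minimal Python sketch using an entirely hypothetical change list: build prerequisites first by walking the dependency graph, and use the customer's coarse priorities only to break ties among tasks that are ready at the same time:

    from graphlib import TopologicalSorter  # standard library, Python 3.9+

    # Hypothetical change list: task -> the set of tasks it depends on.
    dependencies = {
        "extend_user_table": set(),
        "add_login_audit": set(),
        "new_report_screen": {"extend_user_table"},
        "export_to_csv": {"new_report_screen"},
        "fix_date_format": set(),
    }

    # The flat priorities the customer actually gave us (1 = "top priority").
    priority = {
        "extend_user_table": 1,
        "add_login_audit": 1,
        "new_report_screen": 1,
        "export_to_csv": 2,
        "fix_date_format": 2,
    }

    # Walk the graph in dependency order; within each batch of ready tasks,
    # fall back on the coarse priority to break ties.
    ts = TopologicalSorter(dependencies)
    ts.prepare()
    build_order = []
    while ts.is_active():
        ready = sorted(ts.get_ready(), key=lambda task: priority[task])
        build_order.extend(ready)
        ts.done(*ready)

    print(build_order)
    # e.g. ['add_login_audit', 'extend_user_table', 'fix_date_format',
    #       'new_report_screen', 'export_to_csv']  (ties may come out either way)

It isn't a substitute for real customer feedback, but it gives the team a defensible order to point to if the week runs out.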
Nevertheless, it is still highly probable that no clear prioritization can be gleaned from such an answer. With time ever tighter, the team is left to its wits, and to knowledge of prior conversations, to best-guess the order in which to tackle the changes. In the end the team might complete all ten, or they might complete only a handful of the tasks. In the latter situation, the customer might push back and ask why the team completed one task but not another. The team can then point out that it never got the clear feedback it requested on prioritization, and offer this as the reasoning for the order it chose.
The customer might not like that sort of response, but honestly, in the absence of information, it is better to complete at least some of the work, even if you do not know how much, or in what order it should be done. This may not be an ideal solution, but it should make evident how important communication with the customer is to ensure accurate building of the software and the timeliness and sequence of delivery.
Labels: Priorities, Requirements
Tuesday, June 21, 2011
Some thoughts about Version Control
In my time as a software professional I have seen many systems for maintaining and tracking changes to code in progress. Whether it is the archaic practice (which I hope no one ever really uses on a real project) of passing along zip files of changed code, with comment mark-up to indicate where changes were made, or full-featured source control solutions like Subversion, Team Foundation Server, or Git, every project at some point or another will have to decide how to handle these files.
I could spend time on the importance of choosing a source control solution at all, but that's not what this post is about. Instead I'd like to take a moment to describe some of my experience with source control solutions.
In general there are two sorts of source control mechanisms. One is like that used by Microsoft's Team Foundation Server (and before it, Visual SourceSafe): an incremental check-out/check-in system. The other is the model where you check out the code base once, then commit and update to share and receive the latest changes to the current working version. There are pluses and minuses to both schemes. The incremental check-out/check-in system really feels like a pull system: while you can download the latest tree from a base directory, to make changes and have them saved without the IDE screaming about read-only or protected files, you have to make a point of checking out, in essence asking TFS to make the file ready for editing. Team Foundation Server (TFS) allows you to do this in an exclusive mode, essentially locking out any other attempts to check out and change the file, or with a less prohibitive, more shared lock.
I've only had the privilege of working on a single project with TFS thus far, and that project relies on the exclusive kind of check-out most of the time. This means that if a developer needs to make a small change to a file you are working on, he has to communicate that need to you, so that you can do the handshake of releasing the locked edit and letting the change be checked in. It can be a real pain when you want to try a quick fix and the IDE prevents you simply because the file is marked read-only. It does provide a benefit in that it limits the number of files you might accidentally touch, which can be a blessing at times. But it also means that anyone trying to check in another fix may be blocked while a file is held checked out, and, as has happened to me a few times, files in a project may get checked out by the IDE automatically even if you did not intend it.
The other form of version control management is the style for which Subversion has become known. It is in essence a more push-style architecture, whereby you can check out an entire source control tree (much like getting latest for any particular folder) and then have significant freedom to make changes, erase a file and revert it from the last commit, or make a series of changes and check them in one after another. The drawback of this system is that because anyone can edit, files can drift further and further out of sync from the base, or trunk as it is called in Subversion. This can present problems when a major conflict arises during a commit, as the differences between the two files must be reconciled. In my experience, though, if working copies are kept up to date and committed as often as necessary, these conflicts can be minimized fairly easily. Subversion also has the bonus of being portable to other operating systems, and it does not depend on the Windows/.NET stack to function.
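To make the contrast concrete, here is a minimal Python sketch of the two models; it is purely illustrative and not any real tool's API. The first class mimics the pessimistic, lock-based scheme (exclusive check-out before edit), the second the optimistic commit/update scheme (edit freely, detect conflicts at commit time):

    class LockingRepo:
        """Pessimistic model (TFS-style): exclusive check-out before editing."""
        def __init__(self):
            self.locks = {}  # file path -> user holding the exclusive lock

        def checkout(self, user, path):
            holder = self.locks.get(path)
            if holder and holder != user:
                raise PermissionError(f"{path} is checked out by {holder}")
            self.locks[path] = user  # file is now writable for this user only

        def checkin(self, user, path):
            if self.locks.get(path) != user:
                raise PermissionError(f"{user} does not hold the lock on {path}")
            del self.locks[path]  # change accepted, lock released


    class MergingRepo:
        """Optimistic model (Subversion-style): edit freely, catch conflicts on commit."""
        def __init__(self):
            self.head = {}  # file path -> latest committed revision number

        def update(self, path):
            # The working copy records which revision it was based on.
            return self.head.get(path, 0)

        def commit(self, path, based_on):
            if based_on != self.head.get(path, 0):
                # Someone committed since our last update: merge, then retry.
                raise RuntimeError(f"conflict on {path}: update and merge first")
            self.head[path] = based_on + 1
            return self.head[path]

The trade-off falls out of the sketch: the locking model spends its pain up front by serializing edits, while the merging model defers it to the occasional conflict, which is why frequent updates and commits matter so much in the Subversion style.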
There may be other tools that fit your needs better: wikis can be edited inline from a browser and keep good version tracking of each page, and content management systems like SharePoint can track updates to your requirements and contacts, or serve as a portal for storing key documentation for a product. Whichever tool your team chooses for source control, it is critical that the entire team is on board with how it works and uses it regularly, to help avoid major headaches between builds. The value of source control is manifest in how it allows teams to track branches of in-progress development code, tag certain releases for easy comparison later, and keep a history with comments about each update and logs of the files that changed. This logging function can be of great value when trying to track down how a particular strain of bad code made its way into your product.
Some source control systems also allow you to track the progress of development from requirement to deployment. Team Foundation Server has these features out of the box, while open source tools like Subversion require other components, for example TRAC, to help track the progress of development. I hope that whatever your team chooses, you will find source control to be a great asset that facilitates your development needs.
Labels: Source Control, Versioning
Sunday, March 13, 2011
Some thoughts on the Programmer vs Tester Argument
Shrini Kulkarni wrote an interesting blog post about the programmer-versus-tester argument: who is better at testing, and why (Programmers make Excellent Testers - Arguments and Counter Arguments). The idea comes up now and then in development circles, and whether you are a programmer, a tester, a manager, or a student of software development, sooner or later a question like this may enter the picture.
Shrini does a pretty good job of illustrating why, at any given snapshot in time, a programmer or a tester may be better suited for a particular testing task. I've heard some of these arguments before, and I find that the context of the situation, and sometimes the budget of the team, often dictates just how much a programmer or tester is asked to do. One thing I find interesting is why things end up the way they are. Each of us starts off as a blank slate at birth, and we learn many, many different tasks. By the time we enter the software field, we already have different ideas, philosophies, hopes, and dreams of what a life and career in software might be. Whether the goal is to be an engineer, programmer, designer, architect, manager, lead, or tester, each of us started with an idea of what we might get from this rewarding field. How many of us, though, stay on the path we initially pointed our ship's bow of learning and experience towards? I'm not sure a survey could even give a good idea of why people choose one career path over another. People are as varied and individual as grains of sand or rocks on any given beach.
So when I reflect back on my career in the software profession, I can't help but feel the irony of having basically come full circle, and I am reminded of at least one of Shrini's points. When I started, all I wanted to do was write code. I had aspirations of being a "Software Engineer" and thought I had a pretty good idea of what that entailed. Lacking experience, I had not thought beyond that little viewpoint. So when it came time to start my first professional job, it didn't take long before the ideal and the reality clashed.
Learning is often seen as climbing a pyramid between four points. Along one axis is technology: the closer you are to that point, the more your learning and experience lie with technology, particularly specific technological contexts. At the other end of that axis is the second point of our pyramid, working with people: the closer you get to it, the better you work with people, though perhaps at the cost of spending more time with people than with the technology. On the opposing axis lie the other two points. In one direction is the specialist: the closer to it, the better and more specialized you are in your context, and you might call someone at that extreme an 'expert' of sorts. At the other end is the generalist: they may be better at one thing than another, but their broad base of experience gives them a larger context in which to evaluate their problem set.
As you move from the technology extreme towards the specialist point, you are really moving around the base of the pyramid; if you move toward generalizing, you are moving back the other way along that face. However, since computer and technological problems are inevitably people problems, no one stays on the base very long; they begin to grow upward, toward being able to deal with people.
I recall being taught this 'myth' that programmers code and testers test, and never the twain shall meet. Fresh out of college, there was a propensity among some of us, myself included, to see testing as beneath us and not worth our time. This may have been vocalized by professors, or it may simply have been that testing was not a focal point of CS or engineering coursework in college. The absence of a tester's point of view from my academic experience clearly colored my early thinking about software.
I had a desire to crank out quality, usable code, and I saw software engineering as a tool to get there. Just one problem: in the first shop I worked, software engineering principles were not necessarily being practiced. Even if they had been, I'm not convinced it would have solved the host of problems I encountered early in my time there. I came to realize very quickly that programming alone didn't ensure the code was ready and met the customer's expectations. Heck, sometimes programmers don't even meet other programmers' expectations. Now, almost eight years later, I'm a full-time tester, and I see the other end of the equation, where some developers seem to see the majority of testing-related tasks as 'not their job' or beneath their position or station.
I'm sure I am not alone in seeing this, and I find it sad, but in some ways understandable. After all, in the technical realm you have generalists and you have specialists, and to be good at one thing requires many hours of study and honing of that skill, often at the expense of others. Thinking back to that pyramid (admittedly a very rough model or metaphor, in need of tweaking and polishing): we may walk around a level of that pyramid closer to one skill or another, we may move up it at an angle as we learn multiple skills practically at the same time, and sometimes we may move straight up.
I see the technical specialists close to the face between the top of the pyramid (the metaphorical pinnacle of learning success) and the specialist and technology points. The more specialized the developer, the closer to the specialist point they may be, and likely the further up the pyramid toward its zenith. For generalists, the same could be said of the face between the technology and generalist points and the peak. The more a person pours into one technology, or into technology in general, the less time they've spent gaining other experiences. So it should come as little surprise when a developer with twenty or thirty years of experience, who has poured his life into coding, has gotten so far away from knowledge about testing as to see it as a foreign country.
Even among programmers there are generalists and specialists in particular ideas or platforms, and each type of developer brings a unique viewpoint to the problem at hand. What does that have to do with testing? Well, we have something a bit similar, do we not? We have people who focus on particular schools of thought, people who pour everything into automation or everything into exploratory-style testing, and even some who are trying to learn as much as they can from every category, not knowing what tool they'll need next.
Now imagine that there were two more points, a high and a low along the Z axis: the top would be Programmer and the bottom Tester, replacing the metaphorical pinnacle. A person who is a good programmer and continues on that path may learn at the expense of learning things related to testing, just as a programmer who chooses to learn more about testing will have less time to focus on the craft of programming. Each of us, then, may move upward or downward toward being more programmer or more tester; forward or backward between being more involved in the technology versus the people; and side to side between being a specialist in one area and a generalist across many.
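Purely as an illustration of that geometry (my own toy model, not anything from Shrini's post), a position in this skill space can be sketched as three coordinates with a shared yearly budget of learning hours, which is exactly where the opportunity cost shows up:

    from dataclasses import dataclass

    @dataclass
    class SkillPosition:
        # Each axis runs from -1.0 to +1.0, mirroring the pyramid metaphor.
        technology_vs_people: float      # -1 = pure technologist, +1 = people-focused
        specialist_vs_generalist: float  # -1 = specialist, +1 = generalist
        tester_vs_programmer: float      # -1 = tester, +1 = programmer

        def invest(self, axis: str, hours: float, yearly_budget: float = 500.0):
            """Spend study hours moving along one axis; return the hours
            no longer available for every other axis this year."""
            delta = hours / yearly_budget  # crude: a full budget moves you one unit
            new_value = getattr(self, axis) + delta
            setattr(self, axis, max(-1.0, min(1.0, new_value)))
            return yearly_budget - hours

    me = SkillPosition(technology_vs_people=-0.2,
                       specialist_vs_generalist=0.3,
                       tester_vs_programmer=-0.4)
    left = me.invest("tester_vs_programmer", hours=300)  # lean toward programming
    print(me, f"with {left} hours left for everything else")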
So when I look at the tree of development, with Development as the granddaddy of all the other nodes and programming and testing each on separate branches, I can't help but see that there are so many varieties of testers and developers out there, each of us unique, different as our experiences build upon one another. The real question I think we should be asking is where we fit in these areas. Are we more technical or more people oriented? Are we more specialist or generalist? More programmer or more tester?
So who is better? I'm not sure the question matters. I think the context of program budgets, team size, and backgrounds often weighs into who does what, and as a team on a project we must each do what we can to help the project effectively. Maybe that senior-level developer with twenty years of experience would be wasting his time testing; it's not his strength or area of study, so a specialist tester would be better. However, I would hesitate to ever rule out a particular individual from testing or other development tasks simply because of a lack of experience. The software profession, like so many others, is one where dedication to lifelong learning is paramount. So we should bear that in mind as we forge ahead, no matter which direction we point our bow of learning on the next course we set.
Labels: Knowledge and Learning, Programmers, Testers