Tuesday, September 27, 2011

If it's not random, how to decipher the pattern?

Earlier this week, I wrote about the software fault in the Staunton, Virginia, teacher payroll system.  I talked at length about the concept of 'random', and the importance of distinguishing something that is truly random from something better described as unexpected, unpredictable, or simply 'having no pattern discernible to my senses'.  Using precise language when describing defects in software benefits everyone on the team, including the customer.

Unfortunately, the fault in Staunton, Virginia's payroll system wasn't found by a tester; instead it was discovered by someone researching the finances of the county's school system.   We may never know exactly how this fault was first brought to the attention of the school district, but that doesn't preclude speculating about how an investigation of a similar fault, on a hypothetically similar system, could be conducted.

So imagine a hypothetical payroll system for an organization with multiple locations, accounting for users of diverse pay grades and positions, much like the school system's.  The more layers you add to the structure of the system, the greater its complexity.   Now let's suppose the vendor of this software received word about an apparent bug: certain persons within the system received an unexpected, and heretofore unnoticed, pay increase.  If you, as a software tester for this vendor, received this notice, where would you start?

Reporting and analyzing a defect that a tester stumbled upon through his or her own investigation of the software is one thing.  Trying to track down a flaw someone else found and reported is quite another. Following the example of the system we discussed earlier, we can imagine the reports taking the form of output: pay stubs, ledger logs, bank statements, and so on.  In short, we possess evidence that the problem occurred, but this evidence may be far enough removed from the system itself that we cannot reproduce the same conditions without a bit more digging.

So how can we reproduce these conditions and figure out where the real defect resides?  More information is required, and like a software Sherlock Holmes we must examine the evidence and piece together the story of what happened.  In the case of the payroll system, it is likely important to know how many individuals were impacted.  Might a search for more information related to the individuals affected reveal each user to be part of particular entities or organizations within the system?  Did they work at particular locations, or have their data maintained at a particular data center?  An exhaustive analysis of whatever data can be culled from the system could help establish a definitive relationship between the affected users.
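As a first pass, that kind of analysis might simply look for any attribute shared by every affected account.  Here's a minimal sketch of the idea in Python; the record fields and values are entirely invented for illustration, not drawn from any real payroll system:

```python
# Hypothetical records for the employees who received the unexpected raise.
# Every field name and value here is invented; a real investigation would
# pull these from the payroll database.
affected = [
    {"school": "Bessie Weller", "prior_school": "Dixon", "pay_grade": "T2"},
    {"school": "McSwain",       "prior_school": "Dixon", "pay_grade": "T1"},
    {"school": "Ware",          "prior_school": "Dixon", "pay_grade": "T2"},
]

def shared_attributes(records):
    """Return the fields whose value is identical across every record --
    candidates for the hidden pattern behind a 'random' defect."""
    common = {}
    for field in records[0]:
        values = {r[field] for r in records}
        if len(values) == 1:
            common[field] = values.pop()
    return common

print(shared_attributes(affected))  # prints {'prior_school': 'Dixon'}
```

A shared value doesn't prove causation, of course, but it gives the investigation its first concrete hunch to test.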

From the headlines, it sounds like the school district performed an analysis just like this.  The result seemed to have something to do with individuals who went to a particular school.   Now, the age of the defect in the system may not be clear.  If a lot of time has gone by, the connection may be more subtle, and won't track to any particular organization or be so obvious; in this case, however, we strike pay dirt.   One piece of the puzzle is in place.

Given that all those results might track to a particular organization within the software, this may lead to our first hunch.  Were all the people assigned to this organization also affected by the same bug?  This might be where we encounter the first bump in the investigation.   Maybe they aren't all affected.  That may lead us to believe our initial hunch was wrong, but it could be that there's a reason the unaffected ones turned out to be the exception.

It's at this point in the defect analysis where a history of debugging similar enterprise applications could prove beneficial.  From reviewing some of the articles around the defect, a number of ideas come to mind, all of them based on similar behavior I've encountered in other projects I have worked on.  If these employees all worked at a facility that was shuttered, what happens to their accounts when the facility is shut down?  Are they transferred to a new facility?  Are they suspended outright?  Are they removed from the system?

I recall a customer relationship management system where we encountered a bug when a user account was removed from the system.  All the records linked to it would cascade and delete, or disappear and not show up when searched.  Could a data integrity issue with the data for these closed locations be responsible for this behavior?
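As a sketch of that failure mode, consider what a cascading foreign key does in a relational store.  This example uses SQLite from Python purely for illustration; the schema is invented and has nothing to do with the actual CRM or payroll system in question:

```python
import sqlite3

# In-memory database; 'account' and 'record' are invented table names.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite needs this enabled per connection
con.execute("CREATE TABLE account (id INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE record (
    id INTEGER PRIMARY KEY,
    account_id INTEGER REFERENCES account(id) ON DELETE CASCADE)""")
con.execute("INSERT INTO account VALUES (1)")
con.executemany("INSERT INTO record VALUES (?, 1)", [(10,), (11,), (12,)])

con.execute("DELETE FROM account WHERE id = 1")  # remove the user account...
remaining = con.execute("SELECT COUNT(*) FROM record").fetchone()[0]
print(remaining)  # ...and every linked record silently vanishes with it: 0
```

Whether a cascade like this is a bug or a feature depends entirely on whether anyone expected those child records to survive the parent's deletion.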

Another possibility that occurs to me is that a system freezing pay for all employees may apply the freeze group by group.  Might a group these employees belonged to have been used to freeze their pay for some time period?  Might failing to belong to an active group, because the original group was inactivated, cause the freeze to miss these accounts?
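That hypothesis is easy to sketch.  Assuming, purely for illustration, that the freeze iterates over active groups, anyone whose only group was deactivated would silently escape it; all of the names below are invented:

```python
# Hypothetical group data: one group was deactivated when its facility closed.
groups = {
    "dixon_staff": {"active": False, "members": ["alice", "bob"]},
    "ware_staff":  {"active": True,  "members": ["carol"]},
}

# The suspect logic: apply the pay freeze only to members of active groups.
frozen = set()
for name, group in groups.items():
    if group["active"]:                 # inactive groups are skipped entirely...
        frozen.update(group["members"])

all_members = {m for g in groups.values() for m in g["members"]}
unfrozen = all_members - frozen
print(sorted(unfrozen))  # ...so the closed facility's staff escape the freeze
```

If the real system did anything like this, the 'random' raises would be nothing more than the normal pay schedule continuing for accounts the freeze never reached.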

It may be difficult to see the cause from just the few reports you receive from the user, but a simple, logical, step-by-step examination of the system could help reveal how the issue happened.  If it turns out the system was used in a manner the software vendor never planned for, that may indicate a fault in the business rules, or a lack of training for the users of the system.  Whatever the case, the team is now on its way to finding where this issue occurred.  Where would you test next?

Wednesday, September 21, 2011

Pay Freeze, slightly melted, a random bug? Maybe.

Every now and then I read about a problem in a software system that makes the news.   I look at the article, read what is described as the problem, and often wonder how this supposed flaw got into the system.  In my experience it can be easy to fault the software for an error.  There have certainly been enough cases of odd failures for the general public to believe them, but is it really the software?

This week I heard the story from Staunton, Virginia.  Apparently the school board had frozen pay for all of its employees for some period of time, and as the article stated, the glitch went uncaught by a number of employees who spot-checked salaries, up until a news station requested salary records under the Freedom of Information Act.  That is when the discrepancy was apparently noticed.  Now this glitch appears to be something of a scandal.  The political black eye alone could be enough to make anyone nervous about the 'quality' of the application in question.

What concerns me, though, is that this glitch is being described as completely random.  First, do we really understand what it means when something is truly random?  According to Dictionary.com, random has four customary definitions.  The first means 'proceeding, made, or occurring without definite aim, reason, or pattern'.  The second is its use in statistics, specifically the concept of a process of selection whereby each item of a set has an equal likelihood of being selected. The third definition applies to physical trades, where a part, parcel, or piece of land or material may be non-uniformly shaped.  The last is an informal use implying that an occurrence was completely unexpected, or unpredictable.


Let's take a moment to consider the story.  The first definition implies that there is no rhyme or reason, no discernible pattern to something, which may make it random.  Is that the case here?  Reading further I notice the following:
" the pay increase malfunction was random and included three teachers at Bessie Weller Elementary School, four at McSwain Elementary, four at Ware Elementary and a speech teacher and a secondary special education teacher."
Several of these teachers had one thing in common: they attended one of three elementary schools nearby.  Wait, does that mean what I think it means?  Could this be the beginning of an actual pattern emerging, enough to discount the perceived randomness?  It could be, but as testers in this situation, our job is to determine the nature of the fault, not just give our 'best guesses'.  We know a fault happened, therefore we must find a way to duplicate it.  If we continued this analysis, we'd likely have a couple of test ideas to begin with: we'd look at the data for all of the affected persons and see just what it is that happened.  Is the overpayment of salaries the problem, or is it a side effect of some other hidden flaw that just became visible due to some quality of the instance we are examining?   Fortunately, I did a bit more research and found another article on this on MSNBC's site.  I will note that MSNBC's article is dated the sixteenth of September, and the earlier article the second of September; however, as I read, I find another nugget that seems to confirm my suspicion.

"All the affected teachers had previously worked at Dixon Elementary School and were reassigned to other schools after Dixon closed two years ago."

So it appears that this bug affected teachers who had all been assigned to a school that closed two years ago (no doubt around the time the glitch actually occurred).  Would you call this random?  No, I see a pattern, so it doesn't hold on the first definition.  The second definition doesn't hold up to the story at this point either: given a sampling so large, would you really expect to find just a handful of salaries that are wrong?  I don't buy that.  The third definition doesn't apply in this context, which leaves us with the remaining informal definition: simply that it was odd or unpredictable.

This fact I do not doubt; no one predicted this would happen.  Now, I'm not writing this to criticize the vendor or the county in question.  That's not the point of this article.  Instead, my hope is to make you think.  As a tester, developer, user, or consumer of computing appliances, how often do we encounter behavior that surprises us?  How often do we not only get surprised but feel the event to be unpredictable, with no reason it should be happening?

I imagine this happens more than we might like to admit.  How many times do we sit at our computers, doing something normal?  We're checking our email in our client of choice; we have had no problems with our service and expect a 'no messages found' if the server has none waiting for us.  We hit the send/receive button and wait, gleefully hoping to find (or not find) email.  Then we get a message that it was unable to connect to the server.   That catches us by surprise.  Maybe we think it's an aberration, so we click the button again.

That second click does what?  It allows us to check whether it was a hiccup, a momentary failure, or perhaps a sign of a long-term issue.  I've had this happen from time to time on web pages I visit frequently.  A forum for a football team may load very fast during the week, but on game day, as people are checking up on their team, it slows to a crawl, and a dependency like a style sheet or images fails to download due to the sudden hit to the bandwidth serving the multitude of simultaneous requests.  It might even take minutes before you get that white page with some structure and no formatting.  Do we immediately think, wow, that's random, this forum is really bugged?  I know from experience this isn't a fault of the software itself, at least as far as I can tell; instead it is a function of a high load on a system that may not be able to keep up with a sudden increase in demand.

As testers, simply finding and reporting bugs is wholly insufficient to communicate to the developer the nature and scope of the fault we've encountered.   In the case of the forum software, a subsequent refresh might fix the page, and it may load fine for several hours thereafter, with the issue impossible to reproduce.  Whatever the issue is, we must dig, and see if we can prune down the steps that we followed.  We can try to see if the bug happens if we hit another location, try a different path through the software, or perhaps try a different role or persona.  The point here is that it is our job as testers to imagine how this bug could have occurred.  What would your tester instincts tell you to do to find and prove this error so it could be fixed?  Do you have the answer?

Hold that thought, because I am going to revisit this question later in the week.  For now, just remember that just because we can't see the pattern for a bug doesn't mean there isn't one, and as testers in particular, our use of language should be careful so as not to mislead the public, our developers, our clients, or our managers.

Tuesday, September 20, 2011

Diary of a Soccer Coach: Week 3 and First Game!

I'm a bit behind on blogging due to duties last week, but I'll catch up by throwing the third week of practice in alongside the first game.   In our league the third week of practice heralds two things: the last practice before our first game, and the arrival of our team rosters and uniforms.  On this particular day, the 'head coach' of the league, who has been helping out with our kindergartners, had to distribute the uniforms to all the different divisions.  This left me to do a lot of work with the kids on my own.

This worked out great, as I always try to keep the kids moving anyway.  The third practice is where we homed in on basic shooting skills.  For most introductory soccer players, the more advanced steps are not always easy to pass on.  At this practice I focused on keeping their eyes on the ball and following through as they shot.   Also, as with most practices, I got and kept them moving as much as possible.

I started by having them stand next to the ball and shoot it stationary.  After each player had tried this a few times, I had them try shooting the ball by first running up on the ball and then kicking it into the goal.  Afterwards, I made it more difficult by having the players dribble the ball and then kick it into the goal.

On many development projects I've seen a similar step-by-step building up to completion.  A feature might start out very simplistic, or it may seem that way, so we take our first shot at it, just as my players might in their third practice.   Sometimes we may not understand some of the nuance in a requirement.  It may appear simple, just like striking that ball, but there are intricacies and unrevealed subtleties that need to be added for the code to really pull off what is intended.  So as a team, maybe you work up to these features, adding a little more speed, a bit more control, and higher accuracy in the calculations.

I've found similar patterns in testing.  The first time through, you may just be poking around in an exploration of the application under test.   You may not have a full grasp of the features, how to activate or use them, or their intent, but you build a bit of confidence and then take another test run at the software.  Then you might discover that this type of software is documented to have a particular susceptibility to one kind of fault, and begin tailoring your exploratory testing to hit those weaknesses.


The first game of a soccer season is always exciting.  It's the first time the kids are in their new uniforms, and you just never know how much they have absorbed from the limited practices you've had thus far.  Each year is a little bit different. One year, a team may have a very good grasp of the game and score a lot of goals in that first game.  Others might find it difficult to juggle defending against the approaching ball with redirecting it toward the goal they are attacking, or they may even get a little winded because they aren't used to moving so much at one time.

The first year I coached, an older coach told me, "You'll see the most improvement between the second and third games." I wasn't sure how to take that, but later I realized what he meant.  Suffice it to say, many kids may not listen early in practice.  Until they see how they can apply it in a game situation, they just may not appreciate the advice you are giving them.  I've seen this happen on development teams too.  A tester might make a suggestion about how to improve a process or function within the application and be ignored, because it's simply seen as not their job, or because the developer is too much 'in the zone' to stop and consider what is being said.  There might even be, as is common in our first soccer game, a lot of stops and starts as you build to a sustainable pace for development.

Bottom line though, remember it's just the first game.  A lot can change over the course of a project.  Change is inevitable in many projects, and how we handle and respond to it shines a strong light on our teams.

Monday, September 12, 2011

Nature Vs Nurture: Do we train the tester out of our Kids?

While working through a series of tests on our automation framework today, a thought came to mind.  Do we train our kids to lose the very attributes that could make them great testers?  Do we risk killing their curiosity, or train them to accept what they are told, because that's how our schools are run?  Psychologists and scientists have argued nature versus nurture for a long time now, but I never really framed it this way before.

We have two young children, and I've had the privilege of watching my firstborn grow into a smart young boy.  Now our almost-two-year-old daughter is starting to pass 'little milestones' hand over fist.  I remarked to my wife today that she looked like she had grown three inches since breakfast.   Then later this evening, while putting her to bed, my eyes did another visual inspection of her height, this time against something I knew was constant: the height of the bed rail on her crib.

Earlier this week, we had to take down the pack-and-play yard because she had discovered a way to easily climb out of it.  Plus, we knew she had grown too big for it anyway.  So we knew she was getting bigger, growing as all kids inevitably do.  Then tonight was the kicker: I saw her, gymnast-style as if on uneven parallel bars, nearly pull herself up and over her crib's bed rail.  Then, switching to a new tactic, she climbed up the spindles of the crib, one foot braced on each side.   She was climbing the way I've imagined or seen climbers on TV work an ascent up a mountain wall face, before slinging her leg over the rail, just as I reached out and caught her, in shock and awe at all I had seen.

I love my little girl; she's been a blessing ever since she was born and admitted to the Neonatal Intensive Care Unit (NICU), and she continues to amaze me.  Born a couple of weeks early, you'd never know it to look at her now.  She's a runner, a climber, a ball player, and a wrestler, not to mention she sometimes likes to practice tackling her much bigger brother from behind.  She's my little explorer, my future Venture Girl, and I wouldn't trade that for the world.  Both of my children are special and highly intelligent, but lately, as I grow as a tester and a parent, my mind ponders how best to raise her.

The natural inclination of a parent is to protect, to keep their little ones safe from as many dangers as possible.  Yet we as parents know the security we provide is not complete; as much as we may want to, we cannot protect them from every possible hurt or injury, just as testers realize that many of the security features we test provide only a facade of protection in this day and age.

The process of rearing children has me pondering this fact.  Parents set up rules for their kids' behavior and activities in and around the home.  Some of these rules may seem unnecessary or excessive at times, but they provide a structure, a framework around which the kids can begin their learning experience.  Then later, if you follow the public model, they go off to school or other extracurricular activities that provide additional rules, and layers of precepts that try to mold the child into a particular form.

Sometimes I wonder, just what are we trying to achieve?  Are we stifling creativity by requiring them to paint within the lines?  Are we killing their spirit by requiring them to sit like mindless zombie automatons?  As I've watched my children in recent days, I'm amazed at how many times they take a fresh look at some toy or item in our house and find another unique way to interact or play with it.  Many times this is fine and worth encouraging; other times what they are doing is unsafe.  Our urge is to jump in, rescue, and shield them from the dangerous situation, but are we doing more harm than good?

I wonder.  More with our younger child than our oldest, I hope to hone and focus that curiosity.  I'm very cautious about how I deal with her when, in the course of exploring, she is doing something that could bring her harm.   It's like walking a tightrope, though.  I want to encourage the curiosity, embrace the questions and the goofy ideas that may come.  I want to give her the freedom to learn without constraining her to the factory school of thought.

We opted to keep our son home for his first school year.  Everything we'd read about child psychology suggested that boys may do better if not thrown into the structured environment of elementary school.  Three years later we are still homeschooling.  What started as an experiment to provide him room to grow has paid high dividends.  He has grown as he has learned, and it amazes me how much he can learn in a short span of time. Seeing his progress makes me sad at times, because I know we may cover as much as, if not more than, what a single day of school might cover, and yet he absorbs ever more. Heck, this kid, in second grade, was upset that we hadn't taught him his multiplication tables yet.

Our youngest isn't yet of schooling age, but I already see her reaching out and testing the boundaries her environment provides.  Some of them are provided by us, her parents, and some are a product of the nature and design of the furniture or artifacts in our home.  Yet I'm ever more cognizant of the decisions we make to correct her, or alert her to dangers in her environment.  If you're reading this entry and pondering the same things, I'm curious as to your perspective.  Does our rule and school structure breed out the curiosity, the intellectual spark, that may draw a child to be a creator or investigator of the world around them?

If, like me, you have thought about this and concluded that these factors do affect the development of a potential tester's mind: do they affect it for the worse, or the better?  How can we improve them to harness that curiosity and prepare people to test the applications and services of tomorrow?  I don't have the answers to these questions, but I may continue to ponder them for some time.

Friday, September 9, 2011

Off on the trail of testing, but wait, I forgot this one thing

How many times in life do we surrender to the habitual nature of our human psyche?  Do we capitulate to what seems an established routine, a repetitive task we've got down to an art in our minds, and just flip a switch and cruise through each step of the process, without pausing between steps to evaluate where we are going?

News flash: tester or not, we all do this! In fact, I did it twice today without even realizing it.  Oh, I wasn't 'testing' software at the time, but my unexamined conviction that I understood the day's chores not only lulled me into a state of numbness, it introduced my own sort of performance speed bump that wasted hours of my time.

What started out today as an early-morning jaunt to accomplish two simple chores turned into an exercise in reminding me why falling into the assumption and autopilot traps is so dangerous.  What began simply enough, just two chores and then I'd be home for the rest of the day, became an afternoon of backtracking, acknowledging a flaw in my own understanding, correcting it, and then executing essentially the same process in a slightly varied way.   It started with a trip to the courthouse.

I was rather pleased with myself, because I thought I could complete two tasks in one visit.  I might even have boasted internally that I was a genius to take care of these two tasks at the same time.  The first task was to renew my vehicle registration at the Sheriff's office: bring my license plate current and get the sticker to indicate, to any law enforcement officer who might be checking, that yes, I had paid my taxes and fees and was not violating the registration laws for operating a vehicle in our state.

The second task is one that maybe no one else will have experienced, but it required a trip to the County Clerk's office.  You see, as a member of our community at large, I stepped up back in 2006 to serve as a poll worker for a number of different election cycles within our county.  I do this as a service, because, to be honest, the rate they pay for thirteen hours of open-poll service, plus almost two additional hours of setup and teardown, is not a rate I'd accept for any of the professional work I do.   However, as a concerned member of my community, an Eagle Scout, and a person of faith, I value the integrity of elections, and believe serving is an important part of ensuring our elections are fair.   In order to participate after being selected, I have to fill out, sign, and return a form to the clerk's office. It was this task that I wanted to complete today.  I had delayed sending the form in earlier, due to uncertainty with my current job situation, because I did not want to commit if I felt in good conscience that I would not be able to serve.

Those were the first two tasks of the day.  I stepped out the door with the letter in hand, and my registration as well, and drove to the courthouse.  I was pleased because I found an open parking meter within an easy walk.  I exited my car, added some time to the meter, and off I went, and that's when the first error dawned on me.  I had my insurance statement, the registration, and the signed form for the Clerk, but a nagging thing in the back of my mind came into focus: 'Does the Sheriff's office take check cards for payment?'  Why I didn't ask this question before I left for the courthouse is unknown, but it proved to be the pivotal question, because in fact they did not.  I asked the Sheriff's clerks whether they accepted it as a form of payment, knowing already in my mind that the answer was probably no, and received the confirmation: no, they accepted only cash, check, or potentially a money order.  I didn't have any of those options on my person, and truthfully I don't usually carry a checkbook with me unless I know I need it.

Reflecting now, I realize I could have walked over to the post office, paid the fee for a money order, and done it that way, but then I'd have no good record in my register to verify when it was paid.  So on the first task of the day, I struck out.   I went ahead down to the County Clerk's office, handed the form to one of the workers, asked if everything was in order (it was), and was happy that I had at least one task done.   I would have to return home, find our only checkbook, and come back to complete the first part of the plan.

Have you, as a tester, ever started working through a problem, jumped to some assumptions, perhaps ideas that felt good in your own mind's eye about how it should work in theory, and proceeded to massage your way through the interface?  Ever stop at some point to realize, you know, I wonder what would happen if I had done that previous step differently, only to find that hitting the back button on your browser really isn't a good way to check this new test idea?  It happens to the best of us.  Sometimes a test that would make more sense to apply first isn't, and has to be run again at the start of another iteration through the process.  That can seem frustrating, but it is part of the learning experience we go through as testers.


Well, this did not just happen to me once; it happened twice.  See, I was also looking for information about a repair on my car.  I traveled to the mechanic's garage I have grown to trust and proceeded to inquire about an estimate: how they might do the repair, how long it would take, and at what cost.   Surprisingly, they told me they couldn't do this kind of repair.  They had an idea of how it could be done, but there was something particular about my engine that required something they lacked, something that did not give them confidence they could complete the repair in a timely fashion.

What a bummer that was.  However, I countered with a question.  "Okay, if you can't perform the repair, as I understand it, then is there another shop you might recommend to perform this fix?  I know it's not a critical issue on my car, but I would like to get this fixed just as soon as humanly possible."   They gave me the name of another shop, and I then asked if they thought simply calling would be enough to get an estimate.  They didn't really know, but suggested that it might work, and it would save me some money on gas driving out there unnecessarily.  I liked that idea, and returned home to eat lunch with my family.

After lunch, I began looking for the phone number of the shop, first in a few paper phone books, then on Google and Superpages, but could not find it.  I found other shops, but not this one.  So later this afternoon, I hopped in my car and drove out to where the shop was (having received directions earlier).  In hindsight, I wish I had pressed for a contact number before I left the first shop, but I honestly didn't believe that finding it would be that much of a hassle.  That was my mistake, and yet another lesson learned.

Ever start testing a piece of software, and then at some point just stop because a question comes to the forefront that makes you feel, man, I wish I had asked that before I started?  Sure, it happens, maybe more often than we'd like.  We are creatures that learn and grow, and as testers, we are many times going to develop ruts and habits.  Try to break those habits from time to time; maybe you'll discover a new way to flex the software, to bend and contort it to find a brand new class of defects.

Ultimately, we must try our best to avoid jumping to assumptions.  Never assume you already know the answer if you've never even asked the question.  Never assume the developer obviously must have done something a particular way if you've never had a conversation about it, and never assume that you've brought and used the right tool for a type of test.   Now, this does not mean that we can never make assumptions.  If we do, we need to realize that our testing is based on them: maybe it's the platform the users are running on, or a particular style of device, and that may be an educated enough guess to allow us to proceed.  But we should remember that these are fallible assumptions, and present them as part of our test story when the time comes.

However, today's experience reminds me of something from earlier in my life.  It's funny, in a way: since college, my first rule of life has always been to avoid making assumptions.  This was even before I considered, or even had a clue, what it meant to test anything.   Western's Rule #1: "Assume nothing, for when you assume, you are usually wrong!"  At least, that's how I wrote it as a freshman in college.  Today I'd transform that rule to read more like this: "Make no assumption absent evidence, for assumptions are often based on illusions, and when an illusion is removed or proven false, it can result in great embarrassment."   Honestly, the risk isn't just that our assumptions are wrong, or that they might be based on inaccurate intelligence about our projects; it's the false confidence they can breed, and the blindness they can bestow, that limit our ability to test accurately and effectively. It's the shock of removing an illusion the team held as true that can cloud judgment on the value in the product.

To conclude, monitor yourself as you test from day to day.  Check to see if you feel yourself itching to turn on the autopilot.  Recognize it, take a step back, and find another way forward.  Use the pomodoro technique (thank you, Markus Gärtner), or some other focusing/de-focusing heuristic.  Be skeptical of even your own best testing ideas, and always strive for one more thing to learn as you flex your testing muscles.  Try testing with a partner (pair testing) so you can keep each other from falling into a rut, or team with a developer and show him what you are doing to test the software.  Whatever you use to keep yourself from falling into a zombie-like coma while testing, do it as often as necessary; you'll thank yourself later.
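At its core, the pomodoro technique mentioned above is just timeboxed focus with scheduled breaks.  A minimal sketch of such a schedule in Python, where the interval lengths and session count are my own assumptions rather than anything prescribed in this post:

```python
def pomodoro_schedule(sessions=4, work_min=25, short_break_min=5, long_break_min=15):
    """Build an alternating work/break plan; every fourth break is a long one.

    The 25/5/15-minute defaults are conventional pomodoro values, used here
    only as an illustration of the focus/de-focus idea.
    """
    plan = []
    for i in range(1, sessions + 1):
        plan.append(("work", work_min))
        if i < sessions:  # no trailing break after the final session
            plan.append(("break", long_break_min if i % 4 == 0 else short_break_min))
    return plan

# Two focused testing sessions with one short break between them:
print(pomodoro_schedule(sessions=2))  # [('work', 25), ('break', 5), ('work', 25)]
```

The point isn't the timer itself; it's that a deliberate, externally enforced rhythm interrupts the autopilot before a rut can form.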

Thursday, September 8, 2011

Oh that Bug? Yeah it happens all the time, don't worry about it.

Tell me if you've heard this one before.   A user calls into a help desk saying, "Hey, when I go to do X with my software, instead of doing X, something else totally unexpected happens."   At some point a root cause analysis is done, and the team discovers what has happened.  The user has done something to the software; perhaps they've configured some optional setting that isn't part of the normal settings, or maybe an incompatibility with another piece of hardware or software leaves it unable to perform the operation.

Normally you would expect the team to find and smash this bug, right? Well, what if that wasn't what they wanted?  Or what if it was behavior they wanted to leave as it was?  Maybe it's a link to some documentation that moved on a website.  It might be easy to fix, but shipping a patch might be more expensive than simply telling the user another way to get that data.  This situation came to mind as I viewed today's Wizard of Id comic:


If you've been on any software team long enough, odds are you'll eventually come across a defect or bug that you see as a potential loss of value in the product.  After discussion with the team, that bug may be marked as deferred, or left "as designed" by the developer, and slowly forgotten in the code base.   There are times when cosmetic issues, a font size, a color, may not make much difference to the overall user experience, but what if this deferred bug turns out to be something more insidious?  What if it is the bug that builds toward a buffer overflow vulnerability that could result in your system being compromised and hacked?

As testers, it's important that we maintain objectivity as we test.  Sometimes the development team may not all see eye to eye on what is valuable to change for the customer, but we must be very cautious when a seemingly mundane bug is deferred.  Deferred bugs may never get fixed, and they linger in their unfixed state.  Sometimes this is fine, something we have to accept as we strive to produce the most value for our clients, but we must always be careful that the thing we are putting off isn't something serious that could put our customer, our client's data, or even our own company at serious risk.

Wednesday, September 7, 2011

Diary of a Soccer Coach: Week 2 - Inter-team Communication, Noise, and Dealing with the Unexpected

Even before I woke up on the day of our first soccer practice, I found myself glued to the Weather Channel.   After seeing the massive thunderstorms and rainfall affecting so many college football games during the past weekend, and knowing that the remnants of Tropical Storm Lee were predicted to stall out and take time to clear, I was growing concerned.   Some of the weather maps predicted as much as five inches or more of rain locally.   Flash flood watches had been issued by the National Weather Service, and there appeared to be a high probability of rain in the forecast for our practice.

Now normally, a little rainfall would not have introduced any concern about whether to have our soccer practice.  Typically the only things that have ever affected that were thunderstorms or extreme, bitter wind chill conditions.  In truth, only once in the three previous years I have coached have any of these conditions even been approached.   So naturally I was a little concerned here.   The place where we practice is in a low-lying area, part of a flood plain that has flooded in recent memory.  Because of this, it was important to know what the weather conditions might be for the week's practice.

As it happened though, by midday the rain had mostly moved on.  Though still overcast, the rain was gone and it was modestly cooler, although humidity was still high for our practice.  In software teams, how often do we plan for contingencies like these that could disrupt development or delay deployment?  Sometimes the unexpected happens.  A freak ice storm could knock out power to your data center.  A nor'easter could barrel up the coast and cause localized flooding around the facility you were supposed to test against remotely.   What if the shipment carrying a key part for your data center crashes, forcing you to wait an additional two weeks for a customized component to be fabricated?

These are natural impediments that could affect your development process.  As a tester, a network issue could deprive you of the ability to test in your virtual laboratory.  A mistaken deployment could leave a dirty configuration, unlike the clean environment you were expecting, and cause problems as you start encountering bugs and have to track them down.  There are perhaps a hundred or more situations that could result in what I call a noise condition within the team.  Some of them are internal, within the mind of the tester or team member; some of them are virtual, on the box or server where testing is to occur; and some could be physical or natural noise that can interfere with your ability to test effectively.

Some of these impediments can be accounted for in your deployment process, and perhaps avoided, or at least have their effect upon your testing minimized.  However, even the most rigorously documented process is only as good as the people implementing it.  As we are human beings, and prone to error, you can never avoid all of these.   A failed deployment to a test environment could indicate a hole in the deployment process or a missed script.  Why did this script get missed?  The manager may ask this question, but I find the same thing can happen on the soccer field as well.

For the second week of soccer, I like to home in on two key skills.  The first is communication.  Whether the players realize it or not, learning to talk to their teammates on the field makes a big difference in how they will play down the line.   For soccer, the simplest form of conversation is the pass: a simple plant of the off foot toward the targeted teammate, then following through with the passing foot, ankle locked, to connect with the center of the ball right about the inside of the ball of the foot.  If done right, the ball will travel straight along the same line your foot was pointing.   I demonstrate this once or twice to our players, whom I've saved the trouble of finding a teammate to pass to by pairing them up, and have them start with this basic pass.

So I watch the players as they work on their first few passes.   Some kids pick this up very quickly; some quickly get frustrated.  Younger, smaller kids may not be able to kick the ball as far or as hard, or might be more focused on kicking the ball than on the technique of the inside pass.   I watch for moments like these, and let the player try a few times before stepping in to correct them.  Sometimes they figure it out by trial and error, but sometimes they keep doing it the same incorrect way, and I can see a bad habit starting to form.

At this point, as the coach, I step in.  I remind the kids to plant their foot with the toe pointing toward their teammate, to lock the ankle with the toe slightly raised toward the shin, and to connect just above the midpoint of the ball with a straight swing of the foot.    A couple more passes, and a little more encouragement, may be required: keep your eye on the ball (once they have the skill down this may not be as important, but early in the drill it may help if the player watches as they perform the task).  If after a few more tries one player or another is still having difficulty, I may step in and demonstrate again, showing what to do and then emphasizing the difference between how I performed the pass and how they are doing it.

One of the common early problems I notice is a player trying to kick the ball with the toe of the shoe.   Kids seem to think they can get more power passing this way, but it really leads to unpredictable movement of the ball, especially for the younger, inexperienced player.  This isn't something you want the kids doing early in their development.  The toe is a very small area of the foot, and many shoes are 'V' or 'U' shaped, meaning that if you miss the exact center of the ball you may hit it more to the right or left, and the ball will go off in the corresponding direction. I may even demonstrate the wrong way myself so the kids can see the difference, but after doing so I get them passing correctly, back and forth, and may float between pairs, repeating this process as needed.

As more of the players get the hang of this simple pass, I then offer them the option to try the same style of pass with their normal off foot (typically the left), and then give them a demonstration of a more advanced pass.  This time, using the outside of the foot just behind the joint of the littlest toe and driving the foot to the side, you can actually pass sideways.  All the while I continue stressing the same points: get your teammate's attention, point the foot in the proper direction, and follow through on the kick.

This may seem like a very repetitive and boring process, and for some of the older kids it might be.   It doesn't take more than a few minutes before I begin to see the first side effects of noise on the practice field.  There are other things to grab the kids' attention.  Someone brought their dog with them; a butterfly might fly onto the field, drawing attention from the drill; kids on another field might be doing a slightly different drill that catches their eye.  We have the same kinds of noise in our software teams as we communicate.

An HVAC unit could be louder than normal, or a team member may be mulling over some problem they've encountered while we're describing a test we just ran and the flaw we think it uncovered.  Whatever the noise may be, it can impede our ability to communicate effectively the point we are trying to make.  So how can we avoid noise?  Sometimes it may involve asking another team, goofing off in the cube next to you, to keep it down as their voices start to carry.  Maybe it involves interrupting another conversation that has your teammate's attention, when what you need to say is more vital.  Sometimes we have to wait for the noise to pass, such as when a train goes by, blaring its horn and drowning out almost everything else you might hear.  Ensuring we can communicate our message is key in any context.

Now, these examples are fine when the noise is audible, but what if it is internal?  This is where noticing nonverbal cues is important.  If your teammate is listening but focused on reading something on a wall, or on their computer screen, it may indicate their attention is elsewhere; that could be internal noise.  Another example is someone with a habit of doing something with their hands.  It could be something as simple as scratching the back of a hand, playing with a toy of some kind, or twirling a pen in their fingers.  All of these are nonverbal cues that your teammate, try as they might, may not be committed to the conversation.

So how can we avoid these things?  In soccer, when passing, I encourage my players to start the passing conversation by calling their teammate's name.   Then, as the pass becomes second nature, I instruct them to keep their eyes ahead of them, toward where they are passing the ball.    The other player I instruct to keep looking back toward the ball as often as they can while moving around the field, so they are prepared to receive and complete that transmission of the ball across the grass to their feet.   Ever wonder why eye contact is so often stressed in verbal communication?  If our eyes are turned away from the teammate who is trying to communicate with us, then our ears may be turned away too, reducing our ability to hear what they are saying.   That is not to say we should stare a hole into the head of the teammate we are trying to communicate with, but we should make enough eye contact to show that we value what they have to say.

What do you do when your comrade's attention is wandering, or they are busy multitasking and can't seem to keep up with the conversation?  Ever been in a meeting, in person or virtual, where someone is being told something and then the speaker follows with what should be a typical yes-response question?  "Does that make sense, John?"  The initial reaction may be for the person to say yes, but if John's attention had drifted, he might realize he didn't fully absorb the importance of what was being conveyed, and that cue is his chance to say, "No, I was having trouble following what you were saying; can you please repeat that?"  During any conversation we can show our continued attention not just by eye contact but by other nonverbal and verbal cues.  A nod of the head, a quiet "yeah" or "aha" can indicate we are following the thread of the conversation well.


There is one more nonverbal cue I look for when talking with a teammate: hands coming up to the mouth.  You've probably seen someone do this at some point.  You mention something, and they begin to cover their mouth with one or more fingers, indicating subconsciously that they are trying to put together a question or response to what was said; those fingers show that a lot of thinking is going on.  This is a telltale stop sign.  If you see a teammate do this, it is highly likely that they have a different point of view, or something to contribute to the conversation.  There are other mannerisms that can indicate this desire to contribute; someone looking as though they are trying to reach out and give you a subtle stop sign is another.

There are so many things we communicate through nonverbal cues.  How often do we ignore these cues and keep rambling through our point, wanting to reach its conclusion, without allowing our colleagues to collaborate and fully commit to the conversation?  Regardless of your place on the team, be it tester, developer, manager, or team player, communication is critical.  Without it, the noise may increase, and the ball we are trying to pass to our teammate may end up in the wrong place, or intercepted by the competition, and then we are backtracking, trying to recover and catch back up to what we've lost.

So as you go back to your workspaces, consider these thoughts: Where in your environment does audible noise interfere with communication?  What can you do to work around it?  What can you do to react better to the nonverbal cues of your colleagues?  How can you make sure they are able to contribute to the conversation, and thus collaborate toward a better end?