I recently came across a passage regarding the different ways of establishing belief for hypotheses and found myself thinking about an all too common area of missed opportunity in software quality assurance. It reads:
“All swans are white – until you reach Australia and discover the black swans paddling serenely. For science built on induction, the counterexample is always the ruffian waiting to mug the innocent hypothesis as they pass by, which is why the scientific method now deliberately seeks him out, sending assumptions into the zone of maximum danger. The best experiments deduce an effect from the hypothesis and then isolate it in the very context where it may be disproved. This falsifiability is what makes a hypothesis different from a belief – and science distinct from the other towers of opinion.”
from “Chances Are…: Adventures in Probability”, Michael Kaplan & Ellen Kaplan (ISBN-13: 978-0143038344)
This paragraph started me thinking about whether the quality assurance and software testing practices I have observed on several recent projects are quite as effective as they could be. Software quality, not unlike project documentation, is too often treated as an unnecessary expense; a pure cost item whose perceived value and importance depreciate noticeably across the lifetime of the project. QA is among the first items to be cut from the list of deliverables, and some development teams have a hard time getting management to agree to adequate quality assurance cycles in the project plan at all. Of course, once the project has launched and is in production, quality could not be more at the forefront of everyone’s mind; it is not until the first time critical data is lost or a production line is halted that the real value of testing and quality assurance is seen. Software is visible: prototypes have buttons that can be clicked and controls that can be manipulated. Testing leaves only a void on the timeline and a hole in the budget; and when was the last time you saw a QA person giving a corporate demo? The problem is that the value of quality assurance is not tangibly communicated until it is too late for any corrective action to be taken.
I talked to a friend of mine who has been a software quality architect for several years now and who, despite how little his role is valued, continues to enjoy his work and perform it with the utmost commitment to excellence. I asked him what was different about the projects where QA had been valued from the beginning. His answer didn’t crystallize into useful information for me until the last couple of days. My friend felt that when the QA engineers had a clear understanding of the project and its direction from the beginning of every release cycle, they could craft more appropriate test plans and be seen as adding value right from the start. With a clear understanding of what the software was actually trying to do, his team was able to provide useful feedback to the developers even within the first couple of release cycles. That feedback helped shape the developers’ designs and decisions, and became a tangible deliverable within the project in its own right. Such visibility of their contribution became an integral part of release planning. Instead of simply “filing bugs” from day one (which is little more than an inconvenience to a developer who “knows it isn’t working yet”), they were able to leverage their knowledge of the project’s direction to step off the typical bug-tracking path and focus on actually testing.
To paraphrase the contribution outlined above: the QA team had ceased to be a bug-filing machine and had moved on to validating assumptions and testing concepts, providing a tangible output for their work that influenced the course of the project.
Exhaustive testing certainly has its place, and I for one am a strong proponent of code coverage, standards adherence, and testing to a high level of detail. However, when was the last time that a bug filed against the first release of a mid- to large-size software project had any relevance to the final version that actually shipped? Certainly there are simple errors that a developer can fix within a given sprint, but it has been my experience that subjecting incomplete alpha builds to rigorous testing simply proves that, indeed, the software is not yet ready for release. Rather than the typical reaction of performing QA at the end of the software cycle, I propose integrating QA at different levels throughout the software development process.
Software testers do just that; they “test” things. However, the role of the software tester has become divorced from that of the software engineer, to the great detriment of many projects. Quality assurance engineers are often left in the dark about project requirements, brought in towards the end of a project, and asked to “bang away on it” for a while, filing bug reports of their findings. Ultimately, several of the “high-priority” bugs are patched to make the software limp less noticeably, and the product is pushed into production. This is satisfying neither for the bewildered quality assurance engineer performing the banging, nor for the developer who uses the appearance of such bugs to account for the “problems with late changes in requirements.” There are many tasks a quality engineer can perform, especially early in the project, that do not involve blindly banging away on something.
This returns me to the passage above and to thinking about how theories in classical and quantum physics are tested. Certainly the minds that attempted to locate flaws or incorrect assumptions in quantum mechanics (Einstein himself being one of them) had an intimate knowledge of the system they were trying to break. Further, they focused upon certain key assumptions underpinning the overall theory and tried to find flaws there. It is much easier to verify and validate a small assumption than to assault an entire framework in one go. This probably leads the mind towards unit testing and test-driven development, but if I might, I’d like to divert for a moment and categorize those practices as another form of exhaustive testing, albeit one more commonly linked with software developers. What mattered was not the unit size of the piece being tested; rather, it was the knowledge of potential flaws that led Einstein, Podolsky, and Rosen to create a thought experiment that attacked a very specific part of the theory. The importance falls upon knowing which parts of the framework to test, and in which specific ways, and it is in that single aspect that I feel most software quality efforts fall short. If I were to attempt to disprove the general theory of relativity, I would be stabbing wildly in the dark, testing the most basic assumptions that had more than likely already been tested many times by the theory’s originator during its creation. Yet this is precisely what happens in many software quality assurance efforts. As a software developer, I have to test and debug my code many times during its natural creation cycle. I go through countless rounds of testing it in known ways to ensure that it functions as I expect.
Despite this, I know these same tests are going to be repeated many times over by an under-informed QA team at the point in the project where time is most limited. The problem doesn’t stop there, however: many developers could cite areas of the system that they feel are potential weak points, yet this information simply never makes it to the QA team, who are left to their own instincts to “find” the bugs in the software.
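The contrast above can be sketched in code. The example below is a minimal, hypothetical illustration (the `paginate` helper and its weak spots are invented, not from any real project): rather than re-running the developer’s happy-path checks, each QA-authored test targets one specific assumption the developer is known to be relying on, in the spirit of a falsification attempt.

```python
import unittest

# Hypothetical helper under test: splits a list of records into pages.
# The developer's own tests cover the happy path (totals that divide
# evenly into pages); the tests below each attack one named assumption
# instead of repeating that work.
def paginate(items, page_size):
    """Split items into consecutive pages of at most page_size items."""
    if page_size <= 0:
        raise ValueError("page_size must be positive")
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

class TestPaginationAssumptions(unittest.TestCase):
    """Falsification-style tests: one specific assumption per test."""

    def test_remainder_page_is_not_dropped(self):
        # Assumption under attack: "totals always divide evenly by page size".
        pages = paginate(list(range(10)), page_size=3)
        self.assertEqual(len(pages), 4)   # three full pages plus one partial
        self.assertEqual(pages[-1], [9])  # the remainder survives

    def test_empty_input_yields_no_pages(self):
        # Assumption under attack: "there is always at least one record".
        self.assertEqual(paginate([], page_size=3), [])

    def test_nonpositive_page_size_is_rejected(self):
        # Assumption under attack: "callers always pass a sane page size".
        with self.assertRaises(ValueError):
            paginate([1, 2, 3], page_size=0)
```

Run with `python -m unittest`. The point is not the tests themselves but their selection: each one exists because a specific, communicated assumption marks it as a zone of maximum danger.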
Consider a different approach, especially early in the project life-cycle, in which quality assurance engineers test simple premises. It is common for a development team to settle upon one or more patterns appropriate to their solution. These aren’t formal software patterns such as the oft-abused factory, so much as the employment of a particular data-binding technique or a decision about the structure of a data access layer. As soon as the overall system architecture has been laid out, weak points begin to identify themselves: areas where the developers aren’t entirely happy with the solution at hand but can’t think of another way to solve the problem within the time and materials budgets that have been set. These areas are perfect for the initial attention of the quality engineers. Just as developers must ramp up on certain technologies for a project, so too must quality engineers. I can’t count the number of times a development team has become short-tempered with a quality assurance team for not understanding why a Foo control behaves in a particular way, conveniently forgetting their own struggles with the Foo control when first learning it. Had the testing team been present throughout the learning phases of the project, they too would have had the chance to learn the intricacies of the Foo control, probably discovering its quirks from another perspective and providing additional value to the team as a whole by sharing their experience from the end-consumer side of the software.
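A premise probe of this kind can be a few lines long and written before any feature code exists. As a hedged sketch, suppose a (hypothetical) team has settled on JSON as the serialization format inside its data access layer; a quality engineer can immediately try to falsify the premise “JSON round-trips our records losslessly” by feeding it the awkward values first, rather than the easy ones:

```python
import json
from datetime import datetime

# Early premise probe, not a product test. The premise under attack:
# "JSON round-trips our records losslessly." (The record shapes here
# are invented for illustration.)
def round_trips(value):
    """True if value survives a JSON encode/decode cycle unchanged."""
    try:
        return json.loads(json.dumps(value)) == value
    except TypeError:
        return False  # not JSON-serializable at all

# Values chosen to attack the premise, not to confirm it.
assert round_trips({"id": 7, "name": "widget"})      # the easy case holds
assert not round_trips({"created": datetime.now()})  # datetimes need a codec
assert not round_trips((1, 2, 3))                    # tuples come back as lists
```

Each failing assertion is a tangible deliverable: a documented limit of the chosen pattern, delivered while the design can still absorb it.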
This brings me to my point (to cut a short story long, as might be said) about focusing on falsifiability. The most valuable testing of any system seeks out the worst ruffians (bugs) waiting to mug (crash) that system, exposing them early enough in the development life-cycle to have an actual impact on the direction of the project. These vilest flaws are often the most insidious and seem to deliberately hide from an unsuspecting team until the worst possible moment. Developers are not looking for these flaws; they are focused on the solution to the problem, not on whether their solution will fail. They may know such flaws could exist, but they fully intend to code around the problem in a manner robust enough to hold off any coming assault. Quality engineers, however, are paid precisely to seek out such problems and to alert people to the coming calamity while there is still time to avoid it. To do this they must know where to look, and must focus upon the most uncomfortable parts of the solution: the parts where assumptions fail and things go wrong. It is here that better communication and more shared knowledge are needed, and here that developers have a duty to include their quality personnel and expose to them their deepest fears. Given the oppressive nature of most software deadlines and the expectation that software is a finitely provable art, most developers would rather tear off a limb than tell the world about every part of their proposed solution that might fail. Yet it is in doing just that that a higher-quality product is built and a more robust solution delivered. After the software launches successfully and sets itself apart from other, failing products, developer and tester alike will be held aloft in the minds of product managers (and in some cases users) for having the diligence and the skill to deliver.
It is in focusing upon the areas of maximum danger, by actively striving to falsify every assumption within the solution, that a truly successful software project is born.