Automotive manufacturers have long been on the vaunted TDD bandwagon. For years they have harnessed the potential of unit testing, verifying components individually before the expensive and time-consuming exercise of assembling them into a completed vehicle. Integration testing in the automotive world is very expensive, and discovering that a single component failure has wasted a fleet of test vehicles, along with months of construction and fabrication, is close to unacceptable. Yet they have also long faced the question of when and where to design for testability, a line still not clearly drawn in any industry.
It makes perfect sense that if a component is to be tested in isolation from the system it will eventually join, a certain amount of planning is necessary to facilitate that testing. This means that certain designs, while technically and functionally brilliant, may need to be modified if they are ever to be tested. The question is where to draw the line. All too often in software engineering the line is drawn firmly on one side or the other, and rarely in a position of balance.
I can understand that on a modern motor vehicle with multiple on-board computers performing a variety of related tasks, there is a need to interface with and test those computers thoroughly. I'm sure that to the component manufacturers the need for extra diagnostic access ports and programmable interfaces is a hindrance to development; after all, they are working in a very closed system. To the test engineers, however, the ability to simulate all manner of failures and receive highly detailed information about everything the computer is doing is absolutely indispensable. They cannot guarantee the component functions correctly without it.
Unfortunately this desire for components to be testable can easily get out of hand. Consider a test engineer who is tasked with ensuring that the onboard computer functions at various speeds. Unfortunately his equipment is bulky and doesn't fit inside the vehicle. Even at low speeds he finds that keeping up with a car in motion requires him to run alongside pretty quickly. Should he require that the car never go faster than 8mph, so that its full legal range of speeds is the range he is able to test? Something tells me that the manufacturer wouldn't be able to sell too many of that particular model.
Instead the test engineer must find other ways to test the car at high speeds. Custom models of the car are constructed that allow a connected rig to be tethered to the car, carrying all of the test equipment alongside even at high speeds. At even higher speeds, motion simulations and car treadmills are employed, allowing for testing at speeds where the tethered rig would be unsafe. The design of the car wasn't limited to a maximum speed of 8mph; instead, a compromise was found that struck the right balance of functionality and testability.
In software engineering, we have a variety of tools and frameworks at our disposal to accomplish similar testing of software. Unit testing frameworks, mocking frameworks, dependency injection containers: these are tools. Each plays a specific role and solves a particular flavor of problem, but none of them is all-encompassing and all of them require thought before use.
There is a growing epidemic of code grown out of the desire for “high coverage” unit testing, and some of the motivations behind it appear to have taken an 8mph turn for the worse. In an effort to improve code quality, testing initiatives have started to build 8mph cars with six wheels and no doors. In terms of test coverage the cars are truly superb, but from the outside many consumers are left scratching their heads and wondering where it all went wrong.
I usually find that if a particular design decision is inhibiting the testability of a component, one of three things is happening:
- The design is so tightly coupled and opaque that no testing could hope to penetrate the darkness of the black box surrounding it. (i.e. a DLL with one 5,000-line method inside that does “something”)
- The design is so abstract that it doesn’t actually do anything. (i.e. a DLL containing only interfaces – test that, suckas!)
- The testing tool being applied is the wrong one for the job.
As a software engineer, I can fix problems 1 and 2. Better still, with good design and architecture we can prevent 1 and 2 from ever reaching QA in the first place.
There isn’t much I can do about number 3. This is where I ask for the line to be moved a little with regard to the tool being employed. Here are a few examples of hammering screws and screwing nails.
Static methods, unless invoked through reflection, attract coupling from consumers. That is to say, if an assembly contains a static method, there is a good chance that anything calling that static method holds a strong reference to the assembly that exposes it. This is not a bad thing, and there are many places where such a design makes perfect sense. However, in certain situations this can cause mocking frameworks to perform more work than necessary to set up the premise for a unit test, because they cannot mock away the reference to the assembly that performs the work. Mocking frameworks are ideal for mocking abstract implementations and poor for mocking tightly coupled components; there is nothing to mock because everything is explicitly stated. See screw, swing hammer. Static methods are, however, perfect candidates for plain unit testing (with the magic of reflection available if you need it). Inspect the method's contract (the logical contract, not the physical source code, i.e. what, in English, it promises to do) and based upon that contract construct a suite of tests to affirm that the method is indeed doing what it promised it would.
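As a minimal sketch of that contract-driven approach (in Python rather than .NET, and with a hypothetical `PriceCalculator` and tax rule of my own invention), the static method is simply exercised directly against its stated contract, with no mocking framework in sight:

```python
class PriceCalculator:
    """Hypothetical component exposing a static method."""
    TAX_RATE = 0.2

    @staticmethod
    def total_with_tax(net):
        # Stated contract: return the net price plus 20% tax,
        # rounded to two decimal places.
        return round(net * (1 + PriceCalculator.TAX_RATE), 2)


# The tests invoke the method directly; nothing is mocked because
# nothing needs to be mocked - the contract is explicit.
def test_adds_twenty_percent():
    assert PriceCalculator.total_with_tax(100.0) == 120.0

def test_rounds_to_two_places():
    assert PriceCalculator.total_with_tax(0.99) == 1.19
```

The tests assert on the documented promise, not on the method's internals, so they survive any refactoring that keeps the promise intact.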
Service assemblies, such as configuration loaders or proxies, tend to operate upon interfaces rather than implementations. That is because the value they add is an operation upon some implementation of an interface, rather than a concrete process defined up front. This is the provider model. We write a set of interfaces describing the kinds of things a provider can do. We also write a set of services that help those providers achieve some work. We don’t write the actual providers themselves, because this is a plugin model. Well, how do we test a provider that hasn’t been written yet? Wait, this could be hard in unit testing. We call a method on an interface but have no way to know what the “as yet unwritten” provider will do in that method call, and no way of knowing whether the result is correct. NTemporalDisplacementUnit 1.0 hasn’t been released yet so we can’t do it in a state of temporal flux either…panic! But wait, in thinking about the problem further we realise that we don’t care what the provider does. The contract we are providing is that we will interact with a provider in a predictable and documented fashion. As long as we interact with the interface in the correct way and at the correct time, our part of the contract is fulfilled. Woohoo! That’s what a mock framework is for. We mock up a provider and check that our service correctly interacts with it. Mock frameworks are great for this. Finally, a nail that we can hit with our hammer.
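A sketch of that interaction test, again in Python with hypothetical names (`StorageProvider`, `ExportService`): the real provider doesn’t exist yet, so the service is verified purely by how it drives a mock standing in for the interface.

```python
from abc import ABC, abstractmethod
from unittest.mock import Mock


class StorageProvider(ABC):
    """The interface that as-yet-unwritten plugin providers will implement."""
    @abstractmethod
    def open(self): ...
    @abstractmethod
    def write(self, data): ...
    @abstractmethod
    def close(self): ...


class ExportService:
    """Our service. Its testable contract is HOW it drives the provider."""
    def __init__(self, provider):
        self._provider = provider

    def export(self, data):
        # Documented interaction: open, write exactly once, always close,
        # even if the provider's write raises.
        self._provider.open()
        try:
            self._provider.write(data)
        finally:
            self._provider.close()


# We don't care what a real provider does with the bytes; we only verify
# that our service fulfils its side of the contract.
provider = Mock(spec=StorageProvider)
ExportService(provider).export(b"payload")
provider.open.assert_called_once()
provider.write.assert_called_once_with(b"payload")
provider.close.assert_called_once()
```

Note that the assertions are all about the interaction with the interface, never about a result: the correct nail, hit with the correct hammer.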
If you find that such thinking manifests itself in dogmatic emails stating “static methods shouldn’t be used because they inhibit testability” or “interfaces are bad because we can’t get coverage with them,” then it might be time to inspect both the situation and the tools and check that the two are correctly paired.
The goal of testing is to improve the quality of the end product, not to turn lots of lights a pretty shade of green. 8mph cars with lots of green LEDs and no doors make for great dogmatic case studies and really awful consumer responses.