8/27/2015

Quality Testing Is The Best Investment

There's no doubt in any software engineering practice that money has to be spent on the quality of the product. After all, a feature that doesn't work the way it's supposed to, or just flat out doesn't work at all, really isn't a feature. And hopefully, at the very least, we won't write code that causes our systems to crash entirely. That would most certainly destroy customer trust. We have to have quality in software, whether we work in internal engineering departments or whether we're marketing software as a product. Games, productivity tools, health tracking tools, and all kinds of other products depend on quality software to remain profitable. Instead of thinking of quality as expensive, we need to think of it as a far less destructive cost than a customer finding the failure for us. This article does a pretty good job explaining why a bug gets much more expensive to fix the later in the process it's uncovered. Quality isn't an expense. It's the most critical investment you can make in the software portfolio you offer. There are two kinds of testing investment I want to cover, along with their related components: interactive testing and automated testing.

Interactive Testing

There are several phrases and quotes that all share the same acronym: QA. For example, "has QA tested this yet?" or "what did QA say about the latest features?" In general, when we talk about "QA" we mean our team of interactive testers. This is a group of people with a test lab of hardware where they exercise the software by inputting all kinds of nonsense (and valid input too) to make sure that when gremlins hit the software, it won't implode. They work from requirements and acceptance criteria, build test suites, and make sure that when Joe User opens the application they don't immediately close it again. No matter how good your automated test suite is, we always need interactive testing so that actual human beings can verify the software behaves itself.

What makes this a good investment? First of all, a strong interactive testing team will build automated tests out of the manual test cases they have created. Going forward, that gives you the ability to plug regression checks into your continuous integration build, with testing that happens very close to the user's reality. For web applications (my wheelhouse) we usually use a tool such as Selenium to drive these interactions.
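
Here's a minimal sketch of what one of those Selenium-driven checks might look like, assuming JUnit 4 and Selenium's Java bindings; the page URL, element ids, and credentials are all hypothetical placeholders:

```java
import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginFlowTest {

    private WebDriver driver;

    @Before
    public void openBrowser() {
        driver = new FirefoxDriver(); // drives a real browser, close to the user's reality
    }

    @Test
    public void userCanLogIn() {
        // Hypothetical page and element ids -- adapt to your application.
        driver.get("https://staging.example.com/login");
        driver.findElement(By.id("username")).sendKeys("joe.user");
        driver.findElement(By.id("password")).sendKeys("correct-horse-battery-staple");
        driver.findElement(By.id("login-button")).click();

        assertTrue("Expected to land on the dashboard",
                driver.getTitle().contains("Dashboard"));
    }

    @After
    public void closeBrowser() {
        driver.quit();
    }
}
```

Because it drives a real browser, a test like this catches the kind of breakage a unit test never will - a renamed form field, a broken redirect, a JavaScript error on page load.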

Automated Testing

Unit / Functional / Integration Testing

There are plenty of people way smarter than me who have taken the time to address the ratio of unit vs. functional vs. integration testing in your testing suite. My main goal is to get unit test suites that do the great majority of the testing of each class and each method within a class. When I talk about these kinds of tests in general, I mean tests that can be run by some combination of a test harness (JUnit/Codeception/NUnit) and some kind of dependency injection container (Spring/Symfony/vNext). When you're writing pure unit tests, the DI container should not be initialized by the test harness. For some functional tests, and definitely for integration testing, it's important to verify that the objects placed into the DI container are wired together and used correctly. For the most part, the bulk of automated test harness code should be unit tests that mock away any dependencies they require. The main and most obvious benefit of tests written in a test harness is preventing repeated regressions. With a proper test harness, production bugs can follow this workflow: verify the bug -> reproduce the bug in the test harness -> make the red test turn green by fixing the code. By using this workflow, you assure yourselves that one specific regression can never silently return. Instead of it making it into QA or out to the client, your automated tests can inform you immediately upon check-in if any new code breaks an existing test.
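
To make that concrete, here's a minimal sketch of a mocked-out unit test using JUnit 4 and Mockito; InvoiceService and TaxRateRepository are hypothetical types invented for illustration:

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.Test;

public class InvoiceServiceTest {

    // Hypothetical dependency that would normally hit a database.
    interface TaxRateRepository {
        double rateFor(String region);
    }

    // Hypothetical class under test, with its dependency injected.
    static class InvoiceService {
        private final TaxRateRepository rates;
        InvoiceService(TaxRateRepository rates) { this.rates = rates; }
        double total(String region, double subtotal) {
            return subtotal * (1 + rates.rateFor(region));
        }
    }

    @Test
    public void appliesRegionalTaxRate() {
        // Mock away the dependency -- no DI container, no database.
        TaxRateRepository rates = mock(TaxRateRepository.class);
        when(rates.rateFor("OH")).thenReturn(0.07);

        InvoiceService service = new InvoiceService(rates);

        assertEquals(107.0, service.total("OH", 100.0), 0.001);
        verify(rates).rateFor("OH");
    }
}
```

No container, no database, no network - a test like this runs in milliseconds, which is exactly why it can afford to run on every single check-in.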

Performance Testing

Applications have to be fast. According to Microsoft research, goldfish do a better job of paying attention to one subject than human beings do. You have 8 seconds (or less) to show you're delivering the right value to your customer. Otherwise they're going to context shift and look for something more interesting to pay attention to. 8 seconds. To load the most critical insights from thousands/millions/billions of data points. If you don't think performance (or perceived performance) matters in your application - you're going to end up regretting it. People will stop paying attention to it, and then you won't have customers to work for any more.
Source: Microsoft Attention Span Research
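
As a starting point, a performance budget can be enforced right in the test harness. Here's a minimal sketch using JUnit 4 and plain HttpURLConnection; the staging URL and the 2-second budget are hypothetical assumptions, and a real performance suite would use a dedicated load tool (JMeter, Gatling) rather than a single request:

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;

public class HomePageLatencyTest {

    // Hypothetical budget: comfortably inside the ~8-second attention window.
    private static final long BUDGET_MILLIS = 2000;

    @Test
    public void homePageLoadsWithinBudget() throws Exception {
        long start = System.nanoTime();

        // Hypothetical endpoint; point this at a staging environment.
        HttpURLConnection conn =
                (HttpURLConnection) new URL("https://staging.example.com/").openConnection();
        int status = conn.getResponseCode(); // forces the request to complete
        conn.disconnect();

        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        assertEquals("Unexpected HTTP status", 200, status);
        assertTrue("Page took " + elapsedMillis + "ms; budget is " + BUDGET_MILLIS + "ms",
                elapsedMillis <= BUDGET_MILLIS);
    }
}
```

A failing test here is an early warning, not a diagnosis - but it stops a slow page from silently shipping.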

Penetration Testing

Ignoring the security implications of software seems to me a bit like sending a cow out into an alligator-infested river and expecting it to make it safely to the other side. Read the news. You'll see what I mean. After all, one of the many critical aspects of quality is security. Many software industries can even get into significant legal trouble if the software doesn't properly protect its users. HIPAA protects health records. FERPA protects education records. Imagine if you worked for a company making one of those kinds of software, and sensitive data got out. At that point your company's reputation has been tarnished to a point where recovery will take a very long time, if you can recover at all - and while you're spending a ton of money trying to repair the damage, you also have legal regulators to contend with. The expense of a security breach cannot be ignored; just ask Target. The cost of implementing the best security measures we can at the outset hardly seems like an expense at all when you compare it to the risk of not taking security seriously. How much more likely is it that Target wouldn't be in the public's cross-hairs if they had spent that $148 million on a security improvement initiative instead? They'd also have a reputation that didn't include "don't use your credit card to shop there."
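
Full penetration testing belongs to specialists, but some of the most embarrassing holes can be pinned shut with automated checks. Here's a minimal sketch of a JUnit 4 test that throws a classic SQL injection payload at a login endpoint; the URL is a hypothetical placeholder, and it assumes the endpoint answers failed logins with HTTP 401:

```java
import static org.junit.Assert.assertEquals;

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

import org.junit.Test;

public class LoginInjectionTest {

    @Test
    public void rejectsClassicSqlInjectionPayload() throws Exception {
        // Hypothetical endpoint; a real suite would cover many payloads and fields.
        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://staging.example.com/login").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        String body = "username=" + URLEncoder.encode("' OR '1'='1", "UTF-8")
                + "&password=x";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // Assumption: this endpoint answers failed logins with HTTP 401.
        assertEquals("Injection payload should fail authentication",
                401, conn.getResponseCode());
        conn.disconnect();
    }
}
```

One test like this won't make you secure, but once a payload is known it should never work a second time.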

Peer Reviews are Critical Too

In essence, the fastest way to get quality is to have someone sitting over your shoulder watching you write every line of code needed to get a feature out the door. This ensures two things: 1) no one person has all of the knowledge about a given feature, and 2) the person writing the code will write differently because they know they're being reviewed in real time. Pair programming gives us the opportunity to work from a pilot/co-pilot pattern. But it's very hard to implement practically - and also very expensive, because you have to have "twice as many" engineers to get the software into production. Then again - with twice as many engineers on every change, the likelihood of a regression escaping from any one given check-in goes WAY down.

JSON Jason