So it is, but actually stating that, or anything along those lines? Way to kill the team, boss! (See Peopleware)
That said, quality assurance, quality control, QA, call it what you want: it's one of the more misunderstood aspects of software development. Oh sure, everyone knows that they need to do more QA or better QA, but lip service is about all that ever gets paid to it. I am notably not including in my 'everyone' those who feel that QA can be completely automated. You guys are wrong, and I'm going to leave it at that. You may also think you don't need QA at all; see this article for some classic arguments against that fallacy.
I'm not going to go into depth about QA, how to do it, best practices, or anything along those lines, as I'm fairly unqualified. Then again, I'm not really qualified to talk about anything, and that doesn't stop me.
QA is a process, not a task
This particular fail case is something I've seen in multiple organizations now. The most obvious symptom of this is when management has decreed that there is a block of a few hours set aside to 'do QA' on an application with a few hundred known use cases. Another obvious indicator is when other employees are volunteered to do a few hours of QA on top of their normal job. Think you're going to get good results from that?
The root cause of this failure is simply not understanding how QA works, so let's walk through it a bit. In a very broad sense, the general list of tasks for QA is something like this:
1. Go through the basic cases
2. Go through the corner cases
3. Go through obscure, known failure cases
4. Exploratory testing
5. Automating steps 1, 2, and 3
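To make steps 1 through 3 concrete, here's a minimal sketch of what they look like once they've graduated to step 5 and become an automated regression suite. The `signup` function is a hypothetical stand-in for the application under test; in a real suite these tests would drive your actual app through its UI or API:

```python
def signup(username, password):
    """Hypothetical stand-in for the application under test."""
    if not username:
        raise ValueError("username required")
    if len(username) > 64:
        raise ValueError("username too long")
    if len(password) < 8:
        raise ValueError("password too short")
    return {"user": username}


def test_basic_case():
    # Step 1: the happy path that nearly every user follows.
    assert signup("alice", "correcthorse")["user"] == "alice"


def test_corner_cases():
    # Step 2: valid-but-unusual inputs right at the boundaries.
    assert signup("a" * 64, "longenough")  # maximum-length username
    for bad_args in [("", "longenough"), ("bob", "short")]:
        try:
            signup(*bad_args)
            raise AssertionError("expected rejection of %r" % (bad_args,))
        except ValueError:
            pass  # correctly rejected


def test_known_failures():
    # Step 3: obscure regressions that embarrassed us before.
    try:
        signup("a" * 65, "longenough")  # the over-long name that once slipped through
        raise AssertionError("over-long username accepted")
    except ValueError:
        pass  # correctly rejected


if __name__ == "__main__":
    for test in (test_basic_case, test_corner_cases, test_known_failures):
        test()
    print("regression suite passed")
```

Note that step 4, exploratory testing, is conspicuously absent from the code: it's the part that resists automation, which is the whole point of the list's ordering.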
So, how does this fit into a day of work? Let's find out:
First off, we're going to go through the basic use cases for the application. Then, there is a pile of corner cases that are pretty valid that need to be checked out. Then it's time to check all the really obscure, but horribly embarrassing failures that have been seen before. From there we can finally...What? You changed the code? Okay, first off, we're going to go through the basic use cases for the application...
Interruption here! "Silly tester," says the savvy developer, "you only need to re-test the parts of the system that were changed." Nice theory, but wrong in many, many ways. Simply put, if that were the case, testing by anyone other than the developers would never be needed. That generally goes well.
Back to the task at hand, do the basic cases, do the corner caWHAT? Changed again? Basic cases...
The real job of QA starts at step 4, which we haven't even reached yet. Exploratory testing is finding the embarrassing defects before they get out into the wild. A good tester at this phase is going to break your application in ways you haven't even dreamed of, in ways that only 0.1% of your users would ever try. Of course, if 0.1% of your users do it and you get 10k uniques per day? That's 10 people per day who are going to hit this embarrassing bug, think "how could you possibly let this into the wild, I'm taking my business elsewhere right now, I obviously cannot trust you with my data," and leave. And if one of those has a blog? Heh. Have fun with that.
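The back-of-the-envelope math above is worth making explicit, since "0.1% of users" sounds negligible until you multiply it out against the traffic numbers just mentioned:

```python
# How many users hit a "rare" edge case, given the traffic
# assumptions from the paragraph above?
daily_uniques = 10_000
hit_rate = 0.001  # 0.1% of users stumble into the bug

daily_victims = daily_uniques * hit_rate
yearly_victims = daily_victims * 365

print(daily_victims)   # 10.0 angry users per day
print(yearly_victims)  # 3650.0 per year, any of whom might blog about it
```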
So, the epic fail with having 16 hours scheduled to test your quarter-million-line application? If you've got bug fixing going on at the same time, your competent testers will never get past step 1. Any testers who listen to the savvy developer, or worse, are the savvy developer, will miss basic cases, and you'll deploy with fundamental breakage.
The purpose of QA is not to have someone say, "Wonderful developer, your application is perfect!" If I hear that from a tester, I assume they aren't doing their job very well. QA should hurt your feelings. Assumptions you made should be laid bare, then justified or thrown out if incorrect. This is often the last line of defense before your customers see your application; take it seriously.