In this blog, we will cover how teams alienate existing users when they test their apps based on intuition rather than data.
Every product team is forced to work in a system where dev and QA incentives clash.
Devs are incentivised to ship code as fast as possible: their concerns are feature delivery, sprint velocity, and time-to-market for new features. These goals are at odds with QA's incentives: QA wants as few bugs in production as possible, and so wants to write as many tests as possible to limit the damage.
Writing and maintaining tests is hard. For a large, complicated application it is humanly impossible to think of every user flow that might break, so QA teams have to optimize what they test. They would never knowingly settle for less than complete coverage, but an educated guessing game based only on experience and vague knowledge can produce nothing better than assumptions about what is important. Their judgment about what to test, and what to skip, is not grounded in data from real user-app interactions.
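As a minimal sketch of this gap (the `checkout_total` function and its flows are hypothetical, invented purely for illustration), hand-written tests that encode only the flows QA imagined can pass cleanly while a flow real users actually hit breaks:

```python
# Hypothetical checkout logic, used only to illustrate the point.
def checkout_total(items, discount_code=None):
    """Sum item prices and apply an optional discount code."""
    total = sum(price for _, price in items)
    if discount_code == "SAVE10":
        total -= 10  # Bug: a small cart can end up with a negative total.
    return total

# The flows QA assumed: a normal cart, with and without a discount.
assert checkout_total([("book", 30), ("pen", 5)]) == 35
assert checkout_total([("book", 30), ("pen", 5)], "SAVE10") == 25

# The flow a real user actually hit: a tiny cart plus the discount code.
# checkout_total([("pen", 5)], "SAVE10") returns -5, a negative total,
# yet no hand-written test ever asserted that this could not happen.
```

Data from real user sessions would have surfaced the small-cart-plus-discount flow; intuition did not.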
What happens when teams try to solve this by investing in a QA process that runs on intuition instead of data?
Teams feel safe having thousands of hand-written tests, even though those tests take several days to run each sprint, slowing down development speed and time to market.
At best, these tests are educated guesses about how users might interact with your application, so they fail to catch the issues that can actually destroy your application's product experience.
The hand-written test suite soon becomes a product in its own right, consuming thousands of hours of maintenance and increasing the effort spent on testing, contrary to the actual goals of test automation.
Why does this testing strategy of building a hand-written test suite fail every time?
Every hand-written test suite fails to optimize all three parameters, coverage, assertions, and number of tests, at the same time. It is impossible to conceive and write enough tests, with sufficiently strong assertions, to achieve 100% coverage.
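To make the tradeoff concrete (the `apply_tax` function here is hypothetical, not taken from any real suite): a single test can execute every line of a function, achieving 100% line coverage, while its assertions are too weak to catch the bug:

```python
# Hypothetical function: callers may pass the tax rate either as a
# percentage (8) or as a fraction (0.08), but only one of them works.
def apply_tax(amount, rate):
    """Return amount plus tax, treating rate as a percentage."""
    return amount + amount * rate / 100

# This one test runs every line of apply_tax -- 100% line coverage --
# but its assertion is too weak to notice a unit mismatch.
assert apply_tax(100, 8) > 100

# A caller passing a fractional rate gets a silently wrong answer:
# apply_tax(100, 0.08) returns roughly 100.08 instead of 108.
```

Coverage measures which lines ran, not whether the assertions were strong enough to catch what went wrong; optimizing one parameter says nothing about the other two.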
Questions for engineering and product leaders who have decided to build their own test automation suite:
Are you sure your test suite can test everything your users can do, such that the app never needs patches or fixes?
As your application changes rapidly and keeps adding new features, how many tests will you need to maintain the right coverage?
If your answer to the first question is yes, what was the cost of writing that many tests, and what is the ongoing cost of maintaining them?