April 30, 2019
Automated testing is not a substitute for QA. Whether automation can replace manual QA ultimately depends on the product and on whether the behavior in question can be automated at all — and that assumes the behavior was being tested in the first place.
One example: at one point, Chrome appeared to have broken full-screen movie playback for 5% of users. Something changed in the application, performance degraded, and people who could previously play full-screen video on a low-end machine suddenly got playback too slow to be useful. Videos effectively stopped going full screen. No bug reports came in. Nothing in the testing infrastructure caught it. I only noticed because my father tried to show me a video on his Atom-based netbook.
However, there is a strong case for automation when manual QA becomes a huge time-suck. QA teams operating in a waterfall model with gigantic, monolithic releases probably ship a handful of times a year. Almost without exception, their QA is done manually and is painfully slow. Automating all of it is a daunting task, and may face resistance because the completion date is far over the horizon.
What usually happens in this scenario is that QA engineers without coding chops are compelled to write automated test cases or be let go, while the others are integrated into the development teams. Overall, this does end up shortening release cycles and producing better quality control than ever before.
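For illustration, here is a minimal sketch of what converting one manual QA checklist step into an automated test might look like. The `login` function below is a hypothetical stand-in for the product under test; in a real suite you would drive the actual application through its UI or API (e.g. with pytest and Selenium):

```python
def login(username, password):
    # Hypothetical stand-in for the real login flow being tested.
    valid_accounts = {"alice": "s3cret"}
    return valid_accounts.get(username) == password

def test_login_rejects_wrong_password():
    # The manual checklist step "verify bad password is rejected",
    # expressed as code that can run on every commit.
    assert login("alice", "wrong") is False

def test_login_accepts_valid_credentials():
    assert login("alice", "s3cret") is True

if __name__ == "__main__":
    test_login_rejects_wrong_password()
    test_login_accepts_valid_credentials()
    print("all checks passed")
```

The payoff is that a check a human previously performed once per release now runs unattended on every change, which is exactly where the shortened release cycles come from.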
The important thing here is maintaining a separation between development and QA. Nobody is going to work 80-hour weeks doing two full-time jobs at once.
This website talks about sensible approaches to QA testing and how to really know that your application does what it needs to do, day in and day out.