A lot has been said about the role of automation: that automated tests are not tests but checks (see, for example, Testing vs. Checking), that a test engineer does much more than run checks, and so on. I agree with most of it.
But I think the problem runs much deeper: the automation strategy itself is almost designed not to find bugs.
To make my point I will refer to an article by Oren Rubin, CEO of testim.io. I have heard a few talks by Oren, and he is an expert in automation and in development in general. He is a very knowledgeable and capable person, though I am not familiar with the product testim.io makes.
However, I think he considers it more important for checks to run smoothly than for checks to find problems in the product. And he is not the only one; you can find more-or-less the same views from other automation experts.
The article I refer to is 10 Rules for Writing Automated Tests, and I will evaluate some of its ten rules.
“Rule 3: Create Structured, Single-Purpose Tests. Single-purpose tests verify one thing only!”
This makes sense for automation, but not for a user. I open WhatsApp and start writing to someone, but then I get a message that might be more important, so I pause and go over to read it. Afterwards I return to where I started.
This is Rule 4 from the article: "Tests’ Initial State Should Always be Consistent"
Automation tests, he claims, “should always start from the same initial state”. For that, each test should be independent, and we should consider creating a new user as part of the test.
Yes, this is probably good advice if you want to make sure the automation will not fail; it is poor advice if you are searching for abnormalities or bugs. Creating a new user per run is, if I am not mistaken, not something users usually do, and that alone is reason enough not to do it. Users usually have one account each, and that account has a history, sometimes with data missing because of past database upgrades. Sometimes the user hasn't logged in for a long time and has lost some permissions, and so on. These things can result in bugs; I'm sure most of us have encountered a scenario of the sort.
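To illustrate the alternative, here is a minimal sketch of running the same check against several realistic account states instead of always a freshly created user. Everything here (`UserProfile`, `check_can_send_message`, the specific degraded states) is hypothetical, a stand-in for a real application and a real check:

```python
# Sketch: run one check against several realistic user states,
# not only the pristine "new user" state that Rule 4 recommends.
# All names here are hypothetical stand-ins, not a real API.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    name: str
    history_count: int = 0                              # messages accumulated over time
    missing_fields: list = field(default_factory=list)  # data lost in old DB upgrades
    stale_permissions: bool = False                     # user hasn't logged in for a long time

def check_can_send_message(user: UserProfile) -> bool:
    # Stand-in for the real check; degraded accounts are where bugs hide.
    if user.stale_permissions:
        return False  # a permissions bug a fresh user would never hit
    return "contacts" not in user.missing_fields

profiles = [
    UserProfile("fresh"),                                  # what Rule 4 recommends
    UserProfile("veteran", history_count=12_000),          # long history
    UserProfile("migrated", missing_fields=["contacts"]),  # lossy DB upgrade
    UserProfile("dormant", stale_permissions=True),        # lapsed permissions
]

results = {p.name: check_can_send_message(p) for p in profiles}
```

Only the "fresh" profile matches what a per-run new user would exercise; the other three are exactly the states that a consistent-initial-state policy never visits.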
Rule 5 says: “Compose Complex Tests from Simple Steps“ to “emulate real user scenarios”. This sounds good. However, the rule also recommends “composing those tests [i.e. complex test] from simple tests parts (shared steps)”. This closes off the opportunity to do things differently, which is exactly what a human tester might, and even should, do. On one occasion you press the Send button with the mouse; on another, you hit the Enter key. In one case you reach a screen by going forward, in another by going back.
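The opposite of a fixed shared step can be sketched as choosing, per run, among equivalent user actions, so repeated runs of the same logical scenario exercise different paths. The action names below are hypothetical labels, not real UI calls:

```python
import random

# Sketch: assemble the same logical scenario from varied, equivalent
# user actions instead of one fixed "shared step".
# Action names are hypothetical stand-ins for real UI interactions.
SEND_ACTIONS = ["click_send_button", "press_enter_key"]
NAVIGATION = ["navigate_forward", "navigate_back_then_forward"]

def compose_flow(rng: random.Random) -> list:
    # Every flow types a message, but navigation and sending vary.
    return [rng.choice(NAVIGATION), "type_message", rng.choice(SEND_ACTIONS)]

rng = random.Random(7)  # seeded, so a failing variation can be reproduced
flows = {tuple(compose_flow(rng)) for _ in range(50)}
```

Seeding the generator keeps the variation reproducible: when a particular combination fails, the same seed replays it.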
“Rule 7: No Sleep to Improve Stability“. Yes, using the “sleep” function may avoid “one of the main sources of flaky tests”. Instead, you “add a wait-for-condition step”, which is, for example, to “wait for an element to be visible”, and this is “just like a human would”.
Well, yeah, maybe. Maybe it is a young, impatient, engaged person who will immediately click the now-visible element. But it could be someone who, until the element appears, goes off to make coffee and opens another program. Or an elderly person who takes his or her time. And in this idle time many things can happen: infinite loops and losing focus are only some of them. And that is without even considering that the element itself might be wrong from a UI point of view.
“Rule 9: Reduce the Occurrences of Conditions”, e.g. “disable random popups”. Which popups? The ones the user encounters and that might lead to… bugs. Yes, well, apparently that is not the purpose of testing.
“Rule 10: Write Independent and Isolated Tests”. This means that “an important methodology of test authoring is creating self-contained, independent flows”.
I think one of the most important ways to find bugs is NOT to do that. Most users are NOT using your apps like this:
Well, I have to write and send some emails. I will open Outlook and open an email. Then I will close Outlook and reopen it to write something in the email. Then I will close and reopen Outlook to send the mail, restart Outlook to read an incoming email, and if I have 10 emails, I will open and close Outlook 10 times.
“This”, says the author, “allows to run tests in high parallelism”, which is something else the user will not do.
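The session-style alternative can be sketched as one long stateful run: open the app once and handle many emails inside that single session, so state accumulated by earlier actions is still present for later ones. The `MailApp` class below is a hypothetical toy, not Outlook's API:

```python
# Sketch: one long stateful session instead of isolated per-action tests.
# MailApp is a hypothetical stand-in for an Outlook-like application.
class MailApp:
    def __init__(self):
        self.open_count = 0   # how many times the app was launched
        self.drafts = []      # state that persists across actions in a session

    def open(self):
        self.open_count += 1

    def write(self, text):
        self.drafts.append(text)

    def send_all(self):
        sent, self.drafts = self.drafts, []
        return sent

app = MailApp()
app.open()                   # the user opens the app once...
for i in range(10):
    app.write(f"email {i}")  # ...and handles ten emails in that one session
sent = app.send_all()
```

An isolated-test suite would restart `MailApp` before every action, so bugs that depend on accumulated session state (ten drafts, a long-lived window, leaked resources) never get a chance to appear.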
I am not against automation, and the title does not mean that automation adds no layer of confidence. It does. Nor do I think automation vendors are doing something morally or legally wrong. The deception is your own, if you believe that what was checked by automation imitates user behavior and is therefore certain to work for the customer, even under identical conditions (configuration, hardware, etc.).
To trust the automation checks more, I expect the vendors to do the right thing and create tools that will not only be easy to use but also efficient in detecting bugs.
Until then, use automation if you must, but not in a blind way, suspect it just as you would suspect a Ferengi partner.