Part 4 - Smart Test Automation: Intuitive Enhancement over Reactive Rectification

Assumptions: The Intuitive Enhancement method rests on the following assumptions.

1. The team developing the Automated Test Solution (ATS) collectively knows, in detail, every part of the implementation of the ATS in question.

2. The same team collectively knows, in detail, every part of the AUT's functionality and domain background.

We believed that a failure surfaces only if something incorrect is present in the solution, or something correct is absent from it. The Intuitive Enhancement process caters to both: rectification of mistakes and addition of required solutions or workarounds (enhancements).

Method Deployment Process: Let us discuss how we implemented this approach. We ran the test until it broke for the first time, then identified and analyzed the problem(s) responsible. Our goal was to strike a reasonably optimal trade-off between test time and solution reliability. Since we did not want to sacrifice reliability even slightly, we planned to adjust the test-time factor against our reliability benchmark. After analyzing the problems, we classified them by their nature (unhandled exceptions, missing or changed control properties, etc.) and by their impact on the solution. Then we selected one representative problem from each class and designed a workaround for it.
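The classification step above can be sketched as follows. This is a minimal illustration, not the project's actual tooling; the failure records, step names, and issue categories are hypothetical:

```python
from collections import defaultdict

# Hypothetical failure records observed in a test run: (test_step, nature, impact).
failures = [
    ("login",    "unhandled_exception",       "high"),
    ("search",   "missing_control_property",  "medium"),
    ("checkout", "unhandled_exception",       "high"),
    ("report",   "changed_control_property",  "low"),
]

def classify(failures):
    """Group failures by (nature, impact) so each class can receive one workaround."""
    classes = defaultdict(list)
    for step, nature, impact in failures:
        classes[(nature, impact)].append(step)
    return classes

def representatives(classes):
    """Pick one representative failure per class to design the workaround against."""
    return {key: steps[0] for key, steps in classes.items()}

classes = classify(failures)
reps = representatives(classes)
```

One workaround designed against each representative then applies to every failure in that class, which is what keeps the rectification effort sub-linear in the number of observed failures.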

Then we incorporated the workarounds into the solution and ran the test again until it broke for the second time. We identified and analyzed the problems from the second test run, classified them, and merged them into the existing list of logically classified problems from the first run. After this exercise, we had a single list of problems classified into various categories.

We did not run the test a third time immediately. Instead, we designed the minimum number of workarounds or solutions (enhancements) that would address the maximum number of issues discovered so far, and recorded those solutions on our list. Based on our knowledge of the enterprise application (the automation candidate in the project) and the problems seen in the first two test runs, we also predicted which similar problems were highly likely to surface in subsequent rounds of testing, and designed workarounds for those predicted problems as well. Please refer to Figure 6, cited below.
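Choosing the fewest workarounds that address the most known issues is essentially a set-cover problem, for which a greedy heuristic works well in practice. The sketch below assumes a hypothetical mapping from candidate workarounds to the issue IDs each one resolves; none of these names come from the original project:

```python
# Hypothetical mapping: candidate workaround -> set of issue IDs it resolves.
candidates = {
    "retry_on_timeout":   {"I1", "I2", "I5"},
    "refresh_object_map": {"I3", "I4"},
    "wait_for_control":   {"I2", "I3"},
    "catch_popup_dialog": {"I6"},
}

def select_workarounds(candidates, issues):
    """Greedy set cover: repeatedly pick the workaround that resolves the most
    still-uncovered issues, until everything is covered or no progress is made."""
    uncovered = set(issues)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda w: len(candidates[w] & uncovered))
        gained = candidates[best] & uncovered
        if not gained:
            break  # the remaining issues have no candidate workaround at all
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered

all_issues = {"I1", "I2", "I3", "I4", "I5", "I6"}
chosen, remaining = select_workarounds(candidates, all_issues)
```

The greedy choice does not guarantee the true minimum, but it captures the spirit of the step: each selected workaround earns its place by covering as many open issues (observed or predicted) as possible before the next test run.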


Then we tested the designed workarounds, first in isolation and then after integration. Once we were satisfied with the reliability of the implemented workarounds, we ran the test a third time with them integrated into the solution. This time, as expected, the test did not break midway and ran to successful completion. With this approach we saved considerable time that we would otherwise have spent in reactive rectification, and a great deal of human effort as well. Thus, we provided a stitch in time that saved us nine.

Conclusion: In a typical case, the savings factor could be nine, or it could be five. However, as long as it is greater than one and the strategy works, we have reason to be happy: it saves time, saves effort, and ultimately saves cost. This approach worked for us. Implemented in the right way, it has the potential to benefit many contemporary large- and medium-scale test automation projects.
