When we started to design Q7, our goal was to let non-programmers develop 50-200 automated test cases per month. Such a high rate of test creation would not be possible without the Capture and Playback functionality implemented in the tool.
Capture and Playback is not a unique approach; on the contrary, it is commonly used. Most testing tools advertise their Capture and Playback functionality as a silver bullet for the QA engineer's armory. In reality, people face so many problems with Capture and Playback that they become very skeptical about the whole approach.
One of the widely recognized problems is Wait operations. They are the worst enemy of an engineer responsible for acceptance test automation. The easiest way for test developers to resolve this problem is to insert a wait operation that lets the tool sleep for a specific amount of time. There are more sophisticated, and therefore more time-consuming, pieces of code which recognize a completed operation by monitoring UI changes or by receiving other internal events from the application.
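The difference between the two styles can be sketched in a few lines of Java. This is a minimal illustration with hypothetical helper names, not code from Q7 or any particular tool: a fixed sleep guesses how long the operation takes, while a polling wait checks an actual completion condition until a timeout.

```java
import java.util.function.BooleanSupplier;

public class WaitStrategies {

    // Naive approach: sleep a fixed amount and hope the operation finished.
    // Too short and the test fails; too long and the suite crawls.
    static void fixedSleep(long millis) throws InterruptedException {
        Thread.sleep(millis);
    }

    // More robust approach: poll a completion condition in small intervals
    // until it holds or the timeout expires.
    static boolean waitUntil(BooleanSupplier done, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (done.getAsBoolean()) {
                return true;          // operation completed
            }
            Thread.sleep(pollMs);     // back off briefly before re-checking
        }
        return done.getAsBoolean();   // last chance at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // A condition that becomes true after roughly 100 ms.
        boolean ok = waitUntil(
                () -> System.currentTimeMillis() - start > 100, 2000, 10);
        System.out.println(ok ? "completed" : "timed out");
    }
}
```

Even the polling variant still requires the test author to pick a condition and a timeout, which is exactly the burden the next paragraph says Q7 removes.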
Q7 does not require anything from an engineer to deal with this problem. Using Runtime Intelligence and Code Instrumentation, Q7 always knows when an operation has completed.
Why don’t other tools/frameworks use something like Runtime Intelligence to achieve similar results? The answer is clear: it would not be possible to develop a kind of uber-tool that works with all types of applications in the same fashion. Nor would it be possible while interacting with an application via the UI layer only. Because we employ Code Instrumentation for well-known types of applications (Eclipse RCPs, the Eclipse Platform, etc.), we are able to do this. Yes, Q7 is not useful for testing .NET and other non-Eclipse platforms; but, as we stated already, that is out of Q7's scope. All we needed was a great tool for Eclipse, and we made one.
While your testing tool is capturing/recording user actions for later playback, it should be able to identify UI elements properly: for example, "click on the button whose caption is 'OK' and which is located under the edit box labelled 'Name'". This identification should be short, unambiguous, and independent of the operating and windowing system, so that tests recorded on one platform can be replayed on another.
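The idea of addressing a widget by captions rather than pixels can be sketched like this. The types and the path syntax here are hypothetical, invented for illustration; they are not Q7's actual locator format:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of caption-based widget identification: a widget is addressed
// by a short path of "type:caption" segments instead of screen coordinates.
public class WidgetLocator {

    static class Widget {
        final String type;          // e.g. "button", "group"
        final String caption;       // user-visible label, may be null
        final List<Widget> children = new ArrayList<>();
        Widget(String type, String caption) { this.type = type; this.caption = caption; }
        Widget add(Widget child) { children.add(child); return this; }
    }

    // Resolve a path such as "group:Name/button:OK" against the widget tree.
    static Widget resolve(Widget root, String path) {
        Widget current = root;
        for (String segment : path.split("/")) {
            String[] parts = segment.split(":", 2);
            Widget next = null;
            for (Widget child : current.children) {
                boolean typeMatches = child.type.equals(parts[0]);
                boolean captionMatches = parts.length < 2
                        || parts[1].equals(child.caption);
                if (typeMatches && captionMatches) { next = child; break; }
            }
            if (next == null) return null;   // missing or renamed element
            current = next;
        }
        return current;
    }

    public static void main(String[] args) {
        Widget shell = new Widget("shell", "Preferences")
                .add(new Widget("group", "Name")
                        .add(new Widget("button", "OK")));
        Widget ok = resolve(shell, "group:Name/button:OK");
        System.out.println(ok != null ? ok.caption : "not found");
    }
}
```

Because the path is built from labels the user already sees, it stays valid across platforms and screen resolutions, which is exactly the portability requirement stated above.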
This is a relatively easy task while we are working with standard operating/windowing system elements such as buttons, or with standard elements of an abstraction layer/UI framework such as SWT, Swing, or WinForms. However, when we deal with custom-drawn controls (elements whose structure is known only to the authors of the application), most tools face the problem of object identification.
Let us take an Eclipse GEF-based application with a diagram editor, for example a UML diagram editor. From the perspective of a testing tool there is no diagram presented to the user; all the tool sees is a large canvas on which the application draws something. This forces the tool to identify objects through screen coordinates, e.g. "click at pixel (56, 127)", and the tests become extremely fragile for several reasons: the diagram may be laid out differently after a minor change, an object may have different coordinates when the test is replayed on another computer with a different screen resolution, and so on. Custom-drawn controls also worsen the problems with Wait operations mentioned above.
Q7 works with GEF/GMF, and for Q7 these GEF diagrams are more than simple drawings on a canvas. Q7 fully understands the diagram structure and layout and is able to identify any diagram node by its path within the shapes tree. It can use captions and texts within a diagram to make this identification short and easy to maintain. When dealing with EMF, Q7 can also identify a diagram element using the domain-model objects that are mapped to diagram shapes, as well as fall back to coordinates if everything else fails.
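The layered strategy just described (caption path first, then domain-model id, then raw coordinates) can be sketched as a simple lookup chain. The types below are hypothetical, invented for illustration, and are not Q7's internals or the GEF/EMF APIs:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// Sketch of a layered diagram-element lookup: caption within the shapes
// tree first, then the id of the mapped domain object, then coordinates.
public class ShapeFinder {

    static class Shape {
        final String caption;       // text drawn inside the figure, may be null
        final String domainId;      // id of the mapped domain object, may be null
        final int x, y;             // canvas coordinates, always available
        final List<Shape> children = new ArrayList<>();
        Shape(String caption, String domainId, int x, int y) {
            this.caption = caption; this.domainId = domainId; this.x = x; this.y = y;
        }
    }

    static Optional<Shape> byCaption(Shape node, String caption) {
        if (caption.equals(node.caption)) return Optional.of(node);
        for (Shape child : node.children) {
            Optional<Shape> hit = byCaption(child, caption);
            if (hit.isPresent()) return hit;
        }
        return Optional.empty();
    }

    static Optional<Shape> byDomainId(Shape node, String id) {
        if (id.equals(node.domainId)) return Optional.of(node);
        for (Shape child : node.children) {
            Optional<Shape> hit = byDomainId(child, id);
            if (hit.isPresent()) return hit;
        }
        return Optional.empty();
    }

    // The full chain: caption, then domain id, then fallback coordinates.
    static int[] locate(Shape root, String caption, String domainId, int fx, int fy) {
        Shape hit = byCaption(root, caption)
                .or(() -> byDomainId(root, domainId))
                .orElse(null);
        return hit != null ? new int[]{hit.x, hit.y} : new int[]{fx, fy};
    }

    public static void main(String[] args) {
        Shape diagram = new Shape(null, null, 0, 0);
        diagram.children.add(new Shape("Customer", "uml.Class#1", 120, 80));
        int[] p = locate(diagram, "Customer", "uml.Class#1", 56, 127);
        System.out.println(p[0] + "," + p[1]);   // stable lookup, no raw pixels
    }
}
```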
The technical difficulties of capturing and reproducing user actions are also a real challenge for testing tool vendors. One would reasonably expect that a recorded activity would later be replayed exactly by the tool or by a Continuous Integration environment.
However, it is quite common for tool vendors to have only a limited set of mechanisms to capture user actions and play them back. Sometimes this narrows them down to interacting with the windowing system of the operating system, capturing UI event messages and sending them back on replay. The next level is to work with UI libraries like SWT or Swing, listening to their events and invoking the corresponding methods through library functionality on playback (this is how SWTBot works).
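The library-level scheme can be sketched as a tiny recorder. This is a simplified, SWTBot-style illustration with hypothetical types; it is not the real SWT or SWTBot API: the recorder logs (widget, event) pairs during capture and re-dispatches them to the widgets' handlers during playback.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of library-level capture/playback: record events as they are
// observed, then re-dispatch the same events in order on replay.
public class EventRecorder {

    static class Widget {
        final String id;
        final Consumer<String> handler;    // reacts to an event name
        Widget(String id, Consumer<String> handler) { this.id = id; this.handler = handler; }
    }

    record Recorded(String widgetId, String event) {}

    final List<Recorded> script = new ArrayList<>();

    // Capture phase: the tool observes an event on a widget and logs it.
    void capture(Widget w, String event) {
        script.add(new Recorded(w.id, event));
        w.handler.accept(event);           // let the application react normally
    }

    // Playback phase: re-dispatch the logged events in recorded order.
    void replay(List<Widget> widgets) {
        for (Recorded r : script) {
            widgets.stream()
                   .filter(w -> w.id.equals(r.widgetId))
                   .findFirst()
                   .ifPresent(w -> w.handler.accept(r.event));
        }
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        Widget ok = new Widget("button:OK", e -> log.add("OK got " + e));
        EventRecorder rec = new EventRecorder();
        rec.capture(ok, "selection");      // user clicks OK while recording
        rec.replay(List.of(ok));           // CI replays the same click
        System.out.println(log);
    }
}
```

The sketch also makes the limitation visible: replay only dispatches what the library exposed as events, so any user action the framework does not surface (focus timing, native dialogs, asynchronous redraws) is lost, which is the problem the next paragraph describes.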
The problem is that neither windowing systems nor UI frameworks were designed to support such capture/playback functionality, and it is technically impossible to reproduce user activity with 100% accuracy during the playback phase. As a result, the playback feature does not work as expected. Some vendors offer their own versions of UI frameworks (e.g. SWT) to bypass framework limitations and support capture/playback more precisely. However, this approach cannot guarantee 100% accuracy either, and it brings consequences of its own.
Q7 kills two birds with one stone by using Code Instrumentation to hook into the Eclipse frameworks and bypass the technical limitations caused by the framework design. At the same time, Q7 minimizes configuration/compatibility efforts by hooking at runtime into the framework code that you are building.
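What "knowing when an operation is completed" buys you can be sketched with a completion tracker. This is a hypothetical illustration of the general idea, not Q7's actual instrumentation: a hook injected into the framework signals the tracker when work finishes, so the test driver blocks on that signal instead of sleeping for a guessed amount of time.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Sketch of instrumentation-driven waiting: the injected hook reports
// completion, and the test driver waits for that report.
public class CompletionTracker {

    private final CountDownLatch idle = new CountDownLatch(1);

    // Called from code injected into the framework when its work finishes.
    public void operationCompleted() { idle.countDown(); }

    // Called by the test driver instead of Thread.sleep(...).
    public boolean awaitCompletion(long timeoutMs) throws InterruptedException {
        return idle.await(timeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        CompletionTracker tracker = new CompletionTracker();
        // Simulated background job that the instrumented hook reports on.
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
            tracker.operationCompleted();
        }).start();
        System.out.println(tracker.awaitCompletion(2000) ? "idle" : "timeout");
    }
}
```

The key design point is that the signal comes from inside the framework rather than from observing the UI, so the wait is exact: no polling interval to tune and no fixed sleep to guess.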
Copyright © 2013 Xored Software Inc. All Rights Reserved.