Inception

There are so many testing tools on the market already, why did we create another one?

Xored Software, Inc. is an Eclipse-centric company that has been delivering Eclipse-based products for nearly ten years. Our first project was a PHP IDE for Eclipse 2.0; its first public version was released in the fall of 2002. For most of our projects we managed the full development lifecycle (apart from the initial set of requirements), including QA.

Eventually, most of our clients and customers would come to us with quality issues, seeking recommendations on testing tools or simply asking for help with improving their QA processes in order to deliver better products. We were therefore constantly looking for a proper UI testing tool, both to develop UI tests for our own deliverables and to recommend as a suitable testing toolkit to our customers.

Starting with Commercial Capture and Playback Tools

After a while, we adopted WindowTester (commercial at the time; it is now Google's WindowTester). Beyond some drawbacks that mattered a great deal to us (such as the inability to run tests in a Linux environment), the key issue was that WindowTester's capture/replay functionality was next to useless: in most cases the captured tests would not work as expected, for various reasons (more on this in Capture and Playback). Our QA engineers had to spend a considerable amount of time getting the tests captured in Java to work.

The amount of this post-capture work was overwhelming, and after a couple of days of recording tests and becoming familiar with the WindowTester API, our engineers gave up on capturing and started programming test cases in Java. Shortly after we realized this, we switched to SWTBot, an open-source project that allowed us to explore why things did not work. It was also functionally rich compared to WindowTester, even though SWTBot had no capturing feature; in any case, we already considered capturing useless.

Employing the SWTBot UI Testing Framework and Java

When we started to use SWTBot, we faced a major problem: the test author had to be an experienced Eclipse developer.
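To give a sense of what this meant in practice, here is a minimal sketch of the kind of SWTBot test our engineers had to write: a JUnit test driving the Eclipse "New Project" wizard. The wizard path, labels, and project name are illustrative assumptions rather than excerpts from our actual suites, but the calls themselves (SWTWorkbenchBot, menu(), shell(), tree(), button(), textWithLabel()) are part of SWTBot's public API.

    import org.eclipse.swtbot.eclipse.finder.SWTWorkbenchBot;
    import org.eclipse.swtbot.swt.finder.junit.SWTBotJunit4ClassRunner;
    import org.junit.Test;
    import org.junit.runner.RunWith;

    @RunWith(SWTBotJunit4ClassRunner.class)
    public class CreateProjectTest {

        @Test
        public void createsAGeneralProject() {
            SWTWorkbenchBot bot = new SWTWorkbenchBot();

            // Drive File > New > Project... through the workbench menu.
            bot.menu("File").menu("New").menu("Project...").click();

            // Wait for the wizard shell and give it focus.
            bot.shell("New Project").activate();

            // Pick a project type from the wizard's tree and advance.
            bot.tree().expandNode("General").select("Project");
            bot.button("Next >").click();

            // Fill in the project name (hypothetical) and finish.
            bot.textWithLabel("Project name:").setText("demo");
            bot.button("Finish").click();
        }
    }

Even in this toy scenario, the author must know JUnit, the SWTBot widget-finder API, and the structure of the Eclipse workbench, and real tests also need explicit waits and cleanup to run reliably.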

So we stopped relying on inexperienced QA engineers and started bringing in Eclipse developers to write UI tests. This did not solve the problem either, because the price was too high. A highly skilled engineer is a valuable resource, and developing UI tests for Eclipse-based applications requires a great deal of effort. Sometimes it is more complicated to develop a test suite for new functionality than to implement that functionality itself.

As a result, customers were asking us to stop spending time on testing and to commit more resources to developing new features. We were certainly improving the quality of our services, but not because of test automation; rather, it was because our teams were more focused on quality. We never managed to cover any significant portion of product functionality with UI tests using SWTBot, WindowTester, or other tools, let alone receive useful feedback from their reports.

Low coverage also hurt our regression testing, making regressions almost impossible to locate. Yet good regression testing should be one of the strongest points of automated testing: people become numb from running the same manual tests over and over and seeing the same results every time, and it becomes too easy to overlook errors, which defeats the purpose of regression testing. That is where a testing tool should step in and take this load off the QA team. Unfortunately, this was not the case.

Becoming Skeptics

All in all, we were very skeptical about UI testing of Eclipse-based applications until 2009. Things changed when BT came to us in the fall of 2008. BT was just starting to adopt Agile methodologies, and we had to introduce two-week iterations into our development process. During each iteration, a team of five engineers delivered about 50 user stories.

After a year of development, we started pushing a stable build to a number of beta users after each iteration. These users expected acceptable quality and could not tolerate any regressions, which led to the creation of a huge UI testbase: we had to develop about 200 test cases per two-week iteration while also maintaining the existing testbase. On top of that, the team's budget allowed only one engineer to be assigned to this mission.

We accepted this challenge and created Q7. As of today, this BT project has a testbase of 3,000 test cases and growing. Run on a single developer machine, all these tests would take about fifteen hours to complete. We run them in our cloud on each commit, which takes about 50 minutes, and we are currently working on reducing the running time even further.

Sounds like a miracle? Read more in How Q7 Works.