Handy Selenium reports

In this post we publish an English translation of Dima Yakubosky’s second talk at the SeleniumCamp 2012 conference, which took place in February in Kiev, Ukraine. Each block of text precedes the slide it relates to. Here it is …

It’s nice that sophisticated tools like Selenium allow us to quickly and effectively execute a large chunk of routine work – for example, checking that all 20 important elements are present on a page. But all this speed and thoroughness is useless unless the tests can inform you of their results.

The easier the report is to understand, the less time you need to spend on finding the cause of an error when it finally appears.

The Selenium manual says that most testers will sooner or later end up developing their own reports, instead of or in addition to the reports produced by their frameworks. It would be interesting to know whether any of you use your own custom reports.


Who will use these reports? How does working with tests and reports look at your company?

For example, suppose you have a CI process and the newest build of your product is failing tests. Who receives the messages about that? Most probably the person who committed the change; he goes and looks at the test reports. Another variation: there is no CI and tests are run from time to time by a tester, who is the one who deals with the reports. Any others? Is it possible that a third person sometimes deals with reports – not a programmer or a tester, but a manager?

I’m guessing that a variety of people with a variety of skills will have to work with reports, and it would be advantageous to have a separate type of report to match each of them. In one place a small message mentioning an error is enough; in another, information on how to reproduce the error is needed. For a developer of tests or of the application itself, it would be interesting to have access to all possible details.

So it would be nice if the type of report could be adjusted to whoever it is meant for.

One more wish on the list: a comprehensive description of the actions taken to reach an error, and their order. What actions are these? For example, to check login, we use these simple steps:
- open page
- fill in fields
- press the button
- check if ‘logout’ exists

As you can see, the simplest steps exactly correspond to Selenium functions.

To reproduce an error, it is enough to repeat the steps you took in the first place, so they should be in your test report. This list of steps could simply be a stack of method calls, but it would be nice if the steps looked less brutal.

Let’s assume that your test must log in several times during the testing process. Obviously, all these simple steps will be executed every time, but is it necessary for them to be listed in the log every time? Most of the time you only need the list once – when you actually test the login functionality. In all other instances it would make sense to combine these steps into a higher-order step, ‘login’, and show that in the reports instead of the whole chain of events.

We can also give the opposite example, when one simple action consists of several smaller ones. For example, many people, including ourselves, do not use the Selenium click() function as it is; we use a wrapper. A wrapper can execute two actions (a sketch follows the list below):
- check if the element we are going to click is present
- click
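
As a minimal sketch of such a wrapper, assuming the Selenium 1 (RC) API (the class name here is illustrative, not our actual code):

    import com.thoughtworks.selenium.Selenium;

    public class ClickWrapper {
        private final Selenium selenium;

        public ClickWrapper(Selenium selenium) {
            this.selenium = selenium;
        }

        // wrapper around Selenium's click(): first check, then click
        public void click(String locator) {
            if (!selenium.isElementPresent(locator)) {
                throw new IllegalStateException("No element to click: " + locator);
            }
            selenium.click(locator);
        }
    }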

In this manner, the order of execution has a hierarchical structure where larger actions include smaller ones. We would like our reports to know about this.

Selenium has a function for taking screenshots, and we think reports should contain screenshots. There are many articles describing how to take a screenshot immediately after an error occurs. Ideally, though, we should be able to view a screenshot not only after an error but also before it. Since it is almost impossible to predict when an error is about to occur, the only solution I see is to constantly take screenshots.

What do you think about this? Do you take screenshots after every action, and do they end up in your reports?

We had decided from the very beginning that we would use TestNG. The deciding factor was that it could run tests in parallel, which was exactly what we were looking for; I talked about this more in the first part of this presentation. I began to write the first tests and almost instantly ran into a problem. The regular way for TestNG to inform you of errors is to use one of the assertXXX variety. If the requirements of an assert are not met, an exception is thrown and the test is considered failed. So any TestNG test can have only two statuses: fatal or ok. I couldn’t say: “Hey, dude, I have some warnings for this page, but for the time being let’s just assume it’s ok.”

This approach was very new to me; I had been spoiled by loggers, which let me dynamically decide how severe an alert was: TRACE, INFO, ERROR or FATAL.

Let’s take something common, like a login form.

Let’s assume that for some reason the ‘Remember Me’ label is absent from the webpage. Is it important to let us know? Of course. Is it necessary to stop the testing of login and logout because of it? No, I think not. Even if the checkbox itself were absent, it would hardly be a severe enough reason to stop running the test. I’ll tell you a secret: my tests don’t even check for it (what about yours?). What concerns me more is that once you have logged in you should see the ‘logged in’ status in the corner of the window. And if it isn’t there, then – FATAL! Login failed.
I would like TestNG to show me the following errors in reports:
WARN – no label
ERROR – no checkbox
FATAL – not logged in
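
TestNG offers nothing like this out of the box. Purely as an illustration of the idea – the helper class and the locators below are hypothetical, built on log4j levels – such a severity-aware check might look like:

    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;

    public class Checks {
        private static final Logger log = Logger.getLogger(Checks.class);

        // log a failed check at the given level; abort the test only on FATAL
        public static void check(boolean ok, Level level, String message) {
            if (ok) {
                return;
            }
            log.log(level, message);
            if (level.isGreaterOrEqual(Level.FATAL)) {
                throw new AssertionError(message);
            }
        }
    }

And for the login form above (locators are made up):

    Checks.check(selenium.isElementPresent("id=remember_me_label"), Level.WARN, "no label");
    Checks.check(selenium.isElementPresent("id=remember_me"), Level.ERROR, "no checkbox");
    Checks.check(selenium.isTextPresent("Logged in"), Level.FATAL, "not logged in");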

After a bit of Googling, I found a few methods which could allow TestNG to emulate this behavior, but none of them struck me as efficient or comfortable.
I understand that my expectations for TestNG may have been set a little too high. Instead of, or in addition to, TestNG reports we could use the LoggingSelenium extension, but it too has its flaws, like all the other alternatives I found. I’ll admit, this search was done three years ago, but before preparing this presentation I searched again and didn’t find anything substantially new.

Another side of loggers that I really liked was the different levels. I could easily change the level at which I should be disturbed. For example, while I’m working on global changes, something somewhere might break a little, but there is no point in tinkering with it until all the global changes are finished, so telling me about it would be useless. So I use the ERROR level, and anything below it doesn’t reach me.

In the end, I decided not to use TestNG reports at all, and began writing test logs into text files with the help of log4j.

But, as you probably know, text logs also have many flaws. To see whether there was an error, you need to search the text for the words ‘FATAL’ or ‘ERROR’. How can this be made easier? We can set log4j to log only FATAL and ERROR alerts; if there were none, the log will be empty.
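
For example, a log4j 1.x configuration along these lines (the log file name is illustrative) writes nothing unless an alert of ERROR level or above occurs:

    # log only ERROR and FATAL; with no such alerts the log stays empty
    log4j.rootLogger=ERROR, file
    log4j.appender.file=org.apache.log4j.FileAppender
    log4j.appender.file.File=test.log
    log4j.appender.file.layout=org.apache.log4j.PatternLayout
    log4j.appender.file.layout.ConversionPattern=%d %-5p %m%n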

But if we do this, we lose the opportunity to see the other alerts – INFO and WARN. And those are the ones which contain the actions we need in order to reproduce an error.

The report should be as detailed as possible, but at the same time the details must not block out the main point.

By this point we had a small list of expectations for our reports, none of which were fulfilled by anything available to us:
- Generate different types of reports for different uses/users
- A report should be able to group simple steps into more complex ones
- It should take screenshots before errors
- It should allow alerts to be filtered by severity
- It should be as detailed as possible

The answer to our problems came with the reports we developed ourselves. I will now demonstrate how they look and how they work.

Our report layout.

Summary for a particular test.

Expandable tree nodes.

Event details.

Viewing screenshots.

Filters.

Let’s speak about the first requirement – different views.

This criterion means that the report shouldn’t be written while the test is running, like a log. The test should remember everything. When the test is completed we have all the information and can decide what to do with it. First, the test can generate a report straight away. It can also dump all the data into an intermediate format; from this format we can view the report with the help of a viewer, or generate a report with a special generator when we need it.

Our tests generate reports once they have finished working, but we also store some data in XML format.
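
The exact format is not shown here; purely as an illustration, a stored event record could look something like this:

    <event name="click" target="id=login_btn" status="OK"
           start="10:15:30.123" end="10:15:30.456"
           scrBefore="scr_0001.png" scrAfter="scr_0002.png">
      <event name="isElementPresent" target="id=login_btn" status="OK"/>
    </event>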

What will we show in the report? A list of actions, each with some attributes and results. Logically, we should store actions, or ‘events’, in the form of a list (ArrayList) of Event-type objects. These objects store all the information about an action: name, start time, end time, target (for example, a locator), a value applied to the target (for example, a value entered into an input field), expected action result, actual result, status (OK, ERROR, FATAL), error details, the exception an action triggered (if it triggered one), and the names of the screenshots taken before and after the action.
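
As a sketch, with field names that are our guesses rather than the actual class, such an Event object could look like this:

    import java.util.ArrayList;
    import java.util.List;

    public class Event {
        String name;              // e.g. "click"
        long startTime;           // action start, in milliseconds
        long endTime;
        String target;            // e.g. a locator
        String value;             // e.g. text entered into an input field
        String expected;          // expected action result
        String actual;            // actual result
        String status;            // OK, ERROR or FATAL
        String errorDetails;
        Throwable exception;      // if the action triggered one
        String screenshotBefore;  // screenshot file name before the action
        String screenshotAfter;   // screenshot file name after the action
        List<Event> children = new ArrayList<Event>();  // nested events
    }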

The report should be aware of the hierarchy of actions – that some actions are part of a higher-level action. Unfortunately, the report cannot find out about these dependencies without our help. This means that during logging we need to somehow indicate which actions contain other actions. We use this approach: at the beginning of each action we open an event. What does this mean? Basically, a new object of the Event class is created, added to our list of events, and some of its fields are filled out – at least the name and the start time. Let’s say we opened an event, Event1. We are ready: an action has begun, the event is open.
At the end of the action matched with this event, we close the event. What does this mean? We write down the results of the action: status, value, etc.

Question: where is the hierarchy? The fact is that while Event1 was open, other actions were carried out, and each of them created new events. All of these events are automatically considered child events of Event1.

So recording the opening and the closing of an event is necessary to determine its child events. This approach also lets us time the start and finish of each event. If you asked me what flaw our approach has, I would honestly tell you: this one. Logging becomes more complex – instead of one entry, ‘Event failed’, I need to record both ‘Event started’ and ‘Event failed’. If you asked me whether this is hard to implement, I would also honestly say: no.
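
A sketch of how opening and closing could derive the hierarchy, using a stack of currently open events (the class and method names are hypothetical, not our actual library):

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.List;

    public class EventLog {
        private final List<Event> roots = new ArrayList<Event>();
        private final Deque<Event> open = new ArrayDeque<Event>();

        public Event openEvent(String name) {
            Event e = new Event();
            e.name = name;
            e.startTime = System.currentTimeMillis();
            if (open.isEmpty()) {
                roots.add(e);                 // a top-level action
            } else {
                open.peek().children.add(e);  // child of the innermost open event
            }
            open.push(e);
            return e;
        }

        public void closeEvent(String status) {
            Event e = open.pop();
            e.endTime = System.currentTimeMillis();
            e.status = status;
        }
    }

With this, the ‘login’ example from earlier groups naturally (assuming an EventLog instance called events):

    events.openEvent("login");
    events.openEvent("open page");
    events.closeEvent("OK");
    events.openEvent("fill in fields");
    events.closeEvent("OK");
    events.openEvent("press the button");
    events.closeEvent("OK");
    events.openEvent("check if 'logout' exists");
    events.closeEvent("OK");
    events.closeEvent("OK");   // closes "login"; the four steps are its children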

Very detailed report.

This means that any Selenium method call should write to a log so that we can use this info during report generation – like it is done in the LoggingSelenium library.
I just said that each event requires two commands to record it in the log. You can imagine the wild picture: I open an event before calling any Selenium function, then analyse the results and close the event, every time. This is too complex, and none of you would have done it like this.

Almost certainly, like me, you would have written wrappers for the Selenium functions. So I made a separate class which contains wrappers for the Selenium methods, with names identical to Selenium’s. This class is called CommonActions. Tests never call the driver methods directly; they interact with the CommonActions methods.
What does a wrapper do? Besides working with events, it has several other functions as well. To better explain them, I will go through them in order.

First – working with events.
For example, the isTextPresent(some_text) wrapper.
The wrapper:
- Opens the event. The event’s attributes are filled out: name, start time, action target – some_text.
- Executes the Selenium command isTextPresent().
- Closes the event, recording the end time, the result (whether the target text was found) and the status – OK (the action completed without any problems).
- Returns the result.

It’s apparent that the isTextPresent() method can throw an exception, and we might end up with an open event and a failed test.
To avoid this, it’s logical to wrap this Selenium command in a try-catch. If we do get an exception, there’s nothing to worry about: we add the exception to the event and give the event the ERROR status.
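
Putting it together with the hypothetical EventLog sketched above (inside a CommonActions-like class with selenium and events fields; whether to swallow or rethrow the exception is a design choice):

    public boolean isTextPresent(String someText) {
        Event e = events.openEvent("isTextPresent");
        e.target = someText;
        try {
            boolean found = selenium.isTextPresent(someText);
            e.actual = String.valueOf(found);
            events.closeEvent("OK");       // end time, result, status OK
            return found;
        } catch (RuntimeException ex) {
            e.exception = ex;              // attach the exception to the event
            events.closeEvent("ERROR");    // the event is closed, not left open
            return false;
        }
    }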

Reports should contain screenshots taken before and after an error, not just after it.

It’s about time to go back to the topic of screenshots. In the first versions of our tests, we took screenshots immediately after an error. But errors can come up at different levels: an exception thrown by a driver command because of a missing element is an obvious error at that level, but at the next level up it may be considered insignificant. Or significant? At first we ended up taking screenshots of an error at every level, which gave us duplicates.

Back to screenshots BEFORE errors. As I said before, since we can’t predict an error, we need to take screenshots constantly. What do we mean by constantly? Take a screenshot after every line of code? That would be too complicated. The most obvious route is to take screenshots after every Selenium command which can somehow affect the look of the page. What are these commands? The most obvious are typeText(), click() and the …andWait() methods.

Perfect – we already have wrappers for these commands, so that’s where we’ll take the screenshots.
We make a variable called lastScrFilename, which will contain the name of the last screenshot taken.

This is how the click() wrapper works (a sketch follows the list):
- Open the event (and record that the screenshot before the event is the latest screenshot taken)
- Check if the element is present (this is also an event)
- Click
- Take a screenshot
- Close the event (and record that the screenshot after the event is the latest screenshot)
- Return the result
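
A sketch of this click() wrapper, again using the hypothetical names from above (captureScreenshot() is the Selenium 1 call; the file-naming scheme is ours):

    private String lastScrFilename;

    public void click(String locator) {
        Event e = events.openEvent("click");
        e.target = locator;
        e.screenshotBefore = lastScrFilename;  // "before" = the latest screenshot
        isElementPresent(locator);             // a wrapped call, i.e. a child event
        selenium.click(locator);
        lastScrFilename = "scr_" + System.currentTimeMillis() + ".png";
        selenium.captureScreenshot(lastScrFilename);
        e.screenshotAfter = lastScrFilename;   // "after" = the one just taken
        events.closeEvent("OK");
    }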

The next requirement: it would be nice to be able to generate different reports for different uses.

Because we collect information while testing, we can freely create several types of reports, or dump the data into an XML file and use it later with a generator or a viewer. We decided to generate HTML reports straight away, so we can either look at the most global issues, like the number of errors, or, if need be, dig deeper. HTML + JavaScript easily solves the last problem on the list: the report allows alerts to be filtered by their importance (you can see a drop-down list in the report).

To summarize, let’s talk about the cons first.

- For now there is no mechanism for automatically closing open events. If you weren’t paying attention and left an open event in one of your branches, you will get an error when your reports are generated.

- A bit of additional code has to be written. How much depends on how detailed you want your reports to be, and who they are for. My thinking was that I would write the test code once, but the reports would be looked at many times; it would be unfortunate if the very report that shows an error didn’t contain enough information. So for these reports, code in the wrappers alone is not enough – it has to be everywhere.

- An excess of screenshots. First, taking screenshots takes time. Second, after the tests complete, the screenshots all stay there. The second problem can be solved by deleting the unwanted screenshots at the end.

- You can’t just plug in our libraries and start generating reports. We gladly share our libraries, but getting them working will require some effort from you. The good news: it won’t be too difficult to produce reports at a basic level (a list of Selenium actions with screenshots).

Pros:
- A flexible system with a custom number of levels. Mark an event as INFO or, if you like, as WARN or ERROR.
- The report lets us zoom in on the fine details of what the test did, and zoom out for an overview.
- It presents us with as much information as it can: execution time, expected value, actual value, exception text, locator.
- Information that isn’t needed is hidden; a person who isn’t used to these reports will see only the main point.

It’s safe to say that the advantages far outweigh the disadvantages, and we are very happy with this approach.

We have been planning improvements for a while: easier access to screenshots, maybe showing previews. One more feature our reports don’t have is aggregation. I’ll talk about it a little later, after I cover the parallel execution of tests.

To explain what is meant by aggregation, I need to mention how tests run in different browsers. To automate testing, we wrote our own system – Nerrvana. To execute tests, we load them into the system and indicate when, and in which browsers, to execute them.

In the system’s report, we can see the results for each browser. Tests can also tell the system some important things, for example every event that finished with a level of WARN or above.

Also, you can see the execution time for each browser.

How does it work?
When the time comes for a run, a grid of virtual machines is created for each environment: a hub and a certain number of RCs. Tests are loaded onto and executed on each hub. On completion, the test results are downloaded into their own folder, one per environment. This way, not only can several tests from the suite run in parallel, but the same test can run in parallel in different browsers. At some point we realised that our system could be used by others, so we will soon release it as a service – www.nerrvana.com. If you are interested, you are welcome to become a beta tester.

Now let me explain what I mean by the aggregation of reports. When tests are executed simultaneously in different browsers, it would be nice to receive a report that doesn’t just contain links to a report for each environment, but analyses and compares them:
- to show cross browser errors
- to allow a side-by-side comparison of screenshots
- to compare the times taken by each browser

An example of reports without errors and with errors. Please don’t pay attention to the ugly login page: it is a quick emulation of the main web site that Answers, the product we are testing, is integrated with. Answers is a pluggable product and does not have its own login page.



4 comments

  1. LeshaL says:

    A couple of questions:
    1) What didn’t you like about EventFiringWebDriver? You can attach a listener to most of the driver events you need and take screenshots right from there.
    2) Why create an Event by hand at the start of a test and close it by hand at the end? That is, why not do it via BeforeTest and AfterTest, so that it happens automatically? Or, as an option, create events by hand (if you really need to) but through some kind of factory; all open events could then be closed automatically once the test finishes.

  2. bear says:

    Happy to answer :)
    First, when we ran into the logging/reporting problem (more than two years ago), EventFiringWebDriver hardly existed yet.

    Second, note that we are not simply trying to log every action, but to present actions hierarchically (you may not have noticed – here is an example of a report with errors where the hierarchy is clearly visible). That is exactly why actions have to be “opened” and “closed” manually: there is no automatic way to determine which actions belong, say, to the login check and which do not.
    This also answers your second question: BeforeTest/AfterTest would open/close only a single top-level event, whereas we want the report to contain both the most primitive actions and their sequences as higher-level actions/events.

  3. LeshaL says:

    Thanks for the answers.
    Overall, the end result looks really good. Respect.

  4. xwizard says:

    Great work. It’s a pity it isn’t packaged as a library/plugin that one could plug into one’s own tests and use.