Manual tests vs Automated tests analysis

Hi devs/Ilie,

Context: Ilie is a contributor to the XWiki open source project who focuses on doing manual tests for XS (all the tests he executes are on https://test.xwiki.org/). There are too many tests listed there for him to execute all of them. We need a better strategy. One promising area is to identify manual tests for which automated tests already exist.

Strategy proposal:

  • Ilie takes a few of the manual tests every week (not too many, so that it doesn’t impact his testing schedule)
  • Ilie checks if there are automated tests for them. How to achieve this?
    • Initially he’ll ask questions on the #xwiki chat about that but as time progresses he should be able to identify tests himself.
    • Once an automated test is found, Ilie tries to read it to understand what it does. Similarly, he asks questions on #xwiki when it’s not clear. For functional tests it should not be too hard since we use PageObjects (see the sketch after this list).
  • If there’s no automated test, Ilie creates a JIRA issue for it, and a discussion can happen in JIRA to decide whether it’s a won’t-fix for whatever reason, where the automated test should be located, what should be done, etc. It’s also important to mark the manual test with metadata to indicate its status and the link to the JIRA issue.
  • When there are differences between the manual test and the automated tests, Ilie discusses them on #xwiki to decide whether these differences are important and should be added to the existing automated tests, or whether they should be dropped from the manual test. If some work needs to be done, Ilie creates a JIRA issue for it. Again, the manual test is annotated with metadata and the JIRA link.
  • When there’s a matching automated test, no JIRA issue is created, but the manual test is annotated with metadata. Also link the manual test to the automated test’s GitHub test class.
  • Ilie monitors new automated tests added every day/week and updates the test plan as they are added (verify whether a manual test exists for each one, add a new test entry and link it to the automated test).
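
For context on the PageObjects point above, here is a minimal sketch of what an XWiki functional test built on PageObjects can look like. The class name, page content and titles are purely illustrative (this is not a specific existing test), and exact imports and signatures may vary slightly between XWiki versions:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;
import org.xwiki.test.docker.junit5.TestReference;
import org.xwiki.test.docker.junit5.UITest;
import org.xwiki.test.ui.TestUtils;
import org.xwiki.test.ui.po.ViewPage;

@UITest
class CreatePageIT
{
    @Test
    void createAndViewPage(TestUtils setup, TestReference reference)
    {
        // The PageObject (ViewPage) hides the Selenium details, so the test reads
        // close to the steps of a manual test: log in, create a page, check the title.
        setup.loginAsSuperAdmin();
        ViewPage viewPage = setup.createPage(reference, "Some content", "Some title");
        assertEquals("Some title", viewPage.getDocumentTitle());
    }
}
```

Reading such a test is mostly a matter of following the PageObject method names, which is why it should be accessible to a non-developer with a bit of help.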

Open questions:

  • How can Ilie follow new tests added every day/week?

Other ideas:

  • Modify our automated tests so that we can automatically generate a human-readable scenario for each of them and import/sync these scenarios to test.xwiki.org. This could be a solution to the open question above (“How can Ilie follow new tests added every day/week?”). See the sketch below.
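
One possible shape for this, as a hypothetical sketch only (the `@Scenario` annotation and the sync step do not exist today; the names are assumptions): automated tests could carry a small annotation with human-readable steps, and a build step would extract these annotations and push the resulting scenarios to test.xwiki.org.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotation that test methods could carry so that a build step can
// extract human-readable scenarios and sync them with test.xwiki.org.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Scenario
{
    // Human-readable steps, phrased like the manual tests on test.xwiki.org.
    String[] steps();

    // The expected result, again phrased for a human tester.
    String expected();
}
```

A test method would then be annotated with something like `@Scenario(steps = { "Log in as Admin", "Create a page" }, expected = "The page is displayed in view mode")`, and the extractor would only need reflection or an annotation processor to build the page content.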

Let’s start the discussion. Throw in your ideas if you have some.

Thanks

Hi Vincent,

Just to clarify, what would be the strategy if manual tests are found to be automated with the same steps? Just ignore them in manual testing?

AFAIK Ilie’s not a developer, so I have the feeling that it would be more productive to do that (at least at the beginning) as a planned pair-programming step. Basically, Ilie and another dev block a slot to discuss some manual tests, identify them and describe them. I’m really not sure this step can be done properly async to start with.

I think we’d need to be more specific on that:

  1. a test class might contain a lot of test cases
  2. we might put a link on master to a test class in flavor-test-ui which will later be moved elsewhere, breaking the link

For 1 I do think that we’d need to link to the actual test case and not to the test class, but in that case 2 is even more of a problem: any change to the test class might break the link because new lines get added.
We could fix 2 by putting a link to a specific revision on GitHub, but that might also be dangerous because in the future we wouldn’t necessarily be checking the up-to-date automated test.

I don’t have a simple answer for that, but I think we need to be careful about it.

I think it would be easier if devs informed us about newly added or refactored tests. We could imagine a dedicated page that we’d update every time we commit a change to an automated test:

  1. it would allow describing the change performed, be it a test added or a step removed in an existing test, etc.
  2. editing a page would allow this to be async and avoid constant interruptions

Here’s my 2 cents.

Hi!

This has to be tried out to see how much time it takes, and then the number has to be adjusted.

As I understand it, the automated tests are not all located in the same place. I can ask on #xwiki at the beginning, but as I am not a dev, I don’t know if I’ll be able to find all the locations when searching for a test. This may be the case for most of the tests (there are approx. 800 manual tests for platform ATM).

This means that for every manual test for which there isn’t an automated one, a JIRA issue should be raised (on the XWiki Platform project)?
What happens when there are two (or even more) manual tests in a given category that have the same expected results but different steps to reproduce?

Will these JIRA issues be created as Tasks? Should we have a rule for their titles?

If the ticket is closed as Won’t Fix, does it mean the manual test will remain marked as not automated? (I guess the details will still have to be decided.)

This will be one of the most frequent cases. It means that for every manual test that is different I should ask about it on #xwiki, a dev should verify it, and we decide together whether the (extra) steps are relevant or not. This will take quite a lot of work time given the number of manual tests.

The human-readable scenario would be great in this regard. I agree that a dedicated page should be created and updated every time an automated test is added, with the steps for that test and a link to it.

There is also the issue of the Recommended extensions that are tested manually and for which there are approx. 500 manual tests (more than half of the total platform manual tests). I guess these tests will still have to be run manually.

Hi!
After the meeting we had to discuss and analyze the strategy, we established the following:

  • We will start with one test per week and adjust the number if needed;
  • To find out whether corresponding automated tests exist, I will ask about them on #xwiki at the beginning. Together with the dev team we will check whether an automated test exists and has the same reproduction steps;
  • If an automated test doesn’t exist or the scenario is different, it will be decided with the help of the dev team whether a JIRA ticket should be raised for it or whether it should be dropped and marked as a deprecated manual test. The JIRA tickets should be raised on the XWiki Platform project, as type ‘Task’, and will have the label ‘Development Issues Only’;
  • When a manual test should be created (for a new feature, for example), I will ask on #xwiki whether an automated test already exists;
  • We discussed that the daily/weekly monitoring of automated tests won’t be necessary at the moment;
  • The tests for the Recommended extensions (approx. 500 tests at the moment) will still have to be run manually.

Thanks @ilie.andriuta for the summary.

Also, very important, we need to keep track of the status of each manual test analyzed and the results. I propose to use the “Automated Link Source Code” xproperty to link to the JIRA issue or to the existing test on GitHub (and of course to use the “Automated” xproperty too).

This is not needed since the idea is to use test.xwiki.org as the backlog of what remains to be done by the devs to improve the automated test suite. Thus we won’t record new (or missing) automated tests there. And new manual tests will only be added to test.xwiki.org when there are no existing automated ones (or when the scenario is different).

These are exactly the same as for platform and should follow the same practices (if no test exists, create a JIRA issue; do not run manual tests for existing automated tests; etc.).

You didn’t answer my comment about that one; I don’t know if you have ideas for that problem:

I didn’t because I consider it to be a detail and not very important. This is not meant to be something stable or used by a script somewhere. And as you say it can break very often.

I’d use the most specific link possible (i.e. test method level).

Thanks

Reacting here on the following quote from https://forum.xwiki.org/t/integration-tests-and-ckeditor/7453/23

So the goal of this strategy for automated tests shouldn’t be more functional tests, but as much as possible more unit tests, is that correct?

Yes, but that’s obvious. It’s always the case. XWiki has about 15K tests in total and less than 1K functional tests. My POV is that any test that can be expressed as a unit test should be (but we need to keep a minimum of func tests to verify the main behavior of each feature, end to end). In some cases, you don’t have a choice (like testing exception conditions) since it’s very difficult to test all branches in a func test. We need func tests to make sure stuff works end to end; they are absolutely necessary and critical.

@surli I don’t understand the relationship with this thread, could you elaborate?

Not sure if it’s about that, but when Ilie checks a manual test to verify if there’s an automated test, we’re not just checking functional tests but any type of automated tests (unit, integration, functional).

It was about that :slight_smile: I read the proposed strategy mostly with integration tests in mind, and I realized that I was wrong. It’s easy to compare the steps of a manual test against the steps of an integration test, but it could be much more difficult for unit tests, especially if the manual test is actually covered by many different unit tests. I guess we’ll see how it goes on the first ones.

Yep it can be difficult sometimes but we’ll see.

For example, for https://test.xwiki.org/xwiki/bin/view/Syntax/PageWebHome%20equivalent%20to%20Page, the answer is most likely a unit test.

FTR I’ve created https://test.xwiki.org/xwiki/bin/view/QA/AutomatedTestStatusClass/ to hold the status.

This means not using the following existing fields in https://test.xwiki.org/xwiki/bin/view/QA/TestClass:

  • automatedLinkSourceCode
  • automated

In the future we’ll need to decide what we want to do and whether to merge the xclasses or not.

We’ll also need to change the app UI to display the data from QA.AutomatedTestStatusClass.
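
As a sketch of how the app could use this data (assuming the new xclass has a boolean-style “automated” property, which is an assumption about QA.AutomatedTestStatusClass, not a confirmed field name), listing the manual tests that still lack an automated test could be done with the standard Query API:

```java
import java.util.List;

import org.xwiki.query.Query;
import org.xwiki.query.QueryManager;

public class AutomationStatusReport
{
    // Returns the full names of the manual test pages whose status object marks them
    // as not automated yet. The "automated" property name is assumed, not confirmed.
    public List<String> findNonAutomatedManualTests(QueryManager queryManager) throws Exception
    {
        return queryManager.createQuery(
            "from doc.object(QA.AutomatedTestStatusClass) as status where status.automated = 0",
            Query.XWQL).execute();
    }
}
```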

A good and short conclusion I’ve read here: https://www.cleveroad.com/blog/manual-vs-automation-testing
They’re both great. The main idea is to combine these two approaches when needed and not to stick to just one.

Manual testing works best for exploratory, usability, and ad hoc testing.

Automated testing – for checking the UI, the main testing flows, and rarely-changing cases.

+1 to that, I agree completely.