Status of Manual Tests vs Automated Tests strategy

Hi devs and QA,

I’d like to follow up on the Manual tests vs Automated tests analysis.

@ilie.andriuta It would be interesting to review the results since December 2020 and check how useful the strategy is.

  • How many manual tests have we been able to mark as automated (and thus exclude from manual testing) since the beginning of this strategy?
  • How many Jira issues were created for automated tests that need improvement?
    ** Answer: 41 as of today, with only 3 closed, unfortunately.
    ** BTW @ilie.andriuta, have the manual test statuses been updated for these 3 tests?
  • How fast are you adding new tests for new features/improvements? It would be great to have a history of new or updated tests per month.
    ** I don’t remember many cases where QA noticed a new feature in the release notes, or some improvement (or even an important bug fix), asked whether there are automated tests for it, and reviewed them to decide if a manual test needs to be added.

I don’t have the figures yet but my guess is that the conclusion is probably:

  1. Yes, it works, as it has saved a lot of time for the manual testers
  2. We’re probably not progressing fast enough on identifying which tests in the manual list are already automated
  3. We’re definitely not spending enough time on closing the associated Jira issues. Should we do an XWiki Day for that?
  4. I’m not sure that QA reviews enough of the new automated tests for new features/improvements/important bugs to let devs know what’s missing, or to add new manual tests to compensate.

Thanks

Definitely.

+1

Hi!

  • As a result of the strategy so far, a total of 79 tests were researched for automation, out of which 30 manual tests were marked as automated. A few tests were marked as not automatable (such as those related to Captcha) or deprecated. Most of the activity was recorded in 2024.
  • Yes, there were 41 Jira tickets created, and for the 3 that were closed, the corresponding manual tests were marked accordingly as automated.
  • New tests for new features/improvements are added when the respective XWiki version that contains them is tested. This usually doesn’t happen at the time that version is released, since the release most of the time coincides with the LTS version testing timeboxes (especially when LTS versions, including the internal one, are released frequently or close together), but the tests are added afterwards. So not many tests are added on a regular time basis; they are added when the respective version is tested.

** I don’t remember many cases where QA noticed a new feature in the release notes, or some improvement (or even an important bug fix), asked whether there are automated tests for it, and reviewed them to decide if a manual test needs to be added.

For important new features from the Release Notes we actually create Test Plans, or just test cases for smaller improvements, and likewise for important bug fixes. At the same time, the existing manual tests are updated/modified accordingly where applicable, but this also takes place when the respective XWiki version gets tested.

Some Test Plans validated for automation with Devs:

Some examples of tests for other features added in RN:

Some example of tests created for some more important bugfixes:

In my opinion the strategy works: the tests marked as automated used to take a lot of extra time when executed manually within extensive test sessions. We should continue the strategy and close the associated Jira tickets.

Currently, I am on the Administration tests section, which should be finished soon, and then I will begin the Nested Spaces section.

What would be great is if we could automate the upgrade tests from older XWiki versions to the most recent one, using complex scenarios like those in the Migrate XWiki using Distribution Wizard test. That test usually gets updated whenever an issue is found during some upgrade, to catch eventual regressions, but due to its complexity it takes quite a lot of time when run on all supported browsers/databases.

Thanks