BTW @ilie.andriuta has the manual test status been updated for these 3 tests?
How fast are you adding new tests for new features/improvements? It would be great to have a history of new or updated tests per month.
I don’t remember many cases when QA would notice a new feature in the release notes, or some improvement (or even an important bug fix), ask if there are automated tests for it, and review them to decide if a manual test needs to be added.
I don’t have the figures yet but my guess is that the conclusion is probably:
- Yes, it works, as it’s been able to save a lot of time for the manual testers.
- We’re probably not progressing fast enough on identifying automated tests from the manual list.
- We’re definitely not spending enough time on closing the associated jira tickets. Should we do an XWiki Day for that?
- I’m not sure that QA reviews enough new automated tests for new features/improvements/important bugs to let devs know what’s missing, or to add new manual tests to compensate.
As a result of the strategy so far, a total of 79 tests were researched for automation, out of which 30 manual tests were marked as automated (a few tests were marked as not automatable, such as those related to Captcha, or as deprecated); most of the activity was recorded in 2024.
Yes, 41 jira tickets were created, and for the 3 that were closed, the corresponding manual tests were marked as automated accordingly.
New tests for new features/improvements are added when the respective XWiki version that contains them is tested. Most of the time this doesn’t happen right when that version is released, since it usually coincides with the LTS testing timeboxes (especially when LTS versions, including the internal one, are released quite often or close together); they are tested afterwards instead. So tests are not added on a regular time basis, but when the respective version is tested.
> I don’t remember many cases when QA would notice a new feature in the release notes, or some improvement (or even an important bug fix), ask if there are automated tests for it, and review them to decide if a manual test needs to be added.
For important new features from the Release Notes we actually create Test Plans, or just test cases for smaller improvements, and we do the same for important bugfixes. At the same time, the existing manual tests are updated/modified accordingly if needed, but this also takes place when the respective XWiki version gets tested.
Some Test Plans validated for automation with Devs:
In my opinion the strategy works, since the tests marked as automated usually took a lot of extra time when executed manually within extensive test sessions, so we should continue it and close the associated jira tickets.
Currently I am on the Administration tests section, which should be finished soon, and then I will begin the Nested Spaces section.
What would be great is if we could automate the upgrade tests from older XWiki versions to the most recent one, using complex scenarios like in the Migrate XWiki using Distribution Wizard test. That test usually gets updated whenever an issue is found on some upgrade, to catch eventual regressions, but due to its complexity it takes quite a lot of time if run on all supported browsers/databases.
Thx for the update. Good to see that we update our tests regularly, with the release notes as input!
We have an upgrade test, but it’s probably not doing all that is listed. It would be great if you could check this @ilie.andriuta and, if there are discrepancies, create a jira to update our upgrade test accordingly.
The main thing that’s missing is that it’s only executed on the Jetty+HSQLDB packaging right now (I’m not sure testing several browsers has much value strictly for upgrades).
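For reference, here’s a minimal sketch of what extending that coverage could look like with XWiki’s docker-based test framework (xwiki-platform-test-docker), assuming its @UITest annotation with the servletEngine and database parameters; the class name and the placeholder test body are hypothetical:

```java
package org.xwiki.test.upgrade;

import org.junit.jupiter.api.Test;
import org.xwiki.test.docker.junit5.UITest;
import org.xwiki.test.docker.junit5.database.Database;
import org.xwiki.test.docker.junit5.servletengine.ServletEngine;
import org.xwiki.test.ui.TestUtils;

/**
 * Hypothetical variant of the upgrade test targeting a Tomcat + MySQL
 * packaging instead of the default Jetty + HSQLDB one.
 */
@UITest(servletEngine = ServletEngine.TOMCAT, database = Database.MYSQL)
class UpgradeTomcatMySQLIT
{
    @Test
    void upgradeCompletes(TestUtils setup)
    {
        // Placeholder: the real body would restore a permanent directory and
        // database dump taken from the older XWiki version, let the
        // Distribution Wizard run its migrations, and then assert that key
        // pages still work on the upgraded instance.
        setup.loginAsSuperAdmin();
    }
}
```

Since browser coverage matters little for upgrades, running such a configuration matrix with a single browser would keep the execution time manageable.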