Problem with several users working on XWiki pages in parallel

Hi everybody,

I’ve got the following problem when using XWiki: as soon as several users work on the wiki in parallel, the CPU load caused by editing, saving, and reloading pages maxes out the hardware, and the XWiki service is no longer accessible.

Is there a setting that can optimize the wiki’s performance and prevent these service crashes and outages?

Hello,

You can find the memory and CPU requirements in the documentation.

To give you an idea about what you need to run XWiki on, XWiki SAS has the following configuration for its cloud instances:

  • AMD Opteron(tm) Processor 6386 SE (2800 MHz, 2048 KB cache)
  • 16GB disk size by default

You can also check the performance documentation if you wish to tune your installation’s settings.

Hope that helps.


@mleduc many thanks for your quick reply

The server hardware running XWiki is oversized (well above the minimum requirements described in the documentation); the problems only occur as soon as more activities/interactions take place in the wiki at the same time.

I thought someone might have had a similar problem.

Maybe someone has an idea how to solve this issue.

You just need to find out what is taking the CPU. See https://dev.xwiki.org/xwiki/bin/view/Community/Debugging#HDebugmode, which could help. Beyond that, you’ll need a profiler, or set up Glowroot (Monitoring (XWiki.org)).

You can also check whether you have panels that take a long time to display. Generally speaking, you need to find out which page takes long to display, then verify what content you have put in those pages (or in displayed panels). If you have done custom development or installed extensions, also check whether they could be the problem.
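If it helps, one common way to find out what is eating the CPU on a running JVM is to correlate the busiest thread (from `top`) with a stack dump (from `jstack`). A minimal sketch, assuming you know the XWiki process id; the pid and the thread id below are placeholders, not values from this thread:

```shell
# List the XWiki JVM's threads sorted by CPU (replace $XWIKI_PID with the real pid):
# top -H -p "$XWIKI_PID"
# Dump all Java stack traces:
# jstack "$XWIKI_PID" > threads.txt
# top shows decimal thread ids, while jstack prints them in hex as "nid=0x...",
# so convert the busy thread's id before searching the dump:
TID=12345                      # hypothetical thread id taken from top
NID=$(printf '0x%x' "$TID")    # convert to the hex form used in the dump
echo "$NID"                    # prints 0x3039; search threads.txt for "nid=0x3039"
```

The stack trace attached to that `nid` then tells you which page, panel, or extension code the hot thread was executing.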


Unfortunately these steps don’t really help in this case (active monitoring and performance monitoring of the server are in place), because as soon as the overload occurs the wiki is no longer available and the service blocks with the error message “503 (Service Unavailable)”; only a restart of the service helps.
→ This always leads to data loss, because not all changes can be saved before the service becomes blocked/unavailable.

There are no other applications running on the server except XWiki and the database.

Are there any known problems or settings that improve parallel work on XWiki, or that increase performance and reduce resource usage?

Two ideas:

  • Take a thread dump
  • Take a heap dump when memory runs low (via a JVM parameter)
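A hedged sketch of how the two ideas could look on a Tomcat-based install; the `setenv.sh` location and the dump paths are assumptions, adapt them to your setup:

```shell
# Sketch for Tomcat's bin/setenv.sh: ask the JVM to write a heap dump
# automatically when it runs out of memory (dump path is an assumption).
export CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/xwiki"

# A thread dump can be taken at any time with jstack (replace <pid>):
# jstack <pid> > /tmp/xwiki-threads.txt

echo "$CATALINA_OPTS"
```

The heap dump written at the moment of the crash can then be opened in a memory analyzer to see what filled the heap, and the thread dump taken during an overload shows what the threads were blocked on.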

Well, there are no known issues with parallel use, and I know of XWiki instances with hundreds of thousands of users on them, so there is no fundamental scalability issue at least.