Security Policy for XWiki

Hi everyone,

We discussed the process for handling security issues at length in the Process for handling security issues thread, but we never formally voted on it, so I’d like to proceed with this vote so that we can publish a policy.

You can find the full policy proposal below (for info, I didn’t make any changes; this is the same proposal as written on the original thread as of this date). This vote is open until Monday, the 16th of November.
Please only vote here; discussions should stay on the Process for handling security issues thread.

Here’s my +1.

XWiki Security Policy

The goal of this document is to describe the policy applied when a vulnerability is found in XWiki Standard.

It aims at being published on Security Policy · xwiki/xwiki-platform · GitHub and at being referenced wherever it’s needed (in particular on and

What are the available channels to discuss security issues?

Three channels are available with different usages.

Security Mailing-List

The main channel is the security mailing-list (security[at] Anyone can post on this mailing-list, but only core committers and some other trusted persons can read it. More information is available on

This channel must be used to warn the sponsoring companies that a new security issue has been discovered. It might also be used to ask about a potential security issue.


JIRA

The JIRA of XWiki ( is the right place to submit issues, using the visibility set to Confidential and the label “security”. Those issues are only visible to the people who submitted them, to the XWiki committers, and to anyone trusted by the XWiki committers who can help fix them. All information about a submitted issue, if discussed elsewhere, should be added in the comments of the issue so that the reporter can follow it. Note that the “security” label can be used in JIRA dashboards or in Release Notes to inform about the fixed security issues.


Matrix Chat

A dedicated Matrix chat room exists for talking about security issues, but it is restricted to the same people as the security mailing-list. It should mainly be used to discuss the security policy and the technical details of a specific issue.
More information is available on

Where to submit security issues?

As specified above, all security issues should be submitted on JIRA with the visibility set to Confidential and the label “security”.
Those issues are only visible to the people who submitted them, to the XWiki committers, and to anyone trusted by the XWiki committers who could help fix them.

What are the criteria for computing severity?

The severity is defined on a case-by-case basis by the core committers, depending on two criteria:

  • the impact of the security issue (e.g. an issue that might impact all pages of a wiki is more severe than one that might impact only pages with specific rights)
  • and the difficulty of reproducing it (e.g. an issue that needs script rights to be reproduced is less severe than one that only needs view access on the wiki)

We currently use two types of labels to compute the severity of an issue: the type of attacker (depending on their rights on the wiki) and the type of attack (depending on what they can actually do).

Types of attackers

Label Description
attacker_guest The attacker doesn’t need to be logged in to perform the attack.
attacker_view The attacker needs to be logged in to perform the attack.
attacker_comment The attacker needs to be logged in and to have comment rights to perform the attack.
attacker_edit Same as above but with edit rights.
attacker_script Same as above but with script rights.
socialeng The attacker can only perform the attack if they have physical access to the target device.

Types of attacks

Label Description
stability Attacks that target the host (e.g. a DoS attack)
escalation Attacks related to permanently gaining more rights
login Attacks related to logging in with another user’s identity
xss All attacks related to code injection
impersonation Attacks related to using another person’s rights to perform actions
dataleak Attacks related to confidential data that might be retrieved read-only: this could be emails, but also XWiki documents that shouldn’t be viewable
spam Attacks related to spamming

Severity matrix

DISCLAIMER: this severity matrix is only indicative; the severity is computed on a case-by-case basis.

How to read this matrix:

  • columns represent the type of attacker
  • rows represent the type of attack
  • values are a severity among high / medium / low
Attacks \ Attackers guest view comment edit script socialeng
stability High High High Medium Medium Low
escalation High High High High Medium Low
impersonation High High High High Medium Low
login High High High Medium Low Low
xss High High High Medium Low Low
dataleak High High High Medium Low Low
spam High High High Medium Low Low
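Read literally, the matrix above is a plain two-key lookup. As an illustration only (the data is copied from the matrix; the constant and function names are made up, not part of the policy), it could be sketched like this:

```python
# Indicative severity lookup built from the matrix above.
# Columns: attacker types; rows: attack types.
ATTACKERS = ["guest", "view", "comment", "edit", "script", "socialeng"]

SEVERITY_MATRIX = {
    "stability":     ["High", "High", "High", "Medium", "Medium", "Low"],
    "escalation":    ["High", "High", "High", "High",   "Medium", "Low"],
    "impersonation": ["High", "High", "High", "High",   "Medium", "Low"],
    "login":         ["High", "High", "High", "Medium", "Low",    "Low"],
    "xss":           ["High", "High", "High", "Medium", "Low",    "Low"],
    "dataleak":      ["High", "High", "High", "Medium", "Low",    "Low"],
    "spam":          ["High", "High", "High", "Medium", "Low",    "Low"],
}

def indicative_severity(attack: str, attacker: str) -> str:
    """Return the indicative severity for an (attack, attacker) pair."""
    return SEVERITY_MATRIX[attack][ATTACKERS.index(attacker)]
```

Again, this is only indicative: the committers always decide the final severity case by case.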

Note: in the future we’ll need to formalize the usage of the NVD CVSS v3 calculator to compute the severity of our security issues.

How long does it take to fix a security issue?

The priority of the JIRA issue is set depending on the severity of the issue; we basically apply the following mapping:

  • high severity: blocker
  • medium severity: critical
  • low severity: major

Blocker issues have to be fixed before the next release. There’s no obligation regarding critical and major issues, so they are handled depending on the other priorities of the core committers.
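For illustration only, the severity-to-priority mapping above is a simple table; a minimal sketch (the names are hypothetical, only the mapping itself comes from the text):

```python
# Mapping from issue severity to JIRA priority, as described above.
SEVERITY_TO_PRIORITY = {
    "high": "Blocker",     # must be fixed before the next release
    "medium": "Critical",  # handled depending on other priorities
    "low": "Major",        # handled depending on other priorities
}

def jira_priority(severity: str) -> str:
    """Return the JIRA priority matching a given severity."""
    return SEVERITY_TO_PRIORITY[severity]
```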

When is a security issue considered fixed?

Security issues are considered fixed only once the fix is part of releases for all supported branches impacted by the issue. So for example, if XWiki is currently in the 12.x cycle and a security issue impacts both 11.10.1 (LTS) and 12.3 (stable), the issue is fixed when 11.10.2 and 12.3.1 (or 12.4) are released with the fix.

Are security issues ever publicly disclosed?

Once the issue has been properly fixed and the fix released, a CVE is published to publicly disclose the issue and to encourage an upgrade. A CVE is mandatory for any security issue.

How long does it take to publish a CVE?

Once an issue has been fixed and released, a 3-month embargo starts, allowing anyone working with XWiki to take action before the publication of the CVE. The sponsoring companies are automatically informed through the security communication channels as soon as a security issue is discovered.

For example, if a security issue has been fixed and released in 11.10.2 and in 12.0, released respectively on the 5th of February and the 29th of January, the CVE could be published 3 months after the latest release: i.e. on the 5th of May.
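The date arithmetic in this example can be sketched with the standard library only (the helper names are made up; this is not part of the policy itself):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping the day if needed."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    # Clamp to the last day of the target month (e.g. Jan 31 + 1 month -> Feb 28/29).
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    last_day = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, last_day))

def cve_publication_date(release_dates: list[date], embargo_months: int = 3) -> date:
    """The CVE may be published once the embargo after the latest fix release ends."""
    return add_months(max(release_dates), embargo_months)

# The example from the text: fixes released on 29 January and 5 February (say, 2020).
print(cve_publication_date([date(2020, 1, 29), date(2020, 2, 5)]))  # → 2020-05-05
```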

What’s the process to handle security issues for committers?

  1. Take ownership of the security issue by assigning the JIRA ticket to yourself.
  2. Validate the information about the security issue (including the label and confidentiality fields).
  3. Announce the problem on the dedicated security communication channel.
  4. Create a draft advisory on GitHub (see: Creating a repository security advisory - GitHub Docs). As part of this, compute the security score (severity).
  5. Add a comment to the JIRA issue with a link to the advisory draft.
  6. Fix the issue on all supported branches and release XWiki.
  7. Announce the fix on the security list, starting the 3-month timer. Make it part of the Release Plan for the Release Master to do.
  8. After 3 months, request a CVE through the GitHub Advisory page. Remove the confidential label on the JIRA issue. Publish the advisory once the CVE ID has been received.


This is not fully correct. Anyone who asks and who we know has the right; there’s no special relationship with the sponsoring companies.


And to anyone trusted by the XWiki committers.

This should link to

This should link to

Duplicate of above.

This should link to

should? I thought we wanted to make it mandatory for high severity, no?


Not just to sponsoring companies. To everyone on the security channels.

General comment: sponsoring companies don’t have any more rights than others here (see and I don’t think they should even be mentioned in this proposal.

So +1 from me after all the changes I’ve commented about are integrated (they’re small and shouldn’t be a problem).

That’s what @surli meant, I think, but it’s indeed possible that it’s not explicit enough.

+1 for the general process, with the same reservation as @vmassol regarding who, other than the committers, has access to those security issues (until they are made public).

I just edited it to take into account your remarks.

Sounds a bit weird to make high-risk issues public and not the others. I feel we should simply apply the same 3-month rule before making the JIRA issue public (the main difference being that there is no CVE).

I agree that we shouldn’t keep security issues private forever. Now those won’t benefit from a warning on the ML, and they won’t appear immediately in the release notes since they would be disclosed 3 months later. IMO it would make sense to send a warning on the security ML when we proceed to making some security issues public.

Actually I’m starting to wonder if we shouldn’t provide a Release Notes section about security issues when we perform a release containing confidential issues, specifying that some important security issues have been fixed and that information about them will be published later. This section would contain the JIRA query displaying the confidential issues once they are publicly released (we might need a special label for that).

Sure, it does not cost much to do that.

That’s indeed what many big projects I can think of are doing.