Process for handling security issues

Hi everyone,

we just published our first security advisory on GitHub, so I’d like to submit to a vote a general process for handling security issues. If everyone agrees on it, it will become the rules committers follow for handling them, and I’ll publish it on Note that this is pretty important, since it might also impact the sponsoring companies of XWiki. Also note that we followed those rules for the security advisory we just published, in order to prove the process works (at least for one occurrence).

The vote is open for two weeks until May 26th.
Here’s my +1.

Below is the document to vote on.

XWiki Security Policy

The goal of this document is to describe the policy applied in case a vulnerability is found in XWiki Standard.

It aims to be published on and referenced wherever it’s needed (in particular on and).

What are the available channels to discuss security issues?

Three channels are available with different usages.

Security Mailing-List

The main channel is the security mailing-list (security[at] Anyone can post on this mailing-list, but only core committers and some key people from sponsoring companies receive those emails.

This channel must be used to inform the sponsoring companies that a new security issue has been discovered. It might also be used to ask about a potential security issue.


JIRA

The JIRA of XWiki is the right place to submit security issues, with their visibility set to Confidential. Those issues are only visible to the people who submitted them and to the committers of XWiki. Any information about a submitted issue that is discussed elsewhere should be added to the comments of the issue, so that the reporter can follow it.


Matrix Chat

A dedicated Matrix chat room exists for talking about security issues; it is restricted to the same people as the security mailing-list. It should mainly be used to discuss the security policy and the technical details of specific issues.

Where to submit security issues?

As specified above, all security issues should be submitted on JIRA, with their visibility set to Confidential.
Those issues are only visible to people who submitted them and to the committers of XWiki.

What are the criteria for computing severity?

The severity is defined case by case by the core committers depending on two criteria:

  • the impact of the security issue (e.g. an issue that might impact all pages of a wiki is more severe than an issue which might impact only pages with specific rights)
  • and the difficulty of reproducing it (e.g. an issue which needs script rights to be reproduced is less severe than one which only needs view access on the wiki)

We currently use two types of labels to compute the severity of an issue: the type of attacker (depending on their rights on the wiki) and the type of attack (depending on what they can actually do).

Types of attackers

| Label | Description |
| --- | --- |
| attacker_guest | The attacker doesn’t need to be logged in to perform the attack. |
| attacker_view | The attacker needs to be logged in to perform the attack. |
| attacker_comment | The attacker needs to be logged in and to have comment rights to perform the attack. |
| attacker_edit | Same as above, but with edit rights. |
| attacker_script | Same as above, but with script rights. |
| socialeng | The attacker can only perform the attack if they have physical access to the target device. |

Types of attacks

| Label | Description |
| --- | --- |
| stability | Attacks targeting the host (e.g. a DoS attack). |
| escalation | Attacks related to permanently gaining more rights. |
| login | Attacks related to logging in with another user’s identity. |
| xss | All attacks related to code injection. |
| impersonation | Attacks related to using another person’s rights to perform actions. |
| dataleak | Attacks related to confidential data that might be retrieved read-only: could be emails, but also XWiki documents that shouldn’t be viewable. |
| spam | Attacks related to spamming. |

Severity matrix

DISCLAIMER: This severity matrix is only indicative; the severity is computed on a case-by-case basis.

How to read this matrix:

  • columns represent the types of attackers
  • rows represent the types of attacks
  • values are a severity level among High / Medium / Low
| Attacks \ Attackers | guest | view | comment | edit | script | socialeng |
| --- | --- | --- | --- | --- | --- | --- |
| stability | High | High | High | Medium | Medium | Low |
| escalation | High | High | High | High | Medium | Low |
| impersonation | High | High | High | High | Medium | Low |
| login | High | High | High | Medium | Low | Low |
| xss | High | High | High | Medium | Low | Low |
| dataleak | High | High | High | Medium | Low | Low |
| spam | High | High | High | Medium | Low | Low |
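To make the matrix easier to apply, it can be encoded as a simple lookup table. This is a hypothetical sketch in Python (the function and variable names are illustrative, not part of any actual XWiki tooling), using the attacker and attack labels defined above:

```python
# Indicative severity matrix: rows are attack types, columns follow
# the attacker-label order below. This is only indicative; the real
# severity is decided case by case by the core committers.
ATTACKERS = ["attacker_guest", "attacker_view", "attacker_comment",
             "attacker_edit", "attacker_script", "socialeng"]

SEVERITY_MATRIX = {
    "stability":     ["High", "High", "High", "Medium", "Medium", "Low"],
    "escalation":    ["High", "High", "High", "High",   "Medium", "Low"],
    "impersonation": ["High", "High", "High", "High",   "Medium", "Low"],
    "login":         ["High", "High", "High", "Medium", "Low",    "Low"],
    "xss":           ["High", "High", "High", "Medium", "Low",    "Low"],
    "dataleak":      ["High", "High", "High", "Medium", "Low",    "Low"],
    "spam":          ["High", "High", "High", "Medium", "Low",    "Low"],
}

def indicative_severity(attack: str, attacker: str) -> str:
    """Look up the indicative severity for an attack/attacker pair."""
    return SEVERITY_MATRIX[attack][ATTACKERS.index(attacker)]

print(indicative_severity("xss", "attacker_edit"))  # Medium
```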

Note: in the future we’ll need to formalize the usage of to compute the severity of our security issues.

How long does it take to fix a security issue?

The priority of the JIRA issue is set depending on the severity of the issue; we basically apply the following mapping:

  • high severity: blocker
  • medium severity: critical
  • low severity: major

Blocker issues have to be fixed before the next release. There’s no obligation regarding critical and major issues, so they are handled depending on the other priorities of the core committers.
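The severity-to-priority mapping above can be expressed as a small lookup (a hypothetical sketch; the names are illustrative and not actual XWiki tooling):

```python
# Hypothetical mapping from computed severity to JIRA priority.
SEVERITY_TO_PRIORITY = {
    "High": "Blocker",     # must be fixed before the next release
    "Medium": "Critical",  # no fixed deadline
    "Low": "Major",        # no fixed deadline
}

def jira_priority(severity: str) -> str:
    """Return the JIRA priority matching a computed severity."""
    return SEVERITY_TO_PRIORITY[severity]

print(jira_priority("High"))  # Blocker
```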

When is a security issue considered as fixed?

Security issues are considered fixed only once the fix is part of releases for all supported branches impacted by the issue. For example, if the current cycle is 12.x and a security issue impacts both 11.10.1 (LTS) and 12.3 (stable), the issue is fixed when 11.10.2 and 12.3.1 (or 12.4) are released with the fix.

Are security issues ever publicly disclosed?

Once the issue has been properly fixed, a CVE might be published to publicly disclose the issue and to encourage an upgrade. A CVE is not mandatory and should be published only for issues with a high severity.

If no CVE is published, the issue and its details are never publicly disclosed, but the release notes will mention that some security issues have been fixed.

How long does it take to publish a CVE?

Once an issue has been fixed and released, a 3-month embargo starts, to allow anyone working with XWiki to take action before the publication of the CVE. The sponsoring companies are automatically informed through the security communication channels as soon as a security issue has been discovered.

For example, if a security issue has been fixed and released in 11.10.2 and in 12.0, released on the 5th of February and the 29th of January respectively, the CVE could be published 3 months after the latest release, i.e. on the 5th of May.
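The embargo calculation above can be sketched as a small helper (hypothetical; the function name and the example year are illustrative, since the thread doesn’t state a year, and month-end edge cases such as 30 November + 3 months are ignored):

```python
from datetime import date

def cve_publication_date(release_dates):
    """Earliest CVE publication date: 3 calendar months after the
    latest release containing the fix (sketch only)."""
    latest = max(release_dates)
    month0 = latest.month - 1 + 3  # zero-based month arithmetic
    return date(latest.year + month0 // 12, month0 % 12 + 1, latest.day)

# Example from the text: fixes released on 29 January and 5 February.
print(cve_publication_date([date(2020, 1, 29), date(2020, 2, 5)]))  # 2020-05-05
```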

What’s the process to handle security issues for committers?

  1. Once the issue has been validated by a developer taking ownership of fixing it, create a draft advisory on GitHub. As part of this, compute the security score (severity).
  2. Add a comment to the JIRA issue with a link to the advisory.
  3. If the severity of the issue is high, announce the problem to all sponsoring companies by using the dedicated security communication channel.
  4. Fix the issue on all supported branches and release XWiki. For low-severity issues, add them to the release notes.
  5. Announce the fix on the security list, which starts the 3-month timer. Make it part of the Release Plan for the RM to do.
  6. After 3 months, request a CVE (for high-severity issues only FTM) through the GitHub Advisory page. Remove the confidential label on the JIRA issue. Publish the advisory once the CVE ID has been received.


Shouldn’t we explain here the labels we use on and the security dashboard (visible to committers)?

Why not use the labels we already defined and explain what is severe using them?

I think we should make this text a bit more generic. For me it’s not just related to sponsoring companies but also to the reporter of the issue or anyone asking about it on the security list. We should also say why we favor sponsoring companies over others: it’s because they sponsor the development of XWiki, and we’re OK with them being rewarded for that by having this extra lead time to prepare for their clients.

I don’t think this is right. The committers should just send the warning on the security list. It’s up to the sponsoring companies to have someone on that list to relay it to them.

+1 globally, thanks

I can indeed provide information about the labels and explain them: I think we should use them when creating an issue, at least to help people quickly understand what it’s about.
I’m not entirely sure they are enough for computing the final severity (particularly once we have investigated the issue I think we should start using), but we could provide an easy-to-use matrix to give a “high-level” scale of the severity.

I’m not sure to see what you have in mind there.

Sure we can be more specific here.

Right I think I forgot to edit this one.

Here’s the text you have:

Once an issue has been fixed and released, an embargo of 3 months is starting to allow the sponsoring companies to perform actions before the publication of the CVE.

It mentions “sponsoring companies”. This is not broad enough IMO. There are more actors than just sponsoring companies. For me it’s not just related to sponsoring companies but also to the reporter of the issue or anyone asking about it on the security list.

Hi everyone,

so following @vmassol’s remarks, I edited the proposed document about our security process (available in the first message of this thread). I made the following changes:

  • more information about the channels of communication to be used for security (new first section after introduction)
  • more information about the way we compute the severity of our security issues (new subsections of section 2)
  • I reformulated the other sections a bit, to avoid repeating myself (about severity) and to clarify the points Vincent emphasized
  • You might notice that I also reordered the sections to make it easier to reference previous sections.

Waiting for your feedback.

@surli Thanks. I’ve read the changes (thanks to Discourse’s nice diff view ;)) and it looks good to me in general. I have some doubts about the High categorization for some cases (like spam requiring comment rights, for example: it would mean that if there were no captcha solution for comments, we would consider the issue about adding a captcha a Blocker issue, which I don’t think is right, as it’s not that severe IMO).

So I think we need to review a bit the severity mappings.

Actually there’s a good way to check that: it would be great if you (or someone else) could try mapping the existing open security issues to these rules and see what happens, i.e. how many are high, medium and low. If we find high or medium ones, it probably means the rule is not correct, since we would probably have fixed those issues already if that were the case (we could also have made mistakes ofc). At least it would serve as a testbed for the rules, and we can then discuss them on the security list or chat, one by one.



This is taking too long IMO. We need to publish our security policy. What’s holding us up?

In your latest post you asked to review the severity mappings, which I postponed several times because of more urgent work. Besides that, nothing really. I will propose a new vote for this policy so that we can publish it, even if it needs some minor adjustments later.