Some time ago we decided to rely on CVSS for the severity of security issues. However, I have the impression that the resulting priority of issues doesn’t match user expectations. For example, a vulnerability that gives read-only access to all pages of a private wiki without user interaction is CVSS 7.5, which is just a “Critical” severity according to the agreed-upon mapping. For such issues, we say that we’ll handle them depending on other priorities and give no guarantee that we’ll fix them in a timely manner. That doesn’t make any sense from a user’s point of view.
I propose to change the security policy to state instead the following:
We mark security issues as blocker issues in Jira and will do our best to fix them within 90 days, unless
the attacker needs at least script right to exploit the issue
the impact is low (e.g., minor performance impact, data leak that doesn’t concern actual page contents)
the issue is hard to exploit (e.g., an admin needs to perform an action that seems unlikely)
In these cases, the security issue can be marked as “Critical” or “Major” in Jira.
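As an illustration only (the three criteria themselves remain judgment calls made by whoever handles the issue), the proposed triage rule could be sketched like this; the function name and boolean flags are hypothetical, not part of any actual tooling:

```python
def jira_priority(requires_script_right: bool,
                  low_impact: bool,
                  hard_to_exploit: bool) -> str:
    """Sketch of the proposed policy: a security issue is a blocker
    (best-effort fix within 90 days) unless one of the exceptions applies.
    Each flag corresponds to one of the three bullet points above."""
    if requires_script_right or low_impact or hard_to_exploit:
        # An exception applies: downgrade. The choice between Critical
        # and Major is left to the reporter/committer handling the issue.
        return "Critical or Major"
    return "Blocker"
```

Note the implicit “or” between the conditions: a single matching exception is enough to downgrade the issue, a point that is discussed further down in the thread.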
The idea of the first two criteria is to match the CVSS criteria “Privileges Required” and the impact for “Confidentiality”/“Integrity”/“Availability”. The third point is less straightforward, and primarily targets issues that require some sort of social engineering. In general, I’ve also tried to take inspiration from the deprecated severity matrix.
I’m not sure whether we should have further guidelines for choosing Critical vs. Major; I would leave this to the reporter and/or the committer who handles the issue. As far as I know, we don’t have any such criteria for other bugs either.
Thanks for the proposal. I understand the need here, but I’m not a big fan of using something as vague as “the issue is hard to exploit”, as it’s really subject to interpretation: the idea of using CVSS and of providing guidelines for filling it in was to have something as clear as possible.
But maybe we can try to rephrase your proposal using CVSS.
If “Privileges Required” is High in CVSS
Might be kept like that
If “Attack Complexity” is High? We don’t have proper guidelines for defining it, but at least we’d ensure a match between the CVSS score and the Jira priority.
The successful attack depends on the evasion or circumvention of security-enhancing techniques in place that would otherwise hinder the attack. These include:
Evasion of exploit mitigation techniques. The attacker must have additional methods available to bypass security measures in place. For example, circumvention of address space randomization (ASLR) or data execution prevention (DEP) must be performed for the attack to be successful.
Obtaining target-specific secrets. The attacker must gather some target-specific secret before the attack can be successful. A secret is any piece of information that cannot be obtained through any amount of reconnaissance. To obtain the secret the attacker must perform additional attacks or break otherwise secure measures (e.g. knowledge of a secret key may be needed to break a crypto channel). This operation must be performed for each attacked target.
To me, this matches only a small part of what I consider “hard to exploit”. Also, by referring to attack complexity, you’re not making the definition any clearer. Ultimately, the severity of a security vulnerability is always up to interpretation (e.g., it is also up to interpretation at which point the impact is low or high). My goal with the last point is mostly to have a way to acknowledge a reported security vulnerability like XWIKI-20323 that requires unlikely user interaction without committing to fixing it within 90 days. This is for those kinds of security vulnerabilities that do not fall into a nice category like XSS, therefore I think it is simply not possible to provide clear guidelines here.
If you want to make it a bit clearer, maybe we could have two items:
the attack complexity is high (e.g., the successful attack depends on the evasion or circumvention of security-enhancing techniques, or the access to secrets)
the attack requires user interaction, and this user interaction seems unlikely to happen
The first point refers to the attack complexity as defined in CVSS while the second point is basically a second level of “requires user interaction” as defined in CVSS.
Ok, just to be sure: it would be an “or” between each item? Even between those? E.g., an issue which has high complexity but doesn’t require user interaction would be a blocker?
I’m not sure that I understand your interpretation of “or”. My intention is that a vulnerability is a blocker unless one of these conditions applies. So your example of an issue with high complexity isn’t a blocker, regardless of the other conditions. I would say that this is an “or” between the items, but that doesn’t match your example.
Thinking about it again, I would actually suggest making this more flexible in the sense that we say that if one of these conditions applies, we normally don’t consider it a blocker, leaving us the flexibility to still consider it a blocker in case it seems sensible to do so. For example, in the case of a regression, we would consider it a blocker regardless of the severity.
Yeah I made a mistake when writing it… But ok we’re on the same page.
+1 what we say here is mostly indicative and should work in most cases: we can rediscuss severity at any point in the process if we think there’s a need for a specific vulnerability.
I’m not sure I understand the consequences of this proposal. Does it mean we won’t be able to compute a CVSS score anymore? Or that we’re changing the rules for computing the CVSS score and that we won’t be able to use a standard calculator such as the NVD CVSS v3 Calculator?
Also, I think “In these cases, the security issue can be marked as ‘Critical’ or ‘Major’ in Jira.” is not good enough. We had something very clear, and saying that it’s up to the committer to decide if it’s critical or major (what about minor?) is a bit too vague for me.
What we had looks much better to me:
0.1 - 3.9: Minor
4.0 - 6.9: Major
7.0 - 8.9: Critical
9.0 - 10.0: Blocker
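For reference, that earlier mapping can be written out as a simple lookup; the function name is hypothetical, and the handling of a 0.0 score (which falls outside the listed ranges) is my assumption, not part of the quoted mapping:

```python
def cvss_to_jira(score: float) -> str:
    """The previous mapping from CVSS v3 base score to Jira priority,
    as quoted above."""
    if score >= 9.0:
        return "Blocker"
    if score >= 7.0:
        return "Critical"
    if score >= 4.0:
        return "Major"
    if score >= 0.1:
        return "Minor"
    return "None"  # score 0.0: not covered by the quoted ranges (assumption)
```

Under this mapping, the CVSS 7.5 example from the first post lands in “Critical”, not “Blocker”, which is exactly the mismatch the proposal is trying to address.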
I feel it’s maybe ok to revise some criteria and change some computation (but we need to provide a new calculator).
Also, isn’t there already some way with CVSS to have some latitude when computing scores? Isn’t there already some parameter for the subjective part? Couldn’t we say we use CVSS and make an exception instead, allowing us to continue using a standard CVSS calculator and then adjust the result based on some well-defined criteria proper to XWiki (i.e., compute a score in two passes)?
This proposal has nothing to do with how we compute CVSS scores. This proposal is only about how fast we promise to fix security vulnerabilities and how we set the priority in Jira accordingly. We will compute and publish CVSS scores as before, it’s just that we won’t use them anymore to decide how fast we’ll fix an issue.
We don’t have any criteria for when to use “Critical” or “Major” in Jira for other issues, either. I doubt that it has any consequence in practice if an issue is “Major” or “Critical”. My new proposal is basically that we have two categories of security vulnerabilities:
Vulnerabilities that we consider to be important for XWiki, we promise to fix them in a timely manner
Vulnerabilities that exist, but that most admins will be able to live with and for which we don’t make any promise to fix them in a certain timeframe
Users of XWiki most likely won’t think in terms of these scores; they’ll ask “do you promise to fix any exposure of private page content to unauthenticated users in a timely manner?”, and a) this is hard to answer with our current policy, and b) the answer is actually “no” with our current security policy, as the example in the first post shows. That seems very wrong, as protecting page content should be one of our top priorities.
No, there is no subjective part in the CVSS calculation, and we can’t change the calculation of the scores that we put in security advisories on GitHub. As others have also written, CVSS simply does not equal risk.
Ok, thanks for the clarifications. So in summary your proposal is:
We continue computing the CVSS score for our users (but we won’t use it ourselves for anything - just display it in the release notes + advisories).
Change how we compute the Jira severity (i.e., don’t use CVSS anymore).
Lengthen the time it takes to fix blocker issues and lower our commitment.
We currently say “Blocker issues have to be fixed before the next release”, which 1) is a must and not a best effort and 2) means fixed within 30 days, since we release once per month.
I’m fine with points 1) and 2). I’m less sure about point 3. Could you explain your rationale for changing it?
In practice, we’re already not following this commitment: in most cases we don’t fix blocker security issues before the next release, which is frequently less than 14 days away. For example, we normally don’t block releases unless the issue is a recent regression. Also, security issues might not be trivial to fix, so in practice it is hard to fix and release all security vulnerabilities within, say, 14 days, even if we stop all other work. From my understanding, at most 90 days between the vulnerability report and making a fix available to users is basically the industry best practice; this is the deadline set, e.g., by Google’s Project Zero or CERT-EU. That’s why I think it makes sense to adopt the same deadline. I’m open to making the wording a bit stronger, but I also remember situations where we decided not to put a security fix in an LTS branch due to backwards-compatibility concerns, and this wouldn’t be allowed if we absolutely committed to fixing all blocker security issues within 90 days. Further, there could be situations where, due to a high number of reported security vulnerabilities and/or vulnerabilities that are very complex to fix, it is difficult to provide a fix within 90 days.
My suggestion is not to put less effort into fixing security vulnerabilities, I’m just trying to adjust the policy to what we actually do, which is for example not to stop doing releases because we’re currently having an open blocker security vulnerability.
Further, this is not necessarily lowering our commitment, as it will potentially include more vulnerabilities than before. As I explained in the first message, for simple information disclosure vulnerabilities, we gave no commitment at all before, now we commit to fixing them within 90 days.
IMO, we’re not signing a contract here. We’re always talking about best effort in anything we do on xwiki.org (whether we’re saying that we’re releasing once per month, that we’re not releasing a version of XWiki with open blockers - we do that, we even have it in our release process check - or anything else). This is a community open-source project. There’s no guarantee to the user, whatsoever.
What’s important IMO is our intent and what we try to do:
I confirm that we’re not supposed to release if we have blocker issues (that’s why they’re called blocker issues and why we have a check in the release plan). For various reasons, we’ve let this slide a bit too often. Sometimes, the issue is rather that we’ve marked the issue as blocker when it’s not really a blocker (we should fix the criteria for setting an issue as blocker in this case; this can happen with non-serious regressions, for example). But we always discuss open blocker issues, and when we still perform the release, it’s an exception and not the rule/norm.
We had to catch up on security issues in the past, after we introduced our security policy, and during this timespan we allowed releasing with open blocker security issues (they were not marked as blocker before, and we didn’t have that CVSS score computed/mapped).
We’re now supposed to have finished this catch-up. The problem is that we still regularly find blocker security issues (and we report them ourselves most of the time), which makes the load a bit too high to treat blocker security issues like other blocker issues. Also, as you mentioned, blocker security issues can be very complex to fix and may require more time.
I’d personally not like to have more and more issues with blocker severity that are not fixed for the next release. And slowly let the situation deteriorate…
So we have 4 options I think:
Stop marking important security issues as blockers: mark them as “Critical” for example (and then use “Major” or “Low” for other security issues). Reword the security policy accordingly, and say that we allow up to 90 days for “Critical” security issues (only because we consider security issues as hard to fix in general).
Continue to mark important security issues as blockers but during the release work and in the release plan, make a difference between general blocker issues (one release) and blocker security issues (2 or 3 releases).
Decide that we allow 2 (or 3, but I find 3 too long) releases to fix blocker issues, but still prioritize them during the roadmap, and then, depending on the importance of the blocker, decide if we can release with the existing open blockers or not. For example, a blocker preventing the edit of any wiki page would prevent a release for sure.
More generally, go back to a strong meaning for “blocker” issues and only set a severity of blocker when we absolutely need the issue to be fixed for the next release. Then use “Critical” for the level below, which means there’s no obligation to fix them in the current release, but we prioritize them in the roadmaps and make a best effort to fix them within 3 releases (90 days). To achieve that, reset/review all “Critical” priorities for all issues (this is not really used right now, except maybe for security issues). Then use “Major” and “Low” for the rest (for security issues, “Major” would correspond to the current “Critical”).
What do you prefer?
I think my preference would go for 3 or 4.
EDIT: I think this should be a separate proposal, WDYT?
Back to the 90 days, I’m fine with the proposal but I would make the wording stronger, showing that it’s a commitment we take (but we can fail, as with any commitment, and especially on an open source community project).
I just read the whole thread again, and it seems that the original proposal of Michael:
is exactly your proposal 2:
and I’m fine with that on my side. I’d be ok with your proposal 3, but in general we prioritize regressions more, for example, so I don’t think it’s necessary.