Stop using the CVSS score to compute the priority of security issues

Hi everyone,

In this vote, I’m trying to summarize the discussion on severity levels of security issues, with a small addition allowing “Blocker” security issues.

As explained in the original proposal, our current classification based on the CVSS score doesn’t really reflect how important certain security vulnerabilities are. This proposal doesn’t change how we compute CVSS scores in security advisories; it only changes how we compute the priority in Jira based on the CVSS classification. I propose the following new classification scheme:

Security issues are marked as “Critical” in Jira unless at least one of the following criteria from the CVSS classification applies:

  • The required privileges are “high” (in the context of XWiki, this means the attacker needs at least script right).
  • The impact is low (in the context of XWiki, this means, e.g., a minor performance impact or a data leak that doesn’t concern actual page contents).
  • The attack complexity is “high” (e.g., a successful attack depends on the evasion or circumvention of security-enhancing techniques, or on access to secrets).
  • The exploitation requires “active” User Interaction as defined by CVSS 4.0.

If at least one of those criteria applies, a security issue is normally marked as “Major” instead of “Critical”.
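
To make the mapping concrete, here is a minimal sketch in Java (all names are hypothetical; this is not actual XWiki or Jira code) that encodes the rule above, including the “Blocker” exception discussed further below:

    // Hypothetical illustration of the proposed mapping; these types don't
    // exist in XWiki and only encode the rule described above.
    public class SecurityPrioritySketch {
        enum JiraPriority { BLOCKER, CRITICAL, MAJOR }

        record CvssClassification(
            boolean highPrivilegesRequired, // attacker needs at least script right
            boolean lowImpact,              // e.g., minor performance impact only
            boolean highAttackComplexity,   // e.g., must bypass security measures
            boolean activeUserInteraction   // "active" as defined by CVSS 4.0
        ) {}

        static JiraPriority computePriority(CvssClassification c, boolean activelyExploited)
        {
            // Exception: confirmed (or likely) active exploitation escalates to
            // Blocker; in practice this is decided by discussion, not automatically.
            if (activelyExploited) {
                return JiraPriority.BLOCKER;
            }
            // If at least one of the four criteria applies, downgrade to Major.
            boolean anyCriterionApplies = c.highPrivilegesRequired()
                || c.lowImpact()
                || c.highAttackComplexity()
                || c.activeUserInteraction();
            return anyCriterionApplies ? JiraPriority.MAJOR : JiraPriority.CRITICAL;
        }

        public static void main(String[] args)
        {
            // No criterion applies -> Critical.
            System.out.println(computePriority(
                new CvssClassification(false, false, false, false), false)); // CRITICAL
            // High attack complexity applies -> Major.
            System.out.println(computePriority(
                new CvssClassification(false, false, true, false), false)); // MAJOR
        }
    }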

As an exception, a security issue can be classified as “Blocker”, for example, when the security issue is actively exploited or active exploitation is likely.

Based on the parallel vote for changing the meaning of “Critical” and “Blocker”, this has the following consequences:

We’ll release fixes on all supported branches for all security issues classified as “Critical” within 90 days (max. 30 days before the next roadmap starts, max. 30 days for the fix, and max. 30 days for releasing new versions on all supported branches). “Blocker” security issues are handled with the highest priority and are fixed within 30 days at most. When non-critical security issues are handled depends on the committers’ other priorities.
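
As a worked example of this arithmetic (the dates are made up, purely for illustration):

    import java.time.LocalDate;

    // Hypothetical example, just illustrating the proposed deadlines.
    public class DeadlineExample {
        public static void main(String[] args) {
            LocalDate classified = LocalDate.of(2024, 6, 1);

            // Critical: max. 30 days until the next roadmap starts + max. 30 days
            // for the fix + max. 30 days for releasing on all branches = 90 days.
            System.out.println(classified.plusDays(90)); // 2024-08-30

            // Blocker: fixed within 30 days at most.
            System.out.println(classified.plusDays(30)); // 2024-07-01
        }
    }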

For existing open security issues I propose the following process:

  • We systematically review all open security issues and re-classify them.
  • All security issues that are marked as critical based on the new classification are scheduled on the next roadmaps, allowing a bit more time in case there are too many or particularly difficult ones.

This vote is open until June 16, 12:00. Thank you very much for your votes!

+1

+1

+1

I guess if the existing issues were not already marked as Blockers, it’s unlikely that they would be now. I’d expect them to be downgraded to “Major”.

So this is on top of the 3-month embargo, right? So globally, with the change above, it can take up to 90+90=180 days (6 months) before a public CVE is disclosed for a Critical security issue, and 30+90=120 days (4 months) for a Blocker security issue, correct?

Thanks

Yes, what changes is the time for releasing fixes; the embargo itself is unchanged and still runs from the date when the last version to be fixed has been released. In practice, I hope that we’ll be faster at releasing security fixes, and my intention is definitely not to be slower than we currently are.

In general I agree with the idea and the set of rules.

I’m not a huge fan of the wording “active exploitation is likely”, as it’s open to a lot of interpretation. Then again, maybe that’s the idea: the move to “Blocker” would only be decided after a discussion, and we deliberately avoid giving specific criteria so that the priority stays open for debate?
If so, I would formulate it differently to make the need for discussion explicit, something like:

As an exception, a security issue can be classified as “Blocker” after discussing it within available channels, for example, when the security issue is actively exploited or when it might involve leaks of users’ passwords.

+1

I’m not sure password leaks are a good example: due to XWiki’s storage architecture, a lot of vulnerabilities could lead to password leaks. To me, the point here is not really the severity, but the urgency of having a fix. For example, imagine we introduced a new search backend and noticed after the release that it doesn’t filter out pages the user cannot view, displaying their titles and at least partial content. That would be a blocker to me, because it can basically be exploited by accident, and thus active/accidental exploitation is very likely.

But indeed, my idea was that there shouldn’t be fixed criteria and that it would be based more on a discussion. So what about taking your formulation but reducing the example to active exploitation, which seems to be the one we can agree on? The new formulation would be:

As an exception, a security issue can be classified as “Blocker” after discussing it within available channels, for example, when the security issue is actively exploited.

+1

Thanks,
Marius