The EU Doesn't Understand CVD


TL;DR: The EU doesn’t understand the difference between a CVD policy and a bug-bounty programme. This is problematic for the CRA.

Earlier this year, I wrote an article about my difficult experiences with Coordinated Vulnerability Disclosure (CVD) in Belgium. Since then, I’ve had the chance to speak with multiple national CSIRTs across the EU. Two things stood out from those conversations. First, everyone I spoke to has the best intentions and is genuinely committed to improving cybersecurity in Europe. Second, and more worryingly, the EU fundamentally misunderstands CVD.

That second point matters. CVD is central to both the Cyber Resilience Act (CRA) and the second directive on Network and Information Systems (NIS2), two cornerstones of EU digital policy. If the EU gets CVD wrong, the effectiveness of these laws, and with them the EU’s entire digital policy, could be severely undermined.

If you represent an EU member state or an EU institution and find value in this article, please feel free to reach out.

What the EU thinks CVD means #

To explain where my concern comes from, let’s look at how the EU defines the term CVD policy:

CVD policy: A formalised set of rules for searching for and reporting vulnerabilities, with an emphasis on coordinated handling of information about these vulnerabilities, in order to limit the damage caused by unintentional or untimely disclosure or by non-responsive counterparties. These rules should (…) provide a guarantee that the entities involved in the process will not disclose vulnerability information without due coordination.

This definition comes directly from the Guidelines on Implementing National Coordinated Vulnerability Disclosure Policies. That document was drafted by the NIS Cooperation Group, an official EU body established under Article 14 of NIS2.1

The Cooperation Group is made up of the EU Member States, the European Commission, and ENISA. Among its responsibilities is a legal mandate to “provide guidance to the competent authorities in relation to the development and implementation of policies on coordinated vulnerability disclosure.”

In other words, this is the group that effectively defines what “CVD” means in the EU.

What is wrong with the EU’s definition #

Now, although the definition above might seem reasonable at first, it is actually fundamentally flawed. This is because it fails to recognise the true dynamics between a vulnerability reporter and the notified organisation. Specifically, the Cooperation Group’s definition relies on the misguided notion that reporting vulnerabilities is a privilege granted to reporters. Under this framing, reporters can be subjected to a formalised set of rules dictating how they must behave.

This perspective comes from the implicit assumption that vulnerability reporters must have illegally attacked a computer system in order to discover a vulnerability. From this view, reporters are treated as criminals who should feel fortunate to be “allowed” to disclose their findings without facing legal consequences.

This logic is confirmed by other statements in the Cooperation Group’s document. For example, page 5 states that “It is highly recommended for vendors/suppliers to publish a CVD policy, in order to allow researchers to identify and report vulnerabilities”. This might sound innocuous, but pay attention to the framing: the policy is not there to support security experts, but rather to allow them. This subtly reinforces the misconception that reporting is a privilege rather than a civic responsibility.

However, it is a mistake to assume that vulnerability discovery inherently involves hostile or unethical behaviour. While in some cases identifying or validating a vulnerability may indeed require reporters to violate one or more laws, this is a consequence of shortcomings in our legal system rather than of any malicious intent on the part of the reporter.2 Moreover, in many cases, identifying a vulnerability can happen through perfectly legal use of a service. For example, the vulnerability I reported earlier this year was discovered and validated through normal use of my ebanking.

In addition, vulnerabilities are not only found in IT services. They can also be discovered in products. Identifying a vulnerability in a product almost never involves an action that could be considered illegal.3 This point is particularly important, since both NIS24 and the CRA5 include explicit provisions for reporting product vulnerabilities.

Most CERTs are far more accustomed to the dynamics surrounding service-related vulnerabilities. However, with the CRA soon coming into force, it is essential that both CERTs, and the policies guiding them, adapt to the different realities of product vulnerability reporting and handle them with equal seriousness and urgency.

The good intentions #

To be fair, the Cooperation Group’s interpretation actually comes from good intentions. As I stated above, it is true that in some cases, particularly when considering IT services, identifying or validating a vulnerability may involve actions that are theoretically illegal. In fact, this is widely recognised as one of the biggest issues governing vulnerability reporting. To address this, the Cooperation Group (i) calls for the creation of legal safe harbours for vulnerability disclosure, and (ii) suggests that CVD policies effectively serve as legal contracts between organisations and reporters. The logic is simple: “if you behave, we promise not to sue you.” This explains why CVD policies are framed as a set of rules and why we see similar problematic elements in Belgium’s national CVD legislation.

Again, at first glance, this approach makes sense. But it has serious problems.

First, relying on the threat of legal consequences is not only hostile toward reporters, but also ineffective in cases where the reporter does not actually need or value legal safeguards. As discussed earlier, there are many cases where reporters have no reason to fear legal repercussions, e.g., when reporting a vulnerability in a product. In such cases, if you want them to coordinate disclosure timelines, you cannot rely on coercion. This is not to say that CVD policies cannot set expectations or requests for reporters, but cooperation must be grounded in trust and mutual recognition, not imposed under the assumption of wrongdoing.

Second, when organisations can unilaterally impose rules on reporters, they gain enormous leverage. In practice, this often results in CVD policies that force reporters to accept restrictive terms, such as NDA-style clauses, just to be able to report a vulnerability. Belgium’s safe harbour legislation shows a similar pattern, and so does the EU guide. The latter even explicitly suggests that public disclosure sooner than one year after reporting is “unreasonable”, even though 90 days is the widely recognised industry timeline for vulnerability disclosure.

Third, this approach ignores the history of CVD. Traditionally, when a reporter agrees to delay publication of a vulnerability, this is seen as an act of goodwill and responsible citizenship, not something that organisations are inherently entitled to. Turning voluntary cooperation into forced compliance undermines the very spirit of CVD, and will likely lead to more uncoordinated vulnerability disclosures in the future.

CVD is not a bug bounty #

Where the NIS Cooperation Group’s definitions do make sense is in the context of a bug bounty programme. Unlike a CVD policy, a bug bounty programme is essentially a contract published by an organisation that says: “We invite you to look for vulnerabilities in our systems or products, under these conditions.”

These conditions can include obligations for both sides. For example:

  • obligations for the researcher, such as “don’t perform DDoS attacks”, and
  • obligations for the organisation, such as “if you find and report a vulnerability, we will pay you €1,000.”

In this framework, clauses about NDAs or requirements on researchers to identify themselves are perfectly reasonable. However, a bug bounty programme cannot be seen as a replacement for a CVD policy. They are fundamentally different.

The Cooperation Group seems to miss this distinction, as they define a reward programme as follows:

Reward programme: This is an optional element of a CVD policy which can offer different types of rewards for submitting a valid vulnerability report, such as a vulnerability reward programme – also known as a ‘bug bounty’ – or a recognition/gift programme.

By effectively blending the concepts of bug bounty programmes and CVD policies, the Cooperation Group is making a serious mistake. The consequence is that organisations may believe they only need what effectively amounts to a bug bounty programme to cover their CVD responsibilities under NIS2 and the CRA. This risks leaving many vulnerabilities unreported, discourages good-faith disclosure from security experts who aren’t interested in bounties, and ultimately weakens the EU’s ability to respond to security issues in a coordinated and transparent way.

We can even see this play out within the EU guide itself: the guide suggests that organisations can exclude systems or products from their CVD policies. In such cases, what is someone who found a vulnerability supposed to do? Not report the vulnerability at all, or resort to uncoordinated disclosure? While such scope restrictions are entirely appropriate in a bug bounty programme, they have no place in a CVD policy.

This is a real problem #

The language and ideas in the Cooperation Group’s document aren’t just theoretical; they have real-world consequences. We are already seeing these concepts mirrored in Belgian, Maltese, and Dutch CVD guidelines and legislation, with predictable results: security experts are increasingly hesitant to report vulnerabilities at all, openly questioning whether it is worth the legal and procedural risk. And when vulnerabilities go unreported, the cybersecurity of the entire Union is weakened.

How to move forward #

To get us out of this situation, the Cooperation Group will have to redefine the terms CVD policy and Reward programme.

I propose the following definitions:

CVD policy: A CVD policy is a public commitment by an organisation to receive and handle vulnerability reports in good faith, offering assurances that reporters acting responsibly will not face legal threats, while committing to investigate reports, communicate transparently, and coordinate disclosure timelines in order to reduce risk and protect users.

Reward Programme: A reward programme (such as a bug bounty or recognition scheme) is an optional incentive mechanism that may exist alongside, but never substitute for, a CVD policy. It sets out the scope and conditions under which researchers may look for and report vulnerabilities in a system or product, in exchange for monetary or non-monetary rewards.

These definitions reflect the real-world dynamics of vulnerability disclosure. They distinguish responsible reporting from invited testing, and they make space for both, without confusing one for the other.

If the EU wants the CRA and NIS2 to succeed, it must stop treating reporters as offenders and start treating them as essential partners. Until that happens, Europe will continue to lose out on valuable vulnerability reports. Not because security experts don’t care, but because the system makes it too risky or too frustrating to help.

Good policy does not force cooperation; it earns it.

[Image: The EU dangling the Sword of Damocles above a vulnerability reporter]


  1. And Article 11 of the original NIS directive.

  2. Computer crime laws tend to use very broad and vague terminology.

  3. The primary exception being DRM circumvention.

  4. From NIS2 Article 12: “The CSIRT designated as coordinator shall act as a trusted intermediary, facilitating, where necessary, the interaction between the natural or legal person reporting a vulnerability and the manufacturer or provider of the potentially vulnerable ICT products or ICT services, upon the request of either party.”

  5. From CRA Article 13(8): “Manufacturers shall have appropriate policies and procedures, including coordinated vulnerability disclosure policies”