Zero-Day Diplomacy: How Much Should Governments Disclose?


In recent years we’ve seen attacks like Stuxnet (2010), the WannaCry ransomware (2017), and NSO Group’s Pegasus spyware. All of them relied on zero-day bugs – software flaws the vendor doesn’t know exist, so no patch is available. Stuxnet used four unknown Windows bugs to sabotage Iran’s nuclear centrifuges; EternalBlue, an exploit built by the NSA, was leaked and powered WannaCry; and Pegasus exploited hidden iPhone and WhatsApp bugs to spy on activists. These cases show how powerful zero-days can be.
The big question is:
When a government finds a bug, should it quietly stash it for spying, or report it so everyone can fix it?
What is a Zero-Day?
Simply put, a zero-day is a software bug that nobody (not the maker, not the user) knows exists yet – so there have been “zero days” to fix it. “A zero-day is a vulnerability… unknown to its developers,” so attackers can exploit it until it’s patched. Hackers – and security researchers – hunt for these holes. When someone finds one, they might sell it on a secret market or tell the company so it can be fixed. Zero-days can fetch millions of dollars. There are three “markets” for zero-days:
White market: Researchers report the bug to the software maker (often via a bug bounty program) to get a reward. This helps everyone stay safe.
Gray market: Secret deals where governments or spies buy the bug to use for intelligence or cyberwar. The U.S. and allies are known to be major buyers.
Black market: Criminals trade or sell bugs to use in malware and hacking. They often prefer ready-made exploit kits.
In 2015, the government/spy market was estimated to be ten times the size of the legitimate bug-bounty market. In other words, states and crooks spend far more buying zero-days than software companies pay to get them fixed.
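To make the idea concrete, here is a deliberately tiny sketch in C of the kind of flaw that can turn into a zero-day. It is a classic unchecked buffer copy – a hypothetical illustration only, not any real exploit:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical, simplified flaw for illustration -- not a real zero-day.
 * The fixed-size buffer can be overrun because the length of the
 * attacker-controlled input is never checked. */
void handle_request(const char *input) {
    char buffer[64];
    strcpy(buffer, input);   /* no bounds check: input longer than the
                                buffer overwrites adjacent memory, which
                                an attacker can abuse to hijack the program */
    printf("handling: %s\n", buffer);
}
```

Until the vendor learns that such a bug exists and ships a patch, every copy of the program stays exploitable – defenders have had “zero days” to react.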
Government Stockpiling of Zero-Days
So why do countries hoard zero-days?
The lure is that a secret bug is like a cyber weapon: it can quietly spy on or disrupt an adversary. For example, the NSA found a Windows flaw and built the EternalBlue exploit around it, keeping it for its own tools. When that exploit leaked, it was used in global attacks:
Those are just some of the… vulnerabilities pilfered from the NSA’s super-secret stockpile,
The Atlantic reported in its coverage of WannaCry and EternalBlue. In effect, the NSA believed it had an edge – as one official put it, “If we found a vulnerability, and we alone can use it, we get the advantage.”
Other states do similar things. Israel’s NSO Group sells its Pegasus spyware – which has exploited zero-days in iPhones and WhatsApp – to government clients. Citizen Lab analysis has even confirmed real phones infected with Pegasus. China, Russia, and others are believed to keep secret bugs too (for hacking or espionage). The bottom line: many militaries and spy agencies treat zero-days as part of their arsenal. But when those secrets leak or get sold, ordinary users pay the price.
The Ethics Dilemma
Holding onto a zero-day has risks and benefits. If a government keeps a bug secret to use against a foe, it might catch a criminal or terror cell it couldn’t otherwise. But if the bug gets out (by theft or leak), it can be used by anyone – hurting innocent people and critical services. EternalBlue’s leak fueled WannaCry and the follow-on NotPetya attack, which between them shut down hospitals, phone networks, and even radiation monitors at Chernobyl. Critics say it’s unfair for citizens to suffer because a secret was kept.
Most security experts argue disclosure usually wins out. The U.S. Vulnerabilities Equities Process (VEP) explicitly says the default should be to “prioritize the public’s interest in cybersecurity” by disclosing bugs. As that policy notes, “in the vast majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest.” Disclosure lets software be patched, protecting everyone. Withholding is justified only by an overriding need (e.g., an active military or intelligence operation). Even the UN’s cyber norms say states should “report ICT vulnerabilities in a responsible manner.”
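Disclosure matters because it is what turns a flaw into a fix that every user can receive. Continuing the hypothetical sketch from earlier, the patch is simply a bounded copy:

```c
#include <stdio.h>
#include <string.h>

/* Patched version of the same hypothetical function: the copy is now
 * bounded, so oversized input can no longer overwrite adjacent memory. */
void handle_request(const char *input) {
    char buffer[64];
    strncpy(buffer, input, sizeof(buffer) - 1);  /* copy at most 63 bytes */
    buffer[sizeof(buffer) - 1] = '\0';           /* guarantee termination */
    printf("handling: %s\n", buffer);
}
```

A stockpiled bug, by contrast, leaves that fix unwritten for everyone – allies and adversaries alike.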
Still, not every country follows a clear rulebook. The U.S. VEP is public, but other governments often treat their bug stockpiles as top secret. In practice, decisions can be political. Some argue that keeping vulnerabilities helps law enforcement or defense – others counter that it just arms hackers. This disclose-or-stockpile debate has huge stakes: every unreported flaw is a potential disaster waiting to happen.
Global Diplomacy and Cyber Norms
On the world stage, there are efforts to set rules. The United Nations and other forums have developed voluntary cyber norms. For example, UN experts agreed on principles like protecting critical infrastructure and informing others about cyber incidents. One norm urges countries to help each other and report vulnerabilities responsibly.
France’s Paris Call (2018) gathered over a thousand supporters – from governments to tech companies – for a pledge on cyber stability. All EU nations signed on, but key players stayed out: “All of the EU member states are signatories, [but] the United States is not. Nor is India, China, or the Russian Federation.” In other words, major powers are currently spectators, not signatories. (India, for example, hasn’t joined the Paris Call or announced any “zero-day policy” of its own.) Some officials in Russia and elsewhere have even floated the idea of cyber arms-control treaties, but so far there’s no binding global deal.
Some analysts talk about zero-days as if they were “digital nukes,” since a single bug can cause massive havoc. But unlike nuclear weapons, there’s no international treaty banning them yet. Instead, states are left juggling loose agreements and divergent national practices.
India’s Perspective
For India, cybersecurity is a growing concern. Experts note that “India’s technological infrastructure is susceptible to cybersecurity risks and zero day attacks” just as in advanced countries. India has suffered state-sponsored hacks and ransomware too. Yet New Delhi has not publicly declared a stance on hoarding or disclosing bugs. It has no known equivalent of the U.S. VEP, and it didn’t sign France’s Paris Call. Instead, India focuses on defensive measures (like CERT-In advisories and legislation) and on international cooperation.
India is part of the Quad (with the U.S., Japan, and Australia) on cybersecurity issues. In that group, the emphasis is on practical steps: securing critical infrastructure and supply chains, and sharing threat intelligence. As one recent analysis puts it, engaging India on “shared cybersecurity interests — rather than shared values” is key. In short, India is building its cyber defenses and working with allies, but it has been “circumspect on…issues where it wishes to retain more autonomy” (like offensive cyber tools).
Conclusion
There are no easy answers. Keeping bugs secret may give one country a spy advantage, but history shows it can blow up globally. Most experts say the safer path is transparency: fix problems before they get weaponized. Even U.S. policy emphasizes patching unless there’s a clear need to wait. Meanwhile, international norms and forums are slowly encouraging openness (for example, by urging that vulnerabilities be disclosed responsibly).
Ultimately, zero-days force a tough choice: protect the public by patching software, or protect national security by stockpiling secrets? It’s a balance of risk and trust. As cyberattacks become more powerful, governments and citizens must weigh which path leads to a safer internet. Will the world ever agree on a “patch-and-share” approach, or will zero-day arsenals remain locked away in secret? The debate is far from over – and it will shape how safe our digital future really is.