More than 23,000 vulnerabilities were discovered and disclosed. While not all of them had associated exploits, it has become increasingly common for there to be a race to see who can be first to release an exploit for a newly announced vulnerability.
Instead of racing to be the first to publish an exploit, the security community should take a stance of coordinated disclosure for all new exploits.
Coordinated Disclosure vs. Full Disclosure
In simple terms, coordinated disclosure is when a security researcher coordinates with a vendor to alert them of a discovered vulnerability and give them time to patch before making their research public.
Nondisclosure is the policy of not releasing vulnerability information publicly, or sharing it only under a nondisclosure agreement.
For coordinated vulnerability disclosure, while there is no specific endorsed framework, Google's vulnerability disclosure policy is a commonly accepted baseline, and the company openly encourages use of its policy verbatim. Under that policy, Google notifies the vendor of the vulnerability immediately and publicly shares it 90 days after notification.
On the full disclosure side, the justification for immediate disclosure is that if vulnerabilities are not disclosed, users have no recourse to request patches and vendors have no incentive to release them, restricting users' ability to make informed decisions about their environments. Moreover, if vulnerabilities are not disclosed, malicious actors already exploiting them can continue to do so with no repercussions.
There are no enforced standards for vulnerability disclosure, and therefore timing and communication rely purely on the ethics of the security researcher.
Consumers have a right to know about vulnerabilities in the devices and software in their environments.
As defenders, we have an obligation to protect our customers, and if we want to ethically research and disclose exploits for new vulnerabilities, we must adhere to a policy of coordinated disclosure.
To be clear, reputation isn't the only reason security researchers release exploits; we're all passionate about our work and sometimes just like watching computers do the neat things we tell them to do.
This is not to say that researchers shouldn't publish their work; rather, they should publish it ethically, following the principle of responsible disclosure for both vulnerabilities and exploits.
We saw this recently with the ScreenConnect vulnerability: several security vendors raced to publish exploits, some within two days of the vulnerability's public announcement, as with one Horizon3 blog post. Two days is not nearly enough time for customers to patch critical vulnerabilities, and there is a difference between awareness posts and full deep dives into vulnerabilities and exploitation.
Exploit research exists to provide an understanding of all the potential angles from which a vulnerability could be exploited in the wild. That research should be performed and controlled internally, not publicly disclosed at a level of detail that benefits threat actors looking to leverage the vulnerability; publicly marketed exploit research from well-known researchers and research firms is routinely monitored by those same nefarious actors.
While the research is necessary, the speed and detail with which exploits are disclosed can do greater harm than good and defeat the efficacy of threat intelligence for defenders, especially given the reality of patch management across organizations.
This Cyber News was published on www.darkreading.com. Publication date: Thu, 30 May 2024 14:00:27 +0000