A new CVE, published for all to see. Systems are vulnerable in ways we didn't see coming. From that point forward, the race is on. Defenders spring into action and make a big deal of it on Patch Tuesday. But by then it's often too late: exploitation has already happened, and systems are already infiltrated. The vulnerability has already matured, and our existing safeguards sprang into action too late.
The oft-overlooked truth is that by the time a patch is issued, the vulnerability has already been weaponized against systems. In fact, by that point it is nearing the end of a lifecycle that started many phases before defenders caught up, with many different actors and handoffs along the way.
I have had the absolute pleasure of meeting so many people along the journey of building Desired Effect. I want to tell each and every one of them this truth. I'm often surprised by how few understand the full exploitation lifecycle, and I'm confounded that the role of researchers remains so critical yet so misunderstood.
As a 20+ year veteran of the software exploitation space, I take for granted how niche my community is. So, in this blog post, I'm sharing an educational primer on the journey of exploitation. Consider it the vulnerability equivalent of Schoolhouse Rock's "How a bill becomes a law."
I want to start out by pointing to a huge misconception. For decades vulnerability researchers have been demonized by vendors. You found a bug? Great, let’s slap you with a lawsuit. A scary cease-and-desist from a billion-dollar firm whose only interest is in protecting the brand’s reputation. A violation of terms of service, or of a EULA. This is complete garbage. It has fostered distrust in an entire generation of vulnerability researchers. But here’s the thing. Researchers do not create the risk. They don’t inject a flaw into an otherwise perfect product. Researchers are innovators. They aren’t breaking anything. They explore boundaries. They have a unique ability to put products into states that are unintended by their creators. States that are missed by the developers, the program managers, the unit tests, the red teams, the bug bounties.
At this stage, the researcher can reliably crash the vulnerable component; they understand the conditions under which the bug can manifest. Perhaps they can leverage a few makes and models of target platforms, or even emulated products, to do their experimentation. They may be able to control the target’s originally intended code flow and insert malicious instructions, even if temporarily (i.e., before the operating system realizes the exploit has put the vulnerable software into an unrecoverable state and terminates the code, and the proof of concept with it). If you’re in the industry, you may have heard the term “pop calc”: upon successful exploitation, the effect is to run the operating system’s calculator program, illustrating successful compromise without doing anything malicious.
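To make this stage concrete, here is a toy sketch, entirely hypothetical and invented for illustration (not any real product or exploit): a deliberately buggy parser, plus the narrow input condition a researcher would distill into a PoC that reliably puts the code into an unintended state.

```python
# Toy illustration (hypothetical code, not a real product or exploit):
# a buggy length-prefixed message parser, and the exact input condition
# that reliably "crashes" it. A researcher's PoC is essentially this:
# the smallest input that dependably triggers the unintended state.

def parse_message(data: bytes) -> bytes:
    """Parse a 1-byte length prefix followed by a payload.

    Bug: the declared length is trusted without checking it against
    the actual payload size -- the unintended state a researcher finds.
    """
    declared_len = data[0]
    payload = data[1:]
    if declared_len > len(payload):
        # Stand-in for a memory-safety fault (e.g. an out-of-bounds read).
        raise RuntimeError("out-of-bounds read")
    return payload[:declared_len]

def poc() -> bool:
    """Proof of concept: reliably trigger the bug, nothing more."""
    try:
        parse_message(bytes([255]) + b"hi")  # claims 255 bytes, sends 2
    except RuntimeError:
        return True  # repeatable "crash" -- the researcher's reliable result
    return False

if __name__ == "__main__":
    print(poc())                              # PoC triggers the bug every time
    print(parse_message(bytes([2]) + b"hi"))  # well-formed input still works
```

The point of the sketch is the asymmetry: the PoC proves the flaw exists and is repeatable, but it does nothing malicious, exactly the "pop calc" posture described above.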
This specific step deserves a longer-form post, and I promise to deliver one in the near future. But for now, please recognize a few inconvenient truths which have led me directly to the conclusion that there is a tremendous market opportunity, hence the formation of Desired Effect:
Hollywood likes to portray the lone genius who builds a solution on the fly. In reality, the person who finds the bug and the person who uses it are often different parties. So how does that operator get the bug? They need to acquire it. If you’re in China, this can mean nationalization, with awareness made possible by national hacking contests. If you’re most everywhere else, this can be a financial transaction between researcher and operator.
As already shared, vendors have historically treated researchers poorly, even when those researchers were not seeking any financial compensation. After decades of perceived mistreatment, repairing those relations still has a long way to go. So who buys exploits? People who are going to use them. And that pool is quite small.
Coupling vendor hostility with the historical fact that governments refused to even acknowledge they conducted offensive cyber operations, transactions were pushed to shady, decentralized bazaars (trade shows, the dark web) where brokers (such as myself) played middleman between two parties that didn’t want the other to know they existed. No one quite knew what they were getting. No one had visibility into the totality of the market. It was perilous! When a broker will not disclose the identity of the end purchaser, you run the risk of selling to someone you don’t want benefiting from your research. And recognizing the (slow) procurement cycle of any large bureaucracy, criminals could, and did, swoop in with an offer. “I may pay you less, but I’ll pay you today,” they said, and researchers frequently took that deal. That’s still the state today. Criminals can easily get their hands on proof-of-concept exploits without leaving a trace.
This is where the bad guys have their most potent head start. With a proof of concept in hand, they know about a flaw. They take that PoC and build code to do more than pop calc. They leverage control to steal data, upload intrusion software, and so on. They victimize. They have free rein to attack with impunity because the defenders don’t know there’s a flaw, and their defensive tech is often completely blind. Tooling won’t have signatures, event log correlations, etc. The bad guys are not just winning, they are dominating.
So we have an attacker going buck wild on unsuspecting victims. At some point, they will hit a honeypot or sensor. Eventually, they will do so with enough frequency or “intensity” to warrant some kind of investigation. A defensive research team will confirm the traffic is exploiting something, and put together data to issue advisories. The secret is finally out! We have a CVE! How long did it take to get here? Months. Years in some cases. After organizations were breached.
Defenders twiddle their thumbs in eager anticipation. A vendor patch is finally issued! Only now have we reached Patch Tuesday.
Owner-operators of vulnerable tech can apply vendor patches. But this, too, takes time. And that time may be increasing, rather than decreasing. As systems grow more complex and internal resources stay limited, patch timelines are getting longer. How long will it take to dig out? Long enough that there is an entire sub-industry called “vulnerability management” designed to help organizations prioritize the order in which to apply patches, under the premise that they’ll never be able to patch everything and will always be several steps behind. And while the vulnerability management space revolves around improving this process (based on risk, proliferation, and impact), until a patch is installed, the bug is present and systems are sitting ducks.
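As a rough sketch of what that prioritization looks like in practice, ranking might weigh base severity against active exploitation and asset exposure. The field names and weights below are invented for illustration; they are not any vendor’s actual scoring model.

```python
# Hypothetical sketch of risk-based patch prioritization. The fields and
# weights here are illustrative only, not a real vendor's formula.

from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float              # base severity, 0.0-10.0
    exploited_in_wild: bool  # known active exploitation
    internet_facing: bool    # asset exposure

def priority(f: Finding) -> float:
    """Fold risk, proliferation, and impact into one sortable score."""
    score = f.cvss
    if f.exploited_in_wild:
        score += 5.0   # active exploitation dominates raw severity
    if f.internet_facing:
        score += 2.0   # exposed assets tend to be attacked first
    return score

def patch_order(findings: list[Finding]) -> list[str]:
    """Return CVE IDs ordered by descending priority."""
    return [f.cve_id for f in sorted(findings, key=priority, reverse=True)]

# Illustrative backlog: the highest-CVSS bug is not the one being exploited.
backlog = [
    Finding("CVE-A", cvss=9.8, exploited_in_wild=False, internet_facing=False),
    Finding("CVE-B", cvss=6.5, exploited_in_wild=True, internet_facing=True),
    Finding("CVE-C", cvss=7.2, exploited_in_wild=False, internet_facing=True),
]
print(patch_order(backlog))  # the actively exploited, exposed bug jumps the queue
```

The design point is the one the paragraph above makes: raw severity alone is a poor queue, so an actively exploited medium-severity bug on an exposed asset can, and should, outrank a critical one that nobody is attacking.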
Hopefully this helps frame how much happens before most of the cybersecurity community has any awareness of vulnerabilities, or any ability to close them and mitigate the existing risk.
Keep in mind: From steps 04 to 10, the adversary is in complete control. Even as some defenders are learning and putting together actionable steps, the attackers have the upper hand. They are starting with a huge lead. That’s why defenders can’t keep up.
I’ll close out with a plug. Desired Effect creates a new shortcut for the defensive community. A bridge from Step 03 directly to Step 09. One that bypasses the weaponization phase. This is critical! The model of “get breached, respond” does not work. Defenders need to be proactive. They need to engage the vulnerability research community and help get the word out before attackers have the ability to victimize anyone. This is our Desired Effect.