The invoice that almost paid itself to the wrong account…
The finance manager almost didn't flag it.
She was approving payments. Routine stuff. Then she noticed the bank details on an invoice were different from what she had on file.
She paused, then picked up the phone and called the supplier directly.
It was fraud. An attacker had impersonated the supplier over email, swapped the bank details on the invoice, and waited. The payment was sitting in the queue, ready to go.
Lucky she called.
That was a professional services company I worked with. Good revenue, lots of smart people. But for them, security was an overhead cost, full stop.
Every budget cycle, the same conversation. Security puts a control on the table. The business asks why they should spend money on something that hasn't happened yet. They're not a bank and they don't hold credit card numbers. The threat feels abstract.
There was a specific email security tool that kept coming up. It would have made email impersonation attacks significantly harder to pull off. The cost wasn't unreasonable. But it didn't make the cut. Year after year.
So the gap stayed open.
After the incident, I expected an acknowledgement that there was a direct line between "we chose not to buy that tool" and "an attacker almost stole our money."
It didn't happen.
The business was relieved. Grateful for the finance manager's instinct. And within weeks, back to treating security as overhead. The email security tool came up again in the next budget cycle. Shelved again.
The frustration I couldn't shake was this: nobody understood they were making a choice.
Every time a security budget line gets cut, the business is choosing a protection level. They're saying: at this level of investment, we accept this level of exposure. That's a legitimate trade-off. Companies make them all the time.
The problem is doing it silently. With no record and no ownership.
When an incident happens, the question becomes: why didn't security prevent this? The budget conversation from six months earlier doesn't come up. The CISO ends up defending a gap the business chose to leave open, except nobody framed it that way at the time.
A Protection-Level Agreement fixes that.
Protection-Level Agreements (PLAs)
A PLA is a formal business decision to invest in a measurable level of protection at a defined cost. The board agrees on a protection level. The CISO commits to delivering against it.
The cost is explicit and the trade-offs are documented.
If an incident falls within the tolerances of that agreed level, it's a consequence of a business decision that was made, understood, and signed off on.
In the scenario I described: if the company had a PLA covering email security controls, one of two things would have happened. Either the email security tool would have been included in the agreed protection level, and the exposure would have been closed. Or the company would have formally decided to accept that risk at their chosen investment level, making the incident a consequence of their signed-off decision.
Either way, someone's name is on it. The business owns it, because they made the call with eyes open.
If you've read my piece on the CARE model, this will feel familiar. The Adequate pillar is exactly this: security as a business choice with defined expectations, documented and agreed upon. PLAs are what make Adequacy real in practice.
The measurement tool is Outcome-Driven Metrics (ODMs). Instead of reporting how many vulnerabilities were found, you report against agreed protection outcomes. Mean time to remediation. Phishing simulation click rates over time. Metrics that tell the board whether you're delivering at the level they paid for.
That's a different conversation from a heat map full of amber ratings.
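To make the ODM idea concrete, here is a minimal, illustrative sketch of what reporting against agreed protection levels could look like. Every name, metric, and threshold below is a hypothetical example for illustration, not a standard or a real framework's API.

```python
# Illustrative sketch: reporting Outcome-Driven Metrics against the
# protection levels agreed in a PLA. All names and numbers are
# hypothetical examples.

from dataclasses import dataclass


@dataclass
class ProtectionTarget:
    name: str                  # the outcome the business signed off on
    agreed_level: float        # the threshold written into the PLA
    measured: float            # what was actually delivered this period
    lower_is_better: bool = True

    def met(self) -> bool:
        """Did delivery stay within the agreed protection level?"""
        if self.lower_is_better:
            return self.measured <= self.agreed_level
        return self.measured >= self.agreed_level


targets = [
    ProtectionTarget("Mean time to remediate critical vulns (days)", 14, 11),
    ProtectionTarget("Phishing simulation click rate (%)", 5.0, 7.2),
]

for t in targets:
    status = "on target" if t.met() else "below agreed level"
    print(f"{t.name}: {t.measured} vs agreed {t.agreed_level} -> {status}")
```

The point of the structure is the conversation it forces: each line pairs a signed-off threshold with a measured outcome, so a miss reads as "we are below the level the business agreed to fund", not as a free-floating amber rating.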
The company I worked with never had that conversation. They kept making implicit risk decisions while believing they were managing overhead. The near-miss didn't shift that.
Maybe nothing short of an actual loss would have.
But if you're a security manager, you know that feeling. You're in a budget meeting, watching a control get shelved, knowing exactly what risk is being accepted. You can't make the business understand what they're deciding, because they have no frame for thinking about it as a decision.
PLAs give you that frame. The trade-off becomes visible. Someone puts a signature next to a risk level.
When something goes wrong, "we made that call together" is a very different place to start than "why didn't security stop this?"
Have you ever been in a position where your organisation accepted a risk without realising they were accepting it? How did it play out when something eventually went wrong?
Remember: trust but verify.
