I was feeling pretty good about myself.
Control effectiveness scores looked great. Control maturity scores were trending up. My ISMS was humming along nicely.
"I am so smart" actually crossed my mind.
Then my CISO made a suggestion: "We should get Carika to look at our processes and tell us what she finds."
Carika is a Process Analyst. A Six Sigma practitioner who makes a living asking deceptively simple questions and watching what happens to the people answering them.
She set up sessions with the team. For each process, she asked two questions.
First: "Tell me step by step what your process does."
We answered confidently. We knew our processes.
Then: "Why are you doing it that way?"
The second question is where things got quiet.
"Errm... not sure. It's just the way we've been doing it."
That came up more often than I'd like to admit.
Carika's assessment confirmed what I already knew: my controls were operating effectively. What it also surfaced was something I hadn't measured and probably wouldn't have thought to look for.
The processes running underneath those controls were full of waste.
The clearest example was our vendor due diligence process. It required a security analyst to manually copy and paste data across several platforms, multiple times per vendor. Tedious, repetitive work that nobody had questioned because it had always been done that way, and the control was passing.
Tedious, repetitive work done by humans produces mistakes. We had the error rates to prove it, once we bothered to look.
The fix was straightforward: automate the manual steps. When we did, we saved several hours a week and our error rates dropped close to zero.
Turns out I was perhaps not so smart!
Here's what that experience clarified for me.
ISO 27001 asks whether your controls are effective. Clause 10.1 demands continual improvement. But the standard doesn't tell you how to measure efficiency. It doesn't ask whether your vendor due diligence process is running at 60% of the capacity it could be, because someone decided years ago that copy-pasting across four platforms was just the process.
The Plan-Do-Check-Act cycle (PDCA) is what most ISMS teams reach for. And it works, in principle. Identify an improvement, implement it, check whether it worked, adjust. The problem is how most teams actually run it: they skip two things that make the exercise meaningful.
The first: establishing a measurable baseline before touching anything. The second: maintaining monitoring long enough to know the improvement held.
So you run the cycle. You close the corrective action. Three months later, the same problem creeps back in, and nobody notices because the monitoring stopped when the corrective action was closed.
Six Sigma's DMAIC methodology is built to close those gaps.
Define, Measure, Analyse, Improve, Control. It's been applied in manufacturing and service industries for decades to drive measurable, permanent process improvements.
It works in an ISMS context for the same reason it works everywhere else: it forces you to handle the data before you change anything, and it doesn't let you declare victory until the process is stable.
What Carika did with my team was a version of this. The two questions she asked were the Analyse phase in practice. But DMAIC has a beginning and an end that matter just as much as the middle.
Here's how it applies, using the vendor due diligence process as the running example.
Define
Write the problem down precisely.
"Our vendor due diligence process averages 4 hours per vendor, requires 23 manual steps, and produces a data entry error rate of 12%." That's a problem statement you can actually do something with.
At this stage you also define success. 2 hours per vendor? Error rate under 2%? That's a decision for whoever owns the risk, not just whoever owns the process.
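One habit that helps here is writing the success criteria down as something checkable rather than a sentence in a document. A minimal sketch, where the thresholds are the illustrative targets above and `meets_targets` is a name I've made up:

```python
def meets_targets(hours_per_vendor: float, error_rate: float) -> bool:
    """Definition of done from the Define phase.

    Thresholds are the illustrative targets from the text: 2 hours per
    vendor and an error rate under 2%. The risk owner sets these values,
    not the process owner.
    """
    return hours_per_vendor <= 2.0 and error_rate < 0.02

print(meets_targets(4.0, 0.12))  # the baseline fails, as it should
```

The point isn't the code; it's that "success" becomes a yes/no question anyone can re-ask later.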
Measure
Pull actual data before you change anything.
For the vendor process: how long does each step take, where do errors occur, which platform handoffs create the most friction? Whatever historical data you have.
You're building a baseline. A before picture. Without it, the after picture means nothing. Your ISMS likely has some of this data already under Clause 9.1 performance monitoring. This is the phase where you use it.
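The baseline doesn't need special tooling. A sketch of what that measurement might look like, with entirely made-up step names and timings standing in for whatever logs you actually have:

```python
# Hypothetical per-vendor records: minutes spent and errors made per step.
# In practice this comes from Clause 9.1 monitoring data or time logs.
records = [
    {"step": "collect questionnaire",    "minutes": 45, "errors": 0},
    {"step": "copy data to GRC tool",    "minutes": 60, "errors": 3},
    {"step": "copy data to risk record", "minutes": 50, "errors": 2},
    {"step": "review and sign-off",      "minutes": 40, "errors": 0},
]

total_minutes = sum(r["minutes"] for r in records)
total_errors = sum(r["errors"] for r in records)

print(f"Baseline: {total_minutes / 60:.2f} hours per vendor, "
      f"{total_errors} data entry errors across {len(records)} steps")
```

Crude, but it's a before picture you can compare an after picture against.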
Analyse
Find the root cause from the data.
Carika's second question is essentially the Analyse phase compressed into one sentence: "Why are you doing it that way?" A Pareto analysis will usually surface two to four causes driving the majority of the waste. In the vendor process, manual data transfer between platforms was doing most of the damage.
You target those. The rest is noise.
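The Pareto analysis itself is a few lines of code. The cause names and waste figures below are invented for illustration; the mechanism is just sorting by impact and cutting at the classic 80% mark:

```python
# Hypothetical waste estimates per cause, in hours lost per month.
waste_by_cause = {
    "manual data transfer between platforms": 14.0,
    "chasing vendors for missing answers":     4.0,
    "duplicate approval steps":                2.5,
    "report formatting":                       1.5,
}

total = sum(waste_by_cause.values())
cumulative = 0.0
vital_few = []
for cause, hours in sorted(waste_by_cause.items(), key=lambda kv: -kv[1]):
    cumulative += hours
    vital_few.append(cause)
    if cumulative / total >= 0.8:  # the classic Pareto cut-off
        break

print(vital_few)  # the causes to target; everything below the line is noise
```

With these numbers, two causes cover over 80% of the waste, which matches what we saw in practice.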
Improve
Design the fix for the specific root cause you confirmed.
In our case: automate the data transfers. Not redesign the entire due diligence process. Not buy a new GRC platform. One targeted change against a confirmed root cause, tested against the baseline already established.
Targeted changes are faster to implement and far easier to evaluate.
Control
This is where most ISMS teams quit.
Once the improvement is live, the Control phase sets a threshold and keeps monitoring. Permanently. Not until the corrective action is closed. Permanently.
For the vendor process: if the error rate climbs above 3% for 2 consecutive months, or processing time per vendor exceeds 2.5 hours, the problem is formally re-opened. Specific owner. On the ISMS calendar. Wired into Clause 9.1 performance monitoring.
That's Statistical Process Control applied to an ISMS. You're defining the conditions under which the process counts as broken, so you know before an auditor tells you.
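The re-open trigger is mechanical enough to script. A sketch with hypothetical monthly figures, using the thresholds described above (error rate above 3% for two consecutive months, or more than 2.5 hours per vendor):

```python
# Hypothetical monthly metrics from ongoing Clause 9.1 monitoring.
monthly = [
    {"month": "2024-01", "error_rate": 0.012, "hours_per_vendor": 1.9},
    {"month": "2024-02", "error_rate": 0.034, "hours_per_vendor": 2.0},
    {"month": "2024-03", "error_rate": 0.031, "hours_per_vendor": 2.1},
]

def should_reopen(monthly):
    """Return True when the Control-phase trigger fires."""
    # Trigger 1: error rate above 3% for two consecutive months.
    for prev, curr in zip(monthly, monthly[1:]):
        if prev["error_rate"] > 0.03 and curr["error_rate"] > 0.03:
            return True
    # Trigger 2: processing time exceeds 2.5 hours in any month.
    return any(m["hours_per_vendor"] > 2.5 for m in monthly)

print(should_reopen(monthly))  # True here: Feb and Mar both breach 3%
```

The value isn't the script; it's that "is this process broken?" has a precise, automatable answer with a named owner behind it.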
This is also what "continual" actually means in Clause 10.1. An ongoing state, not a periodic event.
What Carika's assessment showed me was that I'd been measuring the right things for ISO 27001 compliance, and missing a whole category of problem. Effective controls can sit on top of genuinely wasteful processes. The standard doesn't push you to look for that.
DMAIC does. And the Control phase is what separates a real improvement from a completed corrective action. The improvement happened. But it holds because someone is watching, with defined thresholds and a formal trigger to re-open.
There's also a certification angle worth noting.
When your next surveillance audit comes around and the auditor asks for evidence of continual improvement, you can show them something with a different character than a closed CAR. You can show them a before baseline, a confirmed root cause, a measured delta, and an active control chart. That's a complete story, and it satisfies Clause 10.1 in a way that's genuinely difficult to argue with.
The tools you need aren't complicated. Control charts work in Excel. Pareto analysis is a sorted bar chart. FMEA (Failure Mode and Effects Analysis), if you want to go deeper on risk prioritisation, is a structured table.
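And if you'd rather script the control chart than build it in Excel, the limits are just the centre line plus or minus three standard deviations. A simplified sketch (textbook SPC estimates sigma from the moving range on an individuals chart; I'm using the plain sample standard deviation here, and the weekly figures are made up):

```python
from statistics import mean, stdev

# Hypothetical weekly error rates after the improvement went live.
samples = [0.010, 0.012, 0.008, 0.011, 0.009, 0.013, 0.010, 0.031]

centre = mean(samples[:-1])          # centre line from the stable period
sigma = stdev(samples[:-1])
ucl = centre + 3 * sigma             # upper control limit
lcl = max(centre - 3 * sigma, 0.0)   # lower limit, floored at zero

out_of_control = [s for s in samples if s > ucl or s < lcl]
print(out_of_control)  # the 3.1% week falls outside the limits
```

Any point outside the limits is your signal to re-open, before the auditor finds it for you.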
The hard part is the habit change. Agreeing that "done" means stable and monitored, not just implemented.
Carika taught me that my controls being green said nothing about whether my processes were efficient. The maturity scores didn't catch that. The PDCA cycle didn't either. DMAIC would have.
Which phase of DMAIC do you think your ISMS is weakest at? My guess is most will say Control. But I'm curious where yours actually breaks down.
Remember - trust but verify.
Before You Go
Did you find this useful? I would genuinely like to know.
If you did, please send me a reply and let me know! And if there are other topics you want to know more about, let me know that too!
