I always thought my reports were pretty good (or at least reasonably good). But looking back...I think they were actually pretty bad! They weren’t providing the value I thought they were.
For years, whenever I put together a report for management, it was packed with RAG statuses, heat maps, and trend lines (I looove a good trend line!). There is something deeply satisfying about watching a metric move in the right direction over time and being able to point to it and say, "See? Progress."
Then I had a moment where I realised I was delusional about the impact my reports were having.
When I recently opened the viewing stats for one of my reports, I could see that people were only reading a few specific pages and skipping the rest. Some never even opened the document.
The people who were supposed to read those reports did not care. Not because they were disengaged or irresponsible, but because the information I was presenting, as polished and colour-coded as it was, was not telling them what they actually needed to know. I was making them do the work. I was handing a senior executive a heat map and expecting them to translate it into a business decision on their own.
That is not their job. It is ours.
This is the core problem with how most cybersecurity teams report upward: we think we should be giving as much detail as possible, when we should be cutting to the main issues. We speak in technical dialect when we need to be speaking in business language. We show dashboards when we should be telling stories. We report activity when we should be demonstrating outcomes.
The good news is that there is a practical framework for fixing this. It is called the CARE model, and it shifts the entire framing of how we communicate about cybersecurity from "here is what we did" to "here is what it means for the business."
It took my last report from what used to be a 30-slide deck down to just 5 slides. When I told the CFO my report would only be 5 slides from now on, I saw visible relief on his face!
What CARE Actually Stands For
CARE is a four-pillar model designed to give boards and executive teams a meaningful way to assess whether their cybersecurity programme is doing its job. The pillars are Consistent, Adequate, Reasonable, and Effective. Think of it as a reality check that keeps security investment tethered to actual business need rather than chasing an ever-expanding threat landscape with no clear finish line.
Let me walk through each one and explain why it matters in a reporting context.
Consistent: Answering the "Are We Stable?" Question
One of the most persistent problems in security communication is the idea that cybersecurity is never finished. While that is technically true from a threat perspective, it is a terrible message to give to a board. It sounds like you are asking for unlimited resources to fight an unwinnable war.
A better framing is stability. The question is not "are we done?" but "are we executing against defined controls and hitting agreed targets?" That is a question a board can actually work with.
Consistent reporting means you have a programme with defined controls, you monitor them regularly, and you can show that performance against those controls is predictable. This aligns closely with the "Govern" function in NIST CSF 2.0, which places strategy and monitoring at the centre of a mature programme rather than treating governance as a checkbox.
When you report consistently, you move from reactive incident summaries to a stable narrative: these are our targets, this is where we are, this is what has changed since last quarter and why. That is a story a CFO or a board director can follow without a cybersecurity degree.
Adequate: Security as a Business Choice, Not a Technical Imperative
Here is a reframe that tends to make security professionals uncomfortable: the right level of security is not the maximum technically possible level. It is the level that meets the expectations of your customers, regulators, and shareholders.
Adequacy means having a structured conversation with stakeholders about what level of protection is expected, documenting it, and then managing to that agreed level. The tool for doing this is a Protection Level Agreement, or PLA, which works similarly to a service level agreement in IT operations. You define what protection you are committing to deliver, and you report against it.
This matters a great deal in the current regulatory environment. Under emerging requirements around materiality and board-level risk oversight, directors are increasingly expected to demonstrate that they understand and oversee the organisation's risk management strategy. A PLA gives them something concrete to point to. It shows that the level of security investment was a deliberate business decision made with appropriate oversight, not a figure someone in IT arrived at independently.
Reporting on adequacy means shifting the conversation from "we are assessing threats" to "we are meeting the protection levels we committed to." That is a significant change in posture, and it resonates with board-level audiences in a way that vulnerability counts simply do not.
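There is no standard format for a PLA, but to make the idea concrete, here is a minimal sketch of what "reporting against an agreed level" could look like in code. The line items, field names, and target values are all hypothetical, chosen only to illustrate the mechanic: commitments are documented, measurements are compared against them, and the report states plainly whether each agreed level is being met.

```python
# Hypothetical PLA: each line item records the level the business agreed to
# and the latest measured value. Names and numbers are illustrative only.
pla = {
    "phishing_click_rate":    {"committed": 0.05, "measured": 0.08, "lower_is_better": True},
    "critical_patching_30d":  {"committed": 0.90, "measured": 0.87, "lower_is_better": False},
}

def pla_status(entry):
    """Return 'met' or 'below agreed level' for a single PLA line item."""
    if entry["lower_is_better"]:
        met = entry["measured"] <= entry["committed"]
    else:
        met = entry["measured"] >= entry["committed"]
    return "met" if met else "below agreed level"

for name, entry in pla.items():
    print(f"{name}: {pla_status(entry)}")
```

The point of the structure is not the code itself but the posture it forces: every metric in the report exists because the business committed to a level, not because a tool happened to produce it.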
Reasonable: Justifying What You Spend
Budget conversations are where a lot of CISOs lose credibility. When challenged on spend, the instinctive response is often to point to a growing threat landscape and argue that more is always better. That argument rarely lands well with a CFO.
Reasonableness reframes the spend conversation entirely. If your protection levels are being met, then the cost of meeting them is simply the price of performance. It is not an IT expense to be scrutinised, it is the cost of the business decision the board made when they agreed to those protection levels.
This is where PLAs become genuinely powerful from a defensibility standpoint. If an incident occurs and it falls within the tolerances of the agreed protection level, that is a business risk that was understood and accepted, not a control failure. That distinction matters enormously when boards start asking hard questions after an incident. It moves the conversation from "why didn't security prevent this?" to "we accepted this risk at this level, and here is how we are responding."
Quantitative analysis, specifically the FAIR model (Factor Analysis of Information Risk), adds another layer of credibility here. FAIR lets you translate protection level decisions into financial terms, specifically Annualised Loss Expectancy. When you can show a board that a particular control costs less than the expected annual financial exposure it prevents, you have made a defensible, financially grounded case for that investment. That is a very different conversation from "we need this because the threat landscape is complex."
Effective: Measuring What Actually Matters
This is where most reporting fails most visibly. Effectiveness should be measured by outcomes, not by activity. Yet the vast majority of security dashboards are dominated by activity metrics: number of vulnerabilities found, number of patches applied, number of training completions.
These are process metrics. They tell you that work is being done. They do not tell you whether the organisation is better protected.
Outcome-Driven Metrics, or ODMs, are the alternative. Instead of reporting how many vulnerabilities were identified, you report mean time to remediation. Instead of tracking how many people completed security awareness training, you track phishing simulation click rates over time, which tells you whether behaviour is actually changing.
The distinction matters because executives do not think in process terms. They think in outcomes. "We closed 87% of critical vulnerabilities within 30 days" is an outcome. "We identified 1,400 vulnerabilities this quarter" is a number that raises more questions than it answers.
ODMs give stakeholders a direct line of sight into how well the organisation is protected. They are the answer to the "so what?" question that every executive is silently asking every time they look at a security dashboard.
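As a sketch of how little work the ODM shift actually requires, here is a toy calculation using invented remediation data. The raw activity number (ten vulnerabilities handled) says little on its own; the two derived figures are the outcomes an executive can act on.

```python
from statistics import mean

# Hypothetical records: days from detection to fix for critical vulnerabilities.
days_to_remediate = [5, 12, 18, 25, 31, 9, 44, 15, 22, 28]

# Outcome metrics, not activity counts:
mttr = mean(days_to_remediate)
within_30 = sum(d <= 30 for d in days_to_remediate) / len(days_to_remediate)

print(f"Mean time to remediation: {mttr:.1f} days")   # 20.9 days
print(f"Closed within 30 days: {within_30:.0%}")      # 80%
```

"80% of critical vulnerabilities closed within 30 days against a 90% target" answers the "so what?" question directly; "we handled ten vulnerabilities" does not.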
Putting It Into Practice: The Reporting Cadence
Adopting the CARE model does not require rebuilding your entire reporting structure from scratch. It requires a shift in what you choose to emphasise and how you frame it.
A practical cadence looks something like this. Quarterly updates should focus on ODM performance, any changes to protection levels, and a brief narrative on what has shifted and why. Annual reviews should assess strategic alignment, revisit whether the agreed protection levels still reflect the business's risk appetite, and look at programme effectiveness over the longer term.
The one-page executive dashboard is a valuable tool here. RAG statuses are not inherently bad; used correctly, they tell a quick story. The difference is in the narrative that accompanies them. A red rating next to "phishing resilience" means nothing on its own. A red rating with the sentence "we are currently below our agreed protection level for human-vector attacks, and here is our 90-day plan to close that gap" is actionable information.
The goal is not to eliminate visual tools. It is to make sure those tools are communicating decisions and outcomes rather than presenting raw data and expecting the reader to do the analysis.
The Shift That Changes Everything
The CARE model is ultimately asking CISOs and security leaders to take on a different kind of responsibility. Not a technical one, but a communication one.
It means translating complexity into clarity. It means owning the "so what?" rather than leaving it to the board to figure out. It means treating every metric as an answer to a business question, not just a data point to be reported.
The executives in your boardroom are not failing to engage with cybersecurity because they are not smart enough to read a heat map. They are disengaging because nobody has translated the heat map into something that connects directly to the decisions they are responsible for making.
That translation is the job. And the CARE model gives you a structured way to do it.
Thanks for reading! As a thank-you, here is a fun fact for you:
The First PC Virus
In 1986, two brothers in Pakistan created the Brain virus to stop people from pirating their medical software. It unintentionally spread worldwide.

Do you have a fun fact of your own? Hit reply and tell me!
