Security budgets keep rising, yet many organisations are not materially improving their risk posture. Gartner projects worldwide end-user spending on information security to reach about $213B in 2025 (up from $193B in 2024) and continue growing in 2026. That growth is real, but it often translates into more tooling rather than better security outcomes.
Meanwhile, the downside remains brutal. IBM’s 2024 Cost of a Data Breach report put the global average breach cost at $4.88M. And breach patterns continue to emphasise the same fundamentals: the Verizon DBIR consistently shows a strong “human element” and credentials as a primary failure mode (e.g., 68% of breaches involved the human element in the 2024 DBIR dataset).
So the budgeting question is not “How much are we spending?” It’s “Are we buying down risk, or buying noise?”
The core budgeting failure: spending isn’t tied to loss drivers
Most environments lose to the same few root causes:
- Credential compromise / identity abuse (stolen credentials, session hijack, MFA bypass, privilege escalation).
- Unpatched or exposed attack surface (internet-facing apps, vulnerable edge devices, weak configuration hygiene).
- Low-fidelity detection and slow containment (poor telemetry, alert overload, weak incident muscle memory).
- Cloud and SaaS sprawl (misconfigurations, over-permissioned identities, unmanaged data).
If your budget isn’t explicitly mapped to reducing these loss drivers, it will drift into waste.
Where cybersecurity money is wasted
Tool sprawl and overlapping point solutions
Tool sprawl is the most common budget sink because it feels like progress: more dashboards, more agents, more “coverage.” In practice, it increases complexity and dilutes ownership.
A Barracuda survey found 65% of IT/security professionals say their organisations are juggling too many security tools, and 53% say their tools cannot be integrated, creating fragmented environments. Fragmentation drives operational drag: more handoffs, duplicated triage, inconsistent policy enforcement, and gaps between products.
How this wastes money:
- Paying twice (or five times) for the same control (endpoint, email, DLP, vulnerability scanning, CSPM, etc.).
- Adding tools faster than you can integrate them into detection engineering and response playbooks.
- Creating “alert theatre”: lots of red, little containment.
Practical test: If you removed a tool tomorrow, could you articulate exactly which detection rules, response actions, or preventive controls would stop working? If not, it’s probably shelfware.
Shelfware: unused licenses and unused features
Security is uniquely vulnerable to shelfware because:
- Purchases are often justified by fear, audits, or “industry best practice,” not measured outcomes.
- Implementation is hard; operations teams are understaffed; integrations don’t happen.
- Renewals happen by inertia.
Shelfware is not just a licensing problem. It’s also capability shelfware: you own EDR but don’t have mature endpoint isolation procedures; you own SIEM but don’t tune detections; you own PAM but admins still have standing privileges.
Buying “visibility” without funding response
A common anti-pattern: invest heavily in SIEM, XDR, NDR, UEBA—then underfund incident response, detection engineering, and threat hunting.
Visibility without response capacity is a tax:
- More telemetry generates more alerts.
- More alerts require more triage.
- Under-resourced teams respond by suppressing alerts or ignoring them.
The outcome is predictable: noisy SOC, low confidence, slow containment—exactly what drives breach cost.
Compliance spend that doesn’t reduce real risk
Compliance is necessary. But “audit-pass security” often over-allocates budget to evidence production, documentation tooling, and checkbox controls, while under-allocating to what attackers exploit daily: identity, patching, email security, and secure configuration.
If your spend is optimised to satisfy a framework rather than reduce the likelihood/impact of credential theft or ransomware, you’re budgeting for optics.
Premium consulting with no measurable transfer of capability
External expertise is valuable. The waste happens when engagements produce:
- slide decks instead of implementable architecture,
- recommendations without an engineering backlog,
- no operational handover (runbooks, tuning guides, measurable SLAs/SLOs).
If the engagement doesn’t leave your team more capable and your environment demonstrably safer, it’s not security—it’s procurement.
Cloud security spend that doesn’t address identity and misconfigurations
Cloud risk is frequently an identity and configuration problem masquerading as a tooling problem. Many organisations buy cloud security platforms but don’t fix:
- over-permissioned IAM,
- lack of conditional access,
- weak secrets hygiene,
- lack of guardrails (policy-as-code, baseline hardening).
A cloud security survey cited by Fortinet highlighted common weaknesses, including IAM issues and misconfigurations, and pointed to budget-share and maturity gaps in cloud security postures. The key takeaway: buying cloud tools doesn’t automatically create cloud discipline.
Where the money actually matters
If you want measurable risk reduction, fund the capabilities that break breach chains.
Identity security (because credentials are the front door)
Credential abuse is foundational to modern intrusion. The Verizon DBIR explicitly calls out stolen credentials as dominant in key attack patterns (e.g., web application attacks).
Spend that tends to pay off:
- Phishing-resistant MFA (FIDO2/WebAuthn) for admins and high-risk users first.
- Conditional access (device posture, risk-based sign-in, geo/ASN anomaly policies).
- PAM with removal of standing privileges (JIT/JEA, break-glass discipline).
- Secrets management and rotation automation.
- Identity governance (joiner/mover/leaver correctness, access reviews that actually revoke).
If you’re choosing between another tool and hardening identity, identity usually wins.
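As a worked example of tracking the identity spend above, here is a minimal sketch that computes phishing-resistant MFA coverage for privileged accounts from an identity-provider export. The record schema and field names are assumptions for illustration, not any vendor’s API:

```python
# Hypothetical sketch: measure phishing-resistant MFA coverage for privileged
# accounts from an identity-provider export. Field names are illustrative.
PHISHING_RESISTANT = {"fido2", "webauthn"}

def mfa_coverage(accounts):
    """Return (covered, total, percent) for privileged accounts with at least
    one phishing-resistant factor registered."""
    privileged = [a for a in accounts if a.get("privileged")]
    covered = [a for a in privileged
               if PHISHING_RESISTANT & {m.lower() for m in a.get("mfa_methods", [])}]
    total = len(privileged)
    pct = 100.0 * len(covered) / total if total else 0.0
    return len(covered), total, pct

accounts = [
    {"user": "admin1", "privileged": True,  "mfa_methods": ["FIDO2"]},
    {"user": "admin2", "privileged": True,  "mfa_methods": ["sms"]},  # weak factor
    {"user": "staff1", "privileged": False, "mfa_methods": []},
]
covered, total, pct = mfa_coverage(accounts)
print(f"{covered}/{total} privileged accounts on phishing-resistant MFA ({pct:.0f}%)")
```

The point of a metric like this is that it is directly fundable: every dollar spent on hardware keys or WebAuthn rollout moves the number.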
Attack surface and vulnerability reduction (especially internet-facing)
Most organisations still operate with:
- incomplete asset inventory,
- slow patch SLAs,
- unmanaged edge services,
- sprawling SaaS.
High-return spend:
- asset discovery + ownership tagging,
- vulnerability management tied to exploitability and exposure (EPSS/KEV-style prioritisation),
- patch automation and maintenance windows that actually happen,
- secure baseline configuration and drift control.
This reduces the number of ways an attacker can get in without relying on “detect everything.”
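EPSS/KEV-style prioritisation can be sketched in a few lines. The scoring weights below are illustrative assumptions, not a standard; the idea is simply that known-exploited and internet-facing vulnerabilities jump the queue ahead of raw severity:

```python
# Illustrative sketch (not a vendor API): rank vulnerabilities by exploit
# probability (EPSS-style), known exploitation (KEV-style), and exposure.
def priority_score(vuln, kev_ids):
    score = vuln["epss"]              # 0..1 estimated probability of exploitation
    if vuln["cve"] in kev_ids:        # known exploited in the wild -> jump the queue
        score += 1.0
    if vuln.get("internet_facing"):   # exposed attack surface weighs heavily
        score += 0.5
    return score

def prioritise(vulns, kev_ids):
    return sorted(vulns, key=lambda v: priority_score(v, kev_ids), reverse=True)

kev = {"CVE-2024-0001"}  # hypothetical KEV membership
vulns = [
    {"cve": "CVE-2024-0001", "epss": 0.10, "internet_facing": False},
    {"cve": "CVE-2024-0002", "epss": 0.92, "internet_facing": True},
    {"cve": "CVE-2024-0003", "epss": 0.30, "internet_facing": False},
]
for v in prioritise(vulns, kev):
    print(v["cve"], round(priority_score(v, kev), 2))
```

The design choice worth copying is the queue discipline, not the exact weights: exploitability and exposure outrank CVSS base scores when deciding what to patch this week.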
Detection engineering and response capability (not just tools)
Breach cost is heavily influenced by how quickly you identify and contain. You can’t buy that as a SKU; you build it as an operational capability.
Fund:
- detection engineers to write and tune detections aligned to MITRE ATT&CK,
- log engineering (clean telemetry, good parsing, sane retention),
- incident response playbooks, tabletop exercises, and on-call readiness,
- measured SOC outcomes (MTTD/MTTR, true positive rate, containment time).
Tools matter, but people and process determine whether tools work.
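Measuring SOC outcomes is mostly arithmetic over incident timestamps. A minimal sketch, assuming a simple incident record with occurrence, detection, and containment times (the schema is hypothetical):

```python
# Hedged sketch: compute mean time to detect (MTTD) and mean time to respond
# (MTTR) from incident timestamps. The record schema is an assumption.
from datetime import datetime

def _hours(start, end):
    return (end - start).total_seconds() / 3600

def soc_metrics(incidents):
    mttd = sum(_hours(i["occurred"], i["detected"]) for i in incidents) / len(incidents)
    mttr = sum(_hours(i["detected"], i["contained"]) for i in incidents) / len(incidents)
    return mttd, mttr

incidents = [
    {"occurred": datetime(2025, 1, 1, 0, 0),
     "detected": datetime(2025, 1, 1, 4, 0),
     "contained": datetime(2025, 1, 1, 10, 0)},
    {"occurred": datetime(2025, 2, 1, 0, 0),
     "detected": datetime(2025, 2, 1, 2, 0),
     "contained": datetime(2025, 2, 1, 6, 0)},
]
mttd, mttr = soc_metrics(incidents)
print(f"MTTD: {mttd:.1f}h, MTTR: {mttr:.1f}h")
```

If these numbers are not trending down quarter over quarter, the visibility spend is not converting into response capability.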
Resilience: backups, recovery, and segmentation (ransomware reality)
If you assume you will never have a successful intrusion, your budget is fantasy.
Spend that pays:
- immutable/offline backups, tested restores, and recovery time objectives that reflect reality,
- network segmentation and egress controls,
- hardening of AD/Entra ID and tiering of admin systems,
- rapid isolation capability (endpoint and identity).
This is what turns ransomware from an existential crisis into a bad week.
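“Tested restores” only counts if the tests are tracked. A small sketch under assumed fields, reporting restore success rate and whether successful restores stay within the stated RTO:

```python
# Sketch under assumptions: track restore-test outcomes to report restore
# success rate and whether time-to-restore meets the stated RTO.
def restore_report(tests, rto_hours):
    succeeded = [t for t in tests if t["success"]]
    rate = 100.0 * len(succeeded) / len(tests)
    within_rto = all(t["hours"] <= rto_hours for t in succeeded)
    worst = max((t["hours"] for t in succeeded), default=None)
    return rate, within_rto, worst

tests = [
    {"system": "crm",   "success": True,  "hours": 3.5},
    {"system": "erp",   "success": True,  "hours": 9.0},   # blew the RTO
    {"system": "files", "success": False, "hours": None},  # restore failed
]
rate, within_rto, worst = restore_report(tests, rto_hours=8)
print(f"restore success {rate:.0f}%, worst restore {worst}h, within RTO: {within_rto}")
```

A backup that has never been restored is a hope, not a control; this kind of report makes the gap visible before ransomware does.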
Secure-by-default engineering (reduce demand on the SOC)
A mature security program budgets into engineering guardrails:
- secure CI/CD (code scanning that’s tuned, secrets scanning that blocks, signed artifacts),
- infrastructure-as-code with policy enforcement,
- hardened golden images and baseline templates,
- SSO enforcement and SaaS approval workflows.
This reduces preventable incidents and alert volume.
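Policy-as-code guardrails are conceptually simple: evaluate declared resources against a baseline before they deploy. A minimal sketch with an assumed resource schema (real implementations would use a tool such as OPA or a cloud-native policy engine):

```python
# Minimal policy-as-code sketch (assumed resource schema, not a real IaC tool):
# fail the deployment when a declared resource violates baseline guardrails.
def violations(resource):
    found = []
    if resource.get("public_access"):
        found.append("public access enabled")
    if not resource.get("encryption_at_rest"):
        found.append("encryption at rest disabled")
    if "owner" not in resource.get("tags", {}):
        found.append("missing owner tag")
    return found

def gate(resources):
    """Return only the resources that fail, mapped to their violations."""
    report = {r["name"]: violations(r) for r in resources}
    return {name: v for name, v in report.items() if v}

resources = [
    {"name": "logs-bucket", "public_access": False, "encryption_at_rest": True,
     "tags": {"owner": "platform"}},
    {"name": "scratch-bucket", "public_access": True, "encryption_at_rest": False,
     "tags": {}},
]
for name, issues in gate(resources).items():
    print(f"BLOCK {name}: {', '.join(issues)}")
```

Every misconfiguration blocked at deploy time is an alert the SOC never has to triage.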
A budgeting model that prevents waste
Step 1: Budget against your top loss scenarios
Write 6–10 concrete scenarios (not vague “cyber risk”):
- “Admin credential theft leads to domain takeover”
- “Ransomware via exposed VPN/edge device”
- “Cloud storage exposure + data exfil”
- “SaaS token theft + invoice fraud”
Then map each scenario to preventive, detective, and responsive controls, with owners and maturity targets.
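That mapping can live in a simple data structure that is easy to review in budget discussions. All scenario names, controls, owners, and maturity targets below are hypothetical:

```python
# Illustrative sketch: map each loss scenario to preventive/detective/
# responsive controls with an owner and a maturity target (all hypothetical).
scenarios = {
    "admin credential theft -> domain takeover": {
        "preventive": ["FIDO2 MFA for admins", "tiered admin workstations"],
        "detective":  ["impossible-travel sign-in alerts"],
        "responsive": ["identity isolation runbook"],
        "owner": "IAM team", "maturity_target": 4,
    },
    "ransomware via exposed edge device": {
        "preventive": ["edge patch SLA", "attack-surface scanning"],
        "detective":  ["EDR lateral-movement detections"],
        "responsive": ["segmentation + immutable backups"],
        "owner": "infra team", "maturity_target": 3,
    },
}

def coverage_gaps(scenarios):
    """Flag scenarios missing any control layer or a named owner."""
    gaps = {}
    for name, s in scenarios.items():
        missing = [layer for layer in ("preventive", "detective", "responsive")
                   if not s.get(layer)]
        if not s.get("owner"):
            missing.append("owner")
        if missing:
            gaps[name] = missing
    return gaps

print(coverage_gaps(scenarios))
```

A budget line that maps to no scenario in this structure is the first candidate for cutting; a scenario with an empty control layer is the first candidate for funding.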
Step 2: Force every line item to answer one question
What measurable capability improves if we fund this?
Examples of measurable capabilities:
- % of privileged accounts on phishing-resistant MFA
- patch SLA compliance for critical internet-facing assets
- MTTD/MTTR for high-severity incidents
- backup restore success rate and time-to-restore
- reduction in unresolved critical misconfigurations
If you cannot measure the outcome, you are probably buying vibes.
Step 3: Consolidate platforms where it reduces operational load
Consolidation is not a goal; operability is the goal. Consolidate when it:
- reduces agent count,
- improves correlation and response automation,
- reduces contract/vendor overhead,
- improves coverage consistency.
Do not consolidate into a platform your team cannot operate.
Step 4: Reserve budget for “closing the loop”
Common failure: buying controls but not funding:
- integration,
- tuning,
- runbooks,
- training,
- exercises,
- ongoing maintenance.
If you can’t fund the last mile, you can’t justify the purchase.
A blunt rule of thumb
If your security budget is increasing but you still have:
- weak MFA for admins,
- poor asset inventory,
- slow patching of internet-facing systems,
- untested restores,
- a SOC drowning in alerts,
then you are not underfunded. You are misallocated.
Security spending is only “expensive” when it doesn’t change outcomes. The objective is not more tools; it’s fewer successful intrusion paths, faster containment, and recoverable impact when prevention fails.