by Ciaran McHale
26 February 2024
Who's to blame, the analyst or the tools?

Partner content

A colleague once told me, “breaches occur not because of the analyst but because of the tooling not being fit for purpose”. I wanted to explore this idea in more detail because I quite frequently come across articles and op-ed pieces describing in detail the “human element” behind a breach. Many claim that around 45 per cent of breaches, hacks and incidents were the result of a “lack of training”.

While most of these claims focus on regular employees instead of cybersecurity analysts, it’s still an interesting point and, in my opinion, a relevant indicator of a larger problem with cybersecurity tooling. Surely one could argue that a robust, “defence in depth” security posture should account for and negate the impact of the “human element”.

On average, enterprise security teams seem to rely on anywhere from 30 to 70 security point products (depending on which report you read). This, combined with the fact that 58 per cent of security decision-makers don’t believe their organisation to be cyber resilient, makes for a striking contrast. Organisations are increasing security spending, in some cases drastically, yet the majority still do not feel they’ve reached an acceptable level of resilience.

Every new tool adopted comes with its own challenges. There are implementation concerns, compatibility concerns, best practices and configuration standards to adhere to, staff to be trained, etc. At the end of this significant time investment, an organisation may walk away with one problem solved, but many more introduced.

And then how does an organisation address the interoperability of these tool sets? How does it deal with the disparate data sets that each tool creates and relies on? If different stages of a single attack are detected by individual point products, how does the organisation join the dots to form a clear picture? If a SIEM is used, this introduces additional upkeep and maintenance of an ever-growing library of correlation rules (some of which seem to be entirely useless).
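To make the "joining the dots" problem concrete, here is a minimal sketch of the kind of correlation logic a SIEM rule encodes. Everything in it is invented for illustration: the product names, the event shapes and the 30-minute window are assumptions, not any vendor's actual rule format.

```python
from datetime import datetime, timedelta

# Hypothetical normalised events from two separate point products.
events = [
    {"source": "email_gateway", "type": "phishing_link_clicked",
     "user": "jdoe", "time": datetime(2024, 2, 1, 9, 0)},
    {"source": "edr", "type": "suspicious_process",
     "user": "jdoe", "time": datetime(2024, 2, 1, 9, 7)},
]

def correlate(events, window=timedelta(minutes=30)):
    """Pair a phishing click with later activity on another tool by the same user."""
    clicks = [e for e in events if e["type"] == "phishing_link_clicked"]
    findings = []
    for click in clicks:
        for e in events:
            if (e["source"] != click["source"]
                    and e["user"] == click["user"]
                    and timedelta(0) < e["time"] - click["time"] <= window):
                findings.append((click, e))
    return findings

# One correlated finding: jdoe clicked a link, then the EDR flagged a process.
print(correlate(events))
```

Even this toy version hints at the maintenance burden: every new point product means new event shapes to normalise and new pairings to encode by hand.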

With all this in mind, I’m starting to agree with my colleague; security tooling, at least insofar as how (and why) we acquire and implement these tools, is a far cry from being fit for purpose. In my (roughly) 15 years’ experience working with NextGen AV, and then EDR, I have encountered exactly zero incidents where the analyst was at fault. For example, every ransomware incident I saw was caused by a lack of visibility combined with poor implementation and configuration.

Given our current understanding of security decision-maker sentiment, and considering the average cybersecurity tool set per organisation and its ever-growing complexity, the question remains: how does this affect our analysts? In a word: badly.

Organisations have, through the magpie approach of acquiring new technologies to address specific use cases, effectively stacked the cards against their analysts. The SANS Institute’s 2023 SOC Survey lists the top three barriers to utilising SOC capabilities to their fullest as 1) lack of context, 2) lack of skilled staff and 3) lack of enterprise-wide visibility. I would argue that these issues scale directly with the number of disparate technologies implemented in an organisation.

Interestingly, the same survey lists “too many tools” and “too many alerts” as the least impactful barriers. Given what every other study says, I’ll chalk that up to participants only being asked to select one option.

The Anomali Cybersecurity Insights Report states a mean time to detect (MTTD) of 2.7 days and a mean time to respond (MTTR) of 2.4 days for cyber attacks. Other studies report that less than 30 per cent of SOC teams meet their KPIs.

Given that most teams feel they lack context, skills and visibility, these statistics are not surprising. It’s unconscionable to expect analysts to keep up with the deluge of threats they face daily given the sprawl of point products. Dealing with a single alert may very well mean context switching between numerous tools and data sets in order to fully understand what’s happening.

There’s a reason why 75 per cent of organisations are pursuing security vendor consolidation, which is one way of addressing the issue. Another option is the implementation of a well-integrated SOAR solution, which effectively becomes a nexus, bridging these technologies via a single interface.

A properly configured SOAR allows analysts to work across all of the integrated technologies from one console. It automatically enriches incidents with contextual data from other tools and threat intelligence feeds. It also enables analysts to automate responses to a wide variety of events and alerts, as well as to liaise with different teams and technology owners.
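The enrich-then-automate pattern described above can be sketched in a few lines. This is a deliberately simplified illustration, not any vendor's playbook format: the threat feed, the asset-owner lookup and the response actions are all invented stand-ins for integrations a real SOAR would provide.

```python
# Hypothetical SOAR-style playbook; all names and data here are illustrative.
KNOWN_BAD_IPS = {"203.0.113.9"}              # stand-in for a threat intel feed
ASSET_OWNERS = {"host-42": "finance-team"}   # stand-in for a CMDB lookup

def enrich(alert):
    """Attach context from other tools so the analyst sees one picture."""
    alert["ip_is_known_bad"] = alert["src_ip"] in KNOWN_BAD_IPS
    alert["asset_owner"] = ASSET_OWNERS.get(alert["host"], "unknown")
    return alert

def triage(alert):
    """Automate the easy decision; escalate everything else to a human."""
    alert = enrich(alert)
    if alert["ip_is_known_bad"]:
        return "isolate_host_and_notify:" + alert["asset_owner"]
    return "queue_for_analyst"

print(triage({"src_ip": "203.0.113.9", "host": "host-42"}))
# → isolate_host_and_notify:finance-team
```

The point is not the code itself but the shape of the workflow: the analyst no longer pivots between consoles to gather context, because the platform assembles it before a human ever looks at the alert.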

Intelligent XDR platforms that use machine learning to stitch logs together and correlate security events also offer some relief from the high alert volume.

In short, organisations need to be mindful of how technologies are adopted, and of the fact that human talent is an already stretched and finite resource. We as an industry need to transition from an analyst-led approach augmented by AI and ML to an AI- and ML-led approach overseen by human intelligence, whether via SOAR or unified SOC platforms.

What is the point in technological advances in AI and ML if not to free up human intelligence for more engaging and important work?

Ciarán McHale is a Systems Engineer with Cortex, a Palo Alto Networks company. This article was sponsored by Palo Alto Networks.
