
With the rise of increasingly effective (and lower-cost) Large Language Models (LLMs), it has become feasible for engineers to build fully autonomous systems that follow the same multi-step investigative process as human analysts (a rough sketch of this loop follows the list):
1. Look at the initial alert evidence.
2. Use domain and tribal knowledge to recognize any benign explanation for the event(s).
3. Perform queries against various data sources as needed to gain more context.
4. Based on the totality of information, determine whether the alert is a true or a false positive.
5. If it’s a true positive, document why that determination was made.
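
To make that loop concrete, here is a minimal Python sketch of what an agentic triage loop can look like. The helpers ask_llm and run_query, the prompt format, and the Alert shape are illustrative assumptions, not Simbian’s implementation.

```python
# Minimal sketch of an agentic alert-triage loop. ask_llm, run_query, the
# prompt format, and the Alert shape are placeholders for illustration only.
from dataclasses import dataclass


@dataclass
class Alert:
    title: str
    evidence: dict


def ask_llm(prompt: str) -> str:
    """Placeholder for a call to whichever LLM backend you use."""
    raise NotImplementedError


def run_query(source: str, query: str) -> dict:
    """Placeholder for a lookup against a SIEM, EDR, or identity provider."""
    raise NotImplementedError


def investigate(alert: Alert, max_steps: int = 5) -> dict:
    # Step 1: start from the initial alert evidence.
    context = [f"Alert: {alert.title}", f"Evidence: {alert.evidence}"]

    for _ in range(max_steps):
        # Steps 2-3: ask the model whether a benign explanation already fits,
        # or which additional query would narrow the decision.
        decision = ask_llm(
            "Given the context below, reply with either "
            "'VERDICT: true_positive|false_positive' plus a justification, "
            "or 'QUERY: <source>|<query>' to request more context.\n\n"
            + "\n".join(context)
        )
        if decision.startswith("VERDICT:"):
            # Steps 4-5: the final determination plus the documented reasoning.
            verdict_line, _, rationale = decision.partition("\n")
            return {
                "verdict": verdict_line.removeprefix("VERDICT:").strip(),
                "rationale": rationale.strip(),
                "trail": context,
            }
        if decision.startswith("QUERY:"):
            source, _, query = decision.removeprefix("QUERY:").partition("|")
            context.append(
                f"{source.strip()} result: {run_query(source.strip(), query.strip())}"
            )

    # If the loop doesn't converge, hand the case to a human.
    return {"verdict": "needs_human_review", "trail": context}
```

The key design choice is that every query result is appended to the context and returned alongside the verdict, so a human reviewer can retrace exactly how the conclusion was reached.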
In practice, it’s more complicated than it sounds, but at Simbian we’re proud to say we’ve done it, and that our agent can complete an accurate investigation faster and more thoroughly than a human analyst. This isn’t to suggest that humans aren’t capable of the same investigation quality; they simply don’t have the time to be as thorough as an analyst operating at machine speed, or to separate signal from noise amid an ever-growing volume of false positives. And in my experience, analysts are perfectly happy to let the AI agent do the investigating.

Your best cybersecurity talent is wasting away training and mentoring analyst after analyst, just for them to hop from a T1 role to a (less monotonous, higher-paying) T2 role after a year of experience. Not only are you paying the overhead of interviewing and hiring a never-ending stream of replacements, you’re also paying the opportunity cost of senior talent not doing the advanced proactive automation, tuning, and detection engineering work they could otherwise be doing. And training and mentoring people only to start over again, year after year, takes a toll on the morale of your senior talent.
Junior analysts churn because they’re simultaneously bored by the monotony of the work and overwhelmed by its volume, and thus eager to jump to the next tier as soon as possible (the pay bump doesn’t hurt either). An AI SOC promises to free your senior analysts to spend far more time on the skilled, proactive work they want to do, because your junior analysts are no longer overwhelmed, need far less guidance, and are happier in their jobs, and therefore less eager to job-hop to the next tier. We at Simbian believe that as the AI SOC becomes the new normal, analysts, happier in their new role as supervisors of AI analysts, will far more often wait for a promotion to T2 within their own SOC rather than churn.
But more interestingly, as AI bridges the skill gap dividing T1, T2, and T3 analysts, we believe the need for granular tiers will begin to fade as humans shift to being valued less for their labor and more for their credibility and the trust they carry. You’ll have one unified pool of mid- to senior-level analysts overseeing the AI SOC’s investigation results. A more senior analyst can be trusted to oversee higher-risk investigations, which matters just as much as being able to conduct complex investigative work.
Speaking of time, what will we do once we have so much more of it with AI taking over the largest portion of our investigative efforts?

You can’t improve what you can’t measure, but measurement takes time and energy to implement and manage. What KPIs or Key Risk Indicators (KRIs) could your team start measuring next? A few starting metrics are sketched below.
We all know practice beats policy. Drills with your red team testing AI-driven tactics give analysts safe reps, but they require a lot of time and planning, and now you have the time. What will your next practical exercise look like?
Designing and shipping new controls or automations demands time and focus that evaporate when your top talent is busy training the next batch of analysts. Which security engineering project will you take on with the time you get back?
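
On the measurement question above, here is a minimal sketch of a few starting KPIs/KRIs, assuming you can export closed alerts with timestamps, verdicts, and escalation flags from your case-management tool. The field names and sample data are illustrative.

```python
# Illustrative KPI/KRI calculations over exported closed alerts.
from datetime import datetime, timedelta
from statistics import mean

# Sample export; in practice this would come from your case-management
# or SOAR tool.
closed_alerts = [
    {"created": datetime(2024, 5, 1, 9, 0), "closed": datetime(2024, 5, 1, 9, 40),
     "verdict": "false_positive", "escalated": False},
    {"created": datetime(2024, 5, 1, 10, 0), "closed": datetime(2024, 5, 1, 12, 30),
     "verdict": "true_positive", "escalated": True},
]


def mean_time_to_resolve(alerts) -> timedelta:
    # Average wall-clock time from alert creation to closure (MTTR).
    return timedelta(seconds=mean(
        (a["closed"] - a["created"]).total_seconds() for a in alerts))


def false_positive_rate(alerts) -> float:
    # Share of closed alerts that turned out to be noise.
    return sum(a["verdict"] == "false_positive" for a in alerts) / len(alerts)


def escalation_rate(alerts) -> float:
    # Share of alerts that still required a human escalation.
    return sum(a["escalated"] for a in alerts) / len(alerts)


print(f"MTTR: {mean_time_to_resolve(closed_alerts)}")
print(f"False-positive rate: {false_positive_rate(closed_alerts):.0%}")
print(f"Escalation rate: {escalation_rate(closed_alerts):.0%}")
```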
Whether you run security operations through MDR services or in-house, you’ll likely soon be enjoying the upside of AI-powered investigation in your SecOps program.
The future of the profession lies in experts providing the supervision required to trust AI decision making: the technical skills to validate the reasoning provided (at Simbian, transparency of AI decision making is a top priority) and the soft skills to communicate requirements to the AI and outcomes to leaders.
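
As one illustration of what that supervision can look like, here is a hedged sketch of a review gate that routes high-risk or low-confidence AI verdicts, together with their reasoning trail, to a human supervisor. The thresholds, categories, and field names are assumptions for the example, not Simbian’s product behavior.

```python
# Hypothetical review gate; thresholds, categories, and fields are illustrative.
HIGH_RISK_CATEGORIES = {"ransomware", "data_exfiltration", "privileged_account"}


def route_for_review(result: dict) -> str:
    """Decide whether a human supervisor must sign off before the case closes."""
    needs_human = (
        result["category"] in HIGH_RISK_CATEGORIES
        or result["confidence"] < 0.8
        or result["verdict"] == "true_positive"
    )
    return "human_review_queue" if needs_human else "auto_close"


# Example output from the triage agent, including its reasoning trail.
result = {
    "verdict": "false_positive",
    "confidence": 0.93,
    "category": "phishing",
    "reasoning": ["Sender domain is on the corporate allowlist",
                  "No credential entry observed on the landing page"],
}
print(route_for_review(result))  # -> auto_close
```

Whatever the exact policy, the point is that the human signs off on the reasoning, not just the verdict.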
And that’s a great thing. AI Supervisor is an empowering role to take on in any domain, especially as the technology, and the way we leverage it, keeps raising the quality of the work. And luckily for all of us, with models improving on a weekly basis, we won’t have to wait long to see even better performance at even lower cost.