There is enormous pressure on corporate security functions right now to adopt AI. Boards are asking for it. Budgets are being allocated for it. Vendors are selling it from every direction. But if you deploy AI on top of fragmented systems, overwhelmed teams, and broken data – you will not fix the problem. You will entrench it. This article outlines the case for a different approach.
Written by Botan Osman, CEO & Co-Founder at Restrata

The Reality Nobody Is Talking About
Corporate security is being asked to do more with the same fragmented tools it had a decade ago. Before we talk about what AI can do for this function, we need to be honest about what the function is actually dealing with.
The numbers are stark. The average enterprise security team is currently juggling between five and twelve disconnected tools. More than 98% of GSOC alarms are false positives – operators spend the majority of their time processing noise rather than responding to genuine threats. And GSOC annual turnover runs at 70 to 100% in most studies, meaning the teams expected to operate these complex, fragmented environments are constantly rebuilding themselves from scratch.
The tools are overwhelmed. The people are overwhelmed. Adding AI to this doesn’t fix it. It makes it worse.
This is the reality that rarely gets discussed at industry events. There is a version of the AI conversation that treats adoption as the goal – as though deploying the technology is itself the achievement. It is not. 42% of companies abandoned AI projects in 2025. They skipped the foundation. They deployed AI without the software infrastructure or human governance to support it, and when it failed, it failed expensively.
For corporate security functions – which already operate under significant budget pressure and are still fighting to be seen as value centres rather than cost centres – a failed AI deployment is not a learning experience. It is a failure that can set the function back years.
A Different Way of Thinking About AI
The conversation I encounter most often in corporate security is binary: adopt AI and everything will be fine, or resist AI because it is not ready for operational use. Both positions are wrong, and both lead to bad outcomes.
What I am proposing is a three-part operating model. Not because it is theoretically elegant, but because it reflects how the most effective security operations actually work – and how they will need to work as AI matures.
The model is simple:
Software for precision. Exact calculations, audit trails, systems of record, data routing.
AI for interpretation. Context assessment, pattern recognition, drafting, anomaly detection, natural language. The layer that surfaces what you had not seen and handles complexity that software cannot.
Humans for judgement. Final approval, escalation decisions, ethical oversight, stakeholder communication. The highest-value activity in security operations. The part no board will ever hand to a machine.
Three capabilities. Each with a defined role. None of them work alone.
This is not a new insight in technology deployment. Twenty-five years ago, when I was working on digital transformation programmes, the finding was the same: the most successful projects were the ones that first understood the business and its workflows before deploying technology. Those that deployed technology for technology’s sake did not just fail to improve things – they entrenched the old ways of working inside expensive new systems.
Corporate security is at risk of doing exactly that with AI.
Why the Lines Between Software, AI, and Human Have to Be Clear
The critical discipline in this model is knowing which capability handles which task. When you blur those lines, things break. And in security operations, the consequences of things breaking can be severe.
Consider a practical example. An earthquake occurs. Your security operations centre needs to assess exposure and respond. Here is what each layer of the model is actually doing:
Software handles precision
How many people are within 50 kilometres of this earthquake? Software ingests the structured data feeds, calculates the exact radius, identifies the affected personnel, and pulls site status and access logs. Same input, same output, every time. This is not a job for AI – language models are not designed for precise counting, and you cannot afford an approximate answer when you are deciding who to contact.
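To make the point concrete, the radius question is ordinary deterministic geometry, not inference. A minimal sketch (assuming personnel positions are already available as latitude/longitude pairs – the data shapes and function names here are illustrative, not a real platform API):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def people_within_radius(personnel, epicentre, radius_km=50.0):
    """Deterministic filter: same input, same output, every time.

    `personnel` is a list of (person_id, lat, lon) tuples and
    `epicentre` a (lat, lon) pair – placeholders for whatever
    structured feed the platform actually ingests.
    """
    lat0, lon0 = epicentre
    return [pid for pid, lat, lon in personnel
            if haversine_km(lat0, lon0, lat, lon) <= radius_km]
```

The output is exact, repeatable, and auditable – precisely the properties a language model cannot guarantee, and precisely why this step belongs to software.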
AI handles interpretation
What should we do about it? This is where AI earns its place. Once software has established the precise picture – who is affected, where they are, what sites are involved – AI assesses the event in context, correlates it with travel itineraries and business continuity plans, drafts initial communications, and recommends an escalation path. It surfaces the complexity. It handles the ambiguity. And it does so at a speed that would take human analysts hours to match.
Humans make the judgement call
Should we evacuate? No board in the world – and no security leader worth their position – will accept that decision being made by a machine. Not in five years, probably not in twenty. The human reviews the situational briefing that AI has assembled, applies local knowledge and contextual judgement that no model possesses, approves or adjusts the response, and takes accountability for the outcome. That is not a limitation of AI. That is the correct use of human capability.
Whether you’re going to evacuate from a Gulf country on Monday or Tuesday – that is a human decision. What AI can do is make sure you have everything you need to make it faster and with more confidence than ever before.
In the best-run security operations, this cycle – detect, identify, assess, respond, execute, follow up – runs with all three layers working in parallel at every step. Event to coordinated response in under two minutes. Not because AI has replaced the security team, but because software and AI are handling everything that does not require human judgement, so that humans can focus entirely on the part that does.
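One way to picture how the three layers interlock in that cycle is as an explicit pipeline with a human gate at the end. This is a schematic sketch only – the `ai_layer` stub stands in for whatever interpretation service is actually deployed, and every name here is illustrative:

```python
from dataclasses import dataclass

@dataclass
class Briefing:
    affected: list            # software layer: exact, auditable facts
    assessment: str           # AI layer: drafted interpretation
    recommended_action: str   # AI layer: a suggestion, never a decision
    approved: bool = False    # human layer: nothing executes without this

def software_layer(event, personnel):
    # Precision: deterministic filtering, same input -> same output.
    return [p for p in personnel if p["site"] == event["site"]]

def ai_layer(event, affected):
    # Interpretation: a real deployment would call a model here;
    # this stub keeps the sketch self-contained.
    assessment = (f"{len(affected)} personnel potentially exposed to "
                  f"{event['type']} at {event['site']}.")
    return assessment, "Escalate to duty manager; draft welfare checks."

def run_cycle(event, personnel, human_approves):
    affected = software_layer(event, personnel)        # detect / identify
    assessment, action = ai_layer(event, affected)     # assess / draft
    briefing = Briefing(affected, assessment, action)
    briefing.approved = human_approves(briefing)       # judgement gate
    return briefing  # execution proceeds only if briefing.approved
```

The design point is that the approval step is structural, not procedural: the pipeline cannot reach execution without a human explicitly setting the gate.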
On AI Washing: A Word of Warning
The corporate security technology market currently has an AI problem. Not because AI does not work – it does, when deployed correctly. The problem is that everything now has an AI label on it, including solutions that do not need AI to achieve what they achieve.
If it only queries the public web, it’s a chatbot. Not an operational tool.
This matters for a specific reason. If a vendor’s AI feature is built on top of a weak or proprietary software foundation, it is extremely difficult to separate the two once you have adopted the platform. You end up locked in – not to a vendor that has earned your loyalty, but to an architecture that limits what your AI can actually do with your data.
The most important question you can ask any vendor right now is not what their AI can do. It is: what did this product do before you added AI? If the answer is not much, the software foundation is not there. And without the software foundation, the AI has nothing meaningful to work with.
The Maturity Ladder: Where Are You Today?
Before any organisation can deploy AI effectively in its security function, it needs to understand where it currently sits in terms of operational maturity. There are four stages, and most enterprise security teams – even at large, sophisticated organisations – are still operating at stage one or two.
1. Reactive Operations
Spreadsheets, travel tracking via email, incident reports written in Word and emailed to a shared inbox. The first few hours of every crisis are spent assembling a picture of who is where. Real case: 4+ hours to assess a security event. 98 people on site were missed entirely.
2. Point Solutions
Dedicated tools for notifications, travel risk, and incident management – but none of them share data with each other. If your CIO asked you where your corporate security data is held, you could not point to a single system of record. The human is the integration layer.
3. Open Connected Platform
People, assets, risk, and communications unified in one platform. HR, access control, GPS, and travel all connected. A single operational picture. But every response is still manually typed and executed. This is AI-ready, not AI-active. The human still does the preparation work.
4. Hybrid AI Operations
Software detects, calculates, routes, and logs. AI interprets context, drafts, and recommends. Humans approve, escalate, and decide. Event to coordinated response in under two minutes. The three-part operating model fully in place.
The critical point here is that these stages have to be done in order. When there is pressure from the board or the CIO’s office to deploy AI, the temptation is to skip to stage four. Organisations that do this are the 42% that abandoned their AI projects in 2025. The foundation has to come first.
There’s a boring data layer that you have to do before you get to the exciting AI stuff. You cannot skip it. And if you do, the AI will give you the wrong answers – because it depends on data that is broken, locked away, or not yours.
Four Questions to Ask Every Vendor
As corporate security leaders navigate the current AI landscape, the challenge is separating genuine operational tools from well-marketed features. Here is a practical filter:
1. “Which parts use software, which use AI, and where is the human?”
If a vendor cannot answer this clearly and specifically, they have not designed an operating model. They have built a product and added an AI label.
2. “What happens when the AI is wrong?”
Look for human-in-the-loop processes, fallback procedures, and audit trails. Every AI system will be wrong sometimes. The question is what happens next.
3. “What data is the AI working with – yours, mine, or the internet?”
This is the most important question. If the AI is working from your organisation’s own operational data – personnel locations, travel itineraries, asset records, incident history – it can give you genuinely useful answers. If it is querying the public web, it is a general-purpose chatbot, not an operational security tool. Your data is also your long-term asset. When you change vendors, that data should come with you.
4. “What did this product do before you added AI?”
This question cuts through the AI washing instantly. If the core software platform has genuine depth – if it was already managing workflows, data, and operations effectively before AI was added – then the AI layer has a foundation to work on. If the answer is not much, move on.
What Security Leaders Can Do Right Now
The pressure to act on AI is real and it is not going away. Here is a practical starting point for security leaders who want to move forward without making an expensive mistake:
1. Get a seat at the AI table
Corporate security holds some of the deepest, most real-time operational data of any function in an enterprise. That makes it one of the most compelling AI use cases available. Meet with your CIO. Frame security as an AI-ready function with a credible operating model – not a cost centre asking for budget, but a value centre with a clear deployment path. 15% of security spend already comes from outside the CISO’s office.
2. Audit your security stack
Count your tools. Then run this test: how long does it take your team to answer the question “how many of our people are near this event?” If the answer is more than five minutes, your software foundation is not ready for AI. You have a point-solutions problem, not an AI problem.
3. Fix your data first
Data quality and accessibility are the number one blocker for enterprise AI deployments – cited by 48% of organisations. Connect your sources into one operational picture. This is the campfire around which all of your siloed teams can gather. Without it, any AI you deploy will be working with incomplete, fragmented, or inaccurate information – and it will produce incomplete, fragmented, or inaccurate outputs.
4. Start with one use case
The worst AI implementations I have seen are the ones where an organisation buys a technology and hopes it will go well. Pick a specific problem – automated welfare checks, intelligent alert triage, natural language querying of your operational data, faster VP briefings when an event breaks. Define the problem first. Then identify where software, AI, and human judgement each have a role. Then deploy.
5. Demand open architecture
Can you bring your own intelligence feeds? Can you swap communications channels? Can you export your data if you change your mind in six months? Open architecture is not a nice-to-have. It is the only way to build an operating model that evolves with your needs and keeps your data where it belongs – with you.
The Restrata Approach
resilienceOS is the software foundation – the open, connected platform that consolidates people, assets, locations, and threat intelligence into a single operational view. It handles the precision layer: exact calculations, geofencing, communications routing, audit trails, and data harmonisation across all connected sources.
rosa is the AI interpretation layer built on top of that foundation. It draws on your organisation’s own data – personnel locations, asset information, travel itineraries, incident feeds – to assess context, draft responses, recommend escalation paths, and synthesise situational intelligence. rosa assists everywhere. It automates the preparation work. It gives your team back the time and cognitive space to do what only humans can do.
Your team provides the judgement. They approve, override, escalate, and decide. Always accountable. Always in control.
Software for precision. AI for interpretation. Humans for judgement. That’s not the future. It’s available today.
The operating model is not a vision. It is a working architecture that the most operationally mature security teams are beginning to deploy right now. The organisations that build towards it – that fix their data first, consolidate their platforms, and deploy AI with the right foundation underneath it – will be the ones that set the standard for what corporate security looks like in the next decade.
Want to see the three-part operating model in action?