The margin of error: AI and the Security Industry

02 July 2025

This month we sit down with David Crawford, UX Strategist, to learn more about the different types of AI, their limitations, and how the security industry specifically can navigate this ‘new’ technology.

Conflated AI Expectations

I think there’s a lack of clarity about the kind of AI that’s topical at the moment. ChatGPT and Copilot are household names – if you’re not using either, you’ve at least heard of them – but they’ve painted a picture of one form of AI and blotted out the rest.

We’ve seen other forms of artificial intelligence in use across industries for decades. Every time you solve those ‘I am not a robot’ tasks on a website, you’re training a Convolutional Neural Network to hone its image recognition accuracy. And image recognition, given the prevalence of security cameras in our industry, is something we’ve been doing for a while.

Having a computer work out what’s an animal, a person, a stop sign, or a pedestrian crossing long predates what we’re seeing now.

There’s a lot of talk about vendors ‘doing AI,’ but the scope of artificial intelligence – as an umbrella term – means it’s not always what people are expecting.

Breaking Down the Technology

ChatGPT and Copilot are Generative, which means they create new content on the back of user input. They’re built on Large Language Models, which involve massive amounts of text – picture half a terabyte of text data as 600 million pages in Word.

Large Language Models and Generative AI are stochastic, and that’s where their value lies. From the Greek stókhos – literally ‘guess’ – stochastic models rely on probability. You give it input, and (with a margin of error) it will give you an output of what you might be after. It can bend and flex in response to infinitely varied user inputs.
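That probabilistic ‘guess’ can be pictured as weighted sampling. A minimal sketch in Python, assuming an invented word-probability table rather than any real model:

```python
import random

# Hypothetical next-word probabilities after a prompt like "Access was" --
# illustrative numbers, not taken from any real language model.
next_word_probs = {
    "granted": 0.55,
    "denied": 0.30,
    "requested": 0.10,
    "logged": 0.05,
}

def sample_next_word(probs, rng):
    """Pick a word at random, weighted by probability -- the 'guess'."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(42)
samples = [sample_next_word(next_word_probs, rng) for _ in range(1000)]
# Most draws land on the most likely word, but not all of them --
# that gap is the margin of error the article describes.
print(samples.count("granted") / len(samples))
```

The same input sampled twice can give different outputs – valuable for open-ended requests, unacceptable for a door decision.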

Where probability isn’t valuable, for example, is access control. We can’t ship a technology that relies on any margin of error. Customers need to know that 100% of the time, our solutions will give the right people the right access at the right place at the right time. Asking GenAI models to write a recipe is one thing – the business of keeping the good guys in and bad guys out is quite another.

The Best of Both Worlds

To embrace AI (as most users know it) at Gallagher, we’ve had to apply a degree of rigour that other industries might not need to. Internally, our governance board established robust processes around how we use GenAI to optimise our workflows; there’s competitive advantage in utilising the best tools out there.

When it comes to user experience, GenAI is one of the forms of AI we’ve adopted to make our customer experiences as quick and efficient as possible. We’ve combined the best of different AI technologies (and steered well clear of their pitfalls) to avoid a haphazard reliance on any one model. It’s a buffered approach, but circumstance necessitates complexity.

The Command Centre manual is over 4,000 pages long. That’s about half a metre of stacked copy paper. Add in the documentation across all our solutions and that’s swathes of information an AI model can make more accessible.

We’re trialling Retrieval-Augmented Generation (RAG) to help users interact with those cinderblocks of text. RAG is the process of connecting LLMs with existing (and secure) textual databases.
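The retrieval half of that process can be sketched in a few lines. This is a toy illustration only: a keyword-overlap scorer stands in for the vector similarity search a real RAG pipeline would use, and the documentation snippets are invented, not taken from the Command Centre manual:

```python
# Invented snippets standing in for pages of product documentation.
docs = [
    "To add a cardholder, open the Cardholder panel and select New.",
    "Alarm zones can be armed or disarmed from the site plan view.",
    "Door schedules control when access is granted at each reader.",
]

def retrieve(query, docs, top_k=1):
    """Rank documents by keyword overlap with the query -- a stand-in
    for the embedding-based similarity search real RAG systems use."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, docs):
    """Splice the retrieved passages into the prompt the LLM will see,
    so its answer is grounded in the trusted text, not a free guess."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How do I add a cardholder?", docs))
```

The point of the pattern is that the model answers from the retrieved text rather than from whatever it absorbed in training, which keeps the source material both current and auditable.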

To help the end user navigate their choices, you would traditionally need to code or plan each avenue based on preconceived assumptions of what they may want. Browsing Amazon is a great example, where you get the filter interface. The paths you could take to a single product in each category are so complex and so varied that, without that filter UI, you’d have to build thousands and thousands of permutations to help people find what they’re after.
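The gap between pre-coding every path and composing filters generically can be shown with a toy catalogue (the products and fields here are invented for illustration):

```python
# Invented catalogue -- each filterable field multiplies the number of
# browsing paths you would otherwise have to pre-build by hand.
products = [
    {"name": "Reader A", "colour": "black", "wireless": True, "price": 120},
    {"name": "Reader B", "colour": "white", "wireless": False, "price": 90},
    {"name": "Controller C", "colour": "black", "wireless": False, "price": 450},
]

def apply_filters(items, **criteria):
    """One generic function covers every combination of criteria,
    replacing a hand-coded page per permutation."""
    return [p for p in items if all(p.get(k) == v for k, v in criteria.items())]

print(apply_filters(products, colour="black", wireless=False))
# With n independent filter fields there are 2**n on/off combinations
# alone -- enumerating them by hand is the 'thousands of permutations'
# problem the filter UI solves.
```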

Agentic AI has the ability – the agency – to draw from a prescribed LLM and create code in response to user requests. It can understand what the system is capable of and what the user wants to do, and it can bridge that gap on the fly. And if the user doesn’t get to exactly what they’re looking for, the system can take on additional feedback and refine its results, which improves the customer experience.

‘Summarise this block of text for me:’

GenAI has entered the public domain and changed the way we access information. And with businesses in a rush to capitalise on this new technology, we’re focusing on best-practice implementations that can make our users’ experiences as quick and efficient as possible.

When those GenAI models first dropped, the collection of data en masse had people raising (necessary) questions about where that information was going, and its implications for data security.

That doesn’t mean there isn’t value in the technology, but people need a nuanced understanding of where the value sits: where it’s appropriate to use different AI models, and where it’s not.