
How AI Is Disrupting Security


Thank you to everyone who attended our recent webinar, where our speakers Katie, Toro’s Director of Cyber Security, and Gavin, Toro’s Director of Physical Security, discussed Artificial Intelligence (AI) in Security.

For those who missed it, here's what was discussed.  

AI in security is here to stay. In the UK alone, spend on AI technologies is predicted to rise from a current worth of £15 billion to over £35 billion by 2025. The cyber security industry is expected to grow far faster than the physical security industry and, whilst this growth cannot be attributed entirely to AI, it can certainly be considered a major factor.

Why do we use AI?  

We typically use AI to improve economic efficiency: for example, automating legal work, increasing the productivity of call-centre workers, AI-enabled robotics for manufacturing and production, or accelerating academic research and medical diagnosis and treatment.

AI can write long sequences of well-functioning code, instructed by people who don’t know how to write code themselves. AI can write news articles, translate between multiple languages, summarise large volumes of content, generate creative content and answer questions that require common-sense reasoning. But what does this mean for security?

How are we using it in physical security?  

AI is being used in security technology such as CCTV, intruder detection and access control systems for its ability to manage mass data for threat detection. AI-powered systems can analyse large volumes of data and identify anomalies in real time far quicker than humans can.

One example is weapons detection, where AI can process thousands of images a second from multiple CCTV video feeds. Another is London Underground’s recent use of AI in its surveillance systems to detect passengers passing through barriers without paying; London Underground loses £130 million a year to fare dodgers!

Machine learning gives accuracy to video surveillance systems, enabling security teams to optimise their resources. This is especially important in the UK, where the manned guarding industry is suffering a labour shortage largely attributed to low pay and unsociable working hours. AI-powered robotics can take over human tasks such as patrolling, access management, explosives and weapons detection, and emergency response, and drones have long been used for wide-area surveillance and threat tracking from the air.

Security technology can now autonomously integrate data with other building and operational management systems to apply learning, optimise performance and improve efficiencies.  

A study found 76% of Chief Operating Officers believe increasing automation in buildings and asset management will have a positive impact on operational efficiency. 

How does AI disrupt physical security?  

It can make security technology work better, detecting a wider range of threats. It will remove the mundane elements of manned guarding, perhaps resolving the labour issues. It will improve operational management and help organisations to optimise resources and improve performance. It will also require upskilling of the workforce and will enhance human capability.

Gavin, Toro’s Director of Physical Security, believes the benefit will be a convergence of security practices, with a greater reliance on cyber security and cyber awareness in the technology-enabled physical security environment.

How are we using it in cyber security? 

AI can upskill and enhance the capability and accuracy of our cybersecurity professionals by providing the ability to detect threats in real time. AI tools can continuously monitor system activity and network traffic for suspicious behaviour by spotting anomalies.  
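As a rough illustration of the kind of anomaly spotting described above, the sketch below trains an Isolation Forest on historical network flow records and flags unusual new traffic. The features, thresholds and data are our own illustrative assumptions rather than any particular product’s implementation.

```python
# Minimal sketch: flag anomalous network flows with an Isolation Forest.
# The features (bytes sent, duration, destination port) and the
# contamination rate are illustrative assumptions, not a vendor recipe.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in for historical "normal" flows: [bytes_sent, duration_s, dst_port]
baseline = np.column_stack([
    rng.normal(50_000, 10_000, 1_000),   # typical transfer sizes
    rng.normal(2.0, 0.5, 1_000),         # typical session lengths
    rng.choice([443, 80, 53], 1_000),    # common destination ports
])

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New observations: one ordinary flow, one large exfiltration-like flow.
new_flows = np.array([
    [52_000, 2.1, 443],
    [9_000_000, 600.0, 4444],
])

for flow, label in zip(new_flows, model.predict(new_flows)):
    print("ANOMALY" if label == -1 else "ok", flow)
```

A real deployment would feed the model from live traffic telemetry and route anomalies into alerting, but the pattern of learning a baseline and scoring deviations against it is the same.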

AI-based cybersecurity tools can learn from past attacks and refine their detection, becoming more accurate over time and improving their response mechanisms. For instance, this year’s Cost of a Data Breach Report by IBM cites that companies with AI solutions in place were able to identify and contain a breach 108 days quicker than those without.

This acceleration is crucial in mitigating both financial and reputational damage to the business in the wake of a cyber-attack. Those companies with AI solutions that had a breach also reported a $1.76 million lower data breach cost compared to organisations that didn’t have such capabilities.  

Automation also has the potential to streamline ‘business as usual’ tasks such as patch management and vulnerability management, which are often deprioritised in favour of new projects and business change despite frequently being the cause of cyber-attacks and data breaches.
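To make the ‘business as usual’ point concrete, here is a minimal sketch of the kind of routine check that can be automated: it lists packages with pending upgrades on a Debian or Ubuntu host so that patching can be reported automatically rather than forgotten. The command and parsing are assumptions for that platform, not a complete patch-management solution.

```python
# Minimal sketch: surface pending package upgrades on a Debian/Ubuntu host
# so routine patching is reported automatically rather than deprioritised.
# Assumes the 'apt' CLI is available; this is not a full patch-management tool.
import subprocess

def pending_upgrades() -> list[str]:
    result = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    )
    # The first line is a header ("Listing..."); the rest are upgradable packages.
    return [line for line in result.stdout.splitlines()[1:] if line.strip()]

if __name__ == "__main__":
    packages = pending_upgrades()
    print(f"{len(packages)} package(s) awaiting upgrade")
    for pkg in packages:
        print(" -", pkg)
```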

So those are the benefits, but what about the risks…? 

While AI brings numerous benefits, it also introduces risks that transcend natural, ethical, and national boundaries. From political and economic challenges to environmental and human factors, the risks are diverse and complex. 

Politics 

There is a lack of regulation and controls where economies of scale and other market risks are evident. Work by standards organisations such as IEEE and ISO/IEC is still ongoing.

Market failures are observed in many global challenges; think of climate change. When a company produces carbon emissions, the harms are incurred not only by the company but by the whole world. The company does not bear the full cost, so there is an externality and, therefore, a lack of incentive to reduce harm. Companies will prioritise economic growth (often reflected by speed of development) over risk mitigation within the technology they use.

Policy makers will struggle, as they always have, to reconcile societal protection with technological advancement and economic growth.

Physical Environment 

There are biological and chemical security risks where language models can and will teach threat actors how to make and deploy harmful substances and devices, including bombs.

There may also be critical infrastructure attacks and mass data exposures following a systems breach, or unlawful use by an insider. 

People 

Inaccuracy and AI hallucinations are currently rife, and degradation of the information environment encourages individuals to make dangerous decisions. There are already cases of AI tools inadvertently radicalising individuals, nudging users towards harmful actions. We have sadly seen the harmful effects of self-perpetuating algorithms on social media exposing vulnerable people to increasing amounts of harmful content and contributing towards suicide. Whilst these dangers already exist in the technology and are not specific to AI, AI can speed up and exacerbate these harms.

Humans could become too reliant on AI tools and ultimately de-skill, because they hand over control of important decisions to AI systems and no longer need to use their own judgment to gauge a security concern or respond appropriately.

Gartner predicts that by 2025, general human failure will be responsible for over half of all significant cyber incidents. The technology will get smarter and, as a result, people will become more reliant on it and less able to challenge AI-generated results. How many of us can still read a paper map and compass, rather than blindly following Google Maps?

Another people risk is that AI gives people access to far more information, increasing the insider threat.  

Deepfake 

Has anyone ever used their bank’s voice authentication to access their account? “My voice is my password.” If we look at telephone banking, Santander, NatWest, Barclays, Lloyds and HSBC all use voice ID, despite cases of AI voice cloning being widely publicised.

Social Media 

Further degradation of the information environment, combined with the increasing amount of personal data that is publicly available, leads to manipulation of the people using these platforms and to increasing amounts of misinformation and disinformation.

Influence 

As the information environment degrades, people and events will be easy to falsely portray, and this may compromise decision-making by individuals and institutions who rely on inaccurate or misleading information. We’ve already seen a lot of noise about this in relation to election interference and information warfare related to the conflict in Ukraine and, more recently, Palestine. 

Information overload leads to saturation, and people already switch off and ignore information, whether it is verified or not. A study by Ofcom revealed that 30% of UK adults who go online are unsure about, or do not even consider, the truthfulness of information.

Cyber 

At the moment, there are no robust safeguards to prevent AI from complying with harmful requests, such as designing cyber-attacks. There are already nefarious equivalents of ChatGPT, such as WormGPT, FraudGPT and DarkBART, which are specifically designed to create custom malware and well-written spear-phishing emails, for example. 

Safety testing and evaluation of AI is ad-hoc, with no established standards, scientific grounding or engineering best practices. When building software, developers can precisely describe instructions for specific actions. This enables them to predict the system’s behaviour and understand its limitations. By contrast, AI developers merely specify a learning process, but the system produced by that process is not interpretable, even to the system’s developers. We genuinely cannot know their limitations, except as we observe them and, most crucially, recognise them. 

So, what can we do about it? 

We believe that, to navigate the AI landscape, organisations need to focus on the brilliant basics.

We’d recommend starting with a plan. Understand your threats, the current and future projected situation, and where security improvements or enhancements need to be made. Having a clear purpose will help you to identify where investments need to be made in the short and long term. 

Start with a policy that defines legitimate use and make sure it is published and understood. Involve your workforce in this process.  

  • Understand why they are already using AI – what task are they trying to automate or augment?  
  • What are the potential benefits for your organisation?  
  • How can you continue to leverage these benefits whilst recognising and mitigating the potential risks? 

From this, create a process to assess and approve or decline existing use cases. 
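
As an illustration of what such a process could look like in practice, the short sketch below models a simple register of AI use cases with a decision recorded against each. The field names and the decision criteria are hypothetical examples, not a prescribed policy.

```python
# Illustrative sketch: a simple register for assessing AI use cases.
# Field names and decision criteria are hypothetical examples only.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    owner: str
    tool: str
    purpose: str
    handles_personal_data: bool
    data_leaves_organisation: bool
    status: str = "pending"   # pending / declined / needs review

def assess(case: AIUseCase) -> AIUseCase:
    # Example rule: decline anything that sends personal data to a hosted
    # engine outside the organisation; everything else goes to human review.
    if case.handles_personal_data and case.data_leaves_organisation:
        case.status = "declined"
    else:
        case.status = "needs review"
    return case

register = [
    AIUseCase("Legal", "hosted LLM", "contract summarisation", True, True),
    AIUseCase("IT", "local model", "log triage", False, False),
]

for case in map(assess, register):
    print(case.tool, "->", case.status)
```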

It’s also important not to neglect the underlying IT systems and infrastructure.

At an organisational level, if you are developing something for public consumption, ensure it is secure by design. Have separate environments for development and testing to reduce the risk of compromise to your production systems and networks. Ensure a robust due diligence process for new suppliers and tools that staff want to integrate into your business processes. Protect the underlying infrastructure against vulnerabilities and exploitation, and validate that existing processes for patch management and vulnerability management still apply to your new technology. Check whether the controls, monitoring and alerts still apply to any new business tools and processes.

At an endpoint and user level, protect against staff downloading or using applications that aren’t subject to the appropriate level of due diligence, or uploading business information and data into hosted AI engines where control is lost; either may render your existing controls ineffective (a simplified sketch of this kind of check follows the list below).

  • Ensure local admin rights and antimalware tools prevent the use of unapproved applications on devices and that staff understand how to use AI tools.  
  • Improve authentication and conditional access controls to better safeguard credentials against theft by criminals. 
  • Layers of defence will be essential. If a human gets duped, ensure that there is sufficient control and alerting to stop the progression of an attack. 
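
Returning to the point above about hosted AI engines, the sketch below shows the shape of a simple egress allowlist check: traffic to approved AI services is permitted, and everything else is blocked and flagged. The domain list and output are illustrative assumptions, not a specific proxy or DLP product’s configuration.

```python
# Minimal sketch of an egress allowlist check for AI services: requests to
# approved domains pass, anything else is blocked and flagged for review.
# The allowlist and URLs are hypothetical examples.
from urllib.parse import urlparse

APPROVED_AI_DOMAINS = {"approved-ai.example.com"}   # hypothetical allowlist

def egress_allowed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_DOMAINS

for url in [
    "https://approved-ai.example.com/v1/chat",
    "https://unvetted-llm.example.net/upload",
]:
    print("ALLOW" if egress_allowed(url) else "BLOCK and alert", url)
```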

Policies will need to account for the influence of AI. Stricter rules and more training will be needed to prevent unethical use and the insider threat. Ensure that staff understand how to use the AI tools available to them. A higher level of education will need to be incorporated into training programmes to develop competencies; a higher level of threat will require a higher level of awareness.

Assessment and assurance will become increasingly important. Frequent assessment by experts will be required to keep you hardened against the increasing sophistication and scale of attacks. A faster-paced and progressive environment will require greater levels of auditing and testing to ensure security stays ahead. Adopting AI-enabled operational management systems to monitor and detect operating risk is likely to become standard for many businesses.

So, what did we conclude?  

AI in security is a double-edged sword: it brings unprecedented capabilities while demanding a thoughtful and collaborative approach to mitigate risks. The future undoubtedly holds more advancements, and our ability to navigate this landscape will determine the success of AI in reshaping security practices.

In the grand scheme, AI is a powerful tool, but its efficacy ultimately depends on the humans behind it. As we embrace the AI revolution in security, the human element remains irreplaceable, ensuring responsible and effective deployment. 

To watch the full webinar, please click here: https://www.torosolutions.co.uk/on-demand-webinars