TRYING TO ASSESS A NEW AI TOOL FOR POTENTIAL CYBER VULNERABILITIES? NEW GUIDANCE FROM THE NCSC MAY HELP  

At the end of last year, the National Cyber Security Centre (NCSC) published guidelines for secure AI system development. The guidelines were developed by the NCSC and the US's Cybersecurity and Infrastructure Security Agency (CISA), in cooperation with industry experts and 21 other international agencies and ministries, including those of every G7 member and several countries from the Global South. So you can be confident that they reflect a broad international consensus on good practice.

Although the guidelines are aimed at AI system developers, they can also be used as a cybersecurity checklist by companies looking to adopt a third-party system. The guidelines are broken down into four key areas of the AI system development lifecycle: secure design, secure development, secure deployment, and secure operation and maintenance. Each section highlights considerations and mitigations that will help reduce the cybersecurity risk to an organisation's AI development process.

In more detail

Here is a closer look at each of the four areas:

  1. Secure Design: guidance on designing AI systems that are secure by design, identifying and mitigating potential security risks at the earliest stage of the development lifecycle. It recommends designing the system to be secure, resilient and reliable, and considering its potential impact on privacy, fairness and ethics.
  2. Secure Development: guidance on building AI systems that are secure by default, using secure coding practices such as input validation and output encoding to prevent common security vulnerabilities, applying secure design principles throughout, and subjecting the system to rigorous testing.
  3. Secure Deployment: guidance on deploying AI systems into a secure environment, protected from unauthorised access, using hardened configuration settings and with monitoring in place for security incidents.
  4. Secure Operation and Maintenance: guidance on running AI systems securely over time, keeping them up to date with the latest security patches, continuing to monitor for security incidents, and protecting the live environment from unauthorised access.
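To make the secure-development point concrete, the input-validation and output-encoding practices mentioned above can be sketched in a few lines of Python. This is a minimal illustration, not part of the NCSC guidance itself; the function names, length limit and web-rendering scenario are all assumptions for the example:

```python
import html
import re

# Assumed limit for this sketch; a real system would tune this per use case.
MAX_PROMPT_LENGTH = 2000

def validate_prompt(raw: str) -> str:
    """Input validation: reject empty, oversized or malformed input
    before it ever reaches the AI model."""
    if not isinstance(raw, str) or not raw.strip():
        raise ValueError("prompt must be a non-empty string")
    if len(raw) > MAX_PROMPT_LENGTH:
        raise ValueError("prompt exceeds maximum length")
    # Strip ASCII control characters (keeping newline and tab).
    return re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]", "", raw)

def encode_output(model_text: str) -> str:
    """Output encoding: HTML-escape model output before rendering it
    in a web page, so it cannot be interpreted as markup or script."""
    return html.escape(model_text)
```

The same pattern applies whatever the surrounding system: untrusted data is checked on the way in, and neutralised on the way out.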

How you could use the guidelines

When adopting a third-party system, it is important to ensure that the system does not expose your company to cyber vulnerabilities. By using the NCSC guidelines as a cybersecurity checklist, you can check that the system meets your organisation's cybersecurity requirements.

So if you are looking to carry out a deep dive on the cyber security of a potential new AI system, you could use the guidelines as a prompt. In other words, you could either conduct the analysis yourself (although limited information about the tool is likely to be publicly available), or turn the guidance into a questionnaire for the supplier (or use it to update your existing cybersecurity questionnaire).

To summarise

In conclusion, the NCSC guidelines for secure AI system development provide a comprehensive list of considerations and mitigations that companies can use as a cybersecurity checklist when adopting a third-party AI system, helping you to assess whether the system meets your organisation's cybersecurity requirements.

Get in touch

If you have any questions or concerns about AI adoption, do get in touch with one of our team here.  
