ASSESSING PRIVACY RISKS IN RELATION TO GENERATIVE AI TOOLS

An investigation by the Information Commissioner’s Office (ICO) has provisionally found that the businesses behind the popular social media platform Snapchat (‘Snap’) have not properly assessed the privacy risks associated with the use of ‘My AI’ – Snap’s generative AI chatbot.

Snap first released My AI in the UK in February 2023 to premium subscribers, rolling it out to the remainder of UK Snapchat users in April 2023. The chatbot (which uses OpenAI’s advanced language model technology) is described by Snap as a ‘personal chatbot sidekick’ – it is designed to be a conversational AI that can discuss and learn about a user’s interests and make recommendations relating to the real world. Snap has over 21 million users in the UK, a large proportion of whom are teenagers.

The ICO has provisionally found that Snap’s risk assessment prior to launching My AI did not sufficiently evaluate the data protection risks posed by the AI technology, particularly in relation to children and their privacy. Whilst the details of the preliminary notice have not been made available, John Edwards, the Information Commissioner, has stated that ‘the provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching “My AI”’.

It is important to note that this preliminary enforcement notice does not mean that Snap has breached data protection law, and the ICO has given Snap an opportunity to respond before it reaches a final decision.

If the ICO issues a final enforcement notice, Snap may be required to stop processing personal data in connection with My AI – in practice, this would mean that Snap could not offer My AI to UK Snapchat users until it complies with the notice and carries out an adequate risk assessment.

What does this mean for businesses?

The preliminary enforcement notice follows the ICO’s reminder to businesses in June 2023 that they must consider the data protection implications of AI technologies during development, or before integrating such technologies into their operations. It shows that the ICO is serious about taking action against developers and companies that use AI without sufficiently considering the data protection risks it poses.

The ICO has recommended that developers and companies using AI consider the following points when contemplating processing personal data using generative AI tools:

  • What lawful basis will you use to process personal data? When processing personal data, you must have a lawful basis for doing so, and you will need to determine which one is most appropriate for your organisation in this context (e.g. legitimate interests or consent).
  • What is your role in relation to the personal data being processed? Are you a controller, a joint controller or a processor?
  • Do you need to prepare a Data Protection Impact Assessment (DPIA)? The ICO requires you to carry out a DPIA if you plan to use ‘innovative technology’ to process personal data.
  • How will you ensure that your use of AI is transparent? Personal data must be processed in a transparent manner, which means making individuals aware that you are using AI to process their personal data.
  • What measures do you have in place to protect against security risks? You will need to consider how you will deal with data breaches, as well as how you will mitigate the risks of data poisoning and other attacks.
  • How will you minimise the personal data processed? One of the principles of the UK GDPR is to ensure that personal data is ‘adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed’ – this principle needs to be embedded in your use and development of AI tools.
  • Have you considered how you will deal with individuals who are exercising their data protection rights? You will need to be able to comply with such requests in accordance with data protection law.
  • Will the AI tool make automated decisions about people or process personal data to evaluate individuals? UK data protection law restricts such activities to ensure that they are carried out responsibly – this may include ensuring that individuals can obtain human intervention in automated decision-making processes.

The above are just some of the points you will need to consider when developing an AI tool or choosing to incorporate one into your business. The legal landscape in this area is, at the moment, largely uncharted, but it is important to ensure that your actions comply with current legislation to avoid exposing your business to potential liability to customers and individuals, as well as to data protection regulators.

Here at ClaydenLaw we have been helping clients with the risk management process of integrating AI tools into their businesses. If you would like us to help you too, please contact us here to arrange a complimentary 30-minute AI Readiness Consultation.

This consultation will help you to identify the risk areas in implementing the proposed AI, work out the best way to proceed for your business, and see what groundwork is advisable before clicking ‘Go’ on your new AI tool.
