There has been a huge surge in interest in AI tools in recent months, and lots of useful AI tools are now coming to market. Your organisation may therefore be contemplating incorporating some AI tools into its day-to-day practices in order to experience the benefits that AI can bring.

This could either be for internal use or for incorporation into products which your organisation then makes available to its customers. It is important, however, to ensure that AI on-boarding is done in an ethical and responsible manner.

If not enough planning goes into introducing AI tools into an organisation, there is a risk of extensive reputational damage if things go wrong, especially given that AI is such a hot media topic at the moment. The risks at the implementation stage range from feeding an AI tool a flawed data set, from which it then generates flawed results, to failing to secure an AI tool, leaving it vulnerable to cyber-attack.

In addition, whilst, at the time of writing, specific AI legislation is still in the drafting stages, there are other laws and regulations that affect the use of AI technologies, for example, data protection law and equalities legislation. If companies do not introduce AI with an eye on applicable laws and regulations, they risk fines and enforcement action, as well as reputational damage as mentioned above.

Tips for using AI responsibly

So how do you put yourself in the best possible position to incorporate these technologies? The tips below are not exhaustive, but they should help to focus your mind on what your organisation needs to do.

  1. Align your approach with your organisation and document your AI technologies

    As an organisation, you will need to consider your values when it comes to AI, and update your governance policies to set out your organisational AI principles and objectives and how you are going to address the risks that your use of AI tools poses. You will also need to ensure that all the various stakeholders are on the same page when it comes to your approach to AI, including what actually constitutes ‘Artificial Intelligence’, as the concept does not have an established definition.

    It is helpful to create an inventory of the AI technologies you are already using in your organisation and consider what processes you have in place already in relation to these technologies and whether you can widen these processes to encompass other AI tools rather than having to recreate policies and procedures.

  2. Who is accountable?

    With each AI tool that is introduced into your organisation, you need to work out where the accountability for that tool sits. For example, accountability may sit with the compliance team, the information security team or perhaps with your data protection officer. You might even decide to take a blended approach, with several teams taking ownership of the tool. Accountability will look different for each organisation, as it will depend on the organisation’s structure, as well as what AI is being used and where it is being used. What is important is that the team(s) that have accountability are aware that they are accountable for that tool.

    Many commentators also consider that a pillar of responsible AI is ensuring that AI is deployed in a human-centred manner. This means that if something goes wrong, organisations don’t blame the machine but instead shoulder the risk and hold themselves accountable. This should also be considered in your contracts with AI sellers: where does the risk lie if the AI doesn’t work as expected?

  3. Do your research

    You need to be clear on what you are buying. Ask questions of the developers – for example, is the tool secure from cyber-attack, how have they addressed concerns of bias/fairness, and have they designed it keeping data protection rules and regulations in mind? Alongside these relatively generic questions, you will also need to work out whether the proposed tool will solve the issue you are trying to address, and whether it will tie in with your organisation’s values. You don’t want to invest in an AI tool just to discover that it doesn’t do what you thought it would, or that it leaves your organisation vulnerable to cyber-attack.

  4. Be clear on your legal obligations

    Depending on what AI tool you would like to integrate, you are going to need to ensure that your use of it is in compliance with any applicable legislation and regulations.

    For example – data protection legislation. Will you be processing personal data using this new AI tool? Will the AI tool be making automated decisions about people, or processing personal data to evaluate individuals? UK data protection law contains restrictions on such activities to ensure that they are carried out in a responsible manner and that individuals are properly informed as to how you are using their personal data. In certain situations data protection law also imposes obligations on organisations to ensure that individuals can obtain human intervention in automated decision-making processes.

    In addition, you need to be careful that you are permitted to use the AI in the way you intend to. This will be set out in your licence to use the AI and you should review this carefully before agreeing to it not only to ensure that your intended use case is covered but also to check that it doesn’t contain any other unwanted restrictions on your use of the AI. In addition, if the AI tool is generating content for you, you will need to clarify who owns the intellectual property in that content and whether you have the right to use it in the way you want to.

  5. Training

    Before rolling your new AI tool out in full, you will need to ensure that all relevant personnel have been given appropriate training. Training should cover things like how to use the AI properly, when it should be used and, most importantly, what should happen if things go wrong. Even if the AI works exactly as intended from day one, you will not get the most out of your investment if those tasked with using it on a day-to-day basis have not been trained on what they should be doing.

How can we help?

Whilst integrating AI into your business can bring exciting opportunities, it is important to ensure that AI is introduced and used responsibly within your organisation – at ClaydenLaw we can provide valuable assistance with this. If you have found an AI tool that you would like to integrate into your business and you aren’t sure where the risks lie or how to mitigate them, or if you would like some advice on what legislation may affect the use of AI in your organisation, call Louisa Taylor now on 01865 953545 or email louisa@claydenlaw.co.uk to arrange a complimentary 30-minute AI Readiness consultation.

This consultation will help you to identify the risk areas in implementing the proposed AI, work out the best way to proceed for your business, and see what groundwork is advisable before clicking ‘Go’ on your new AI tool.
