OPINION
By Joel Basoga
Artificial Intelligence (AI) is affecting various aspects of work and education, forcing us to rethink what it means to work and learn. Have you used ChatGPT or Gemini?
I had the privilege of delivering a presentation on the impact of AI in the workplace at the Labour Law Conference this month.
After the presentation, a member of the audience asked me, 'What if I have already run confidential data through ChatGPT? Can I delete that information?' Others were concerned about how their children were using AI for homework and were unsure whether it helped their intellectual growth.
AI has permeated every aspect of our lives, from voice assistants on mobile phones to large language models (LLMs) like ChatGPT, which individuals use to complete professional and personal tasks.
A 2024 Microsoft survey found that 75% of workers reported using AI in the workplace, with the most commonly used applications being ChatGPT, Google Gemini, and Microsoft Copilot.
Organisations have put different AI models to work. Banks, for instance, have used AI models to monitor account activity for suspicious patterns, enhancing the detection of fraud in the financial services industry.
In legal proceedings, AI has been used to transcribe hearings and conduct document review. In recruitment, several organisations have deployed AI to screen résumés and curricula vitae.
We have seen the rise of AI-powered research tools. For instance, Jus Mundi launched Jus AI, which aids legal research on matters concerning arbitration and international law.
Organisations have also designed AI tools trained on their internal data to generate draft documents and increase efficiency. In the United Kingdom, the Solicitors Regulation Authority (SRA) approved the first AI-driven law firm, Garfield.Law Limited, on May 6, 2025. The SRA notes that Garfield.Law offers an AI-powered litigation assistant that helps people recover unpaid debts, guiding them through the small claims court process up to trial.
As discussed above, AI is incorporated into many aspects of our professional and personal lives. It comes, however, with risks.
AI is only as good as the data on which it is trained, and it may fabricate information that does not exist. In Mata v Avianca Inc, No. 22-cv-1461 (PKC) (S.D.N.Y. 2023), a New York attorney cited court cases that did not exist and had been made up by AI.
Further, AI can be biased, depending on the information on which it has been trained. For example, Amazon scrapped an AI recruiting tool because it favoured male candidates over female candidates: the system had been trained on CVs submitted primarily by men, and this disadvantaged women applicants.
The use of AI must stay within the legal frameworks that govern protected or confidential data. Data protection and intellectual property laws restrict the exposure of personal or confidential data to AI systems, and organisations or individuals may owe duties of confidentiality to their clients under professional standards. These are all considerations to keep in mind when using AI.
While there are limitations and challenges, AI offers a unique opportunity to enhance productivity. Its brute processing power can greatly extend a user's capacity, freeing them to focus on tasks that demand their cognitive abilities. In arbitration and in many meetings, for instance, AI has been used to transcribe proceedings. In one of my meetings with a working group on AI, AI notetakers generated the minutes and shared them by email ten minutes after the meeting ended.
Rather than retreating into scepticism, we need to engage with AI by equipping ourselves with knowledge about it, not shying away from it. Boards are advised to undertake capacity building and training on AI; we must first understand its sheer power and potential. If organisations feel LLMs like ChatGPT pose a risk, they can design their own internal models, built with a full appreciation of the risks to their actual business.
We should focus on understanding the risks and overlapping challenges, specifically how we can use AI to achieve our goals or business objectives. For example:
(1) How do we use AI without breaching the duty of professional competence (whether as a lawyer, doctor, risk advisor, engineer, banker, or insurance practitioner, among others)?
(2) How do we comply with data privacy obligations?
(3) How do we preserve confidentiality?
(4) How can we prevent AI discrimination and hallucinations?
Organisations should also consult lawyers to understand how to engage ethically with AI models.
They should define the acceptable use of AI and its effect on employee assessment, designing AI policies that state whether AI use is permitted and set the parameters of its use within the organisation.
The writer is the Head of Technology at H&G Advocates and is a member of the International Bar Association Task Force on AI. jbasoga@handgadvocates.com