Many sectors, including the legal profession, are integrating artificial intelligence (AI) into the administration of justice. Views on the technology remain divided: on one hand, AI is seen as a transformative tool that can reduce case backlogs, enhance efficiency and eliminate bias; on the other, there are fears that it may undermine the human-centered nature of justice. These divergent views were recently at the center of discourse at an academic conference held at O.P. Jindal Global University (JGU), India, from 18 to 22 February 2026, where jurists and scholars from over 20 law schools that form part of the Law Schools Global League (LSGL) examined the implications of AI for legal systems worldwide, with particular attention to developments in the Global South and Africa. The Centre for Human Rights, Faculty of Law, University of Pretoria was represented by Michael Aboneka from the Freedom of Expression, Access to Information and Digital Rights Unit, who presented on the topic “AI and the evolution of African legal research”. The paper focused on Africa’s right to participate in the architecture of AI models.
Many scholars and experts across the globe agree that AI should primarily serve as an assistive tool rather than replace human judgment. Former Chief Justice of India Uday Umesh Lalit highlighted that AI can meaningfully enhance judicial efficiency by automating certain tasks, such as preparing case summaries, organising documents and identifying relevant legal precedents. He further pointed out that in India, where courts face a backlog estimated at over 55 million cases, AI can ease the situation by reducing the time judges and lawyers spend on legal research and case preparation. Be that as it may, it was emphasised that the final legal determination must remain in human hands, to ensure that ethical reasoning, contextual understanding and judicial discretion are not eroded.
Much as AI can transform legal processes, it also poses challenges. Notable is the “black box” problem: because AI systems are opaque, it is difficult for legal practitioners to understand how a system reached a particular recommendation or conclusion. This raises transparency and accountability challenges and may undermine fairness and due process. Another concern is “algorithmic bias”. Because most AI systems are trained on historical legal data, they may reproduce existing social inequalities embedded in past judgments. The effect is that these AI tools may unintentionally reinforce biases linked to gender, race, class or caste, potentially producing what some scholars describe as “technocratic justice” that lacks the compassion and discretion inherent in human decision-making.
One of the most pressing debates around AI in the African context concerns “coded exclusion”, where African languages are marginalised in training datasets and digital research tools. African presenters emphasised that most large language models (LLMs) are built primarily on Western and English-language data, which restricts their ability to interpret African languages such as Luganda or isiZulu. This linguistic imbalance produces critical “blind spots”, undermining AI’s effectiveness in engaging with local discourse and legal contexts.
To address these constraints, there is a need to focus on linguistic sovereignty: ensuring that AI technologies are capable of recognising and interpreting indigenous languages and cultural nuances. Without this, current AI systems built on Western data are likely to continue misinterpreting local speech patterns, political metaphors and cultural expressions, which can lead to the gagging or suppression of online speech and to a failure to detect harmful content in indigenous languages.
Ultimately, the future of legal practice will depend not on replacing lawyers with AI but on adapting legal education and professional skills. Lawyers and judges must develop technological literacy while remaining responsible for ethical oversight. Africa must participate in, and indeed be at the forefront of, the development of AI models that take into account African linguistic and cultural realities.