By Sir Ifeanyi Ilukwe, LL.M
ABSTRACT
The rapid advancement and deployment of Artificial Intelligence (AI) technologies have exposed fundamental gaps within existing legal frameworks, particularly in developing jurisdictions such as Nigeria. This article critically evaluates the adequacy of Nigeria’s current statutory regime—including the Nigeria Data Protection Act 2023, the Copyright Act 2022, and the Cybercrimes (Prohibition, Prevention, etc.) Act 2015—in addressing the legal and ethical challenges posed by increasingly autonomous and data-driven systems. It argues that Nigeria’s legal framework remains largely reactive and conceptually strained, particularly in relation to liability, algorithmic bias, intellectual property, and criminal accountability. Drawing comparative insights from Canada’s evolving regulatory approach—most notably the proposed Artificial Intelligence and Data Act (AIDA), Ontario’s Enhancing Digital Security and Trust Act, and Quebec’s Law 25—the article highlights the importance of a structured, risk-based governance model grounded in human rights principles. It contends that Nigeria’s failure to adopt such an approach undermines both legal certainty and public trust. The article concludes by advocating for a deliberate transition towards proactive AI governance in Nigeria, anchored in transparency, accountability, fairness, and institutional oversight.
Keywords: Artificial Intelligence; Nigerian Law; Data Protection; Liability; Algorithmic Bias.
1. INTRODUCTION
Artificial Intelligence (AI) has moved decisively from theoretical abstraction into the realm of practical infrastructure, shaping decision-making processes across finance, healthcare, employment, security, and governance. Across the world, governments and private institutions increasingly rely on machine learning systems to automate decisions that were traditionally made by humans. In Nigeria, the adoption of AI technologies is accelerating within banking institutions, telecommunications, digital identity management, e-commerce, and law enforcement operations. Yet, despite this rapid technological transformation, the Nigerian legal system has not evolved at a commensurate pace. Existing legal frameworks were neither designed nor structured to accommodate systems capable of autonomous learning, predictive analytics, and independent decision-making.
The challenge posed by AI is not merely technological; it is fundamentally jurisprudential. Traditional legal doctrines are grounded in assumptions of human intent, foreseeability, causation, and accountability. These assumptions become increasingly difficult to sustain where decisions are generated or heavily influenced by opaque algorithmic systems. Consequently, regulatory responses remain largely reactive, addressing issues only after harm has occurred rather than proactively establishing safeguards against foreseeable risks.
Scholars have increasingly argued that legal systems must move beyond conventional liability frameworks and adopt more flexible governance models capable of responding to technological complexity. Contemporary discourse surrounding AI regulation now centres on accountability, transparency, explainability, and human rights protections. The European Union, Canada, and several American states have already begun implementing structured regulatory systems that classify AI technologies according to levels of risk. Nigeria, however, remains dependent on fragmented statutory provisions that were enacted before the widespread deployment of modern generative and predictive AI systems.
This article critically examines Nigeria’s preparedness to regulate AI within its existing legal framework. It evaluates the adequacy of statutes such as the Nigeria Data Protection Act 2023, the Copyright Act 2022, and the Cybercrimes (Prohibition, Prevention, etc.) Act 2015. The article also adopts a comparative perspective by examining Canada’s proposed Artificial Intelligence and Data Act (AIDA), Quebec’s Law 25, and Ontario’s Enhancing Digital Security and Trust Act. It is argued that Nigeria must move beyond reactive legal adaptation and embrace a proactive, risk-based regulatory approach grounded in human rights principles, transparency, institutional accountability, and democratic oversight.
2. Conceptual Challenges in AI Regulation
The regulation of Artificial Intelligence presents significant conceptual and doctrinal difficulties for modern legal systems. Unlike traditional technologies, AI systems possess adaptive capabilities that enable them to generate outputs beyond direct human programming. This creates uncertainty regarding accountability, foreseeability, and legal responsibility. The subsections below examine some of the major conceptual difficulties confronting Nigerian law in the regulation of AI systems.
2.1 Legal Personhood and the Nature of AI Systems
Under Nigerian law, AI systems are presently treated as tools or property, devoid of independent legal personality. This approach reflects the dominant global legal position, which continues to reserve legal personality for natural persons, corporations, and other recognised legal entities. However, modern AI systems increasingly display characteristics traditionally associated with autonomous decision-making. Through machine learning processes, AI systems may generate outputs that were neither expressly programmed nor fully anticipated by their developers.
Some scholars have argued that highly autonomous AI systems should eventually be granted a form of limited legal personality similar to corporate entities. Others strongly oppose this position, arguing that extending legal personality to machines risks diluting human accountability and undermining existing legal doctrines. The latter position appears more persuasive. AI systems, irrespective of sophistication, remain incapable of moral agency or consciousness. Nevertheless, the continued classification of AI systems merely as passive instruments fails to capture the complexity of modern AI operations.
Rather than extending legal personality to AI, a more practical regulatory approach would involve recognising varying degrees of autonomy and assigning corresponding levels of legal responsibility to developers, deployers, operators, and institutions exercising control over such systems.
2.2 Mens Rea and Criminal Responsibility
The doctrine of mens rea remains central to criminal liability within both Nigerian and common law jurisprudence. Criminal responsibility generally requires proof of intent, recklessness, knowledge, or negligence. AI systems, lacking consciousness and subjective awareness, cannot independently satisfy this requirement.
As a result, criminal liability arising from AI systems must necessarily be attributed to human actors responsible for the design, deployment, supervision, or misuse of such technologies. However, this attribution becomes complicated where AI systems operate through distributed decision-making structures involving software developers, data providers, corporate institutions, and end users.

An opposing viewpoint suggests that existing criminal law doctrines are sufficiently flexible to address AI-related harms through principles of vicarious liability and corporate responsibility. While this argument has some merit, Nigerian law presently provides little guidance regarding how responsibility should be allocated where harm results from autonomous algorithmic processes.
The absence of statutory clarity creates uncertainty not only for courts but also for businesses and institutions increasingly relying on automated systems. Without clear accountability frameworks, victims may face significant obstacles in seeking legal remedies.
2.3 The Black Box Problem and Causation
One of the defining features of modern AI systems is their opacity. Many machine learning models function as “black boxes,” meaning that the reasoning process through which decisions are generated cannot easily be understood, interpreted, or reconstructed. This phenomenon poses a direct challenge to traditional legal principles of causation and foreseeability.
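The practical force of this opacity can be demonstrated with a brief illustration. The following sketch (written in Python using the widely available scikit-learn library; the lending scenario and all data are synthetic and hypothetical) trains a small ensemble model and then shows that a single automated decision rests on a vote across hundreds of trees and thousands of learned thresholds, rather than on any rule a court or litigant could readily examine:

```python
# A minimal, purely illustrative sketch of why a trained model's individual
# decisions resist reconstruction. The scenario is hypothetical.
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "loan applicant" data: 1,000 applicants, 20 numeric features.
X = rng.normal(size=(1000, 20))
# A deliberately tangled target: outcomes depend on non-linear interactions
# between features rather than on any single, stateable rule.
y = ((X[:, 0] * X[:, 3] + np.sin(X[:, 7]) - X[:, 12] ** 2) > 0).astype(int)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

applicant = rng.normal(size=(1, 20))
decision = model.predict(applicant)[0]

# The "reasoning" behind this one decision is distributed across hundreds
# of trees: each casts a vote, and the final output reflects thousands of
# data-derived thresholds rather than a single examinable rule.
votes = [tree.predict(applicant)[0] for tree in model.estimators_]
print(f"Decision: {'approve' if decision else 'deny'}")
print(f"Trees voting to approve: {sum(votes):.0f} of {len(votes)}")
total_nodes = sum(tree.tree_.node_count for tree in model.estimators_)
print(f"Learned decision nodes underlying this single output: {total_nodes}")
```

Even post-hoc explanation tools applied to such a model yield approximations of its behaviour rather than the actual chain of reasoning, which is precisely the evidentiary difficulty described above.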
The classical negligence framework articulated in Donoghue v Stevenson assumes a relatively identifiable causal relationship between conduct and harm. AI systems disrupt this assumption. Where algorithmic decision-making processes cannot be explained, courts may struggle to determine whether a particular actor acted negligently or whether harm was reasonably foreseeable.
Critics of stringent explainability requirements argue that excessive regulation may discourage innovation and undermine technological competitiveness. However, transparency remains essential where AI systems significantly affect employment, healthcare, policing, or financial access. The inability to explain how decisions are made undermines procedural fairness and weakens public trust.
Nigeria’s current legal framework does not adequately address the evidentiary and procedural complications arising from opaque AI systems. This represents a significant doctrinal gap that requires legislative intervention.
3. Data Protection and AI Governance
The Nigeria Data Protection Act 2023 represents a significant development in Nigeria’s regulatory approach to digital technologies and personal information management. The Act establishes principles such as transparency, lawful processing, data minimisation, purpose limitation, and safeguards against certain forms of automated decision-making. These provisions are highly relevant within the context of AI governance because contemporary AI systems depend heavily on the collection and analysis of large volumes of personal data.
However, the practical application of these principles to AI systems reveals substantial tensions. AI technologies often require extensive datasets for training, optimisation, and predictive analysis, thereby conflicting with data minimisation requirements. In addition, algorithmic opacity complicates compliance with transparency obligations because individuals affected by automated decisions may not fully understand how such decisions were reached.
A further concern relates to consent. Critics have argued that meaningful consent becomes increasingly difficult where individuals cannot realistically comprehend the scale or complexity of AI-driven data processing. This creates a significant imbalance between corporate technological power and individual informational autonomy.
Comparative developments in Canada provide useful insight. Quebec’s Law 25 imposes obligations relating to automated decision-making, including requirements for explainability, transparency, and opportunities for human intervention. Similarly, Canada’s proposed Artificial Intelligence and Data Act (AIDA) introduces a risk-based framework for regulating high-impact AI systems and imposes obligations regarding harm mitigation and accountability.
Nigeria’s framework, while commendable, lacks the institutional specificity and enforcement mechanisms necessary to address these challenges comprehensively. Regulatory agencies such as the Nigeria Data Protection Commission (NDPC) and the National Information Technology Development Agency (NITDA) require stronger technical capacity and clearer statutory mandates.
4. Intellectual Property in the AI Era
The emergence of generative AI systems has created substantial uncertainty within intellectual property law. The Copyright Act 2022 ties authorship and ownership primarily to human creators. Consequently, where literary, artistic, or musical works are generated autonomously by AI systems, it is unclear whether copyright protection subsists and who may claim ownership.
One perspective suggests that copyright should remain exclusively tied to human creativity because intellectual property rights are intended to reward human labour and ingenuity. Another perspective argues that denying protection to AI-generated works may discourage investment in innovative technologies and create commercial uncertainty.
Canadian jurisprudence illustrates the complexity of these issues. In Toronto Star Newspapers Ltd v OpenAI Inc, the court examined allegations concerning the use of copyrighted materials in AI training datasets. Similarly, Doan v Clearview AI Inc addressed the unauthorised harvesting of publicly available images for facial recognition systems.
Nigeria presently lacks clear statutory guidance regarding AI-generated works, data scraping, or the use of copyrighted materials in machine learning processes. Without reform, courts may struggle to apply existing copyright principles to emerging technological realities.
5. Liability and Tort Law
Traditional negligence frameworks require proof of duty, breach, causation, and damage. AI systems complicate each of these elements. The decentralised and adaptive nature of AI technologies makes it difficult to identify a clear duty of care or determine whether harm was reasonably foreseeable.
Some scholars maintain that existing tort principles are sufficiently adaptable to address AI-related harms through established doctrines of negligence and product liability. However, others argue that conventional tort law is ill-equipped to address systems characterised by machine autonomy, evolving datasets, and distributed decision-making structures.
A more effective regulatory approach would involve layered liability, assigning responsibility based on the degree of control exercised by developers, deployers, data providers, and institutional users. Canada’s proposed AIDA reflects elements of this model by imposing stricter obligations on high-impact systems. Nigeria currently lacks a comparable statutory framework.
Without reform, victims harmed by AI systems may encounter serious evidentiary barriers in establishing liability, particularly where decision-making processes remain opaque.
6. Algorithmic Bias and Human Rights
AI systems may reproduce or amplify biases embedded within training datasets, thereby generating discriminatory outcomes in employment, healthcare, lending, policing, and immigration processes. Algorithmic discrimination may occur even where developers possess no discriminatory intent because machine learning systems often replicate historical inequalities reflected in underlying data.
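The mechanism by which bias survives the removal of protected attributes can be made concrete. The following sketch (Python with scikit-learn; the approval scenario, data, and figures are entirely synthetic and hypothetical) trains a model on historically biased outcomes without ever supplying the protected attribute, then audits the resulting selection rates across groups:

```python
# An illustrative sketch of proxy discrimination on synthetic data: the
# model never sees the protected attribute, yet reproduces historical bias.
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

# Protected attribute (group 0 vs group 1) -- NOT given to the model.
group = rng.integers(0, 2, size=n)

# A facially neutral feature (e.g., postcode or employment history) that
# correlates with group membership for historical reasons.
proxy = group + rng.normal(scale=0.8, size=n)
skill = rng.normal(size=n)  # a genuinely relevant feature

# Historical decisions were biased: group 1 was approved less often even
# at the same skill level. The model is trained on these past outcomes.
past_approval = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, proxy])  # protected attribute excluded
model = LogisticRegression().fit(X, past_approval)
predicted = model.predict(X)

# Bias audit: compare selection rates across groups (disparate impact).
rate_0 = predicted[group == 0].mean()
rate_1 = predicted[group == 1].mean()
print(f"Approval rate, group 0: {rate_0:.2%}")
print(f"Approval rate, group 1: {rate_1:.2%}")
print(f"Disparate impact ratio: {rate_1 / rate_0:.2f} (0.80 is a common audit threshold)")
```

The disparity emerges solely from a facially neutral feature correlated with group membership, which illustrates why statutory bias auditing, rather than proof of discriminatory intent, is the appropriate regulatory safeguard.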
Nigeria’s Constitution prohibits discrimination under section 42. However, these constitutional protections have not yet been meaningfully extended to algorithmic decision-making systems. There is presently no statutory requirement for bias auditing, fairness assessments, or mandatory human oversight in AI deployment.
An opposing viewpoint suggests that excessive regulation of algorithmic systems may hinder innovation and economic development. Nevertheless, the absence of safeguards risks entrenching systemic inequalities and undermining public confidence in digital governance.

Canada’s regulatory approach places stronger emphasis on proactive bias mitigation and human rights protection. Such safeguards are necessary to ensure that technological advancement does not erode constitutional values and procedural fairness.
7. AI and Criminal Law
The Cybercrimes (Prohibition, Prevention, etc.) Act 2015 addresses certain technology-enabled offences, including fraud, identity theft, and unauthorised access to computer systems. However, the Act is silent on the broader questions of criminal accountability raised by autonomous AI systems.
AI systems themselves cannot be held criminally liable because they lack intent, consciousness, and moral agency. Responsibility must therefore rest with human actors involved in the design, deployment, or misuse of such technologies. However, this approach becomes increasingly complicated where autonomous systems operate with minimal human supervision.
The growing use of AI within law enforcement and criminal justice systems also raises concerns regarding procedural fairness, due process, and discriminatory profiling. In R v Natomagan, Canadian courts recognised the risks of algorithmic bias within criminal justice decision-making.
Nigeria must ensure that the adoption of AI technologies within policing and public administration is accompanied by strong safeguards relating to transparency, accountability, and judicial oversight.
8. Comparative Lessons from Canada, Quebec, and Ontario
Canada’s evolving regulatory framework provides valuable guidance for jurisdictions seeking to balance technological innovation with human rights protections. The proposed Artificial Intelligence and Data Act (AIDA) adopts a structured risk-based approach that distinguishes between varying categories of AI systems according to their potential societal impact. High-impact systems are subject to enhanced obligations relating to risk assessment, transparency, monitoring, and accountability.
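The logic of such a framework can be sketched in simplified form. The following illustration (Python; the tiers, duties, and example systems are hypothetical constructs intended only to convey the concept of risk-based regulation, and are not AIDA’s actual statutory categories or obligations) shows how compliance duties might scale with an assessed level of risk:

```python
# A purely conceptual sketch of a risk-tiered obligations registry. The
# tiers, duty lists, and example systems below are hypothetical; they do
# not reproduce AIDA's categories or statutory text.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    MINIMAL = 1
    LIMITED = 2
    HIGH_IMPACT = 3


# Hypothetical baseline duties, scaling with assessed risk.
OBLIGATIONS = {
    RiskTier.MINIMAL: ["voluntary code of conduct"],
    RiskTier.LIMITED: ["transparency notice to affected persons"],
    RiskTier.HIGH_IMPACT: [
        "documented risk assessment before deployment",
        "ongoing monitoring and incident reporting",
        "human oversight of significant decisions",
        "published accountability framework",
    ],
}


@dataclass
class AISystem:
    name: str
    affects_rights_or_livelihood: bool
    scale_of_deployment: str  # "pilot", "sectoral", or "national"


def assess_tier(system: AISystem) -> RiskTier:
    """Assign a tier from impact on rights and scale of deployment."""
    if system.affects_rights_or_livelihood and system.scale_of_deployment != "pilot":
        return RiskTier.HIGH_IMPACT
    if system.affects_rights_or_livelihood:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


# Hypothetical examples only.
for s in [
    AISystem("spam filter", False, "national"),
    AISystem("credit-scoring model", True, "sectoral"),
]:
    tier = assess_tier(s)
    print(f"{s.name}: {tier.name}")
    for duty in OBLIGATIONS[tier]:
        print(f"  - {duty}")
```

The value of such a structure lies in the principle it encodes: obligations calibrated to the potential for harm rather than applied uniformly across all systems.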
Quebec’s Law 25 goes further by mandating explainability, transparency, and opportunities for human intervention in automated decision-making systems. Ontario’s Enhancing Digital Security and Trust Act similarly reinforces accountability obligations within the public sector.
These comparative frameworks demonstrate that effective AI regulation need not stifle innovation. Rather, carefully designed safeguards may strengthen public trust, encourage responsible technological development, and provide greater legal certainty.
Nigeria can draw significant lessons from these jurisdictions by adopting AI-specific legislation grounded in constitutional values, democratic oversight, and human rights principles.
9. The Path Forward for Nigeria
Nigeria must adopt a proactive and structured approach to AI regulation in order to address emerging technological risks effectively. Several reforms are necessary.
First, Nigeria should enact AI-specific legislation establishing clear obligations regarding transparency, accountability, explainability, and risk management.
Second, regulatory institutions such as the Nigeria Data Protection Commission and the National Information Technology Development Agency require stronger technical expertise, funding, and enforcement powers.
Third, enforceable ethical standards should be developed to address algorithmic fairness, bias mitigation, and human oversight.
Fourth, judicial officers, lawyers, and policymakers require specialised training to address the increasingly technical nature of AI-related disputes.
Finally, Nigeria should encourage collaboration between government institutions, private sector actors, universities, civil society organisations, and international partners in developing comprehensive governance frameworks.
10. CONCLUSION
Artificial Intelligence presents both unprecedented opportunities and substantial legal risks. Nigeria’s existing legal framework remains insufficient to address the doctrinal, ethical, and regulatory challenges posed by increasingly autonomous technologies.
A reactive approach to regulation is no longer sustainable. Nigeria must adopt a proactive, rights-based governance framework capable of balancing innovation with accountability, transparency, fairness, and constitutional protections.
The comparative experiences of Canada and its provinces demonstrate that structured and human-centred AI regulation is both achievable and necessary. Nigeria must therefore act decisively to align its legal system with rapidly evolving technological realities.
Reference List
Barocas S and Nissenbaum H, ‘Big Data’s End Run Around Anonymity and Consent’ in Julia Lane and others (eds), Privacy, Big Data and the Public Good (Cambridge University Press 2014).
Bayern S, ‘The Implications of Modern Business-Entity Law for the Regulation of Autonomous Systems’ (2015) 19 Stanford Technology Law Review 93.
Bryson J, Diamantis M and Grant T, ‘Of, For, and By the People: The Legal Lacuna of Synthetic Persons’ (2017) 25 Artificial Intelligence and Law 273.
Calo R, ‘Artificial Intelligence Policy: A Primer and Roadmap’ (2017) 51 UC Davis Law Review 399.
Coeckelbergh M, AI Ethics (MIT Press 2020).
Constitution of the Federal Republic of Nigeria 1999 (as amended).
Copyright Act 2022.
Cybercrimes (Prohibition, Prevention, etc.) Act 2015.
Doan v Clearview AI Inc 2025 FCA 133.
Donoghue v Stevenson [1932] AC 562 (HL).
European Parliament and Council Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence [2024] OJ L1689.
Ginsburg J and Budiardjo LA, ‘Authors and Machines’ (2019) 34 Berkeley Technology Law Journal 343.
Guadamuz A, ‘Artificial Intelligence and Copyright’ (2017) 70 Intellectual Property Quarterly 1.
Hallevy G, Liability for Crimes Involving Artificial Intelligence Systems (Springer 2015).
Hildebrandt M, Smart Technologies and the End(s) of Law (Edward Elgar 2015).
Mittelstadt B and others, ‘The Ethics of Algorithms: Mapping the Debate’ (2016) 3 Big Data & Society 1.
Nigeria Data Protection Act 2023.
O’Neil C, Weapons of Math Destruction (Penguin Books 2016).
Pasquale F, The Black Box Society (Harvard University Press 2015).
R v Natomagan 2022 ABCA 48.
R v Prince (1875) LR 2 CCR 154.
Scherer MU, ‘Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies’ (2016) 29 Harvard Journal of Law & Technology 353.
Thierer A, ‘Artificial Intelligence and Public Policy’ (Mercatus Center 2018).
Toronto Star Newspapers Ltd v OpenAI Inc 2025 ONSC 6217.