Regulating algorithmic governance: a legal vacuum in policy-making and public accountability

Publisher

Postgraduate Unit, Faculty of Arts and Culture, South Eastern University of Sri Lanka.

Abstract

In recent years, governments around the world have integrated algorithmic systems into core governance functions such as welfare eligibility, immigration control, predictive policing, and credit assessment. While these innovations promise efficiency and consistency, their unregulated deployment raises serious legal and ethical concerns. In countries with underdeveloped digital laws, especially across the Global South, this shift is occurring without sufficient legal frameworks to protect individual rights or ensure transparency in decision-making. This paper critically explores the legal vacuum surrounding algorithmic governance. It argues that these systems, often regarded as objective tools, can in fact embed and amplify societal biases, disproportionately disadvantaging marginalized communities. Unlike human decision-makers, algorithms operate with limited transparency, making it difficult for affected individuals to understand, contest, or appeal decisions. The absence of explicit legal protections, such as the right to explanation, access to due process, and algorithmic accountability, raises fundamental questions about justice, fairness, and democratic oversight in digital public policy. To address this, the study employs a comparative legal analysis of selected jurisdictions, including the European Union, the United States, and India, to examine how existing laws respond to the governance challenges posed by artificial intelligence. It explores concepts such as data justice, algorithmic transparency, and human oversight, identifying gaps and opportunities in current regulatory frameworks. The paper proposes a comprehensive legal framework for regulating algorithmic governance, grounded in constitutional principles and administrative justice. Key recommendations include establishing algorithmic review boards, mandating public audits of AI systems, and enforcing legal standards for fairness, explainability, and accountability. By connecting legal theory with real-world digital policy, this research highlights the urgent need for legal safeguards that ensure technology serves all citizens equitably, not just the digitally privileged.

Citation

Two-Day Multi-Disciplinary International Conference - Book of Abstracts on "Digital Inequality and Social Stratification" - 2025 (Hybrid Mode), 20th-21st 2025. Postgraduate Unit, Faculty of Arts and Culture, South Eastern University of Sri Lanka. p. 94.
