Bidisha Roy
DOI:
Abstract
Artificial Intelligence (AI) governance has traditionally focused on automated decision-making systems that classify, predict, and allocate resources. However, recent advances in Generative Artificial Intelligence (GenAI) and emotionally responsive AI challenge this decision-centric regulatory model. This article examines algorithmic accountability through a legal lens, focusing on systems that influence emotional behaviour and process intimate personal data without producing discrete, reviewable decisions. Grounded in India's data protection and constitutional privacy framework, the article demonstrates that consent-based regulation and existing accountability mechanisms are insufficient to address emotional and generative AI. Drawing on comparative insights from the US and the EU, it proposes an influence-based model of algorithmic accountability that acknowledges delegated influence, structural consent failure, and accountability gaps. In doing so, the article reframes algorithmic accountability as a tool for regulating influence rather than merely reviewing automated decisions. It concludes that human-centric AI governance in India must move beyond data-centric regulation to protect autonomy, dignity, and trust in an age of emotional artificial intelligence.
Keywords
Artificial Intelligence; Large Language Model; Affective Computing; Emotionally responsive AI; Generative AI; Deepfake; Algorithmic Accountability