Who in Indian Finance Is Liable?
Agentic AI — systems that do not just generate outputs but take sequences of autonomous actions to complete tasks — moved from technical concept to live deployment reality across global financial services in 2024 and 2025. The same shift is now underway in India. HDFC Bank, ICICI Bank, and a range of fintech lenders including Bajaj Finserv and Slice are using AI-driven systems that go beyond fraud scoring to actively approve or decline small-ticket credit, generate customer-facing communications, and in some cases execute compliance reporting with only minimal human intervention.
The capability argument for these deployments is sound. India processes roughly 14 billion UPI transactions every month. The volume of credit applications through digital lending platforms reached an estimated 80 million in FY2024. No human underwriting workforce can operate at this scale with the turnaround times customers expect. Automation is not optional — it is the only way the system functions at Indian volume.
The compliance argument is more complicated. As we argued in our analysis of why agentic AI represents a structural shift rather than a feature upgrade, the fundamental change is not speed or scale — it is accountability. When an AI agent executes a decision autonomously, the chain of responsibility that consumer protection law and financial regulation were built around becomes genuinely ambiguous.
What the Regulatory Gap Actually Looks Like
India's financial regulatory architecture assigns clear liability to regulated entities — banks, NBFCs, brokers, insurers — for the decisions they make and the products they offer. This works well when a human officer makes a lending decision. It works less clearly when an AI model trained by a third-party vendor, deployed on cloud infrastructure managed by a fourth party, and fine-tuned by the bank's data science team produces a credit decline that the customer believes is discriminatory.
RBI's guidelines on digital lending, issued in 2022 and supplemented in 2023, establish that regulated entities cannot outsource the credit decision itself: the decision must remain with the regulated bank or NBFC, not with its technology vendors. In practice, many lenders interpret the line between an AI recommendation and an AI decision permissively. The practical question — whether a loan officer who rubber-stamps 97% of AI recommendations constitutes genuine human oversight — has not been tested by RBI enforcement.
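That question is measurable from an institution's own decision logs. A minimal sketch, assuming a hypothetical log schema (the DecisionRecord fields are illustrative, not an RBI-prescribed format): compute how often reviewers actually diverge from the model, and how long they spend per case.

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    # Hypothetical log schema; field names are illustrative.
    application_id: str
    ai_recommendation: str   # "approve" or "decline"
    human_decision: str      # final decision recorded by the loan officer
    review_seconds: float    # time the officer spent on the case

def oversight_metrics(records: list[DecisionRecord]) -> dict:
    """Summarise how often human reviewers actually diverge from the model."""
    total = len(records)
    overrides = sum(1 for r in records if r.human_decision != r.ai_recommendation)
    median_review = sorted(r.review_seconds for r in records)[total // 2]
    return {
        "override_rate": overrides / total,
        "median_review_seconds": median_review,
    }

# A 3% override rate paired with single-digit median review seconds is the
# statistical signature of rubber-stamping, and it is straightforward to audit.
```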
SEBI's 2024 circular on algorithmic trading extended some oversight requirements to AI-assisted order generation. It stopped short of addressing autonomous execution agents. The Digital Personal Data Protection Act, passed in 2023, includes provisions relevant to automated decision-making but its implementing rules — which would clarify the right to contest an automated decision — had not been finalised as of Q1 2026.
Three Failure Modes Indian Regulators Have Not Priced In
The first is model bias at scale. AI credit models trained on historical lending data will replicate historical lending patterns — including patterns that reflect decades of unequal access to formal credit. A model that declines applicants from certain geographies or certain occupational categories at higher rates than a comparable human underwriter would is not obviously broken; it may be optimising accurately on the data it has seen. The harm is real but the cause is diffuse, and existing consumer redress mechanisms in India are not designed to handle statistical discrimination claims.
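One screening heuristic for this failure mode is the adverse impact ratio: each group's approval rate divided by the best-approved group's rate. A minimal sketch follows; the 0.8 threshold comes from the US "four-fifths rule" in employment-discrimination practice, not from any Indian regulation, and the column names and data are invented for the example.

```python
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str, approved_col: str) -> pd.Series:
    """Each group's approval rate relative to the highest-approving group.

    Values below roughly 0.8 are the classic four-fifths red flag; a
    screening heuristic, not a legal standard in India.
    """
    rates = df.groupby(group_col)[approved_col].mean()
    return rates / rates.max()

# Hypothetical decision log with invented columns and values.
decisions = pd.DataFrame({
    "district_tier": ["metro", "metro", "tier2", "tier2", "tier3", "tier3"],
    "approved":      [1,       1,       1,       0,       0,       0],
})
print(adverse_impact_ratio(decisions, "district_tier", "approved"))
# metro 1.00, tier2 0.50, tier3 0.00: a model can be well calibrated on its
# training data and still produce exactly this pattern at portfolio scale.
```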
The second is cascading automation failure. In financial systems where AI agents are communicating with and triggering other AI agents — a credit assessment agent feeding into a fraud detection agent feeding into a payment routing agent — a single model error can propagate through a transaction chain before any human has the opportunity to intervene. The 2010 Flash Crash in US equity markets, where algorithmic systems amplified a minor liquidity disruption into a trillion-dollar intraday collapse, was a preview of what interconnected autonomous systems do under stress. India's financial infrastructure is more interconnected than it was five years ago and becoming more so.
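One mitigation, sketched below under illustrative assumptions, is a circuit breaker between agent stages: each agent marks transactions it is uncertain about, and the chain halts for human review when the recent flag rate spikes, rather than letting one model's errors cascade downstream. The class names, thresholds, and stage interface here are hypothetical, not any vendor's API.

```python
class CircuitBreakerTripped(Exception):
    """Raised when the chain's recent flag rate exceeds its threshold."""

class AgentChain:
    def __init__(self, stages, flag_threshold=0.05, window=1000, min_sample=100):
        self.stages = stages              # ordered list of (name, callable) pairs
        self.flag_threshold = flag_threshold
        self.window = window              # how many recent transactions to watch
        self.min_sample = min_sample      # don't trip on tiny samples
        self.flags = []                   # rolling per-transaction flag history

    def run(self, txn: dict) -> dict:
        # Each stage transforms the transaction and may mark it "flagged"
        # instead of silently passing its uncertainty downstream.
        for _name, stage in self.stages:
            txn = stage(txn)
        self.flags.append(bool(txn.get("flagged", False)))

        recent = self.flags[-self.window:]
        rate = sum(recent) / len(recent)
        # Halt the whole chain and force human review when flags spike,
        # rather than letting a single model error propagate end to end.
        if len(recent) >= self.min_sample and rate > self.flag_threshold:
            raise CircuitBreakerTripped(
                f"flag rate {rate:.1%} over last {len(recent)} transactions; "
                f"routing to manual review"
            )
        return txn

# Usage with dummy stages; a real chain would wire credit assessment,
# fraud detection, and payment routing agents in sequence.
chain = AgentChain(stages=[("credit", lambda t: t), ("fraud", lambda t: t)])
chain.run({"amount": 1500, "flagged": False})
```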
The third is explainability under challenge. RBI's fair lending guidelines require that credit decisions be explainable to customers on request. Neural network models — the architecture underlying most production-grade credit AI — do not produce decisions that are straightforwardly explainable in human terms. Banks are deploying explainability wrappers, post-hoc rationalisation systems that generate plausible-sounding reasons for decisions the model itself cannot articulate. This is not explainability. It is the appearance of explainability, and it will not survive a serious regulatory audit.
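The gap between a wrapper and genuine explainability is measurable. A minimal sketch on synthetic data: train an opaque model, fit a simple surrogate to imitate its outputs (roughly what many post-hoc explainers do), and report fidelity, the rate at which the explainer agrees with the model actually making the decisions. Everything below uses standard scikit-learn components and invented data; it is an illustration, not any bank's pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a credit dataset; purely illustrative.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The opaque production model.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
model.fit(X_train, y_train)

# A post-hoc "explainability wrapper": a shallow tree trained to imitate
# the network's outputs, which is roughly what surrogate explainers do.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X_train, model.predict(X_train))

# Fidelity: how often the explanation-generating model agrees with the
# model actually making decisions. Anything well short of 100% means some
# customers receive reasons for a decision the real model never made.
fidelity = (surrogate.predict(X_test) == model.predict(X_test)).mean()
print(f"surrogate fidelity: {fidelity:.1%}")
```

An auditor asking for that single fidelity number is a far harder test than asking whether the bank can produce a reason string on request.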
The Counter-Argument — and Why It Does Not Settle the Issue
The standard response from the financial industry is that AI-assisted decisions are already more consistent and in some studies less biased than human ones — that a model applying the same criteria to every applicant is fairer than a loan officer who applies criteria inconsistently. This is partly true and partly a category error. Consistency and fairness are not the same thing. A model that consistently rejects all applications from a particular demographic cohort is perfectly consistent and structurally discriminatory. The relevant question is not whether AI is more consistent than humans — it is whether the outcomes are equitable and the process is contestable.
The financial institutions deploying these systems are not acting in bad faith. They are responding to genuine scale pressures with the best available tools. The problem is that the regulatory framework they are operating within was designed for a world where consequential decisions had human authors. Updating that framework is not an anti-technology position; it is the precondition for AI deployment in finance that can sustain public trust.
What the Next Twelve Months Will Establish
RBI's expected guidelines on AI governance in financial services — flagged in the 2024–25 annual report as a priority — will be the most consequential regulatory development in this space when they arrive. The key question they must answer is whether regulated entities bear full liability for AI-driven decisions regardless of whether those decisions were made by proprietary or third-party systems. The EU's AI Act, which came into force in 2024, takes the position that high-risk AI systems — including credit scoring — must meet explainability, human oversight, and accuracy standards, with liability sitting with the deployer. India's framework, when it arrives, will likely draw on this precedent while adapting for a domestic context where the scale of digital financial inclusion makes blanket restrictions on AI unworkable.
The financial sector's AI deployment will continue regardless of regulatory pace. The compliance risk for institutions operating ahead of the framework is that they are building operational dependencies on systems whose liability profile is not yet settled. When the first significant enforcement action arrives — a wrongful credit denial at scale, a fraud detection failure with systemic consequences — the absence of a clear framework will not protect the institution. It will simply mean the outcome is decided case by case, which is worse for everyone.