The Infinite Loop #010

Can RBI really afford to go all-in on AI in finance?

Srijan Nagar

Co-founder  


Aug 26, 2025

Hi,  

A ‘tolerant supervisory stance’ for AI. It sounds like something a tech company would ask for, but the surprising bit is that India’s banking regulator is the one suggesting it. Yes, the Reserve Bank of India, an institution praised for being consumer-first and hawkish on regulation, seems to be batting for a more tolerant view of AI in financial services.

As surprising a departure as this is from the regulator’s general policy direction, it’s worth unpacking to figure out what’s really going on.

It immediately raises a crucial question: are the RBI’s AI aspirations so high that it is willing to overlook, even forgive, costly errors on the path to innovation?

On one hand, the intent is clear: foster innovation, encourage adoption, and don’t stifle progress with ironclad rules from day one. On the other, I’m compelled to ask: what could be the true cost of this unprecedented tolerance?

The underlying truth is that the RBI, much like central banks and regulators the world over, is acutely aware of the transformative potential of AI. They desperately want AI to work, especially across the breadth of the financial sector. And AI is already revolutionising everything from personalised banking and dynamic credit scoring to advanced fraud detection.

It’s streamlining regulatory compliance and even offering predictive analytics for financial forecasting. Banks, NBFCs, and other financial institutions stand to benefit immensely through reduced operational costs and improved efficiency.

The push in India for ‘indigenous models’ and ‘AI testing sandboxes’ speaks volumes about this urgent desire to integrate AI into the very fabric of financial services. The aspiration is visionary. But a vision without a grounded understanding of AI’s pitfalls can be short-sighted.

Vision vs myopia  

This is where the stance of ‘tolerant supervision’ starts feeling like a high-stakes gamble. Are we so focused on the promised potential of AI that we are rushing into unsupervised, or even careless, implementation? This could very well result in outcomes that are not just costly but irreversible. I’m talking about more than minor glitches; think systemic failures with far-reaching consequences.

Consider the cautionary tale of the Commonwealth Bank of Australia, whose shiny new AI-driven anti-money-laundering system failed spectacularly. Designed to be a digital watchdog, the system failed to flag over 53,000 large cash deposits made through its intelligent deposit machines. And once criminals discovered this oversight, things went downhill fast, with millions of dollars laundered through the bank.

The ripple effect was as devastating as you would imagine: a record $700 million fine, severe reputational damage, and a fundamental erosion of trust in an institution that customers had long relied upon. This catastrophic blind spot was created by an autonomous system that likely wasn’t adequately tested or monitored.
 
Beyond such dramatic incidents, there’s now an almost ironic problem with AI. In its generative form especially, AI promises to be the ultimate efficiency engine, liberating us from tedious tasks. And yet several companies are facing a peculiar complication.

An MIT report recently revealed that a staggering 95% of companies’ generative AI pilots are yielding zero return on investment. Surprisingly, the core barrier isn’t infrastructure; it’s AI’s inability to learn. Despite all their promised prowess, many sophisticated GenAI systems struggle to retain feedback, adapt to context, and improve over time in the way that was expected.

This fundamental shortcoming is giving rise to a new niche in the gig economy: human AI fixers. Companies are now being forced to hire human experts to meticulously review and correct the sloppy output generated by AI. It creates an infinite loop: first teaching the AI what to generate, then fixing its output to match expectations, and then trying to teach it again with the refined output.

This brings us back to the RBI, an institution that has been notoriously heavy-handed on human error, scrutinising every lapse and every deviation from regulation when a human is involved.

So why is AI getting such unprecedented leeway? Why aren’t ‘strong safety measures’ for AI being defined with the same granularity as the regulations that govern human experts?

The fine balance  

While the argument for innovation is compelling, it cannot override the principles of stability, integrity, and, most importantly, customer protection. There needs to be far greater clarity in the guardrails for AI, not just to safeguard the interests of financial institutions but also those of the average Indian customer. Transparency in how AI makes decisions, easily accessible grievance redressal when AI errs, and robust cybersecurity measures to thwart AI-generated threats are a few non-negotiables.

The RBI’s current recommendations on AI policy are a start, but the practical, enforceable lines of accountability and liability seem fuzzy. So where exactly do we draw the line? Without clear boundaries, when and how do you decide that the pursuit of AI innovation has crossed into precarious territory?

While several questions remain unanswered, one trusts the Indian banking regulator, given its spotless track record, to always do the right thing. It’s a bit early to jump to scary conclusions, but the public conversation on exactly how AI can be integrated into our financial systems must be had today. Tomorrow might be too late.

What are your thoughts?  


I’ll see you next week.   

Cheers,  

Srijan Nagar  

Co-founder  
 
FinBox  

