Gladwin International · Research & Insights

AI Risk and the Risk of AI: How India's Chief Risk Officers Are Navigating the GenAI Paradox

GenAI simultaneously creates powerful new risk tools and introduces entirely new risk categories. India's CROs are grappling with both dimensions at once.

Gladwin International & Company · Research & Insights Division
15 June 2025 · 10 min read

The Chief Risk Officer's relationship with artificial intelligence is unlike any other C-suite executive's. For the CFO, AI is primarily a tool for automating financial reporting and detecting fraud. For the CTO, AI is a capability to be built and deployed. For the CMO, AI is a channel for personalisation and demand generation. For the CRO, AI is all of these things simultaneously — and also the source of a new category of risk that did not exist in the pre-AI era. India's CROs are navigating this paradox in real time, deploying AI to strengthen risk management while simultaneously building frameworks to govern the risks that AI itself introduces.

The GenAI paradox, as we term it, has two sides. On the beneficial side, generative AI offers genuine capability improvements in risk identification, fraud detection, regulatory compliance monitoring, and risk communication. On the hazardous side, GenAI introduces risks that are qualitatively different from anything in the traditional risk management taxonomy: hallucination risk in AI-generated risk assessments, deepfake fraud that bypasses identity verification systems, algorithmic bias in AI-driven credit decisioning, and adversarial attacks that manipulate AI models' inputs to produce wrong outputs. India's financial sector is encountering both sides of this paradox with increasing urgency.

AI as a Risk Management Enabler

The beneficial applications of AI in Indian financial sector risk management are substantial and growing. In credit risk, machine learning models trained on alternative data — GST filing patterns, UPI transaction history, utility payment behaviour, e-commerce transaction patterns — have extended credit access to MSME segments that traditional bureau-based models excluded, with default rates that are competitive with or better than traditional underwriting. Lenders like Aye Finance, Capital Float (now Axio), and NBFC arms of fintech companies have built ML-driven underwriting that the RBI has acknowledged as a meaningful expansion of formal credit access.

In fraud detection, AI models are operating at a speed and pattern-recognition capability that is simply beyond human capacity. The National Payments Corporation of India (NPCI) uses AI-based fraud detection across the UPI network — processing signals from billions of transactions to identify suspicious patterns in real time, with false positive rates that are low enough to avoid disrupting legitimate transactions. Bank of Baroda, Union Bank, and several private sector banks have deployed real-time fraud scoring systems that analyse transaction context, device fingerprinting, behavioral biometrics, and network signals simultaneously.

In regulatory compliance, natural language processing models are being deployed to monitor regulatory updates, identify compliance obligations from new circulars, and map those obligations to internal policy frameworks. Given the volume of RBI, SEBI, IRDAI, and PFRDA circulars that Indian financial institutions must track, NLP-based regulatory monitoring represents a genuine efficiency gain for compliance-heavy risk functions.

AI as a Risk Category: The CRO's New Challenge

The risks that AI introduces into the financial system are receiving serious attention from regulators globally, and India's RBI is no exception. The RBI's draft guidelines on AI governance in financial services, circulated for comment in late 2024, outline expectations around AI model explainability, fairness testing, human oversight, and incident reporting for AI-related failures. These guidelines signal that AI risk is now a regulatory concern that will be examined during supervisory inspections — not just a technology concern managed by the CTO.

For the CRO, AI risk governance involves four distinct challenge areas.

Model hallucination and reliability. Large language models — GPT-4, Gemini, Claude, and their successors — are being deployed in Indian financial institutions for tasks ranging from credit memo generation to customer-facing Q&A. LLMs are known to hallucinate: to generate confident, plausible-sounding statements that are factually incorrect. In a consumer service context, hallucination is an inconvenience. In a credit decision context, a hallucinated financial ratio or an incorrectly summarised loan covenant could cause material financial harm. The CRO must establish governance frameworks that define which risk decisions AI can make autonomously, which require AI-assisted human review, and which must remain entirely human-driven.
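A governance framework of this kind can be sketched as a simple decision-routing policy. The tier names and decision types below are illustrative assumptions for the sketch, not drawn from any specific institution's framework:

```python
from enum import Enum

class Oversight(Enum):
    AUTONOMOUS = "AI may decide without review"
    AI_ASSISTED = "AI drafts, a human risk officer approves"
    HUMAN_ONLY = "AI output is advisory at most"

# Illustrative mapping of decision types to oversight tiers. A real
# framework would key this on materiality, customer impact, and
# regulatory sensitivity rather than a flat lookup.
DECISION_POLICY = {
    "faq_response": Oversight.AUTONOMOUS,
    "credit_memo_draft": Oversight.AI_ASSISTED,
    "covenant_summary": Oversight.AI_ASSISTED,
    "loan_sanction": Oversight.HUMAN_ONLY,
}

def required_oversight(decision_type: str) -> Oversight:
    # Default to the most conservative tier for unlisted decision types.
    return DECISION_POLICY.get(decision_type, Oversight.HUMAN_ONLY)
```

The key design choice is the default: a decision type not explicitly classified falls through to full human review, so a newly deployed AI use case cannot silently acquire autonomy.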

Algorithmic bias and fair lending. ML models trained on historical credit data can perpetuate and amplify historical patterns of discrimination — geographic redlining, caste-based credit exclusion, or gender bias in credit scoring — if those patterns are present in the training data. India's financial sector does not yet have the equivalent of the US Fair Credit Reporting Act or the Equal Credit Opportunity Act, but the RBI's guidelines on fair lending and the Consumer Protection Act create legal exposure for discriminatory credit outcomes, regardless of whether they are produced by human or algorithmic decisions. The CRO is responsible for testing AI credit models for disparate impact and establishing ongoing monitoring frameworks.
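Disparate-impact testing can start from something as simple as comparing approval rates across groups. The sketch below applies the "four-fifths" ratio heuristic — a US regulatory rule of thumb, used here purely as an illustrative threshold — to hypothetical approval data:

```python
def disparate_impact_ratio(approvals: dict[str, tuple[int, int]],
                           reference_group: str) -> dict[str, float]:
    """approvals maps group -> (approved, total applications).
    Returns each group's approval rate divided by the reference group's."""
    ref_approved, ref_total = approvals[reference_group]
    ref_rate = ref_approved / ref_total
    return {group: (a / t) / ref_rate for group, (a, t) in approvals.items()}

# Hypothetical monitoring data, not real lending figures.
data = {
    "urban_male": (720, 1000),
    "urban_female": (648, 1000),
    "rural_all": (432, 1000),
}
ratios = disparate_impact_ratio(data, reference_group="urban_male")
# Flag any group approved at under 80% of the reference group's rate.
flagged = [g for g, r in ratios.items() if r < 0.8]
```

In this hypothetical data, the rural segment is approved at 60% of the urban-male rate and would be flagged for investigation; the monitoring question is then whether the gap is explained by legitimate credit factors or by proxy discrimination.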

Deepfake fraud. AI-generated synthetic media — deepfake videos, voice clones, and synthetic identity documents — is increasingly used in financial fraud. India has already seen cases of deepfake-enabled KYC fraud, where synthetic images are used to pass video-based identity verification checks, and voice clone fraud targeting senior citizens. As deepfake technology improves and becomes more accessible, it will increasingly challenge the identity verification foundations of India's digital financial system.

"We are seeing fraud patterns that our traditional models have never encountered — deepfake-based KYC fraud, synthetic identity credit applications, and AI-generated phishing that is indistinguishable from legitimate communications. The attack surface has expanded faster than our defences." — Executive Vice President and Head of Risk, a leading digital lending NBFC.

Model concentration risk. India's financial sector is converging on a small number of AI model providers — OpenAI, Google, Microsoft, and a handful of open-source alternatives. Concentration on a small number of foundational models creates systemic risk: if a major model provider experiences a security breach, a significant capability degradation, or a policy change that restricts financial services use, the impact could simultaneously affect multiple institutions. This concentration dynamic is analogous to the software vendor concentration risk that RBI's IT guidelines already address, but with additional complexity because AI model dependencies are often less visible than traditional software dependencies.
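Concentration of this kind can be quantified with the same Herfindahl–Hirschman index used in market-concentration analysis. A minimal sketch, with hypothetical provider shares:

```python
def herfindahl_index(shares: dict[str, float]) -> float:
    """HHI over shares expressed as fractions summing to 1.
    Ranges from 1/n (evenly split across n providers) to 1.0 (one provider)."""
    return sum(s ** 2 for s in shares.values())

# Hypothetical share of an institution's production AI systems
# by foundational-model provider.
provider_shares = {
    "provider_a": 0.45,
    "provider_b": 0.30,
    "provider_c": 0.15,
    "open_source": 0.10,
}
hhi = herfindahl_index(provider_shares)  # 0.2025 + 0.09 + 0.0225 + 0.01
```

An HHI tracked over time in the model inventory would let a CRO see concentration drifting upward before a single provider outage becomes a single point of failure.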

Building the AI Risk Governance Framework

India's most forward-thinking CROs are building AI risk governance frameworks that sit alongside their existing credit, market, and operational risk frameworks. The key components of an effective AI risk governance structure in an Indian financial institution include: an AI model inventory that captures every AI system in production, its purpose, its developer, and its governance status; a model risk management policy that specifies validation requirements for AI models, including testing for bias, hallucination, and robustness; an AI incident reporting process that captures AI-related failures and feeds them into the regulatory incident reporting framework; and a board-level AI governance committee or sub-committee that provides oversight of the institution's overall AI risk posture.

HDFC Bank, Axis Bank, and Kotak Mahindra Bank have been the most public among Indian banks in discussing their AI governance investments. HDFC Bank's AI governance framework, which was referenced in its annual report and board committee disclosures, includes an independent AI risk function that reports to the CRO — a structural choice that reflects the bank's assessment that AI risk is a risk management responsibility, not purely a technology governance responsibility.

The CRO of an Indian financial institution in 2025 must therefore be simultaneously a user of AI — deploying it to strengthen the risk function's capabilities — and a governor of AI — ensuring that its use across the institution does not create risks that exceed the risk appetite. This dual role requires a combination of intellectual curiosity about AI's potential, technical literacy about its limitations, and the governance instinct to build frameworks that protect the institution from novel failure modes. It is, in short, one of the most intellectually demanding aspects of modern risk leadership — and one where India's CROs are still in the early stages of developing the frameworks and capabilities the task requires.

Key Takeaways

1. The GenAI paradox for CROs is that AI simultaneously strengthens risk management capabilities and introduces novel risk categories — hallucination, deepfake fraud, algorithmic bias — that require new governance frameworks.
2. AI-driven credit underwriting using alternative data has demonstrably expanded formal credit access in India while maintaining competitive default rates — a genuine risk management success story.
3. LLM hallucination risk in financial contexts requires CROs to establish clear governance policies on which decisions AI can make autonomously versus which require human oversight.
4. Deepfake fraud is already challenging India's KYC infrastructure and will intensify, requiring investment in liveness detection and multi-factor identity verification that goes beyond traditional document-based checks.
5. AI model concentration risk — India's financial sector converging on a small number of foundational model providers — is an emerging systemic risk that CROs and the RBI are beginning to examine.
Tags: AI, GenAI, Risk Management, Model Risk, Cyber Risk, Banking, India Financial Services

About This Research

This analysis is produced by the Gladwin International Research & Insights Division, drawing on our proprietary executive talent database, over 14 years of senior placement experience, and ongoing conversations with C-suite executives, board members, and investors across India's major industries.

Gladwin International Leadership Advisors is India's premier executive search and leadership advisory firm, with deep expertise across 20 industries and 16 functional specialisations. We have placed 500+ senior executives in mandates ranging from CEO and board director to functional heads at India's leading corporations, PE-backed businesses, and Global Capability Centres.
