This two-part series explores the regulatory context and applications of Generative AI in Banking, Financial Services and Insurance (BFSI). This is Part One, outlining the regulatory challenges. Part Two will focus on the solutions.
What makes Generative Artificial Intelligence, or GenAI, seemingly omniscient in its responses to questions on almost any topic? GenAI algorithms apply complex mathematical models, such as deep learning techniques and neural networks, to recognize patterns in the data on which the underlying large language models are trained, and to create outputs that mimic human responses or real-world data. For context, an average human reads 800 books in a lifetime, while GPT-3 was trained on content that roughly equals 1.3 million books.
What can GenAI do for the BFSI industry? It can significantly optimize human-intensive processes, such as customer servicing, customer onboarding, claims management, fraud investigation, and sentiment analysis. Customer acquisition and up/cross-sell — the ultimate challenges in this sector — can be bolstered by GenAI’s prowess at wooing customers through personalized and textured 3-D experiences.
As competitive pressures mount, BFSI firms are hurrying to roll out GenAI productivity tools for their employees and to pilot applications for their end consumers. Businesses feel they have no choice but to adopt the technology or pay the price of living without it. So much so that "AI washing" (companies making groundless AI-related claims) is of concern to the US Securities and Exchange Commission (SEC).
Despite all the enthusiasm for using GenAI to drive business growth, a few significant challenges remain unsolved: hallucinations, explainability, and bias are giving rise to ethical dilemmas. For financial services, where transparency and accountability are paramount, the challenge is very real. However, there is a safety net that not only protects consumers but also helps firms test the limits of implementing GenAI use cases without hurting themselves — regulations.
Close Encounters of the Regulatory Kind
As use of GenAI grows to legendary proportions, governments and regulators worldwide are quietly working on creating a balanced response to the possibility of misuse of the technology. The patterns of emerging AI regulation (including GenAI) across the world revolve around three primary themes regarding the commercial usage of AI:
- Bias, discrimination, and fairness
- Transparency and data governance
- Human oversight
Cases in which regulatory entities have provided direct inputs concerning the use of AI are of particular importance:
- The US SEC proposed new requirements for protection of investors from risks resulting from the use of predictive analytics models and AI-powered investment advisors.
- The Colorado Division of Insurance (CDOI) released its Algorithm and Predictive Model Governance Regulation (AI regulation), effective November 14, 2023: algorithms, and predictive models (i.e., AI models) should not result in unfairly discriminatory insurance practices with respect to race. Notably, according to the law: “Insurers must apply BIFSG (Bayesian Improved First Name Surname Geocoding) estimated race and ethnicity to each of their application and premium datasets to perform testing of Hispanic, Black and Asian Pacific Islander applicants and insureds relative to White applicants and insureds. Depending on the outcomes of the application approval and premium rate testing, variable testing of both datasets may be required to isolate and identify potentially discriminate variables.”
- The US Federal Trade Commission has applied "algorithm disgorgement" in several cases. Disgorgement calls for not just "model deletion" but the complete deletion of the underlying data, along with the AI models and products built on that data, if a firm is found to have used data inappropriately.
- The draft EU AI Act proposes hefty fines for businesses operating in member states that misuse AI and data, particularly in the context of privacy. Relatedly, the Italian Data Protection Authority (GPDP) has already fined the city of Trento for privacy breaches in its use of AI.
- The Monetary Authority of Singapore (MAS), in partnership with the financial industry, has detailed a seven-dimension GenAI risk management framework.
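The Colorado-style outcome testing described above can be illustrated with a short sketch. Assuming race/ethnicity estimates have already been produced upstream by a BIFSG model (not shown), the sketch below computes approval rates per estimated group and their ratios relative to White applicants; the data, function names, and the idea of flagging low ratios for variable-level investigation are illustrative assumptions, not the regulation's prescribed methodology.

```python
from collections import defaultdict

def approval_rates(records):
    """Approval rate per BIFSG-estimated group.

    `records` is a list of (estimated_group, approved) pairs, where
    estimated_group comes from a hypothetical upstream BIFSG step and
    approved is a bool from the application dataset.
    """
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparity_ratios(rates, reference="White"):
    """Ratio of each group's approval rate to the reference group's.

    Ratios well below 1.0 would flag outcomes needing variable-level
    investigation to isolate potentially discriminatory variables.
    """
    ref = rates[reference]
    return {g: r / ref for g, r in rates.items() if g != reference}

# Fabricated toy data for illustration only: (estimated group, approved?)
records = [
    ("White", True), ("White", True), ("White", True), ("White", False),
    ("Black", True), ("Black", False), ("Black", False), ("Black", False),
    ("Hispanic", True), ("Hispanic", True), ("Hispanic", False), ("Hispanic", False),
]

rates = approval_rates(records)
for group, ratio in sorted(disparity_ratios(rates).items()):
    print(f"{group}: {ratio:.2f}")
```

In practice this testing would run over full application and premium datasets, and the thresholds for "potentially discriminatory" would follow the regulator's guidance rather than a hard-coded cutoff.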
Authorities in the UK, Australia, and China have adopted similar laws and positions on the use of AI.
Harnessing GenAI as a Tool for Compliance
Besides preventing inappropriate collection and misuse of data, why are BFSI firms wary of embracing GenAI on a larger scale? The reason is that regulatory compliance leaves little room for interpretation, and the consequences of non-compliance can put a firm at a significant competitive disadvantage through loss of customer trust, as well as restrictions on its further use of the technology.
In this context, a few things become significant for GenAI:
- Discovering the right sources of data and its usage within the enterprise
- Ensuring architectural and procedural compliance with evolving regulations
- Asking the right questions of LLMs (prompt engineering)
- Interpreting AI/LLM responses and applying them to business decisions
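The prompt-engineering point above can be sketched as a constrained prompt template: one way (among many) to keep LLM responses grounded in approved sources and auditable. The function name, wording, and structure are illustrative assumptions, not a standard API.

```python
def compliance_prompt(question: str, source_documents: list) -> str:
    """Build a constrained prompt for a regulatory-compliance query.

    Hypothetical template: grounds the model in approved, numbered
    source documents and asks for citations, so that responses can be
    audited against the sources rather than taken on faith.
    """
    sources = "\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(source_documents))
    return (
        "You are assisting a compliance analyst. Answer ONLY from the "
        "numbered sources below, and cite the source number for every "
        "claim. If the sources do not answer the question, say so.\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}"
    )

prompt = compliance_prompt(
    "What outcome testing does the CDOI regulation require?",
    ["CDOI regulation excerpt: insurers must test application and premium datasets..."],
)
print(prompt)
```

Templates like this address the interpretation concern as well: a response that cites source [1] can be checked against source [1], which is far easier to defend to a regulator than an unattributed answer.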
Through their ingenuity (and out of necessity), businesses are turning GenAI from a challenge into an opportunity by utilizing GenAI to comply with regulations. GenAI can now be applied to automating, augmenting, and innovating various aspects of risk and compliance management in the financial industry.
In Part Two, we’ll detail how BFSI firms can address these concerns through GenAI-related strategies and deployments, and how they can take their first steps toward harnessing this technology’s capabilities.