
Explainable AI in Finance


Decision-making systems orchestrate our world, powered by artificial intelligence (AI) systems built on machine learning (ML). These AI-based systems help underwriters and credit analysts assess risk, portfolio managers optimize security allocation, and individuals select investment and insurance products. As the digital economy grows, so does its need for immense computing power. That power comes at a cost, however: systems based on deep learning algorithms in particular can become so complex that even their developers cannot fully explain how they generate decisions. This, in essence, is the “black-box problem,” which makes it difficult to trust an AI system’s decisions, assess model fairness, and meet regulatory demands. The consequences include actual or perceived discrimination against protected consumer groups and violations of fair lending rules.

This problem has led to the consideration of various proposed solutions, the best known being explainable AI (XAI) technologies, to create a cognitive bridge between human and machine. XAI refers to AI and ML techniques, or capabilities, that seek to provide human-understandable justifications for AI-generated output. Implicit in explainable AI is the question “explainable to whom?” In fact, defining “whom” (the user group) is essential to determining what data can be collected, how the data are collected, and the most effective way of describing the reason behind an action. This report focuses on the human side of human–machine collaboration. The objective is to generate discussion on the best way to support the needs of diverse groups of AI users. To that end, the report explores the role of XAI in modern finance, highlighting its applications, benefits, and challenges, with insights from recent studies and industry practices. It presents a detailed analysis of the explainability needs of six stakeholder groups, the majority of them nontechnical users, matching those needs with their job responsibilities and assessing the most relevant XAI methods for each. Finally, the report reviews two alternative approaches to XAI: evaluative AI and neurosymbolic AI.
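To make the idea of a human-understandable justification concrete, the following is a minimal sketch, not drawn from the report, of one widely used post hoc XAI technique: permutation feature importance, applied to a hypothetical credit-default classifier. All feature names and data are invented for illustration.

```python
# A minimal sketch of permutation feature importance, one common XAI method.
# The credit features and synthetic labels below are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000
feature_names = ["credit_score", "income", "debt_to_income"]

# Hypothetical credit-application features.
X = np.column_stack([
    rng.normal(650, 50, n),         # credit_score
    rng.normal(45_000, 15_000, n),  # income
    rng.uniform(0, 1, n),           # debt_to_income
])
# Synthetic default label driven by low credit score or a high debt ratio.
y = ((X[:, 0] < 620) | (X[:, 2] > 0.8)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature in turn and measure the drop in test accuracy;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

An explanation of this kind lets a nontechnical stakeholder, such as a credit analyst, see which inputs drive the model’s decisions overall, though it describes the model globally rather than justifying any single applicant’s outcome.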

With its focus on AI explainability, this study offers a deeper analysis of the transparency and explainability issues raised in earlier CFA Institute works, including “Ethics and Artificial Intelligence in Investment Management” (Preece 2022) and “Creating Value from Big Data in the Investment Management Process” (Wilson 2025).


