Institutions cite fragmented infrastructure, manual workflows and skills gaps as barriers.
Asia-Pacific banks, compliance practitioners and asset managers reported difficulties in integrating artificial intelligence (AI) with existing systems and in keeping pace with changing regulations across the region.
A survey by Risk.net and Fenergo of 110 compliance practitioners at banks and asset managers in Singapore, Malaysia and Australia found that data quality is the main concern for institutions looking to adopt AI in compliance functions.
Many financial institutions still rely on legacy processes involving significant manual intervention and inconsistent interpretations of data, said Fenergo's Stephen Keasberry.
He noted that improving data quality is an iterative process that requires updates to operating models and consistent standards for collecting, processing and storing data.
Without this foundation, organisations struggle to manage AI inputs and outputs effectively.
Survey respondents also pointed to fragmented systems, resistance to change and skills shortages as barriers to adoption.
CIMB compliance executive Jaya Ravindranath said compliance roles are shifting towards more analytical and advisory work, with new positions resembling risk analysts and technology-enabled investigators rather than traditional reviewers.
She added that the industry needs to build workforce capabilities to support this shift.
The findings suggest full automation of compliance functions remains several years away.
Keasberry said institutions at early stages of AI adoption should keep humans involved in decision-making to validate outcomes until sufficient data and performance history are established.
Regulatory expectations are also shaping adoption. The Monetary Authority of Singapore recently issued a consultation on responsible AI use in finance, warning that poorly performing risk models could lead to financial losses, operational disruption or customer harm.
Yong Yeah Seah, head of regulatory compliance at MariBank, said the regulator’s principles-based approach requires firms to demonstrate clear governance, risk controls and explainability as AI use expands.
