Companies cannot delegate full responsibility to an AI program, APRA member says
The Australian Prudential Regulation Authority (APRA) has advised financial services firms to exercise caution in their adoption of artificial intelligence (AI) systems.
APRA member Therese McCarthy Hockey, speaking at the Australian Finance Industry Association (AFIA) Risk Summit, said that while the regulator has no immediate plans to introduce new AI-specific rules, the current regulations are sufficient to manage the integration of AI in the financial sector.
“We believe our prudential framework already has adequate regulations in place to deal with generative AI for the time being,” McCarthy Hockey said. “Our prudential standards may not specifically refer to AI but nor do they need to at the moment. They have intentionally been designed to be high-level, principles-based and technology neutral.
“We are confident for now that we have the tools to act, including formal enforcement powers, should it be necessary to intervene to preserve financial safety and protect the community.”
McCarthy Hockey noted that APRA’s initial guidance on AI was to tread carefully when using advanced technologies: conduct due diligence, put appropriate monitoring in place, test the board’s risk appetite, and ensure there is adequate board oversight.
She then acknowledged both AI's potential benefits and its associated risks.
“The potential benefits are enormous, with an Australian government report noting that AI and automation could add an additional $170 billion to $600 billion a year to Australia’s GDP by 2030,” McCarthy Hockey said.
“The advances promised by generative AI will ideally deliver benefits for customers and shareholders. But just as the potential rewards of generative AI are bigger, so are the risks.”
“Regulators globally are increasingly concerned about the potential for AI to create deepfake videos and spread convincing disinformation. While AI can improve business decision-making when used effectively, it could also worsen decision-making and even spark a financial crisis if it malfunctions or isn’t applied appropriately.”
McCarthy Hockey stressed that companies cannot delegate full responsibility to an AI program, adding that “entities must have a ‘human in the loop’: an actual person who is accountable for ensuring it operates as intended.”
“While we are not adding to our rule book at the moment, we will be using our strong supervision approach to stay close to entities as they innovate and consider management of AI risks,” she said.