Deputy: OCC to Step Up Oversight on Banks’ Use of Artificial Intelligence

During a hearing before the House Financial Services Committee Task Force on Artificial Intelligence, Kevin Greenfield, deputy comptroller for operational risk policy at the Office of the Comptroller of the Currency, stressed the need for stepped-up risk management when banks use AI. He also said the OCC will be keeping a watchful eye on how banks use artificial intelligence.

In his remarks, Greenfield acknowledged that AI offers numerous benefits to banks of all sizes. “AI can augment overall risk management, compliance monitoring and management, and internal controls,” he said. “This can be seen in the development of advanced tools to improve the quality of fraud prevention controls, to increase the effectiveness of anti-money laundering and the countering of terrorist financing monitoring activities, and can help to identify and mitigate the risk of fair lending violations.”

Where banks rely on third-party vendors, service providers, and their expertise for the development and implementation of AI tools and services, Greenfield warned that, “banks’ use of third parties does not diminish the responsibility of the board and management to implement and operate new products and services in a safe and sound manner and in compliance with applicable laws and regulations.”

He further warned that the OCC has authority to conduct examinations of third-party services under the Bank Service Company Act, “which could, depending on the facts and circumstances, include the banking services supported by AI.”

Unintended Consequences
Greenfield also stated in his testimony that the OCC “remains focused on the potential risks of adverse outcomes if banks’ use of AI is not properly managed and controlled,” and that “potential adverse outcomes can be caused by poorly designed underlying mathematical models, faulty data, changes in model assumptions over time, inadequate model validation or testing, and limited human oversight, as well as the absence of adequate planning and due diligence in utilizing AI from third parties.”

Greenfield cited the following non-exhaustive list of risks banks should ensure they manage appropriately in their use of AI:

Explainability: “The extent to which AI decisioning processes are reasonably understood and bank personnel can explain outcomes is critical,” Greenfield said. “Lack of explainability can hinder bank management’s understanding of the conceptual soundness of the technology, which may inhibit management’s ability to express credible challenge to models used or understand the quality of the theory, design, methodology, data, or testing. This may also inhibit the bank’s ability to confirm that an AI approach is appropriate for the intended use.”

Data management: “Understanding data origins, use, and governance when adopting traditional models, advanced analytics, and AI is also critical,” he said. “Data analytics and governance are particularly important when AI involves dynamic updating or algorithms that identify patterns and correlations in training the data without human context or intervention, and then uses that information to generate predictions or categorizations.”

Privacy and information security: “Banks must comply with applicable privacy and information security requirements when using AI,” Greenfield said. “We also expect banks to practice sound cyber hygiene and maintain effective cybersecurity practices to prevent or limit the impact of corrupted and contaminated data that may compromise the AI application and to safeguard sensitive data against breaches.”

Third-party risk: “As part of an effective third-party risk management program, banks are expected to have robust due diligence, effective contract management and ongoing oversight of third parties based on the criticality of the services being provided,” Greenfield said. “This includes ensuring effective controls over aspects relevant to many AI services, including use of cloud-based entities, availability of documentation on models used, establishing roles and responsibilities, and defining data ownership and permitted uses, security, privacy, and limitations of any data that is shared with or exchanged among parties and other key governance expectations for the delivery of AI services.”

Added Greenfield, “It is important for banks to monitor a third party’s performance over time, and have controls to ensure data is used consistent with what the consumer originally permissioned, and that the results of independent assessments are available to assess if the AI service performs as intended.”

OCC’s Supervisory Approach
Greenfield said the OCC will continue to update its supervisory guidance, examination programs, and examiner skills to respond to banks’ growing use of AI.

Recently, the OCC has coordinated with other agencies to address governance and risk management practices for the use of innovative solutions in specific banking areas.

Greenfield added that the OCC “will continue to perform robust supervision of banks’ use of AI, whether directly or through a third-party relationship.” Such areas of supervision will include “evaluating fair lending concerns and other consumer protection issues, such as unfair or deceptive acts or practices,” he said.

“Banks should maintain a well-designed risk management and compliance management program, as well as monitor for and identify outcomes that create unwarranted risks or violate consumer protection laws,” Greenfield advised. “If these outcomes occur, the OCC has a range of tools available and will take supervisory or enforcement actions as appropriate, to achieve corrective actions and address potential adverse consumer impacts.”


Jaclyn Jaeger is a contributing editor at Compliance Chief 360° and a freelance business writer based in Manchester, New Hampshire.
