About the Report
In March 2024, the U.S. Department of the Treasury released a report, Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector. The report was prepared pursuant to Executive Order 14110 on the safe, secure, and trustworthy development and use of artificial intelligence, and was developed by Treasury's Office of Cybersecurity and Critical Infrastructure Protection (OCCIP), which leads the Department's work on cybersecurity and critical infrastructure protection for the financial sector.
The Treasury gathered insights through interviews with 42 companies spanning the financial and technology sectors, including both small local banks and large global institutions. Key contributors also included major tech companies, regulatory bodies, and anti-fraud service providers.
The report provides a detailed overview of how AI is currently used for cybersecurity and fraud prevention, along with best practices and suggestions for AI adoption. It imposes no binding requirements and does not recommend for or against the use of AI in the financial sector.
Recommendations from the Report
The Treasury's report identifies several opportunities and challenges that AI presents to the security of the financial sector, and suggests actions to mitigate related risks:
Addressing the Capability Gap
A growing divide separates large and small financial institutions in their capacity to build in-house AI systems: larger firms develop proprietary AI models, while smaller ones often lack the data resources and expertise to do so. For smaller institutions, migrating to the cloud can offer a practical path to adopting AI securely.
Narrowing the Fraud Data Divide
The disparity in data available for training AI models, especially fraud-prevention models, creates challenges. Larger institutions can draw on extensive historical fraud data, while smaller entities struggle with limited internal data and expertise.
Regulatory Coordination
As different regulators explore AI oversight, cohesive regulation is needed. Collaboration between financial institutions and regulators is crucial to addressing these challenges effectively.
Expanding the NIST AI Framework
The National Institute of Standards and Technology (NIST) could expand its AI Risk Management Framework to better address governance and risk management needs specific to financial services.
Data Supply Chain and Nutrition Labels
As AI advances rapidly, monitoring data supply chains for accuracy and privacy is critical. "Nutrition labels" for data, similar to those on food products, could clarify where training data originates and how it may be used in AI systems.
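As an illustration only (the report does not prescribe a schema, and every field name below is hypothetical), such a label could record a dataset's provenance and permitted uses as simple structured metadata that a model-governance pipeline can check automatically:

```python
# A hypothetical "nutrition label" for a training dataset.
# Field names are illustrative; the Treasury report defines no schema.
DATA_NUTRITION_LABEL = {
    "dataset_name": "retail-fraud-transactions",
    "source": "internal transaction logs",
    "collection_period": "2020-01 to 2023-12",
    "contains_pii": True,
    "privacy_controls": ["tokenized account numbers", "no raw card data"],
    "intended_use": "training fraud-detection models",
    "prohibited_use": "marketing or credit decisions",
    "last_reviewed": "2024-02-15",
}

# Minimum provenance fields a governance check might require.
REQUIRED_FIELDS = {"dataset_name", "source", "contains_pii", "intended_use"}

def validate_label(label: dict) -> bool:
    """Return True if the label documents the minimum provenance fields."""
    return REQUIRED_FIELDS.issubset(label)

print(validate_label(DATA_NUTRITION_LABEL))  # True
```

The value of such a label lies less in any particular format than in making data origin, privacy controls, and permitted uses explicit and machine-checkable before a dataset feeds an AI system.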
Explainability in AI
Explaining the behavior of machine learning models, particularly generative AI, remains difficult. Continued research into explainability for these "black box" systems is essential.
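One widely used model-agnostic technique in this research area is permutation importance: shuffle one input feature and measure how much a model's accuracy drops, revealing how heavily the model relies on that feature without opening the black box. A minimal sketch in pure Python, using a toy fraud rule and synthetic data invented purely for illustration:

```python
import random

# Toy "black box": flags a transaction as fraud when the amount is high
# and the account is new. Real models would be far more opaque.
def black_box(amount: float, account_age_days: int) -> bool:
    return amount > 900 and account_age_days < 30

# Synthetic labeled data; labels come from the toy model itself,
# so baseline accuracy is 1.0 by construction.
random.seed(0)
data = []
for _ in range(500):
    amount = random.uniform(0, 1000)
    age = random.randint(0, 365)
    data.append(((amount, age), black_box(amount, age)))

def accuracy(rows):
    return sum(black_box(a, g) == y for (a, g), y in rows) / len(rows)

def permutation_importance(rows, feature_index):
    """Accuracy drop when one feature column is shuffled: a simple,
    model-agnostic signal of how much the model relies on it."""
    shuffled = [row[0][feature_index] for row in rows]
    random.shuffle(shuffled)
    perturbed = []
    for value, ((amount, age), y) in zip(shuffled, rows):
        feats = [amount, age]
        feats[feature_index] = value
        perturbed.append(((feats[0], feats[1]), y))
    return accuracy(rows) - accuracy(perturbed)

print(permutation_importance(data, 0))  # reliance on transaction amount
print(permutation_importance(data, 1))  # reliance on account age
```

Techniques like this explain *behavior* rather than internals, which is part of why the report calls for further research: for generative AI in particular, such post-hoc signals are far from a full account of how a model works.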
Addressing Human Capital Gaps
The fast-paced evolution of AI has highlighted a talent gap. There is a need for best practices to guide less-experienced users and role-specific training to bridge this gap within financial institutions.
Developing a Common AI Lexicon
Consistency in defining AI terms is lacking. A unified lexicon would benefit institutions, regulators, and consumers alike in understanding AI-related technologies and applications.
Digital Identity Solutions
Robust digital identity solutions are necessary for enhancing cybersecurity and reducing fraud. However, the effectiveness of these solutions varies, necessitating standardization across technology and governance.
International Coordination
The Treasury continues to engage internationally to explore AI's benefits and risks within financial services, aiming for a harmonized approach to AI regulation globally.