11-4-24 NYDFS Guidance on Artificial Intelligence
On October 16, 2024, the New York State Department of Financial Services (NYDFS) issued guidance addressing cybersecurity risks arising from the use of artificial intelligence (AI) in financial services. The guidance does not impose new requirements on NYDFS-regulated companies[1]; rather, it aims to ensure they understand how to use AI technologies responsibly and in compliance with existing regulatory frameworks.
The guidance highlights two major AI-related threats to cybersecurity: AI-enabled social engineering and AI-enhanced cybersecurity attacks. AI-enabled social engineering refers to a threat actor's use of AI to create convincing, interactive audio, video, and text targeting specific individuals. AI-enhanced cybersecurity attacks amplify the potency, scale, and speed of existing types of cyberattacks.
Key Highlights of the Guidance:
- Risk Assessments: Companies are encouraged to design risk assessments that specifically address AI-related risks, which may include risks from the organization's own use of AI, the AI technologies used by third-party service providers (TPSPs), and potential vulnerabilities in AI applications that put the company's data at risk.
- Access Controls: Robust access controls are a recommended defensive measure against AI-related threats. Companies should use multi-factor authentication and limit each employee's access to the data necessary for his or her role.
- Training: AI-focused cybersecurity training is important to ensure all personnel understand the risks posed by AI, the procedures the company has adopted to mitigate those risks, and how to respond to AI-enabled social engineering attacks.
- Monitoring: Companies must have monitoring in place to identify new security vulnerabilities, as required by the NYDFS Cybersecurity Regulation (23 NYCRR Part 500). Companies that use AI applications, such as ChatGPT, should consider monitoring for unusual query behavior that might indicate an attempt to extract non-public information.
- TPSPs and Vendor Management: Companies should maintain policies and procedures that include guidelines for conducting due diligence before engaging a TPSP that will access the company's information systems. Companies should also consider the threats facing TPSPs and how those threats, if exploited, could affect the company.
- Data Management: Data minimization is already considered a best practice and is required for certain regulated companies. A company that uses AI should have controls in place to prevent threat actors from accessing the large volumes of data maintained for the AI to function accurately.
- Compliance with Existing Regulations: Institutions must align their AI practices with existing regulatory requirements and guidelines, including those related to privacy, security, and risk management.
Next Steps:
- Review Current AI Practices: Assess existing AI applications and determine whether they align with the NYDFS guidance.
- Enhance Risk Management: Implement or strengthen risk management frameworks for AI systems.
- Monitor Developments: Stay informed about any updates or additional guidance from the NYDFS regarding AI and other emerging technologies.
Companies should review their existing cybersecurity policies and procedures, and meet with their IT and cybersecurity providers, to determine whether updates are needed in light of the guidance. We are ready to assist clients with these reviews. If you have any questions, please reach out to your Woods Oviatt Gilman attorney or any member of the Business and Tax Department at Woods Oviatt Gilman LLP.
[1] NYDFS-regulated companies, or Covered Entities, are defined as any individual or non-governmental partnership, corporation, branch, association, or other entity operating under, or required to obtain, a license or similar authorization under New York's Banking Law, Insurance Law, or Financial Services Law. For example, state-chartered banks, mortgage brokers, insurance brokers and agencies, and service providers are Covered Entities.