
Our Take on International Trade

Reposted from Our Take on AI

U.S. Government Proposes New Requirements for U.S. IaaS Providers and Foreign Training of AI

The U.S. Commerce Department’s Bureau of Industry and Security (BIS) has proposed a rule that would require U.S. Infrastructure as a Service (IaaS) providers to undertake heightened know-your-customer, monitoring, and reporting obligations, and would also require disclosure to the U.S. government when foreign persons use U.S. IaaS products to train large AI models with the potential for malicious cyber-enabled activity.

Under the proposed rule, published on January 29, 2024, U.S. persons that offer IaaS products, and their foreign resellers, will be required to verify the identity of their foreign customers and will be subject to certain recordkeeping and reporting obligations. To meet these obligations, U.S. IaaS providers will need to implement a “Customer Identification Program” (“CIP”), a know-your-customer program, to verify the identity of foreign persons that use their IaaS products. This will include submitting annual certifications to BIS about the provider’s and its resellers’ CIPs. U.S. IaaS providers will be responsible for their foreign resellers’ compliance with the CIP requirements.

Additionally, U.S. IaaS providers and their foreign resellers will be required to report to BIS any instances of foreign persons conducting training runs for large AI models with potential capabilities that could be used in malicious cyber-enabled activity. The providers and resellers would also be required by BIS to prohibit or limit access to accounts that foreign malicious cyber actors use to conduct such activity.

The proposed rule defines a large AI model as “any AI model with the technical conditions of a dual-use foundation model, or that otherwise has technical parameters of concern, that has capabilities that could be used to aid or automate aspects of malicious cyber-enabled activity, including but not limited to social engineering attacks, vulnerability discovery, denial-of-service attacks, data poisoning, target selection and prioritization, disinformation or misinformation generation and/or propagation, and remote command-and-control, as necessary and appropriate of cyber operations.” The proposed rule notes that BIS will publish further guidance on the technical conditions under which large AI models could be used in malicious cyber-enabled activity, and it requests comments on this guidance. Persons interested in commenting on the proposed rule may file their comments with BIS at the Federal eRulemaking Portal through April 29, 2024.

BIS’s proposed rule, if implemented, will have a significant impact on U.S. IaaS providers and their foreign resellers, who will need to implement a number of measures to comply with the CIP and certification requirements involving their foreign customers. These IaaS providers and their resellers will also need to develop systems for identifying and reporting certain foreign customers’ large AI model training runs, and for complying with the related prohibitions involving malicious cyber-enabled activities. Failure to do so could result in substantial civil monetary and criminal penalties.