The proposal, along with other AI regulations, is stoking enterprise worries about escalating compliance costs and stifled innovation.
The US Department of Commerce’s Bureau of Industry and Security (BIS) plans to introduce mandatory reporting requirements for developers of advanced AI models and cloud computing providers.
The proposed rules would require companies to report on development activities, cybersecurity measures, and results from red-teaming tests, which assess risks such as AI systems aiding cyberattacks or enabling non-experts to create chemical, biological, radiological, or nuclear weapons.
“This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security,” Gina M. Raimondo, secretary of commerce, said in a statement.
Impact on enterprises
The proposed regulations follow a pilot survey by the BIS earlier this year and come amid global efforts to regulate AI.
After the EU’s landmark AI Act, countries such as Australia have introduced their own proposals to oversee AI development and usage. For enterprises, …