Lawmakers and regulators are behind the curve when it comes to the mass adoption of AI by the public.
The California State Senate passed a bill last month to regulate the development and training of advanced, cutting-edge AI models, aiming to ensure they can't be exploited by bad actors for nefarious purposes. The passage of the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act has generated an uproar among developers, many of whom run their operations out of the state and argue that it could critically stifle innovation. Supporters of the bill, however, believe the rapidly evolving technology requires guardrails, and that the legislation is a reasonable—if limited and imperfect—first step toward legally enshrining safety best practices.
This article was written by Owen J. Daniels and originally published by the Bulletin of the Atomic Scientists.