A new report from the Joint California Policy Working Group on AI Frontier Models, co-chaired by AI pioneer Fei-Fei Li, recommends that legislators account for AI risks that have not yet been observed in the real world when regulating advanced AI systems. Commissioned by Governor Gavin Newsom after his veto of SB 1047, the 41-page interim report calls for greater transparency from AI developers and independent third-party verification of their safety testing.
Li and her co-authors, Jennifer Chayes and Mariano-Florentino Cuéllar, emphasize a “trust but verify” approach, advocating laws that require developers to disclose their testing practices, data sources, and security protocols. While acknowledging that evidence for the most extreme risks remains inconclusive, the authors argue that policy should anticipate the threats AI may pose in the future. The final report is expected by June 2025.