California
California was one of the most active states in introducing and enacting AI legislation. In September, the California government announced that Governor Gavin Newsom had signed 18 AI bills into law. However, Newsom vetoed one bill, and the California State Senate failed to pass another; both garnered significant attention when they were introduced.
AB-2885: Artificial Intelligence. Signed by Newsom, this bill amends California law to provide a uniform definition of “Artificial Intelligence.” California law now defines Artificial Intelligence as: “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.” In plain language, California now defines AI as a system that has some degree of autonomy and can generate outputs based on inferences drawn from input data.
AB-2013: Artificial Intelligence Training Data Transparency. Signed by Newsom and taking effect January 1, 2026, this bill requires developers of GenAI systems or services to make certain public disclosures related to their training data. More specifically, they must post on their website a high-level summary of the datasets used to develop the GenAI system, including information such as (1) the sources or owners of the datasets, (2) the number of data points included, (3) a description of the types of data points, (4) whether the datasets include any protected intellectual property, (5) whether the datasets were purchased or licensed, (6) whether the datasets include personal information, and (7) whether the datasets were cleaned, processed, or modified, and the intended purpose of doing so. Given this, developers of GenAI systems will face significant disclosure requirements in California. However, AB-2013 does not apply to GenAI systems or services whose sole purpose is helping ensure security and integrity, the operation of aircraft in the U.S., or national security, military, or defense.
SB-942: California AI Transparency Act. Signed by Newsom and taking effect January 1, 2026, SB-942 enacts provisions that require developers of GenAI systems that have over 1 million monthly visitors or users to take actions that help the public differentiate between AI-generated and non-AI-generated materials. First, these developers must provide a free AI detection tool to the public that allows users to determine whether content was created or altered by the developer’s GenAI system. Additionally, the free AI detection tool must allow users to upload or link content and must provide system provenance data detected within the content (not including any personal provenance data).
Second, these developers must provide users the option of including a clear and conspicuous disclosure in the content that identifies it as AI-generated and that is permanent or difficult to remove. Furthermore, the developers must provide a latent disclosure in the content that includes the developer’s name, the version of the GenAI system that generated or altered the content, the time and date of the generation or alteration, and a “unique identifier.” This disclosure must be detectable by the developer’s tool and must be permanent or difficult to remove. Given the bill’s consistent use of “image, video, or audio content,” SB-942 does not apply to GenAI models that do not output one of these types of content.
SB-1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. SB-1047 aimed to prevent the risk of certain “critical harms” associated with AI systems, including those related to chemical, biological, radiological, or nuclear weapons. However, the bill was limited to covering only AI models that met certain computational and training cost thresholds. These thresholds were exactly why Newsom vetoed SB-1047. In his veto letter, Newsom stated, “By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology.” Although Newsom recognized the need to mitigate the risk of a “major catastrophe” before AI causes one, he concluded that the bill did not strike the necessary balance, and California has yet to enact AI legislation aimed at this specific purpose.
AB-2930: Automated Decision Systems. AB-2930 was not passed by the California State Senate this legislative session. The bill aimed to prevent “algorithmic discrimination” that can occur in AI models. To do so, AB-2930 attempted to introduce requirements for both developers and deployers of AI processes or systems. For example, developers would have been required to perform an impact assessment and provide it to deployers before deployment and annually thereafter, including information such as the types of personal characteristics that the AI process or system will assess. Deployers of an AI process or system that makes “consequential decisions” would have been required to inform those affected that an AI system is being used, along with other information about the nature of its use. Although California did not pass this bill, Colorado passed a very similar one.
Colorado
Colorado enacted three AI bills, one of which is similar to California’s AB-2930. In doing so, Colorado became the first state to enact legislation aimed at the algorithmic discrimination that can occur within AI systems.
CO SB205: Consumer Protections for Artificial Intelligence. In May, Governor Jared Polis signed CO SB205 into law. Just like California AB-2930, this law aims to prevent “algorithmic discrimination” in AI systems. The law defines algorithmic discrimination as “any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.”
To prevent this, like California AB-2930, the Colorado law enacts different requirements for developers and deployers of AI systems that make or are a substantial factor in making a “consequential decision.” First, developers have several requirements, including that they must (1) make specific types of information available to deployers and other developers, including reasonably foreseeable risks of algorithmic discrimination, and (2) provide a public statement including how the developer manages reasonably foreseeable risks. Second, deployers have an even longer list of requirements, including that they must (a) implement and maintain a risk management policy and program, (b) complete impact assessments annually and after major modifications to the AI system, and (c) provide consumers affected by the consequential decision-making with a statement informing them of the use of the AI system.
Task Forces
Lastly, outside of legislation aimed specifically at the use or implementation of AI, several states enacted laws creating AI task forces. The states that created some form of AI task force in 2024 are Colorado, Illinois, Indiana, Massachusetts (by Executive Order), Oregon, Washington, and West Virginia. Although the language creating each state’s task force is unique, at a high level their purpose is to recommend protections for consumers, workers, or the general public from the risks of AI. The creation of these task forces can thus be seen as a sign that 2025 will likely bring even more legislation or executive rulemaking governing the use of AI.
Lots of Legislation to Come
In conclusion, 2024 saw a large number of significant developments in AI legislation. Although the federal government did not enact AI legislation this year, the continued introduction of AI bills and committees’ decisions to advance AI bills through markup suggest that we will see some form of federal AI legislation in 2025 or later. At the same time, although many states did not enact AI legislation, nearly every state introduced AI legislation this year, suggesting that states will continue to evaluate their need for AI legislation in 2025 and beyond.