New draft poised to replace Colorado AI Act

Less than two years after Colorado passed one of the nation’s most sweeping artificial intelligence laws (SB 24-205), a revised draft policy framework released by the Colorado AI Policy Working Group is expected to reshape how the state regulates AI.

The new framework, which is intended to inform legislation moving through the General Assembly, shifts away from the original risk-based approach toward a disclosure-driven model.

Sophie Baum, a senior associate with Hogan Lovells, said the change represents “a fundamental shift in how Colorado is looking at approaching AI regulations.”

The update follows sustained criticism from industry, technologists and Gov. Jared Polis, who warned that the earlier version of the law was overly complex and risked stifling innovation. He convened the AI Policy Working Group after SB 205 passed.

The original law, titled the Colorado AI Act, is set to take effect June 30, 2026, leaving lawmakers a narrow window to pass replacement legislation before the end of the current session.

From Risk-Based Regulation to Transparency

When it passed in 2024, SB 205 positioned Colorado as a leader in AI regulation. The act targeted “high-risk” AI systems used in consequential decisions, such as hiring, lending, housing and education. It also imposed compliance requirements, including impact assessments, risk management programs and monitoring.

“The Colorado AI Act is more of what I would describe as sort of a European-style risk-based governance model,” Baum said. “It looks at tiers of AI systems and regulates them accordingly.”

The new framework is “less about ‘How do you need to build your systems and monitor and assess them on the front-end?’ and more about disclosing to consumers what is happening,” she said. “When is AI being used and how does it affect them?”

Developer, Deployer Obligations

With its focus on transparency, the revised framework delineates the responsibilities of developers, who build AI systems, and deployers, who use them.

Developers would need to provide documentation detailing intended uses, risks and limitations. Deployers would be responsible for using systems appropriately.

“Where AI technology would be used in connection with consequential decisions (or where it could reasonably be expected to make consequential decisions), AI developers would be required to notify AI deployers of how the AI technology works in connection with those decisions,” explained Clark Hill member Jason Schwent in a written publication.

He continued, “Further disclosures would be required where AI is used to make an adverse decision. So, for example, where an apartment complex uses AI to screen applicants and rejects an application, that apartment complex would be required to provide a description of the consequential decision and the role the AI technology played in the decision within 30 days.”

In addition, the AI deployer would need to create a simple process through which “impacted individuals could learn about the types of personal data that were used in making the decision as well as information on how those impacted individuals can request a human-led review or reconsideration,” he said.

Liability could arise if deployers do not use the technology as described. Baum said, “If, for example, a developer says, ‘This is not an approved use,’ and then a deployer decides to use it anyway, that’s where you would start to see some liability.”

Fewer Explicit Rules, Ongoing Evaluations

The most significant changes from the 2024 legislation may be the removal of several explicit compliance requirements, including a defined duty of care and certain reporting obligations.

Despite that shift, Baum cautioned that companies should not interpret the changes as a reduction in responsibility. “Just because the explicit duty of care has been removed does not necessarily mean that companies can handwave this away,” she said. Companies will still be expected to understand and evaluate the AI systems they use in order to meet transparency obligations.

The proposal introduces provisions that could affect how companies structure contracts. “There are explicit liability provisions in this proposal, which did not exist in the current Colorado AI Act,” Baum said.

Like the current law, the new draft does not include a private right of action. Enforcement would remain with the state attorney general.

Industry Response and Consumer Rights Concerns

Industry groups have largely welcomed the revised approach, which they view as less burdensome and more clearly defined, said Baum.

However, consumer advocates have expressed reservations, suggesting targeted revisions may be needed as the legislation moves forward.

Because SB 205 has been viewed as a test case for state-level AI regulation, the rapid shift from the 2024 policy underscores the difficulty of regulating a fast-moving technology, particularly in the absence of federal standards.

A formal bill aligned with the working group’s framework is expected soon, and lawmakers are likely to make further adjustments as it advances.
