If you're running a business in Colorado or selling AI tools to companies there, the clock has officially run out. As of February 1, 2026, Colorado SB24-205 is no longer just a piece of legislation; it's the law of the land. Formally known as the Consumer Protections for Artificial Intelligence Act, this law isn't about banning AI; it's about making sure the algorithms we use to make life-changing decisions don't bake in bias or discriminate against people.
For most companies, the big question is: "Am I actually affected?" The answer depends entirely on whether your AI is making what the state calls "consequential decisions." We're talking about things that actually matter to a person's life: getting a job, securing a loan, getting into a school, or receiving healthcare. If your AI influences those outcomes, you're in the high-risk category and need to get your paperwork in order immediately.
Who Exactly Needs to Comply?
The law splits the world into two groups: Developers and Deployers. It's a bit like the relationship between a car manufacturer and a taxi company. The manufacturer builds the engine; the taxi company puts it on the road to serve customers. Both have responsibilities, but they aren't the same.
Developers are the ones who build or significantly tweak the AI. Their job is to be transparent. They have to tell the deployers how the system works, what its limits are, and where the potential risks lie. If a developer finds out their system is causing discrimination, they have a strict 90-day window to notify the Colorado Attorney General and everyone using the tool.
Deployers are the businesses using the AI in the real world. This is where the heavy lifting of compliance happens. If you're using an AI tool to screen resumes or determine insurance premiums in Colorado, you are a deployer. Your main job is to ensure the tool is being used fairly and that the people affected by it know what's happening.
The Heavy Lift: AI Impact Assessments
The core of SB24-205 is the Impact Assessment. Think of this as a rigorous "stress test" for bias. You can't just check a box once and forget about it; this is a repeatable process that must happen before you launch a system, every single year, and within 90 days of any major update.
What actually goes into one of these assessments? You can't be vague. You need to document:
- The Purpose: Exactly why are you using this AI and what is it supposed to achieve?
- The Data: What categories of data are going in, and what is coming out? If you customized the model with your own data, you have to explain that too.
- The Risk Analysis: Do you see any way this could result in algorithmic discrimination? If so, how are you stopping it?
- Transparency: How are you telling consumers that an AI is making a decision about them?
- The Safety Net: What's your plan for monitoring the system after it's live? How do you handle errors?
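To make that checklist concrete, here's one way a deployer might structure an assessment record internally. This is purely an illustrative sketch: the class and field names are our own invention, and nothing in SB24-205 mandates any particular format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactAssessment:
    """Illustrative record mirroring the documentation areas listed above.
    Field names are hypothetical, not taken from the statute."""
    system_name: str
    purpose: str                     # why the AI is used, what it should achieve
    data_categories_in: list[str]    # categories of data going in
    data_categories_out: list[str]   # categories of data coming out
    customization_notes: str         # how the model was tuned on your own data
    discrimination_risks: list[str]  # identified algorithmic-discrimination risks
    mitigations: list[str]           # how each risk is being addressed
    consumer_notice: str             # how consumers learn an AI decides about them
    monitoring_plan: str             # post-launch monitoring and error handling
    completed_on: date = field(default_factory=date.today)

    def is_complete(self) -> bool:
        # Crude completeness check: every core documented area must be filled in.
        return all([self.purpose, self.data_categories_in,
                    self.data_categories_out, self.consumer_notice,
                    self.monitoring_plan])
```

Even a simple structure like this makes the annual re-assessment easier: you copy last year's record, update what changed, and keep both versions for your audit trail.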
If you're feeling overwhelmed, you aren't alone. Many firms are now using specialized compliance software, like VerifyWise, which provides templates mapped to the 13 protected classes the law covers (such as race, age, and gender identity).
Building a Real Risk Management Program
A PDF document sitting in a folder isn't a "program." Colorado expects you to have an operational system for managing risk. The law suggests aligning your internal policies with recognized global standards. If you're wondering where to start, look at the NIST AI RMF (AI Risk Management Framework) or ISO/IEC 42001. These aren't just suggestions: aligning with these frameworks helps demonstrate that you're following a professional, industry-standard approach to AI governance.
| Requirement | Developer Responsibility | Deployer Responsibility |
|---|---|---|
| Impact Assessments | Provide documentation to support it | Conduct and maintain the assessment |
| Risk Management | Implement a policy for the system | Implement a program for the deployment |
| Consumer Notice | N/A | Notify consumers of AI's role in decisions |
| Human Review | N/A | Offer human review for adverse decisions |
| Reporting | Notify AG of discrimination risks | Conduct annual reviews for bias |
The Generative AI Twist
There's a common misconception that this law only applies to "black box" scoring algorithms. If you're using Generative AI, like a Large Language Model, to help draft employment contracts or screen candidates, you are still subject to the high-risk rules. In fact, GenAI comes with extra baggage.
For these tools, the law adds a few more layers of scrutiny. You need to keep a tighter grip on your training data and ensure there's a way to detect that content was AI-generated. Copyright compliance is also a major point here. If your generative tool is influencing a consequential decision, you don't get a "GenAI pass"; you still need the full impact assessment and the annual review.
Transparency and the "Human in the Loop"
One of the most critical parts of the law is the right to a human. If your AI denies someone a loan or a job, you can't just say "the computer said no." Deployers must provide a clear notice to the consumer that AI was used. More importantly, they must offer a human review of that adverse decision.
The only exception to this is if a human review would pose a safety risk, which is a very high bar to meet in a business context. This requirement forces companies to keep a human-in-the-loop, ensuring that the final call on a person's life still rests with a person, not a set of weights and biases in a neural network.
Timelines, Retention, and the "Cure Period"
Let's talk about the clock. The law became effective on February 1, 2026. If you're a deployer, you had a 90-day window from that date to finish your first impact assessment. If you haven't done that yet, you're already behind.
You also need to be careful with your files. All impact assessments and documentation must be kept for at least three years. This creates a long-term audit trail. If the Attorney General comes knocking in 2028, you need to be able to show exactly how you were assessing risk back in 2026.
The good news? There is a 60-day "cure period." If the state finds a violation, you generally have 60 days to fix it before the heavy enforcement hammers start falling. But don't rely on this as a strategy; the goal is to have a repeatable, demonstrable governance program that runs in the background of your business.
Does SB24-205 apply to all AI used in Colorado?
No. It specifically targets "high-risk AI systems." These are systems that make or significantly influence "consequential decisions," such as those affecting employment, housing, healthcare, education, and financial services. If your AI is just suggesting a playlist or optimizing a marketing image, it likely isn't high-risk under this law.
What happens if a developer follows the rules but the AI still discriminates?
The law provides a "rebuttable presumption" of reasonable care. If a developer can prove they provided the necessary documentation to the deployer and published their risk management summaries, the law assumes they acted reasonably. However, this isn't an absolute shield; it's a legal starting point that the developer can use to defend their actions.
How often do I need to update my impact assessment?
You must complete an assessment before the initial deployment, at least once every year, and within 90 days of any "intentional and substantial modification" to the AI system.
Can I use a third-party framework for risk management?
Yes, and it is highly encouraged. Aligning your program with recognized frameworks like the NIST AI RMF or ISO/IEC 42001 helps demonstrate that your governance is based on professional industry standards rather than guesswork.
Do I have to tell customers if AI is being used?
Yes. If a high-risk AI system is a substantial factor in making a consequential decision about a consumer, you must provide them with a clear notice explaining the AI's role in that decision.
Next Steps for Business Owners
If you're just now realizing you're subject to these rules, start with an inventory. List every AI tool you use and determine if it influences a consequential decision. Once you've identified your high-risk systems, reach out to your developers to get the documentation you need for your impact assessments. If you're a developer, start drafting your public summary of risk management and ensure your notification pipeline to the Attorney General is ready. The shift from "move fast and break things" to "move carefully and document everything" is here.
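A first-pass inventory can live in a spreadsheet, but even a tiny script keeps the triage honest. The domain list below paraphrases the law's examples of consequential decisions, and the tool names are entirely hypothetical:

```python
# Hypothetical inventory triage: flag tools that influence consequential
# decisions (domains paraphrased from the law's examples; names invented).
CONSEQUENTIAL_DOMAINS = {
    "employment", "lending", "housing", "education",
    "healthcare", "insurance", "legal services", "government services",
}

inventory = [
    {"tool": "ResumeRanker", "domain": "employment"},     # high-risk
    {"tool": "PlaylistGen", "domain": "entertainment"},   # not high-risk
    {"tool": "PremiumModel", "domain": "insurance"},      # high-risk
]

high_risk = [entry["tool"] for entry in inventory
             if entry["domain"] in CONSEQUENTIAL_DOMAINS]
print(high_risk)  # ['ResumeRanker', 'PremiumModel']
```

Every tool that survives this filter is a candidate for an impact assessment; everything else can be parked, with a note explaining why it was ruled out.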