State-Level Generative AI Laws in the United States: California, Colorado, Illinois, and Utah

By December 2025, generative AI laws in the U.S. aren't just theoretical: they're active, enforced, and changing how businesses operate. While Washington, D.C. stays stuck in debate, states are stepping in. Four stand out: California, Colorado, Illinois, and Utah. Each took a different path. If you run a business that uses AI, especially in marketing, healthcare, or customer service, you need to know exactly what's required, where, and when.

California: The Nation’s AI Regulatory Leader

California didn’t just pass a few bills. It built a full regulatory system. Starting in 2024 and accelerating through 2025, Governor Gavin Newsom signed over seven major AI laws. Most take effect January 1, 2026, with some delayed until August 2026 to give companies time to adapt.

The cornerstone is the California AI Transparency Act (AB 853). It doesn't just target big AI companies; it covers any platform, system host, or device maker that handles AI-generated content. If your app, website, or smart camera uses generative AI to create images, text, or voice, and it reaches more than 100,000 users, you must label that content. Not with a tiny footnote: you need manifest labels (visible, clear, and unavoidable) plus latent metadata embedded in the file itself, readable by AI detection tools. This isn't optional. Violations can cost up to $5,000 per day.
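To make "latent metadata embedded in the file" concrete, here is a toy sketch in Python that inserts an AI-disclosure text chunk into a PNG. This is purely illustrative: real compliance tooling would use an industry standard like C2PA/Content Credentials, and the `AI-Disclosure` keyword and the note text are my own invention, not anything the statute specifies.

```python
import struct
import zlib

def _chunk(ctype: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def add_ai_disclosure(png_bytes: bytes, note: str) -> bytes:
    """Insert a tEXt chunk flagging the image as AI-generated.

    Toy illustration of 'latent metadata' only -- production systems
    would embed C2PA manifests, not a bare tEXt chunk.
    """
    if png_bytes[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    # The first chunk is IHDR: 4-byte length + 4-byte type + data + 4-byte CRC.
    ihdr_len = struct.unpack(">I", png_bytes[8:12])[0]
    ihdr_end = 8 + 4 + 4 + ihdr_len + 4
    # tEXt payload: keyword, NUL separator, latin-1 text.
    data = b"AI-Disclosure\x00" + note.encode("latin-1")
    # Splice the new chunk in right after IHDR, where tEXt is legal.
    return png_bytes[:ihdr_end] + _chunk(b"tEXt", data) + png_bytes[ihdr_end:]

# Demo on a minimal 1x1 grayscale PNG built in memory.
minimal = (b"\x89PNG\r\n\x1a\n"
           + _chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
           + _chunk(b"IDAT", zlib.compress(b"\x00\x00"))
           + _chunk(b"IEND", b""))
tagged = add_ai_disclosure(minimal, "AI-generated content")
```

The point of the sketch is the shape of the obligation, not the mechanism: the disclosure travels inside the file, survives copying, and can be found by automated scanners even when the visible label is cropped away.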

Then there’s AB 2013, the Training Data Transparency Act. If you trained an AI model on any data after January 1, 2022, you must document where that data came from, what it included, and whether it contained biased or copyrighted material. Retroactive. Yes, that means even AI tools you launched two years ago now need a full audit. Companies like Adobe and startups using open-source models are scrambling. One developer on Reddit said it took six months and $1.2 million just to build the tracking system.
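A training-data inventory of the kind AB 2013 demands can start as something as simple as one structured record per dataset. A minimal Python sketch follows; the field names and the sample entry are my own illustration of the disclosure categories, not the statute's schema, and any format that captures provenance, dates, and known issues would do.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DatasetRecord:
    """One entry in a training-data inventory (illustrative fields only)."""
    name: str
    source: str                    # where the data came from
    collected_after: str           # ISO date; AB 2013 reaches data from 2022-01-01 on
    contains_copyrighted: bool
    contains_personal_info: bool
    known_bias_notes: str = ""
    licenses: list[str] = field(default_factory=list)

# Hypothetical example entry.
inventory = [
    DatasetRecord(
        name="support-tickets-2023",
        source="internal CRM export",
        collected_after="2023-03-01",
        contains_copyrighted=False,
        contains_personal_info=True,
        known_bias_notes="Skews toward English-speaking customers",
    ),
]

# Serialize the inventory for auditors or a public disclosure page.
report = json.dumps([asdict(r) for r in inventory], indent=2)
```

The expensive part, as the Reddit anecdote above suggests, isn't the record format; it's retroactively reconstructing where two-year-old data actually came from.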

Healthcare got its own rules. SB 1120 says any AI used by insurers to approve or deny medical claims must be supervised by a licensed physician. Kaiser Permanente spent $8.7 million training 12,000 doctors on how to review AI outputs. AB 489 bans AI developers from pretending they’re licensed medical providers. No more “AI doctor” chatbots giving diagnosis advice.

And then there's AB 2602, the likeness law. If you want to use someone's voice or face to generate synthetic content, you need their written consent. No exceptions. Even a marketing agency using AI to create a fake video of a customer praising a product is now breaking the law without that permission. Union contracts must now include AI likeness clauses.

California’s approach is aggressive because it’s unavoidable. The state accounts for 42% of all U.S. AI startups. If you want to sell anything here, you play by California’s rules. And many companies are adopting them nationwide just to avoid building separate systems for each state.

Colorado: Narrow Focus on Insurance

Colorado’s law, HB 24-1262, is the opposite of California’s broad sweep. It’s laser-focused: insurance underwriting. It bans insurers from using AI to deny coverage or set premiums based on protected characteristics like race, gender, or zip code. If an AI system makes a decision about your health or auto policy, the insurer must tell you it was AI-driven.

That’s it. No rules for social media bots, no transparency for deepfakes, no training data disclosures. It’s a smart, limited move. Insurance is a heavily regulated industry, and AI was already being used to cut costs-sometimes unfairly. Colorado addressed that specific risk without overreaching.

But the downside is clear. A marketing firm in Denver using AI to generate fake customer testimonials? Totally legal. A hospital using AI to triage ER patients? No oversight. The Center for Democracy & Technology called it “leaving significant gaps in consumer protection.” For businesses outside insurance, Colorado feels like a safe zone. But that safety could vanish if lawmakers expand the law in 2026.

Illinois: Deepfakes and Biometrics First

Illinois has been ahead of the curve on privacy-but not because it’s pro-AI. It’s been reacting to abuse. The state’s Biometric Information Privacy Act (BIPA), updated in 2023, now explicitly covers AI systems that collect facial scans, voiceprints, or gait patterns. A Chicago marketing firm got fined $250,000 in 2025 for using AI to analyze customer photos from social media without consent.

Then came S.B. 3197, the Artificial Intelligence Video Recording Act. It bans the creation of deepfakes of political candidates within 60 days of an election. This law was written after a viral video in 2024 showed a candidate saying things he never said-generated by AI. The law doesn’t touch general-purpose AI. It doesn’t require disclosure for AI-generated content in ads or entertainment. It’s a targeted fix for election integrity.

Illinois has no law requiring companies to disclose when they use AI in customer service, hiring, or advertising. No training data rules. No oversight for healthcare AI. The Illinois Policy Institute called the state’s approach “reactive rather than proactive.” That means businesses can operate freely in most areas-but one misstep in biometrics or politics could land them in court.

[Illustration: Colorado's insurance AI rules, showing prohibited factors and disclosure requirements]

Utah: Waiting for the Federal Lead

Utah has no generative AI law. Not one. Its main privacy law, the Utah Consumer Privacy Act (UCPA), took effect in December 2023. It gives consumers the right to know what data is collected and to delete it-but it says nothing about AI, synthetic media, or algorithmic bias.

The only AI-related bill introduced in 2025, S.B. 232, is still stuck in committee. It would create a task force to study AI governance-no enforcement, no deadlines, no penalties. Just a report. The Salt Lake City Technology Council warned that Utah risks “falling behind in the AI economy” without clear rules. Tech companies in the state are split. Some like the lack of regulation. Others are nervous. With California’s laws setting the tone, Utah’s wait-and-see approach could leave businesses exposed if federal rules eventually mirror California’s.

What This Means for Your Business

If you’re operating in multiple states, here’s what you’re up against:

  • California: Full compliance required. Document training data. Label all AI content. Get consent for likenesses. Have physicians oversee medical AI. Budget $250,000 to $2.5 million depending on scale.
  • Colorado: Only worry if you’re in insurance. Disclose AI use in underwriting. Avoid discriminatory outcomes.
  • Illinois: Don’t use AI to scan faces or voices without consent. Don’t make deepfakes of candidates before elections.
  • Utah: Follow general data privacy rules. No AI-specific obligations-for now.

Many companies are choosing to apply California’s rules everywhere. Why? Because building one system that meets the strictest standard is cheaper than managing five different ones. Gartner estimates California’s AI market will hit $287 billion by 2027. If you’re not compliant there, you’re not compliant in the biggest market in the country.

[Illustration: Utah tech workers divided between no regulation and looming California compliance]

What’s Coming Next

California isn’t done. In December 2025, the California Privacy Protection Agency announced a new 45-person AI enforcement division starting January 1, 2026. They’ll be auditing companies, reviewing documentation, and issuing fines. The Department of Technology also released draft rules for frontier AI developers-those building the most powerful models like GPT-5 or Gemini 2.0. They’ll have to submit annual compliance reports.

Other states are watching. Colorado is considering a consumer AI transparency bill in 2026. Illinois has a proposed disclosure law stuck in committee. New York, Washington, and Massachusetts are drafting their own versions of California’s laws. Forrester predicts 15 more states will pass major AI laws by 2027.

The message is clear: AI regulation isn’t coming. It’s already here. And California isn’t just leading the way-it’s defining it.

Do I need to comply with California’s AI laws if my business is based outside the state?

Yes-if you serve customers in California. State laws apply based on where the user is, not where your company is headquartered. If your website, app, or AI tool is accessible to Californians and generates content for them, you must comply with AB 853, AB 2013, and other laws. The California Attorney General has already targeted out-of-state companies for violations.

What happens if I don’t label AI-generated content in California?

You can be fined up to $5,000 per violation, per day, and the math compounds fast: if each unlabeled output counts as a separate violation, 10,000 unmarked chatbot responses would expose you to $50 million for a single day, and just 1,000 responses left unlabeled for a week would already total $35 million. The law also allows city attorneys to sue on behalf of consumers. Enforcement is active, and the state has tools to detect untagged AI content automatically.
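The exposure arithmetic above is simple enough to sanity-check in a few lines. This is a back-of-the-envelope worst case under the $5,000 per-violation, per-day cap, assuming (as the example does) that every unlabeled output counts as its own violation; it is not legal advice.

```python
# Worst-case fine under a $5,000 per-violation, per-day cap,
# assuming every unlabeled output is a separate violation.
PENALTY_PER_VIOLATION_PER_DAY = 5_000

def exposure(unlabeled_outputs: int, days: int = 1) -> int:
    """Total potential fine for this many unlabeled outputs over this many days."""
    return unlabeled_outputs * PENALTY_PER_VIOLATION_PER_DAY * days

print(exposure(10_000))      # 10,000 unlabeled responses, one day -> 50000000
print(exposure(1_000, 7))    # 1,000 responses unlabeled for a week -> 35000000
```

Even at modest chatbot volumes, the cap turns a labeling bug into an eight-figure liability within days, which is why detection-and-tagging pipelines are worth building before launch.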

Does Illinois’ deepfake law apply to entertainment or parody?

No. The law only bans deepfakes of political candidates within 60 days of an election. Parody, satire, movies, and entertainment content are exempt. However, if you use someone’s likeness for commercial purposes-like an ad or product promotion-you could still be liable under BIPA or defamation laws.

Can I use open-source AI models without worrying about compliance?

No. California’s AB 2013 applies to any developer who trains or modifies a generative AI system-even if they use open-source models. If you fine-tune Llama 3 or Mistral on your own data and release it, you’re considered a developer under the law. You must document your training data, including its sources and potential biases. Open-source doesn’t mean open to violations.

Is there a federal AI law I should wait for instead of complying with states?

No. As of December 2025, no federal AI law has passed. Congress has debated over 30 bills since 2023, but none have moved past committee. Waiting for federal action is not a legal strategy. States are moving forward, and California’s rules are becoming the de facto standard. Businesses that delay compliance are taking a high-risk gamble.

Next Steps for Businesses

  • If you’re in California: Audit all AI systems launched since January 2022. Build metadata tagging. Train staff on consent protocols. Hire legal counsel familiar with AB 2013 and SB 1120.
  • If you’re in Colorado: Review insurance underwriting workflows. Add disclosure notices if AI is used. No action needed for other departments-for now.
  • If you’re in Illinois: Audit biometric data collection. Ensure no facial or voice recognition is used without consent. Avoid deepfakes of politicians near elections.
  • If you’re in Utah: Monitor S.B. 232. Prepare for potential future rules. Apply California’s standards internally to future-proof your operations.

The age of unregulated AI is over. The question isn’t whether you’ll comply-it’s whether you’ll comply before you’re fined, sued, or shut down.

5 Comments

Pooja Kalra

8 December 2025, 22:07

It's not about regulation. It's about control. Every label, every audit, every consent form-they're not protecting people. They're protecting corporations from accountability. We've traded autonomy for the illusion of safety. And now we're documenting our own subjugation in metadata.

Sumit SM

9 December 2025, 20:47

California’s law? Overkill. But honestly? I’m glad they’re doing it. If you’re using AI to generate content-especially at scale-you owe it to users to be transparent. Not because it’s ethical (though it should be), but because if you don’t, someone else will sue you into oblivion. AB 2013 isn’t a burden-it’s insurance. And yes, it’s expensive. But so is losing a lawsuit over a dataset scraped from Reddit in 2021.

Jen Deschambeault

10 December 2025, 18:40

As someone who works in AI product design in Vancouver, I’ve already started applying California’s rules across all our products. Why? Because the alternative is chaos. Five different compliance systems? No thanks. I’d rather build once and sleep at night. The truth? Most companies aren’t waiting for federal laws-they’re just doing what California says. And honestly? It’s working.

Kayla Ellsworth

11 December 2025, 10:25

Oh wow. So now we’re all supposed to be terrified of AI because California says so? Let me guess-next they’ll require every toaster to have an AI disclosure sticker. Meanwhile, Colorado’s law is the only sane one. Why regulate everything when you can just regulate the thing that’s actually hurting people? Insurance underwriting. That’s it. The rest is performative outrage dressed up as policy.

Soham Dhruv

11 December 2025, 22:07

biggest thing no one talks about is that a lot of these laws are gonna hurt small devs the most. like yeah sure, big companies can afford $2.5 million audits but what about that one guy who built a meme generator using llama 3 on his laptop? he’s gonna get fined $5k a day for not embedding metadata? the system’s rigged. also why does everyone assume california is the future? what if the future is just… no one uses ai for marketing anymore because it’s too scary?
