Progressives for AI

progressivesforai.com

Issue 2 Draft — Progressives for AI Newsletter

Subject line: Labor just entered the AI chat
Subtitle: Unions are becoming AI’s most important accountability partner — and that’s good for everyone.


PROGRESSIVES FOR AI NEWSLETTER

ProgressivesforAI.com — Our astounding website

If you’d like to read this newsletter via RSS, the link can be found here.


QUICK TAKE

Here’s a question that doesn’t get asked enough: What if regulating AI actually makes AI better?

We hear a lot about how regulation “stifles innovation.” But look at what’s actually happening. Workers are organizing to demand transparency in how AI makes decisions about their jobs. States are passing laws requiring companies to test for bias. And the companies that comply? They’re building more reliable, more trustworthy products.

The labor movement just fired a shot across the bow in California — and whether you’re a union organizer, a nonprofit staffer, or just someone who uses ChatGPT to draft emails, you should be paying attention.

Let’s get into it.


AI NEWS ROUNDUP

Unions Tell Newsom: Regulate AI or Forget the Presidency

What happened: On February 4, the California Labor Federation held a press conference in Sacramento with a blunt message for Governor Gavin Newsom: if you want union support for your expected 2028 presidential run, you need to get serious about protecting workers from AI.

AFL-CIO President Liz Shuler was there. So were labor federation leaders from Iowa, Georgia, and North Carolina. This wasn’t just a California story — it was a national one.

California Labor Federation President Lorena Gonzalez didn’t mince words: “I don’t think you’re going to have a lot of motivation to walk precincts for somebody who won’t engage working class voters on the very things that are taking away their jobs.”

The federation is backing 24 bills this session, including:

  • SB 947: Bans management decisions based solely on AI predictions about employees
  • SB 951: Requires employers to give advance notice before replacing jobs with AI
  • A surveillance bill that would bar AI-powered workplace monitoring aimed at suppressing union organizing

Why this is actually good for AI: These bills don’t ban AI — they require it to be used well. SB 947 doesn’t say you can’t use AI in management. It says a human has to be in the loop. That’s the kind of guardrail that builds trust. And when workers trust AI tools, they’re more likely to adopt and benefit from them.

Consider the numbers: A Gallup poll from September 2025 found that 80% of Americans want AI regulation, even if it slows innovation. That’s not an anti-tech number. That’s a “we want to trust this stuff” number.

What you can do: If you’re in California, contact your state legislators about SB 947 and SB 951. Even if you’re not, the AFL-CIO has been building a national framework on AI and labor — read it and share it with your networks. These are the kinds of thoughtful, specific proposals that move the conversation beyond “AI bad” to “AI accountable.”


The AI Gap Is Real — And Progressive Orgs Are on Both Sides of It

What happened: A January report from Social Current revealed a growing divide in the nonprofit sector: organizations earning over $1 million are adopting AI at nearly twice the rate of smaller ones. And over half of all nonprofits earn less than $1 million annually.

The orgs using AI are seeing real results — 20-30% increases in donations through personalized outreach, and 15-20 hours saved per week on admin tasks. But 41% of nonprofits rely on a single person for all AI decisions. And only about 10% have any kind of written AI governance policy.

What it means: This is a classic equity gap playing out in real time. Well-funded nonprofits are using AI to raise even more money, while grassroots orgs serving the communities that need it most are falling behind. The tool isn’t the problem — access is.

The hopeful part: The barrier to entry has never been lower. AI tools that cost thousands per month two years ago now have free tiers powerful enough for small organizations. The real bottleneck isn’t money — it’s knowledge and confidence.

What you can do right now: If you work at a nonprofit or advocacy org, here are free and low-cost tools you can start using this week:

  • Grant writing: Use Claude or ChatGPT (both have free tiers) to generate first drafts of grant narratives. Orgs report saving 35-50% of proposal development time. Don’t paste in confidential info — use anonymized details and add specifics yourself after.
  • Donor communications: Draft personalized thank-you notes and updates at scale. What used to take 6 hours can take 90 minutes with an AI first draft that you review and personalize.
  • Meeting notes: Otter.ai (free, 600 min/month) or Fathom (free for individuals) will auto-transcribe your meetings and pull out action items. Just get consent from attendees first, and don’t transcribe sessions where you’re discussing specific clients by name.
  • Social media: Canva (free tier) now has AI image generation and design tools. Buffer ($6/mo) includes an AI assistant for writing posts. You can go from program update to polished social content in minutes.
  • Research: Perplexity gives sourced answers — useful for rapid-response research when news breaks. Great for building fact sheets and talking points quickly.

One important rule: Always review AI-generated content before it goes out. These tools draft; you decide. That’s not a limitation — that’s how it should work.


When Regulation Works: State AI Laws Are Making Products Better

What happened: Despite the Trump administration’s threats (which we covered in Issue 1), state AI laws are quietly doing exactly what they’re supposed to: pushing companies to build better products.

California’s Generative AI Training Data Transparency Act, effective January 1, now requires AI developers to publish information about what data they used to train their models. California’s AB 489 bans AI chatbots from impersonating healthcare professionals. Texas’s Responsible AI Governance Act requires transparency from AI developers, with civil penalties for noncompliance. And Colorado’s AI Act, taking effect June 30, will require “reasonable care” to prevent algorithmic discrimination.

Meanwhile, the Workday hiring discrimination lawsuit — which we also mentioned last issue — is now proceeding as a nationwide collective action. Millions of job applicants over 40 may have been filtered out by AI screening tools, sometimes at 1:50 AM when no human could possibly be reviewing applications.

Why regulation is pro-innovation: Here’s the part that often gets lost. When California required police to disclose AI use in official reports (SB 524), it didn’t kill police AI tools. It made them more transparent — which made them more credible in court. When states require bias testing, companies that comply end up with products that work better for more people. That’s not a burden. That’s a competitive advantage.

The companies fighting regulation aren’t defending innovation. They’re defending the right to ship untested products. There’s a difference.

What you can do: Know your state’s AI laws. The Future of Privacy Forum maintains a solid tracker of state-level AI legislation. If you’re in a state with strong protections, support them vocally — they’re under federal attack. If you’re in a state without them, that’s an organizing opportunity. The Brookings Institution has a good breakdown of how different states are approaching this.


PUT AI TO WORK

Practical ways progressives can use AI this week

Write a Public Comment in 15 Minutes

State legislatures and federal agencies are taking public comments on AI regulation right now. AI can help you participate even if you don’t have a policy background:

  1. Find an open comment period (check regulations.gov or your state legislature’s website)
  2. Paste the proposed rule or bill summary into Claude or ChatGPT
  3. Ask it to explain the key provisions in plain language
  4. Ask it to help you draft a comment from your perspective — as a worker, organizer, parent, small business owner, whatever applies
  5. Review, personalize, and submit

Your voice matters more than polish. Agencies count comments, and personal stories carry weight.

Build a Fact Sheet Fast

Got a meeting with a legislator or a community forum coming up? AI can help you prep:

  1. Use Perplexity to research the topic (it cites sources, so you can verify)
  2. Paste your research into Claude and ask for a one-page fact sheet with key stats, talking points, and counter-arguments
  3. Add your local context and print it out

What used to take a weekend of research can happen in an afternoon.

Start Your Org’s AI Policy

If you’re one of the 90% of nonprofits without an AI governance policy, here’s a shortcut: ask an AI to help you write one. Seriously. Ask Claude or ChatGPT: “Help me draft a simple AI use policy for a small nonprofit. Cover data privacy, content review, and which tasks are appropriate for AI assistance.” Then customize it for your org. It won’t be perfect, but it’s infinitely better than having nothing.


AI FLEX OF THE WEEK

Two things that blew our minds recently:

AI just made 100-year climate projections possible in 25 hours. Researchers at UC San Diego and the Allen Institute for AI built Spherical DYffusion, a generative AI model that can simulate a century of global climate patterns in about a day — a process that used to take weeks on supercomputers. Even better: it runs on standard GPU clusters, not billion-dollar infrastructure. This is the kind of tool that gives climate scientists and policymakers the ability to model scenarios fast enough to actually act on them. Imagine an advocacy org being able to say “here’s what happens to your district under three different emissions scenarios” with real data backing it up.

A free app is giving blind and low-vision users superhuman access to the visual world. Be My Eyes launched Be My AI — a free tool that lets blind users snap a photo of anything and get a detailed, conversational description in 36 languages. Reading a menu, checking an expiration date, navigating a store. Microsoft deployed it at their Disability Answer Desk and it’s resolving over 90% of calls without needing a human. This isn’t a gimmick — it’s genuine independence, powered by AI, available to anyone with a smartphone.

This is what we mean when we say AI can be a force for good. Not hypothetically. Right now.


LOOKING AHEAD

The next few months are going to be big. Colorado’s AI Act takes effect June 30 — and the administration has already targeted it. California’s 24 labor-backed AI bills will be working through the legislature. And the Workday lawsuit could set nationwide precedent for algorithmic accountability in hiring.

We’ll keep you updated. In the meantime: use these tools, follow these fights, and talk to your networks about what kind of AI future we want to build.

Because the future of AI isn’t just a tech story. It’s a labor story, a civil rights story, and an organizing story. And progressives should be leading it — not running from it.


Share this newsletter with someone who needs to hear that being pro-AI and pro-accountability aren’t opposites.

Subscribe | Website

Know someone who should be reading this?

Forward this email or share using the links below. Every subscriber makes this community stronger.

Share on X Share on Bluesky Send to a Friend
progressivesforai.com