Progressives for AI

Who gets to use AI?

Issue #6 · March 2026

Quick Take · News · Put AI to Work · Looking Ahead

In this issue

  • New York wants to ban AI from giving "substantive" answers in medicine, law, engineering, and 11 other fields. Who does that actually protect?
  • Montana just became the first state to enshrine the right to use AI. It's not what you'd expect.
  • Plus: a link to my new blog post on the progressive AI adoption gap
  • This week's tool: Use AI to research who's funding your opponents

Quick Take

Harper Carroll, a Stanford CS grad, former Meta AI engineer, and AI educator, posted a video last week about a New York State Senate bill that would ban AI from giving "substantive responses" in medicine, law, engineering, and 11 other licensed professions.

Her argument: the bill doesn't define "substantive." That's not an oversight. It's a feature. Vague language means anyone can sue, which means AI companies will overcorrect and restrict access to be safe. The people who lose first? The single mom checking her kid's symptoms at 11pm. The renter trying to understand a predatory lease. The worker who can't afford $500 an hour for a lawyer.

As Carroll put it: "This knowledge has always been out there, but now it's finally democratized. And New York wants to take that away."

This is the kind of fight progressives should be leading. We don't love AI uncritically. But gatekeeping shouldn't be the answer. The response to AI's real problems (hallucinations, bias, corporate consolidation) is not to hand the keys back to the professions that locked people out in the first place. It's to build guardrails that protect people without recreating the access barriers that harmed them.

I wrote about a version of this on my blog this week: why progressives can't afford to sit out the AI revolution. The short version: 39% of campaign professionals haven't used AI for content creation at all. We can't afford to be in that 39%. We don't get to unilaterally disarm.

Let's get into it.

AI News Roundup

New York wants to ban AI from answering your questions


What happened: New York Senate bill S7263, sponsored by Senator Kristen Gonzalez, would prohibit AI systems from providing "substantive responses" in medicine, law, engineering, and 11 other licensed professions. The bill is on the Senate floor calendar as of late February. Liability falls on deployers, not model makers, and it creates a private right of action, meaning anyone can sue.

The central problem: "substantive" is never defined. Bloomberg Law flagged this as the bill's fatal flaw. Operators will over-restrict AI outputs far beyond what the statute requires just to avoid lawsuits. There's no clear line between "What is ibuprofen?" and "What dosage should I take?"

Why this matters: The bill is framed as consumer protection, but ask who it actually protects. The people with the most to lose from restricted AI access are the people who can't afford professional gatekeepers. AI is not replacing doctors or lawyers. It's giving people information that was always available but locked behind $500-an-hour consultations. A renter facing a predatory landlord who needs to understand their rights. A worker trying to figure out if their severance offer is fair.

If AI gives bad medical information, the fix is accuracy standards and disclosure requirements. Not banning it from answering the question.

What you can do

If you're in New York, find the bill on the Senate website and contact your state senator. The broader principle applies everywhere: when AI regulation proposals come to your state, ask who actually benefits. Consumer protection that prices out consumers isn't protection.

39%

of political campaign professionals haven't used AI for content creation at all, while a third of consultants overall now use it daily.

Source: Center for Campaign Innovation, 2024 Post-Election Survey

Montana just made AI access a right

What happened: Montana became the first state to write the right to use AI and computational resources into law. Governor Gianforte signed SB 212, the "Right to Compute Act," which says that any state restriction on AI use must be "demonstrably necessary and narrowly tailored to fulfill a compelling government interest." The law also requires mandatory shutdown mechanisms and annual risk assessments for AI systems that control critical infrastructure.

Why this matters: This came from a Republican governor in a deep red state. The bill was championed by libertarian-leaning tech advocates. Progressives should be paying attention, not because the politics align, but because the framework does.

Montana's approach: default to access, require narrow justification for any restrictions, build in safety for critical systems. That's closer to what thoughtful progressive AI policy should look like than most of what our side has proposed. The contrast with New York couldn't be sharper. One state defaults to restriction, the other to access.

The lesson is not that Montana got everything right. It's that progressives are at risk of ceding the "AI access" frame to the right. Access to knowledge and tools should be a progressive value.

What you can do

Read the coverage of the bill and share it with your policy team. New Hampshire already has similar legislation underway, and RightToCompute.ai is pushing the framework nationally. If your organization does state-level advocacy, start a conversation about what a progressive version looks like: Montana's access-first framework plus algorithmic transparency, bias audits, and worker protections. If you work with a sympathetic state legislator, send them the bill and ask if they'd be interested in introducing a version in your state.

Progressive AI Win

State AI protections may be stronger than anyone realized

Remember when Trump's executive order threatened to wipe out state AI regulations? Legal analysts at Ropes & Gray just published an analysis that should reassure anyone who fought for state-level protections: the executive order almost certainly can't preempt state law. Only Congress can do that. The March 11 deadline for agencies to identify "burdensome" state laws has passed, and Colorado's AI Act, NYC's bias audit law, and California's worker protection bills are all still standing. The communities that organized for these laws may have built more durable protections than they knew.

Put AI to Work

Practical ways progressives can use AI this week

Use AI to research who's funding your opponents


Opposition research used to require a dedicated researcher and weeks of digging. AI changed that. A two-person advocacy shop can now do solid oppo research in an afternoon.

Map funding networks with 990 filings

Every nonprofit and foundation files a 990 tax return, and they're all public. ProPublica's Nonprofit Explorer has millions of them searchable for free.

  1. Look up the think tank or advocacy group opposing your issue
  2. Download their 990s (ProPublica has them as PDFs)
  3. Upload the PDFs to Claude or ChatGPT and ask: "List every grant this organization received over $50,000, who it came from, and the stated purpose"
  4. Then ask: "Which of these funders also fund organizations working on [your issue area]?"

What used to take days of reading tax filings takes 20 minutes. You'll see the funding web behind the opposition before your next coalition call.
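If someone on your team is comfortable with a little Python, the same questions from steps 3 and 4 can be answered in code once the grant data is extracted. This is a minimal sketch, not a finished tool: the field names (`funder`, `amount`, `purpose`) and the toy data are hypothetical stand-ins for whatever your AI extraction step produces from the 990 PDFs.

```python
"""Sketch: filter and cross-reference parsed 990 grant data.
Assumes grants have already been extracted (e.g. by an AI pass over
the PDFs) into dicts; the field names here are hypothetical."""

def grants_over(grants, threshold=50_000):
    """Grants at or above the threshold, largest first (step 3)."""
    big = [g for g in grants if g["amount"] >= threshold]
    return sorted(big, key=lambda g: g["amount"], reverse=True)

def shared_funders(grants_a, grants_b):
    """Funders appearing in both grant lists -- the 'which of these
    funders also fund organizations on our issue?' question (step 4)."""
    return sorted({g["funder"] for g in grants_a} &
                  {g["funder"] for g in grants_b})

# Toy data standing in for AI-extracted grant rows.
oppo = [
    {"funder": "Acme Foundation", "amount": 250_000, "purpose": "General support"},
    {"funder": "Local Donor Fund", "amount": 10_000, "purpose": "Event sponsorship"},
    {"funder": "Policy Trust", "amount": 75_000, "purpose": "Research program"},
]
allies = [
    {"funder": "Acme Foundation", "amount": 40_000, "purpose": "Capacity building"},
]

for g in grants_over(oppo):
    print(f"{g['funder']:<20} ${g['amount']:>9,}  {g['purpose']}")
print("Overlap:", shared_funders(oppo, allies))
```

The point isn't to replace the AI step; it's that once the data is structured, re-running the analysis against a new opponent takes seconds instead of another chat session.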

Research lobbying connections before hearings

Before a legislative hearing, you want to know who's lobbying on the other side. OpenSecrets tracks lobbying disclosures and campaign contributions, and your state may have its own disclosure database as well.

  1. Search for the company or industry group testifying against your bill
  2. Copy their lobbying disclosure data into an AI
  3. Ask: "Summarize this organization's lobbying spending, which legislators they've contributed to, and what issues they've lobbied on in the past 3 years"
  4. Use the results to prep counter-arguments and questions for friendly legislators to ask
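If the disclosure export is too large to paste into a chat window, a quick roll-up first helps. Here's a minimal sketch assuming a hypothetical CSV layout (`year`, `registrant`, `amount`, `issue`); real exports from OpenSecrets or a state database will use different column names, so adjust accordingly.

```python
"""Sketch: summarize lobbying-disclosure rows before handing them to an AI.
The CSV columns below are hypothetical -- match them to your actual export."""
import csv
import io
from collections import defaultdict

# Toy stand-in for an exported disclosure file.
CSV_DATA = """year,registrant,amount,issue
2023,Acme Industry Group,120000,Energy
2024,Acme Industry Group,180000,Energy
2024,Acme Industry Group,60000,Taxation
"""

def spend_by_year(rows):
    """Total reported lobbying spend per year."""
    totals = defaultdict(int)
    for r in rows:
        totals[r["year"]] += int(r["amount"])
    return dict(totals)

rows = list(csv.DictReader(io.StringIO(CSV_DATA)))
print(spend_by_year(rows))
```

A compact summary like this, pasted alongside the raw data, also tends to get you a sharper AI write-up than dumping the full spreadsheet alone.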

Build a "who funds this?" one-pager

For any upcoming meeting, hearing, or media hit:

  1. Gather everything you can find about the opposition group (website, leadership, 990s, lobbying records, news coverage)
  2. Feed it all into an AI and ask: "Create a one-page brief on this organization: who funds them, who leads them, what positions they've taken, and what conflicts of interest exist"
  3. Print it out for your team

This is exactly the kind of work that well-funded organizations have always done and grassroots groups couldn't afford. AI makes it possible for a small shop to walk into a hearing as prepared as anyone in the room.

From our friends

Change Agent

Your org deserves its own AI. Not Big Tech's.

Change Agent is a private AI platform built for nonprofits, unions, and advocacy orgs. Your data stays yours, it plugs into tools you already use (Google Drive, Slack, ActBlue), and it handles the tedious stuff so your team can focus on the mission. Starts at $35/month. Small nonprofits under $1M can apply for discounted pricing.

Learn more

Looking Ahead

Craig Mod, a writer and photographer based in Japan, published an essay this week called "Software Bonkers." He built custom accounting software in five days using AI coding tools. He's not really a programmer. He just needed something that fit his multi-country, multi-currency freelance life, and nothing off the shelf did.

His line that stuck with me: "The software feels organic and pliable in a form perfectly shaped to my hand, able to conform to any hunk of data I throw at it. It feels like bushwhacking with a lightsaber."

That's the access story applied to software itself. Custom tools used to require hiring developers or buying enterprise licenses. Now a nonprofit organizer or a freelance consultant can have software shaped to their exact workflow. That's happening right now, to regular people.

New York says only licensed professionals should answer your questions. Montana says everyone has the right to compute. The reality we should be fighting for is somewhere specific: access with accountability. Guardrails that protect people without gatekeeping them out.

That's the progressive position on every other technology. It should be ours on AI too.

Until next time,
Jordan


Know someone who should read this?

Share the issue that resonated most.


Read past issues on the web · Subscribe via RSS · Website