Progressives for AI Issue #4 — Corporate ethics folded in 48 hours. Now what?

Quick Take

Here's something the headlines mostly missed this week: more than 300 tech workers at Google and OpenAI broke ranks to publicly demand AI safety red lines that their own employers refused to adopt. Thirty-six attorneys general — from both parties — told the federal government to back off state AI protections. And Google committed $30 million to fund AI that serves communities rather than surveilling them.

The stories in this issue are heavy. Corporate ethics caved under government pressure. A mass surveillance system targeting immigrants is already running. The federal government is threatening to defund states that dare to regulate AI.

But the through-line isn't doom. It's that the people closest to these fights — the workers building these systems, the state lawmakers writing the rules, the organizations doing the unglamorous accountability work — are showing up. And they're not waiting for permission.

Let's get into it.


AI News Roundup

Corporate AI ethics folded in 48 hours. Then 300+ tech workers pushed back.

What happened: Two weeks ago, we wrote about Anthropic — the company behind Claude — refusing to drop its safety guardrails for Pentagon use. Then everything moved fast.

On February 27, Defense Secretary Pete Hegseth formally designated Anthropic a "supply chain risk to national security," a label normally reserved for foreign adversaries like Huawei. Trump ordered every federal agency to stop using Anthropic products. Within hours, OpenAI CEO Sam Altman — who just that morning had publicly declared that OpenAI shared Anthropic's red lines on autonomous weapons and mass surveillance — signed a deal with the Pentagon. "Cancel ChatGPT" started trending. Claude became the #1 app in the App Store.

It's tempting to turn this into a simple morality play: Anthropic good, OpenAI bad. But the reality is messier. Anthropic wasn't refusing to work with the military — they were already the government's primary AI vendor. They were negotiating terms: written assurances that Claude wouldn't be used for fully autonomous lethal weapons or mass domestic surveillance. The Pentagon said no. Anthropic held its position. And they got punished for it.

Meanwhile, the same week, Anthropic quietly rewrote its own Responsible Scaling Policy, removing the hard commitment to never deploy a model without proven safety measures. The new language says holding that line while competitors race ahead "could result in a world that is less safe." Both things happened. In the same week.

OpenAI's deal wasn't much better on close inspection. Legal analysts at The Information found loopholes: the surveillance prohibition only covers "unconstrained" collection of Americans' private information, not publicly available data. An OpenAI alignment researcher, Leo Gao, publicly called his own company's additional safeguards "window dressing."

Why this matters: When AI safety depends on voluntary corporate promises, it will always lose to the next government contract or competitive pressure. Anthropic tried to hold a line and got blacklisted. Their competitor got rewarded for saying the right words with weaker commitments. The lesson isn't that one company is good and another is bad — it's that the whole arrangement is broken. Companies shouldn't get to choose whether to be ethical when the pressure comes. The rules should require it.

The hopeful part: Those 300+ workers who signed the open letter? They work at the companies that caved. They're inside the building, pushing for the red lines their employers won't adopt. That's not nothing. And it's the kind of internal pressure that, paired with enforceable regulation, actually changes things.


The government already built the mass surveillance system Anthropic was worried about

Surveillance cameras mounted on a lamp post

Photo by Jakub Zerdzicki / Unsplash

What happened: While the Pentagon was telling Anthropic that refusing mass surveillance tools was "undemocratic," ICE was already running exactly that kind of system.

ImmigrationOS, built by Palantir under a $30 million no-bid contract, is now fully operational. Its companion targeting tool, ELITE, creates dossiers on individuals with "address confidence scores" rated 0-100. A court brief described it as "kind of like Google Maps for finding deportation targets."

The data sources feeding this system: Medicaid records covering 80 million enrollees (names, addresses, Social Security numbers, claims history), IRS tax data, DMV records, Social Security files, license plate readers, seized phone data, social media posts, utility bills, and commercial data brokers.

And this is not a system that targets only undocumented people. ICE agents in Portland, Maine, scanned the faces and license plates of legal observers (U.S. citizens watching enforcement operations) and threatened to place them on a domestic terrorism watchlist. DHS has fired off hundreds of subpoenas to Google, Meta, Reddit, and Discord demanding identifying information for anonymous accounts that posted about ICE activity. Acting ICE Director Todd Lyons told Congress on February 10 that "there is no database tracking United States citizens." His own agents, on video, have said the opposite.

Why this connects: Remember the two red lines Anthropic insisted on? No autonomous weapons and no mass domestic surveillance. ImmigrationOS is exactly the kind of system the second red line was supposed to prevent. It's already running. It already sweeps in citizens and legal residents. And it was built by a private company with almost no public oversight.

What you can do: If your organization does immigration work, the American Immigration Council and the Brennan Center have published detailed documentation on ImmigrationOS and ELITE. If you or your staff observe enforcement operations, know your digital rights because ICE is actively collecting biometric data on observers. The Maine legal observers who were threatened with a terrorism watchlist filed a federal class action on February 23. Support these cases. They set precedent for everyone.


The federal government is trying to kill state AI protections. States aren't having it.

The dome of the U.S. Capitol building

Photo by Juliana Uribbe / Unsplash

What happened: The Trump administration's December executive order targeting state AI laws? It's moving fast.

The DOJ established an AI Litigation Task Force in January with one job: sue states whose AI laws are deemed to "obstruct" federal policy. The Commerce Department has until March 11 to publish a report identifying which state laws are "onerous," and that report is expected to trigger the actual lawsuits.

The administration is also threatening to withhold broadband infrastructure funding (from the $42.5 billion BEAD program) from states with AI regulations it doesn't like. Colorado was named explicitly in the executive order. David Sacks called the Colorado AI Act "probably the most excessive." And in what appears to be the first direct pressure on a specific state bill, the White House sent a letter to Utah's Senate majority leader saying a child-safety chatbot regulation "goes against the Administration's AI Agenda." The bill stalled.

But states aren't folding. Thirty-six attorneys general, Republicans and Democrats alike, signed a letter opposing federal preemption. Two hundred and eighty state lawmakers from both parties did the same. Colorado AG Phil Weiser has publicly stated he'll challenge the order in court. California State Senator Scott Wiener: "If the Trump Administration tries to enforce this ridiculous order, we will see them in court."

And legislatures keep passing bills anyway. Texas, California, and Illinois all had new AI laws take effect January 1. Oregon's chatbot disclosure bill passed the state Senate with bipartisan support. Virginia advanced an AI regulation bill 39-1.

Why March 11 matters: That Commerce Department report will name which states lose funding and which laws get referred for federal lawsuits. If your state has AI protections on the books or in the pipeline, this is the moment to support them, before the legal challenges arrive.

What you can do: Check whether your state has active AI legislation using the Transparency Coalition's tracker or the IAPP tracker. Contact your state legislators and attorney general's office. If your state has AI protections, tell them to hold the line. If it doesn't, that's an organizing opportunity. The model bills exist.


Google.org is giving away $30 million for AI in public services. Applications close April 3.

Hands joined together in a circle

Photo by Hannah Busing / Unsplash

What happened: Google.org launched its Impact Challenge: AI for Government Innovation, offering $30 million in grants ($1 million to $3 million per award) to nonprofits, social enterprises, and academic institutions using AI to improve public services.

The focus areas are health (healthcare access, diagnostics), resilience (disaster response, crisis planning), and economy (public infrastructure, caseworker tools). Recipients also get engineering support from Google's AI team through a multi-month accelerator.

One catch: you need a government partner. Projects require "documented government buy-in," so you need to either already be working with a government agency or have a formal commitment from one.

Why we're featuring this: Concrete funding for using AI to serve communities, not surveil them, deserves a mention.

What you can do: If your org works with government on health, disaster response, or public services, look at the application page before the April 3 deadline. Even if you don't apply, share it with organizations in your network that might qualify, especially smaller orgs that won't hear about it through the usual channels.


Put AI to Work

Practical ways progressives can use AI this week

Government accountability research: find out what AI your government is buying

The ImmigrationOS story didn't emerge from a press release. It came from reporters, researchers, and advocacy organizations doing the unglamorous work of tracking government contracts, filing public records requests, and reading procurement documents. AI makes that process a lot faster.

Step 1: Discover what's being purchased.

Government AI procurement is public record. Go to USAspending.gov or SAM.gov and search for AI-related contracts in your area. Try "artificial intelligence," "machine learning," "facial recognition," "predictive analytics," or specific vendors like Palantir, Clearview AI, or Axon.
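
If you'd rather script the search, USAspending.gov also exposes a free public API, no key required. Here's a minimal sketch in Python; the keyword list and date range are just examples to adapt, and the payload follows the API's documented contract-search endpoint, so double-check field names against the current docs before relying on it:

# Query USAspending.gov's public search API for AI-related federal contracts.
# pip install requests
import requests

URL = "https://api.usaspending.gov/api/v2/search/spending_by_award/"

payload = {
    "filters": {
        # Search terms from above; add vendor names like "Palantir" too.
        "keywords": ["artificial intelligence", "facial recognition",
                     "predictive analytics"],
        "award_type_codes": ["A", "B", "C", "D"],  # contract award types
        "time_period": [{"start_date": "2024-01-01",
                         "end_date": "2026-12-31"}],  # adjust as needed
    },
    "fields": ["Award ID", "Recipient Name", "Awarding Agency",
               "Award Amount", "Description"],
    "sort": "Award Amount",
    "order": "desc",
    "limit": 25,
    "page": 1,
}

resp = requests.post(URL, json=payload, timeout=30)
resp.raise_for_status()
for award in resp.json()["results"]:
    print(f'{award["Recipient Name"]} | ${award["Award Amount"]:,.0f} | '
          f'{award["Description"]}')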

You'll get back a wall of procurement data. Paste it into Claude or ChatGPT and ask: "Which of these contracts involve surveillance technology, predictive policing, or automated decision-making? Summarize each one in plain language." Hours of reading become minutes.

Step 2: File the right records request.

Once you know what your local agency bought, file a FOIA request (federal) or public records request (state/local) for the actual contract terms, implementation documents, and impact assessments. AI can help you draft the request. The key is being specific about what you're asking for: what agency, what contract, what documents. MuckRock can help you file and track it.
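
If you want to script the drafting step too, here's a minimal sketch using the Anthropic Python SDK. The agency name, records description, and model string below are placeholders; swap in the specifics from your Step 1 research:

# Draft a public records request with the Anthropic API.
# pip install anthropic  (and set ANTHROPIC_API_KEY in your environment)
import anthropic

client = anthropic.Anthropic()

agency = "U.S. Immigration and Customs Enforcement"  # placeholder
records = "the case-management contract identified in Step 1"  # placeholder

prompt = (
    f"Draft a FOIA request to {agency} under 5 U.S.C. 552 for {records}. "
    "Ask for the full contract and amendments, statements of work, privacy "
    "impact assessments, and internal guidance on how the system is used. "
    "Request a fee waiver on public-interest grounds."
)

message = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; use whatever model is current
    max_tokens=1500,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)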

Step 3: Make sense of what comes back.

This is where AI actually earns its keep. FOIA responses often arrive as hundreds of pages of PDFs, sometimes heavily redacted. Upload them to Claude (which can read PDFs directly) and ask it to summarize the findings, identify concerning provisions, and flag anything that contradicts what the agency said publicly. What used to take a team of interns and a full weekend becomes an afternoon of reading summaries and pulling quotes.
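
For orgs that want to do this at volume, the same SDK accepts PDFs directly, so you can batch the summarization instead of uploading files one at a time. A minimal sketch, assuming a downloaded response saved as foia_response.pdf (the filename and model string are placeholders):

# Summarize a FOIA response PDF via the Anthropic API's document input.
# pip install anthropic  (and set ANTHROPIC_API_KEY in your environment)
import base64
import anthropic

client = anthropic.Anthropic()

with open("foia_response.pdf", "rb") as f:  # placeholder filename
    pdf_b64 = base64.standard_b64encode(f.read()).decode()

message = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; use whatever model is current
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": [
            {"type": "document",
             "source": {"type": "base64",
                        "media_type": "application/pdf",
                        "data": pdf_b64}},
            {"type": "text",
             "text": "Summarize this document in plain language. Flag any "
                     "surveillance-related provisions and anything that "
                     "contradicts the agency's public statements."},
        ],
    }],
)
print(message.content[0].text)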

Why this matters for your org: Every progressive organization has issue areas that intersect with government AI use, whether it's policing, immigration, housing, healthcare, or benefits administration. The systems are already deployed. The contracts are public. The tools to investigate them are free.


From our friends

Change Agent

Your org deserves its own AI. Not Big Tech's.

Change Agent is a private AI platform built for nonprofits, unions, and advocacy orgs. Your data stays yours, it plugs into tools you already use (Google Drive, Slack, ActBlue), and it handles the tedious stuff so your team can focus on the mission. Starts at $35/month. Small nonprofits under $1M can apply for discounted pricing.

Learn more

Looking Ahead

Everything in this issue comes back to the same gap: what powerful institutions say about AI versus what they actually do.

Anthropic said it would never deploy a model without safety measures, then quietly rewrote that promise. OpenAI said it shared Anthropic's red lines, then signed a deal without them. The Pentagon called safety concerns "undemocratic" while ICE was running the exact surveillance system those concerns were about. The administration says it's protecting innovation while threatening to defund any state that tries to protect its residents.

Voluntary promises didn't hold up. They never do when money and power are on the line.

What does hold up: enforceable rules. The kind that 36 attorneys general and 280 state lawmakers are fighting to protect right now. The kind that companies can't quietly rewrite when things get uncomfortable.

March 11 is coming. Know your state's AI laws. Support the people defending them. And if your government is buying AI surveillance tools, find out. You know how now.

Until next time,
Jordan


Read past issues on the web · Subscribe via RSS · Website

Know someone who should be reading this?

Forward this email or share using the links below. Every subscriber makes this community stronger.

Share on X · Share on Bluesky · Send to a Friend
progressivesforai.com