
I've been thinking about a question that progressives mostly avoid: what happens if AI actually works? Not "works" as in generates a passable cover letter, but works as in changes how much human labor the economy actually needs.
We spend a lot of time in this newsletter (rightly) talking about guardrails, accountability, and who gets hurt when AI goes wrong. But there's a version of the AI future that's genuinely good, and we're not talking about it nearly enough.
Antonio Aestero wrote a piece last week called "Freedom and Idleness in a Post-Singularity World" that I haven't been able to shake. His argument: when AI makes most human labor unnecessary, the question isn't whether we'll have enough to do. It's whether we'll remember how to live without work defining us.
He pulls from Aristotle, who saw leisure, not work, as the whole point. The Greek word for leisure, schole, is literally where we get "school." Learning, creating, thinking. That was the real business of life. Work was just what you did so you could get to the good part.
In Bullshit Jobs, David Graeber cited polling in which 37% of workers said their own jobs made no meaningful contribution. Keynes predicted 15-hour work weeks by 2030. He was right about the wealth (we could have done it) but wrong about what we'd do with it. We chose more stuff and more busywork instead of more freedom.
And honestly, progressives should be the ones painting this picture. A world with more time for family, for community, for creative work, for rest. That's our vision. We just need to claim it instead of only playing defense.
Let's get into it.
What happened: Grammarly's "expert review" feature presents AI-generated writing feedback as though it comes from named academics, including professors who are deceased and never agreed to participate. The feature labels its output as "inspired by" these experts, using their real names, photos, and institutional affiliations to sell credibility it didn't earn.
Why this matters: There's a pattern forming. AI companies scrape people's work to train models, then use their identities to market the output. This isn't a training data debate. It's straightforward impersonation. If your organization uses Grammarly, it's worth knowing what they're doing with other people's reputations to make their product look smarter.
What you can do: If your org uses Grammarly, bring this up with whoever manages the subscription. The "expert review" feature can be avoided. More broadly, when evaluating AI writing tools, ask where the "expertise" actually comes from. If a product claims expert backing, check whether those experts know about it.

What happened: X (formerly Twitter) added an option for users to block Grok, the platform's AI, from modifying photos they upload. The toggle is off by default, so your photos are fair game for AI manipulation unless you go find the setting and turn it on.
Why this matters: The framing here is backwards. X built a tool that lets anyone AI-edit anyone else's photos, then offered an opt-out buried in settings. For organizers, journalists, and activists who rely on photos as documentation, this is a real problem. Authentic visual evidence matters, and the default shouldn't be "anyone can remix your images with AI."
What you can do: If you or your org posts on X, turn on the photo protection toggle (Settings > Privacy > Grok). But that only protects your own uploads. The structural fight is already underway: a coalition of 28 organizations led by UltraViolet sent letters to Apple and Google demanding they pull X and Grok from their app stores, and Senators Wyden, Luján, and Markey made the same ask. California AG Rob Bonta opened an investigation into Grok's sexually explicit image generation. You can add your organization's name to UltraViolet's campaign, contact Apple and Google through their app store reporting tools to flag X's policy, or support state-level bills requiring opt-in consent for AI image manipulation; fourteen states are currently considering such legislation.

What happened: A Dallas Federal Reserve study from late February found that 6.1 million U.S. workers in administrative and clerical roles lack the "adaptive capacity" to transition to new work as AI automates their current jobs. Of those workers, 86% are women. A related Brookings analysis confirmed the pattern: the roles most exposed to AI displacement are concentrated among women, workers without college degrees, and workers of color.
Why this matters: When we talk about a post-work future in the abstract, it sounds philosophical. When we look at who loses their jobs first, it becomes a labor rights and gender equity issue. The workers with the least bargaining power are the most exposed, and the least likely to have union protections or savings to fall back on.
What you can do: The Dallas Fed report and Brookings analysis both have accessible summaries worth sharing with your networks. If your organization employs administrative staff, look at how you're adopting AI internally. Are you involving those workers in the process, or just handing them a tool and hoping for the best? The AFL-CIO published AI principles for worker protection that include advance notice requirements, retraining commitments, and collective bargaining over AI deployment. California's No Robot Bosses Act (SB 947) would bar employers from using AI as the sole basis for firing or disciplining workers, and similar bills are moving in at least six other states. If your state has one, tell your legislators you support it.
Practical ways progressives can use AI this week
Every organization has one. The op-ed your ED has been meaning to write for six months. The theory of change document that's been "in progress" since last fiscal year. The strategic plan refresh that keeps getting bumped for whatever's on fire this week.
These aren't small tasks. They're the kind of thinking that requires a clear head and a long afternoon, which is exactly why they never happen. There's always a more urgent email, a donor call, a grant deadline. The important-but-not-urgent stuff rots on the to-do list.
AI won't write your strategy for you. But it's a decent thinking partner for getting past the blank page.
The op-ed that's been sitting in drafts. You know your argument. You've made it in meetings, on calls, in Slack threads. You just haven't sat down to write it. Try this: open Claude or ChatGPT and talk through your argument out loud. Paste in the messy notes, the half-finished draft, the angry email you wrote at 11pm that captured what you actually think. Ask the AI to pull out your main argument and organize it into an op-ed structure. You'll still need to rewrite it in your voice, but now you're editing something instead of staring at nothing.
The strategic conversation your team keeps avoiding. Some questions are hard to discuss in a meeting because nobody wants to be the one to raise them. "Should we still be doing X?" or "Is this program actually working?" Pose the question to an AI with your org's context (mission statement, recent program data, the landscape you're operating in) and ask it to make the strongest case for both sides. Bring that to your team as a discussion starter. It's easier to react to an argument than to generate one from scratch, especially when the topic is uncomfortable.
The grant narrative you know by heart but can't get on paper. Not the boilerplate reporting (we covered that before). The bigger ask: the new program concept, the multi-year vision, the case for general operating support. Dump everything you know into a conversation (the problem, your approach, what makes you different, what you'd do with the money) and ask the AI to help you find the through-line. The best grant writing tells a story, and sometimes you need someone (or something) to help you see the story you're already telling.
From our friends
Your org deserves its own AI. Not Big Tech's.
Change Agent is a private AI platform built for nonprofits, unions, and advocacy orgs. Your data stays yours, it plugs into tools you already use (Google Drive, Slack, ActBlue), and it handles the tedious stuff so your team can focus on the mission. Starts at $35/month. Small nonprofits under $1M can apply for discounted pricing.
Learn more

This issue started with a big question (what happens when AI changes how much work humans need to do?) and ended with a small, practical one: what would you do with an extra hour this week?
They're the same question, really. We should keep fighting to prevent harm. But the progressive case for AI also has to include demanding that the benefits actually reach people. More time with family. More space for the work that actually matters to you.
The post-work future isn't guaranteed to be good. It could easily become a world where a few people own the AI and everyone else scrambles. That's exactly why progressives need to be in this conversation now, shaping it instead of just reacting to it.
Aestero writes about Aristotle's vision of leisure as the point of civilization. Work was supposed to get us there. Maybe AI finally can, if we build the politics to match.
Until next time,
Jordan

