Block Fires 40% of Staff for AI That Alexa+ Shows Isn't Ready

01

OpenAI's Robotics Chief Resigns Over Pentagon Contract

Caitlin Kalinowski had spent months building OpenAI's robotics division from scratch. On March 7, she walked away from it. Her reason: the company's new agreement with the Department of Defense.

Kalinowski wasn't a policy staffer or a mid-level engineer registering a protest vote. She led OpenAI's hardware strategy, the team tasked with giving the company's AI a physical form. Her departure removes a senior technical leader at the moment OpenAI is pushing beyond software into robotics and devices.

The resignation statement was blunt. Kalinowski cited the Pentagon deal as conflicting with her values. There was no diplomatic language about "pursuing other opportunities," no staged transition. The contract crossed a line, and she chose the door.

The timing makes her departure harder to contain as a personnel story. On TechCrunch's Equity podcast last week, hosts discussed whether the controversy around defense contracts would scare AI startups away from government work. The question wasn't hypothetical. Early-stage founders are watching how OpenAI's workforce reacts to the deal. If senior leaders leave over military partnerships, recruiting pitches that promise "we're building for everyone" ring hollow fast. Demand for experienced robotics and ML engineers far outstrips supply, and talent knows it.

The same week, a coalition of AI researchers and ethicists finalized the Pro-Human Declaration, a document proposing industry guardrails. Its authors say the timing was coincidental; the declaration had been in progress for months. But the collision of the two events gave it an audience it might not have otherwise found. One provision calls for explicit opt-out rights for employees asked to work on military applications. No company has adopted it yet.

What makes Kalinowski's exit instructive is its specificity. She did not object to OpenAI's commercial direction, its pricing, or its safety practices. The line she drew was singular: military use. That precision makes the resignation harder to dismiss as general disgruntlement and harder for OpenAI to address. Policy changes can fix a complaint about working conditions. A moral objection to a revenue stream cannot be negotiated away.

OpenAI's robotics roadmap now has a leadership vacuum. The company has not announced a successor.

- Senior departures over defense deals risk setting a talent-market precedent
- The Pro-Human Declaration's opt-out clause has no adoption or enforcement yet
- OpenAI's robotics program loses its founding leader with no succession plan

02

Block Fires 40% of Staff for an AI Overhaul. Alexa+ Shows the Tech Isn't Ready.

Forty percent of Block's workforce is gone. CEO Jack Dorsey told WIRED he eliminated the roles to rebuild the company "as an intelligence," a fintech operation run largely by AI agents rather than human teams. He framed the cuts not as cost reduction but as architectural necessity: the old organization couldn't become what he wants Block to be.

It's the most aggressive AI-transformation bet any major tech CEO has made public. Dorsey's logic is straightforward. Legacy teams resist the changes AI demands. Removing them clears the path. The reasoning has surface-level appeal. Organizational inertia is real, and incremental AI adoption inside large companies has produced mostly demos and press releases.

But a month-long test of Amazon's Alexa+ suggests the problem isn't org charts. WIRED's reviewer installed an Echo Show 15 in their kitchen and spent four weeks with Amazon's upgraded AI assistant. The verdict: it still can't reliably handle basic tasks. Amazon has thousands of engineers, years of development, and functionally unlimited compute behind Alexa. The bottleneck was never headcount or willingness to change. It was the technology itself.

Grammarly offers a different version of the same gap. The company recently launched an "expert review" feature that claims to improve writing with guidance from great writers and thinkers. TechCrunch reported that no actual experts are involved. The feature uses AI models trained on public text, dressed up in language that implies human authority. The marketing outran the capability by a wide margin.

Dorsey is not making Grammarly's mistake of overselling a feature. He's restructuring an entire company around a conviction that AI can replace coordinated human work at scale. That conviction may prove correct eventually. The trouble is timing. Amazon and Grammarly represent what happens when companies ship AI products today: they underwhelm, frustrate users, or misrepresent what's under the hood. Dorsey has wagered that Block can skip that phase by starting from scratch. He has staked 40% of his employees' livelihoods on it.

- Other CEOs now have a template to frame layoffs as AI transformation
- No accountability mechanism exists if the AI rebuild underdelivers
- Investors may reward the narrative before any product ships

03

Courts and Consumers Start Pricing AI Transparency

A California judge rejected Elon Musk's bid to block the state's AI data disclosure law last week. Musk's core argument: the public doesn't care where AI training data comes from, so forcing disclosure serves no legitimate interest. The court disagreed, ruling that public interest in training data provenance is real and legally cognizable.

xAI had sought a preliminary injunction, arguing the disclosure requirements would force it to expose trade secrets. Musk's legal team claimed compliance would cause "irreparable harm" to the company's competitive position, effectively framing opacity as a business right. The judge found that insufficient to override the state's regulatory authority. Disclosure requirements remain intact while the broader case proceeds.

Separately, Anthropic's Claude app is now generating more new daily installs than ChatGPT, according to TechCrunch. Daily active users continue to grow. The timing is suggestive: growth accelerated after Anthropic publicly refused a Pentagon contract earlier this year. But the full picture is more complicated. Product updates, including memory import and model improvements, landed during the same window. The Pentagon episode was one catalyst among several. Claude still trails ChatGPT in total users, but the install-rate gap is narrowing.

These two developments share a structural connection. A judge told an AI company that training data transparency is not optional. Consumers moved toward a company that staked out a public position on values. Neither data point alone proves transparency is a competitive advantage. Together, they suggest that legal and market systems are both starting to price it in.

The AI industry has long treated transparency as a reputational nicety, not a factor in user acquisition or legal compliance. Musk's legal argument rested on the assumption that users don't factor data provenance into their decisions. The court said that assumption is wrong as a matter of law. Download trends suggest it may be wrong as a matter of commerce, too.

- Training data disclosure requirements survived their first major legal test in California
- Consumer behavior rewarding values-based positioning could shift AI go-to-market strategy
- The "users don't care" defense lost its first major judicial test

04

Anthropic Plans Court Challenge to DOD Supply-Chain Label

CEO Dario Amodei said Anthropic will contest the Defense Department's supply-chain risk designation in court. Microsoft, Google, and Amazon confirmed Claude remains available to all non-defense customers through their platforms. techcrunch.com

05

Google Awards Sundar Pichai $692M Pay Package

Most of the compensation is tied to performance targets. New stock incentives are linked to milestones at Waymo and Wing, Alphabet's drone delivery venture. techcrunch.com

06

OpenAI Delays ChatGPT Adult Mode Again

The feature, which gives verified adult users access to erotica and other explicit content, had already slipped from its December target. OpenAI has not set a new launch date. techcrunch.com

07

WhatsApp Opens to Rival AI Chatbots in Brazil

Meta will let competing AI companies offer chatbots to Brazilian WhatsApp users for a fee. The decision follows a similar move for European users announced one day earlier. techcrunch.com

08

ICE Detention Facility Owner Targets AI Data Center Worker Housing

AI data center developers are adopting "man camps" — modular housing originally used for remote oil field workers. One operator that also runs ICE detention facilities is marketing its units for AI construction sites. techcrunch.com

09

Google Releases CLI Tool Connecting OpenClaw to Workspace Data

The command-line tool lets developers plug OpenClaw into multiple Workspace APIs. Google has not designated it as an official product yet. arstechnica.com

10

Hayden AI Sues Ex-CEO Over 41GB Data Theft and Résumé Fraud

The AI startup alleges its former chief executive took 41GB of company email and lied about his credentials. The suit also claims the co-founder improperly sold over $1.2M in stock. arstechnica.com

11

City Detect Raises $13M Series A for AI Urban Monitoring

The startup helps local governments detect urban decay using AI. It operates in at least 17 U.S. cities, including Dallas and Miami. techcrunch.com

12

Deveillance Launches Spectre I Jammer for Always-On AI Wearables

The handheld device claims to block always-listening AI wearables from recording nearby conversations. Its effectiveness faces basic physics constraints, according to WIRED's analysis. wired.com