01 OpenAI Struck a Pentagon Deal in Hours. The Fine Print Shows What It Gave Up.
Anthropic spent months negotiating with the Pentagon over two conditions: its AI would not power mass domestic surveillance or direct lethal autonomous weapons. The Department of Defense wanted something broader: the right to use Anthropic's models for "any lawful use." On February 27, after Anthropic let a 5:01 PM deadline pass without agreeing, Defense Secretary Pete Hegseth designated the company a supply-chain risk to national security. President Trump ordered federal agencies to stop using its technology.
Hours later, Sam Altman posted on X that OpenAI had its own deal.
Altman said OpenAI shared Anthropic's red lines. The published agreement prohibits "domestic mass surveillance" and bars its models from directing autonomous weapons. Same two conditions, on the surface. Different story in the contract language.
OpenAI's deal does not explicitly prohibit the Pentagon from collecting Americans' publicly available information. The government already purchases aggregated commercial data without a warrant: cell phone location records, fitness app logs, browser histories. The contract restricts "unconstrained" collection of private data but draws no line around public data. Anthropic argued that AI applied to publicly available information at scale constitutes mass surveillance. That stance was a principal reason its Pentagon talks collapsed.
OpenAI anchored its protections to existing legal frameworks, citing compliance with Executive Order 12333 and current surveillance statutes. The published excerpt "does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use," according to MIT Technology Review. Anthropic's position was that current law has not caught up with what AI makes possible. OpenAI's contract accepts current law as the boundary.
Altman acknowledged the deal was "definitely rushed" and that "the optics don't look good." He framed the decision as strategic de-escalation. "If we are right and this does lead to a de-escalation between the DoW and the industry, we will look like geniuses," he wrote. "If not, we will continue to be characterized as rushed and uncareful." He called Anthropic's blacklisting an "extremely scary precedent."
The deal puts OpenAI on the Pentagon's classified networks. TechCrunch described the shift as a transition from "a wildly successful consumer startup into a piece of national security infrastructure." No established framework governs how an AI company should operate in that role. The contract Altman called rushed now defines the terms between the U.S. military and the most widely deployed AI system in the world.
02 The Supreme Court and London's Streets Both Rejected AI Last Weekend
A one-line order in Washington. Hundreds chanting "Pull the plug!" outside OpenAI's London office. Same weekend, same word: no. The reasons had nothing in common.
On Monday, the U.S. Supreme Court declined to hear Stephen Thaler's appeal over whether AI-generated art can receive copyright protection. Thaler, a Missouri computer scientist, has spent seven years arguing that his AI system DABUS should be recognized as an author under U.S. law. The Copyright Office rejected his application in 2019. Federal courts upheld that rejection twice. Without comment, the Supreme Court let the ruling stand. The legal position is now firm: works produced autonomously by AI, with no human author, cannot be copyrighted.
Three days earlier, a few hundred protesters marched through London's King's Cross, past the offices of OpenAI, Meta, and Google DeepMind, chanting "Stop the slop!" Their grievance wasn't about who owns AI output. It was about whether AI should produce that output at all. Artists, writers, and workers see generative models as a direct threat: trained on copyrighted work without consent, then deployed to replace the people who created the training data.
The court didn't call AI dangerous. It said the Copyright Act requires a human author, and Thaler's submission didn't have one. A narrow, structural ruling. The protesters weren't asking for clearer IP rules. They wanted the technology stopped.
Thaler's seven-year legal campaign tested IP architecture, not public sentiment. He filed his first patent application listing DABUS as inventor in 2018. Courts in the U.S., UK, and Australia all ruled against him. The Supreme Court's refusal closes his last major U.S. legal venue but leaves open questions about AI-assisted works where a human makes meaningful creative choices.
The marchers in King's Cross aren't waiting for those distinctions. For them, the line requires no case law: human creativity shouldn't compete with machines trained on it without permission.
03 Nvidia Bets $4 Billion on Photonics as Apple Turns to Google for AI Servers
Nvidia committed $4 billion on Monday to two photonics companies: $2 billion each into Lumentum and Coherent. Both firms build optical transceivers, circuit switches, and lasers that move data at high speed across data centers. The GPU maker, whose chips dominate AI training, is betting that the bottleneck is shifting from processors to the connections between them.
Days earlier, The Information reported that Apple asked Google to set up servers for a Gemini-powered upgrade to Siri, configured to meet Apple's privacy requirements. Apple announced in January that Google's Gemini models would help power the new Siri. The latest report indicates Apple needs Google's physical infrastructure too, not just its models.
Two separate deals, different companies, different technologies. They point to the same structural shift. Model quality is converging across the industry. The scarce resource is now the physical layer: optical interconnects fast enough to link thousands of GPUs, and server fleets large enough to run inference at consumer scale.
Nvidia's move is telling. The company sells the chips every AI lab wants. It could have let customers handle data center networking. Instead, it spent $4 billion to secure the optical supply chain, a sign that chip-to-chip bandwidth is becoming the binding constraint on cluster performance. Faster GPUs deliver nothing if data can't move between them.
Apple's situation reveals a different facet of the same problem. The company holds roughly $160 billion in cash. It designs its own silicon and runs one of the world's largest cloud services in iCloud. Yet for AI inference at the scale Siri requires, it turned to a direct competitor. Building AI-grade server infrastructure from scratch takes years, not quarters.
Open-source releases and shared training techniques have commoditized the model layer. Physical infrastructure has not followed. Nvidia is locking in the optical supply chain, while Apple rents compute from a rival rather than building its own. Two years ago, neither move was on the table.

Anthropic's Claude Hit by Widespread Service Outage Thousands of users reported problems accessing Claude on Monday morning. Anthropic acknowledged the disruptions but has not disclosed a root cause. techcrunch.com
14.ai Sells AI Agents That Replace Startup Customer Support Teams Married co-founders built 14.ai to automate full customer support workflows at startups. The company also launched a consumer brand to measure how much of the support workload AI can realistically handle. techcrunch.com
Lenovo Shows AI Desktop Companion Concepts at MWC Lenovo revealed two standalone desk devices at MWC: an always-on "AI Workmate" and a robot arm with expressive eyes. Both target office workers as productivity assistants. Neither has a ship date. theverge.com
CUDA Agent Applies Reinforcement Learning to GPU Kernel Optimization A new paper introduces CUDA Agent, a system that uses large-scale agentic RL to generate high-performance CUDA kernels. Current LLM-based approaches to CUDA code generation still underperform compiler tools like torch.compile. huggingface.co
Memento Proposes Embedding AI Coding Sessions Into Git Commits An open-source project called Memento captures the full AI interaction transcript and attaches it to the corresponding commit. The goal: let future developers audit how and why AI-generated code was written. github.com
CiteAudit Benchmark Targets Hallucinated Scientific References Researchers released CiteAudit, a benchmark for verifying whether citations in LLM-generated text point to real publications. Fabricated references have already appeared in submissions and accepted papers at major ML conferences. huggingface.co
New Training Method Extends Video Generation From Seconds to Minutes A paper proposes decoupling local visual fidelity from long-term coherence using a Decoupled Diffusion Transformer. Separate training heads handle short-clip quality and long-sequence consistency, sidestepping the scarcity of high-quality long-form video data. huggingface.co
dLLM Provides Unified Open-Source Framework for Diffusion Language Models Researchers released dLLM, a standardized framework for building diffusion-based language models. The project consolidates components scattered across ad-hoc research codebases into one reproducible library. huggingface.co
LK Losses Directly Optimize Acceptance Rates in Speculative Decoding A new training objective called LK Losses optimizes the token acceptance rate in speculative decoding instead of using KL divergence as a proxy. Standard KL training leaves performance on the table when draft models have limited capacity. huggingface.co