01 Google Told Developers API Keys Weren't Secrets. Gemini Made Them Secrets Anyway.
For two decades, Google's official guidance was clear: API keys are not secrets. Treat them as public identifiers. Embed them in your website's JavaScript. Commit them to your repo. They're just rate-limiting tokens. Nobody can do much with one.
Truffle Security decided to test that assumption against current reality. The firm scanned November 2025 Common Crawl data and found 2,863 exposed Google API keys that could authenticate to the Gemini API's /models endpoint. These were keys originally deployed for services like Google Maps, sitting in public web pages and open repositories, never intended to be confidential. Several belonged to Google itself. One had been publicly deployed since February 2023, predating Gemini's launch.
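The kind of scan Truffle Security describes can be sketched in a few lines. This is a minimal illustration, not their pipeline: the regex below is the commonly cited heuristic for Google API keys (the AIza prefix plus 35 URL-safe characters, not an official specification), and the helper names are ours. A key that returns HTTP 200 from the /models URL authenticates to the Gemini API.

```python
import re

# Heuristic pattern for Google Cloud API keys: "AIza" followed by
# 35 URL-safe characters. Widely used in secret scanners, but not
# an official format guarantee from Google.
KEY_PATTERN = re.compile(r"AIza[0-9A-Za-z_\-]{35}")


def find_candidate_keys(text: str) -> list[str]:
    """Extract deduplicated strings that look like Google API keys
    from raw page or repository text."""
    return sorted(set(KEY_PATTERN.findall(text)))


def models_probe_url(key: str) -> str:
    """Build the Gemini API /models URL used to test a candidate key.
    Fetching it with a key that has Gemini access returns HTTP 200."""
    return f"https://generativelanguage.googleapis.com/v1beta/models?key={key}"
```

Running `find_candidate_keys` over crawled HTML and probing each hit against the /models endpoint is enough to reproduce the basic shape of the finding: no brute force, just pattern matching against public data.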
The problem is structural. Google Cloud uses a single key format, prefixed AIza, for two fundamentally different purposes: public identification and sensitive authentication. When Gemini launched, Google enabled the Generative Language API on all existing projects by default. No warning email. No opt-in prompt. Keys that had been harmless for years gained access to Gemini endpoints overnight. A key went from public identifier to secret credential, and nobody told the developers who owned them.
With a valid key, an attacker can query Gemini models, access uploaded files and cached data, and run billable inference on someone else's account. That was the threat surface when Truffle Security disclosed the findings. This week, it got wider. Google released Nano Banana 2, adding Pro-level image generation to the Gemini API and making it available through the same access tiers. Every capability Google adds to the API expands what a leaked key can do.
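To make the billing risk concrete, here is a sketch of the generateContent request that a leaked key authorizes, using the Gemini API's documented URL shape and JSON body. The request is built but deliberately not sent, and the model name is an illustrative assumption:

```python
import json
from urllib.request import Request

# Documented Gemini REST endpoint shape; the key rides in the query string.
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/"
    "models/{model}:generateContent?key={key}"
)


def build_inference_request(
    key: str, prompt: str, model: str = "gemini-2.0-flash"
) -> Request:
    """Construct (without sending) a billable Gemini inference call.
    Whoever holds the key triggers the charge; the project that owns
    the key receives the bill."""
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    return Request(
        ENDPOINT.format(model=model, key=key),
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Nothing about the call distinguishes the key's owner from anyone who scraped the key off a public web page, which is precisely the problem.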
The disclosure drew 1,194 points and 286 comments on Hacker News, with developers reporting that they, too, had old keys scattered across public repositories with no idea those keys now authorized AI workloads. Google told Truffle Security on December 12, 2025, that it had started an internal pipeline to discover leaked keys and restrict them from Gemini access. New keys created through AI Studio now default to Gemini-only scope. But the company has not addressed the millions of keys already in the wild that predate this fix.
Google's remediation targets keys it can find. The keys it cannot find still work.
02 Burger King's AI Assistant Also Grades Its Workers
Burger King is putting an AI named Patty inside employee headsets. The system answers questions about meal prep, walks workers through order assembly, and coaches them in real time. It also scores their customer interactions for "friendliness," monitoring whether they say "please" and "thank you."
Same device, same AI. Two very different jobs.
Thibault Roux, Burger King's chief digital officer, describes Patty as part of a broader BK Assistant platform designed to help employees work faster and more consistently. The system listens through the headsets crew members already wear and responds to voice queries. Need to know hold times for a Whopper patty? Ask Patty. Forget which sauce goes on a promotional item? Patty knows.
But Patty is also listening for something else. The AI evaluates how employees speak to customers, flagging interactions and rating them on courtesy metrics. A worker asking Patty for help and Patty assessing that worker's tone happen through the same channel, in the same shift. There is no opt-out, no separate mode. The assistant is the monitor.
This sits uncomfortably against the industry's own framing. MIT Technology Review published a piece this week on "Industry 5.0," the next phase of workplace automation. Its selling point: moving beyond raw efficiency toward "human-centered" integration. AI augments workers rather than replacing them. Collaboration, not control.
Burger King would likely say Patty fits that vision. The system does help employees do their jobs. It reduces training time and catches errors before food reaches the customer. Roux has framed the platform as a tool for empowerment.
The employees wearing the headsets might describe it differently. When the system answering your questions also grades your politeness, the relationship stops being collaborative. It becomes supervisory. A coworker who helps you and a manager who evaluates you serve distinct functions for a reason. Patty collapses that distinction into a single voice in your ear.
Industry 5.0 literature talks about "orchestrating" AI at scale. Burger King has done exactly that. Whether workers experience it as augmentation or surveillance depends entirely on which side of the headset they're on.
03 OpenAI Answers Its Moat Problem with Figma and Federal Paperwork
A question rattled through the AI industry last week: if frontier models keep converging in capability, what exactly protects OpenAI's position? Ben Evans posed it directly in a widely circulated essay, arguing that model performance alone no longer constitutes a durable advantage. Two partnership announcements from OpenAI the same week suggest the company has been working on its answer.
The first deal integrates OpenAI's Codex agent into Figma, connecting code generation directly to the design canvas. Teams using both tools can now move between implementation and visual design without leaving either environment. The integration targets the handoff between developers and designers, a friction point that has spawned its own category of startups. By embedding inside Figma's workflow rather than competing with it, OpenAI positions its coding agent as infrastructure that product teams rely on daily.
The second partnership puts OpenAI inside federal bureaucracy. Working with Pacific Northwest National Laboratory, OpenAI built DraftNEPABench, a benchmark for evaluating how AI agents handle National Environmental Policy Act reviews. PNNL says the system could reduce NEPA drafting time by up to 15%. The target is infrastructure permitting: the years-long environmental review process that delays highways, power plants, and transmission lines. Government procurement cycles are slow, but contracts signed tend to stick.
Neither announcement involves a new model or a capability breakthrough. Both thread existing products into a partner's core process so deeply that switching becomes expensive. Figma's designers won't evaluate a competing code agent if Codex is already wired into their canvas. Once a permitting tool clears compliance review, agencies have little reason to restart the process.
This is a platform strategy, not a model strategy. Evans's essay noted that OpenAI lacks the distribution advantages of Google or Microsoft, which can bundle AI into products that already have billions of users. Workflow integration offers a different path: instead of reaching users through an operating system or a browser, reach them through the specialized tools they already depend on. The risk is that each partnership makes OpenAI more dependent on the partner's ecosystem. Figma could build its own agent. A future administration could cancel the PNNL program.

Google Launches Nano Banana 2, Opens Pro-Level Image Generation to Free Gemini Users
Google released Nano Banana 2 (Gemini 3.1 Flash Image), bringing image generation and editing capabilities previously restricted to paid tiers to all Gemini users. The model runs at Flash-tier speed and adds improved world knowledge, subject consistency, and production-grade output. It is available today across the Gemini app and Google's developer APIs. theverge.com

Andrej Karpathy: Coding Agents "Basically Work" Since December
Karpathy stated that AI-assisted programming changed abruptly in December 2025, not gradually. He attributes the shift to higher model quality, long-term coherence, and tenacity on large tasks — calling the effect "extremely disruptive" to default programming workflows. simonwillison.net

Samsung Galaxy S26 Debuts with New Android AI Features at Unpacked 2026
Google showcased its latest Android AI capabilities integrated into Samsung's Galaxy S26 devices at Samsung Unpacked 2026. blog.google

Google Translate Adds AI-Powered Context, Alternatives, and Ask Buttons
Google Translate now surfaces alternative translations, an "understand" button for deeper context, and an "ask" feature for clarifying ambiguous phrases. The updates use AI to expose nuances that single translations miss. blog.google

SkyReels V4 Generates Video and Synchronized Audio in a Single Pass
SkyReels released V4, an open multimodal video foundation model that produces video and temporally aligned audio together. The architecture uses a dual-stream Multimodal Diffusion Transformer — one branch for video, one for audio. It accepts text, images, video clips, masks, and audio references as input. huggingface.co

DREAM Benchmark Exposes "Mirage of Synthesis" in Deep Research Agents
Researchers introduced DREAM, a benchmark for evaluating AI-generated research reports. The paper identifies a failure mode where strong surface fluency and citation alignment mask factual and reasoning errors. A four-vertical taxonomy maps the gap between apparent quality and actual accuracy. huggingface.co

Simon Willison Publishes Agentic Engineering Patterns Guide
Willison argues that cataloging your own technical knowledge — what's possible, roughly how — makes you a more effective director of coding agents. The guide frames "hoarding things you know how to do" as the key skill for agent-assisted development. simonwillison.net

PyVision-RL Fixes "Interaction Collapse" in RL-Trained Vision Agents
Researchers released PyVision-RL, a reinforcement learning framework for open-weight multimodal models. It addresses interaction collapse, where RL-trained agents learn to minimize tool usage and multi-turn reasoning. The fix combines oversampling-filtering-ranking rollouts with accumulative tool rewards. huggingface.co

ARLArena Proposes Stable Training Recipe for Agentic Reinforcement Learning
Researchers introduced ARLArena, a framework addressing the training collapse that limits agentic RL at scale. The recipe targets longer interaction horizons and larger environments. huggingface.co

Google and Massachusetts Launch Free Statewide AI Training Program
Google partnered with the Massachusetts AI Hub to give all state residents no-cost access to Google's AI training curriculum. blog.google