OpenAI reclaims the image crown
Good morning, {{ first_name | AI enthusiasts }}. After OpenAI’s DALL-E and GPT Image 1 paved early ground in image generation, Google’s Nano Banana has topped the leaderboards for the better part of a year. That run just ended.
OpenAI’s new ChatGPT Images 2.0 is the first image model that plans, searches the web, and self-checks its outputs before generating, and the results show — with an upgrade that Sam Altman says is like “going from GPT-3 to GPT-5 all at once.”
In today’s AI rundown:
- OpenAI breaks new ground with Images 2.0
- Meta logging employee keystrokes to train AI
- Build a command center with Claude Live Artifacts
- Google pushes Deep Research Agent to the max
- 4 new AI tools, community workflows, and more
LATEST DEVELOPMENTS
Image source: OpenAI
The Rundown: OpenAI just rolled out ChatGPT Images 2.0, the company’s upgraded image generation model that had been going viral in testing over the last few weeks — calling it the “smartest image generation model ever built”.
The details:
- 2.0 thinks before generating images, allowing it to plan, search the web for info and references, and check its outputs for errors before delivery.
- The model takes the No. 1 spot on Arena AI’s text-to-image leaderboard by a wide margin over Nano Banana 2, sweeping every category of generations.
- Other features include 2K resolution, producing up to 8 images at a time, aspect ratios from 3:1 ultrawide to 1:3 tall, plus multilingual text rendering.
- Sam Altman called the release “like going from GPT-3 to GPT-5 all at once,” with the model now available in ChatGPT, Codex, and the API.
Why it matters: It’s been a while since OpenAI topped the image world, and this release brings it back in a big way. The model not only feels like the first to truly ‘solve’ long-standing image and text-rendering issues, but its thinking abilities also reshape workflows yet again, opening up brand-new creative avenues.
TOGETHER WITH ALGOLIA
🧩 A practical guide to building AI agents that work
The Rundown: The next step in AI isn’t better chat; it’s agents that can query databases, update systems, and make decisions. Building them doesn’t have to mean wiring up endless custom connectors.
Whether you’re a developer or data leader, Algolia’s guide helps you understand:
- Challenges in building AI agents
- How MCP servers connect agents with search
- Best practices and real-world cases
Image source: Images 2.0 / The Rundown
The Rundown: Meta is running a Model Capability Initiative (MCI) to record screenshots, keystrokes, and mouse activity on U.S. employees’ work laptops, with no opt-out, to capture real data for AI training, sparking backlash within the organization.
The details:
- MCI’s capture scope skews toward developers, logging activity in apps like VSCode, Metamate (Meta’s internal AI assistant), Google Chat, and Gmail.
- Business Insider published the internal memo, with CTO Andrew Bosworth reportedly responding to concerns by saying there is “no option to opt out.”
- About 8,000 Meta staffers are set to exit on May 20, with MCI starting to log their workflows a month before their end date.
- The memo framed the move as a way for all Meta employees to help the company’s “models get better simply by doing their daily work.”
Why it matters: Robotics labs have spent years recording humans doing physical tasks to teach their systems when and how to grab, walk, or stack boxes. Meta just brought that playbook to software and computer use, except the demo subjects are its own staff — and the backdrop of layoffs gives it a very dystopian feel.
AI TRAINING
🎛️ Build a command center with Claude Live Artifacts
The Rundown: In this guide, you will learn how to build a daily command center in Claude Cowork with Live Artifacts. Instead of opening Slack, email, calendar, tasks, docs, and dashboards one by one, you will get one live view all in one place.
Step-by-step:
- Open Claude Cowork and prompt: “Interview me about my connected apps, daily workflow, KPIs, and what counts as urgent. Then propose the modules for a daily command center before creating the artifact.”
- Answer the questions, then ask Claude to build a modular Live Artifact dashboard with Today, This Week, and This Month views, including KPI cards, stats, charts, and app feeds.
- Ask to add priority labels and ranking so updates are categorized (urgent, review, FYI, blocked) and sorted by impact, deadlines, and decisions needed.
- Prompt to add skills with dedicated buttons, like “Plan my day,” “Draft replies,” or “Prep meetings,” so you can take action from the dashboard itself.
Pro tip: Try additional upgrades like dark mode, animations, a settings panel for update frequency, manual override, an archive button, and click-to-open for any update.
PRESENTED BY LAMBDA
✂ Cut your AI training costs by over 25%
The Rundown: Most large-scale AI training runs use less than half the computing power they’re paying for. Lambda’s team found the root causes and built a reproducible framework that boosted efficiency by over 25%, without changing the model itself.
Lambda’s whitepaper shows you how to address:
- Memory inefficiencies silently inflating your costs
- Training configurations that aren’t making full use of your hardware
- Bottlenecks that slow down GPU communication
Image source: Google
The Rundown: Google released Deep Research and Deep Research Max, two SOTA agents that use Gemini 3.1 Pro to generate research reports from the web, uploaded files, or any Model Context Protocol server, complete with charts and infographics.
The details:
- Both agents use Gemini 3.1 Pro and run on the same research engine inside NotebookLM, replacing Google’s December preview of Deep Research.
- Google’s benchmarks show jumps for Max on retrieval and reasoning, both over the previous versions and against models like Opus 4.6 and GPT 5.4.
- Users can combine open-web search with MCP servers and file uploads, or cut off external web access to search only their private data.
- Google is already working with firms like PitchBook, S&P, and FactSet to build MCP servers that pipe paid financial data directly into the research workflow.
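The web-on/web-off routing described in the details can be sketched in a few lines of plain Python. This is an assumption-laden illustration of the behavior, not Google’s implementation: the `"web"` source name and the callable-per-source registry are invented for the example.

```python
from typing import Callable

def route_query(
    query: str,
    sources: dict[str, Callable[[str], list[str]]],
    allow_web: bool = True,
) -> list[str]:
    """Fan a query out to every allowed source and merge the results.

    When allow_web is False, the open-web source is skipped, mirroring
    the 'private data only' mode described above.
    """
    results: list[str] = []
    for name, search in sources.items():
        if name == "web" and not allow_web:
            continue  # external web access cut off
        results.extend(search(query))
    return results
```

With an MCP server and a file-upload index registered as sources alongside `"web"`, flipping `allow_web` reproduces the two modes: blended open-web research, or search restricted to private data.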
Why it matters: Research-heavy work of analysts, consultants, and lawyers has been an obvious target for AI automation. Google’s move turns that threat into a priced API call any developer can wire into a product. Expect more partnerships to follow as every vertical figures out which parts of its research workflow just became automatable.
QUICK HITS
🛠️ Trending AI Tools
- 🔒 Incogni – Remove your personal data from the web so scammers and identity thieves can’t access it. Use code RUNDOWN to get 55% off.*
- 🎆 ChatGPT Images 2.0 – OpenAI’s new next-generation image model
- 📚 Deep Research Max – DeepMind’s research agent with MCP and native charts
- 🔎 Deep Max – Exa’s new SOTA agentic search tool
*Sponsored Listing
📰 Everything else in AI today
Former OpenAI research VP Jerry Tworek launched Core Automation, a new AI lab building “an AI to build AI” with founders from OpenAI, Anthropic, and DeepMind.
Meta poached three more employees from Mira Murati’s Thinking Machines Lab, bringing the number of founding members who have departed for the tech giant to seven.
Google open-sourced its DESIGN.md feature from Stitch, a portable file that lets AI agents understand a project’s colors, accessibility, and brand rules.
Exa released Deep Max, a new agentic search tool that tops existing rivals on accuracy while running 20x faster.
Genspark launched Build, a new Claude Opus 4.7-powered agentic vibe-coding tool that generates apps and websites from text prompts.
Deezer reported that 75K AI tracks are now published on its platform daily (44% of uploads), but draw just 1-3% of streams, with 85% of them labeled as fraudulent.
COMMUNITY
🤝 Community AI workflows
Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.
Today’s workflow comes from reader Matthew S. in the U.K.:
“I used Claude to build my own exercise tracking app and exported the code to Bolt to make a web app. I have a specific set of exercises that I do each day that other trackers don’t map or give me streaks for. It lets me input each set into each of the four sections and tells me when I’ve met my target for the day.
It only lets me build my streak after I have completed all exercise targets and keeps a daily record of what I achieved. Much easier!”
How do you use AI? Tell us here.
🎓 Highlights: News, Guides & Events
- Read our last AI newsletter: DeepMind commits to a Claude catch-up
- Read our last Tech newsletter: Apple gets a new boss
- Read our last Robotics newsletter: Humanoid smokes half-marathon record
- Today’s AI tool guide: Build a daily command center with Live Artifacts
- RSVP to workshop April 30 @ 2PM EST: Codex for non-technical operators
See you soon,
Rowan, Joey, Zach, Shubham, and Jennifer—the humans behind The Rundown