Welcome to the Friday edition of our newsletter. We spend Fridays going deeper into tools and trends related to generative AI (and Tuesdays sharing news updates). This week, Professor Snider marvels at ChatGPT’s new image model.

ChatGPT’s new image model arrives

For the last five months, my default image generator has been Google Gemini's Nano Banana (we wrote about it here). Before that, it was Ideogram and Midjourney. Before that, the original ChatGPT image tool. It's been a constant rotation – whichever one was best at that particular time got the work.

It's time for another rotation.

OpenAI launched ChatGPT Images 2.0 on April 21, and after a week of testing it, I'm back to making my images in ChatGPT. It topped the LM Arena image generation leaderboard by 242 points – the largest lead anyone has ever recorded on that benchmark. The major change is that this model thinks before it draws. It can make not only impressive photos but also flawless infographics that will fool people into thinking they weren't made with AI.

Here are three things to know about it – and a couple of concerns at the end.

10x the context. Half the time.

Speak your prompts into ChatGPT or Claude and get detailed, paste-ready input that actually gives you useful output. Wispr Flow captures what you'd cut when typing. Free on Mac, Windows, and iPhone.

Free Gen AI Update May 15

Five months in AI is a lifetime. New models. New features. New reasons to rethink how you work. Join us May 15 at noon Central for Hot GPT Summer 3, a fast, virtual catch-up on everything that's changed in generative AI since the new year, and what it actually means for your day-to-day. Sign up now. It’s free!

This event is offered free thanks to a sponsorship from technology and management consultancy, Lean TECHniques.

1. It actually reasons about your prompt

Most image models treat your prompt like a bunch of keywords. They scan the words, pattern-match to training data, and crank out something close. ChatGPT Images 2.0 does something different. Before it generates anything, it plans the composition, considers spatial relationships, thinks about where text should go, and even searches the web if it needs current information.

You can watch this happen in the ChatGPT interface – the reasoning chain shows up before the image does. It's the same shift text models went through about a year ago when "thinking" modes started winning at complex tasks. Image generation just caught up.

You can see the “thinking” done by ChatGPT to the right before it created the image.

2. Text in images is no longer an issue

If you've used AI image tools, you know the pain. You ask for a poster with the words "Spring Sale 2026" and you get "Spirng Sael 2026" or some made-up font that looks vaguely like letters. Even models that have largely solved this still get the small text in the background wrong.

ChatGPT Images 2.0 reportedly hits about 99% text accuracy – including in Chinese, Japanese, Korean, Hindi, and Bengali.

For anyone making social graphics, slide decks, or marketing assets, this is the change that matters most. You can finally generate a working ad creative without needing to fix the typography. And you can create up to eight images with consistent design features from a single prompt.

Prompt: Create a 3-page color mixed media food zine for college-student-friendly cooking. Include necessary step-by-step visuals, diagrams, and explainers. Include any health or environmental context of the dish.

3. The outputs don't look AI-generated

This was a pleasant surprise after months of seeing images with that same ChatGPT “look” to them. When you create something with Google's Nano Banana 2 or the previous ChatGPT model, you can usually tell. Same default fonts, same icon style, same slightly cartoonish vibe. ChatGPT Images 2.0 doesn't have those tells. The outputs look like a designer made them. Clean, restrained, not overdesigned.

Prompt: Create a slide about Drake University

Thinking mode is gated – and it's slow

Two important caveats.

1. The basic model is free for all ChatGPT users, but the "thinking" mode that does the reasoning, web search, and multiple-image generation is locked behind Plus, Pro, and Business subscriptions. So when you read about all the cool capabilities, know that the best stuff requires a paid plan.

2. It's slow. Each generation takes 40 seconds to over a minute, sometimes longer with thinking mode on. Google's Nano Banana 2 finishes in 20-25 seconds. For one-off marketing assets, the wait is fine. For creating a lot of images, it's painful. Plan accordingly.

My takeaway

ChatGPT Images 2.0 takes us one step closer to producing client-ready work without a designer touching it afterward. That's a meaningful line to cross. If you're currently relying on Canva templates for your design, it's worth a serious look this week.

I showed this to my visual communication students with one clear message – the tools are going to continue to change (this class used to use Photoshop and now uses Canva), but their fundamental understanding of design will still matter.

And we may have just found a new tool to use in class.

Online classes: Learn Gen AI on your own time

Generative AI Fundamentals covers everything from how LLMs work to prompting, image tools, Copilot, and even vibe coding.

Google Gemini Essentials & Advanced Tools includes 120 minutes of videos to help you get the most out of Google Gemini.

Make the Switch to Claude helps you understand and use advanced Claude features like Projects, Skills, Cowork and more.

Keep Reading