A Picture’s Worth a Thousand Lawsuits: GPT-4o Goes Visual

Right, imagine this: ChatGPT can now make pictures. Not just some basic clip-art pish, but proper, detailed, human-like images—and edit them too. You’ve got Sam Altman, the golden boy of Silicon Valley, standing on a livestream, all chirpy, telling the world that ChatGPT’s got a major glow-up. The first proper image upgrade in over a year, and aye, it’s a biggie.

GPT-4o—the beefy model already running under the hood of ChatGPT—is now flexin’ some serious visual muscle. It can whip up new images from nothing, fiddle about with photos, even mess with stuff like backgrounds and faces. Welcome to the age of AI Photoshop, ya filthy animals.

Image Generation for the Rich, Coming Soon to the Rest of Us Peasants

If you’re shelling out $200 a month for OpenAI’s turbo-charged Pro plan, congrats—you’re in. GPT-4o’s new image-gen powers are live right now for you inside ChatGPT and Sora, their trippy AI video generator. The rest of us? We’re waiting like mugs.

OpenAI says the feature’s “rolling out soon” to the Plus and free crowd. Same goes for devs using the API. Translation: sit tight, plebs. The future’s arriving—just not for you yet.

DALL·E Who? GPT-4o Thinks Harder, Draws Better

GPT-4o doesn’t just replace DALL·E 3—it buries it. It “thinks longer” (whatever that means) and spits out images that are supposedly sharper, cleaner, and more accurate. We’re talking proper inpainting, where you can tweak parts of an image, add new objects, even adjust human faces and scenes like some digital god.

But let’s be honest—if you’ve used DALL·E 3, you know it’s hit or miss. Sometimes you ask for a cat playing guitar and get a creature from your nightmares holding a banjo. GPT-4o promises fewer hallucinations and more “Whoa, that actually looks right.”

The Training Data Conundrum: Whose Art Is It Anyway?

Here’s where things get murky. OpenAI told The Wall Street Journal they trained GPT-4o on a mix of public data and “proprietary” stuff, thanks to their pals at Shutterstock and others. No surprise there—every AI company’s hoarding training data like it’s gold dust. Because it is.

They won’t say much more, though. Why? ’Cause that’s lawsuit bait, mate. Artists are already kicking off about AI ripping off their style. OpenAI offers a form to opt your work out of their training data—but by then, hasn’t the horse already bolted?

Brad Lightcap, OpenAI’s COO, did his best PR spin: “We’re respectful of artists’ rights.” Sure, and I’m the Queen of Scotland. They also say they won’t directly mimic living artists’ work. But what’s “directly” mean? Sounds like one of those rules with more holes than a Glasgow pub on Sunday morning.

Meanwhile, Google’s Gemini Turned Into a Meme Factory

And just so OpenAI doesn’t get too cocky—remember Gemini 2.0 Flash? Google’s big image-gen reveal? Aye, it went viral alright, but for all the wrong reasons. People were using it to strip watermarks and make knock-off versions of copyrighted characters. Basically, digital piracy with a smile.

So yeah—OpenAI’s trying not to end up on that same train to PR disasterville.

But as these models get smarter, the guardrails get blurrier. And if companies like OpenAI don’t tighten up their data ethics game, there’s gonna be hell to pay. Lawsuits, protests, and a whole generation of artists wondering if the machines just stole their jobs and their soul.

Final Word: Beautiful Nightmares Are Just Beginning

So aye, it’s clever. It’s wild. It’s terrifying. GPT-4o now paints, edits, and reimagines the world with a few typed words. But let’s not pretend there’s no blood on the pixels. Artists are worried, regulators are slow, and companies are cagey as hell.

If you’ve got two hundred bucks burning a hole in your wallet, jump in and mess about. Just don’t be surprised if the digital Mona Lisa you generate is half Banksy, half lawsuit.
