THIS WEEK: GPT-5 WRITES BETTER CONTENT THAN CLAUDE
I don't make the rules. GPT-5 is really good at writing content and I have receipts.

🌟 Editor's Note
Thanks for reading H4CKER, where I share a behind-the-scenes look at the real, nerdy, in-the-trenches work of AI marketing and automation. I’d love to hear what you think of this newsletter and what you’d like to see in the future.
Want more leads? I can help.
Let’s talk about how AI and automation can help grow your business.
👋 Bye Claude! I’m In Love With GPT-5.
It’s subtle and a bit difficult to quantify, but GPT-5’s writing just “sounds better”.
I struggled to put words to the differences, so I asked ChatGPT to compare two versions of the same article, one from GPT-5 and one from Claude 4 Sonnet, pick a winner, and explain its reasoning.
Unsurprisingly, it picked its own version as the best. What was a little surprising was why.
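(If you want to run the same kind of head-to-head yourself, here’s a rough sketch using the OpenAI Python SDK. The file names, model string, and judging prompt are placeholders I’ve made up for illustration, so swap in your own drafts and wording.)

```python
# Rough sketch of a head-to-head judging setup with the OpenAI Python SDK.
# File names, the model string, and the judging prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

with open("article_gpt5.md") as f:
    gpt5_draft = f.read()
with open("article_claude.md") as f:
    claude_draft = f.read()

judging_prompt = f"""Compare the two article drafts below.
Pick a winner and explain your reasoning, focusing on readability,
flow between sections, and how human the writing sounds.

--- DRAFT A ---
{gpt5_draft}

--- DRAFT B ---
{claude_draft}
"""

response = client.chat.completions.create(
    model="gpt-5",  # assumed model name; use whichever judge model you prefer
    messages=[{"role": "user", "content": judging_prompt}],
)
print(response.choices[0].message.content)
```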
Here are the highlights:
Easier to skim
No “broken thoughts” / side paths
Sounds like a person talking
Light metaphors, no buzzwords or stat dumping
And here’s the output, so you can see for yourself:

These are all great points. They’re also fairly interesting from a prompt engineering perspective.
The first suggests that brevity is a tuned-in preference in the model (GPT-5 is not as loquacious as 4o). But remember, from my last email, that verbosity is a built-in lever you can adjust with your prompts.
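To make that concrete, the lever can be as simple as a verbosity line you swap in and out of your system prompt. This is a stripped-down illustration, not my production prompt:

```python
# Illustrative only: a verbosity "dial" expressed as interchangeable prompt lines.
VERBOSITY_NOTES = {
    "low": "Keep it tight: short sentences, no filler, under 400 words.",
    "medium": "Write at a typical blog length, roughly 800 words.",
    "high": "Go deep: cover examples and edge cases; 1,500+ words is fine.",
}

def build_system_prompt(verbosity: str = "low") -> str:
    """Assemble a system prompt with the chosen verbosity instruction."""
    return (
        "You write marketing articles for local restaurants.\n"
        + VERBOSITY_NOTES[verbosity]
    )

print(build_system_prompt("low"))
```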
The second point, though, really nailed the difference I’m trying to articulate, and the third hammered it home.
No “broken thoughts” (GPT-5’s words, not mine) refers to the disjointed, staccato nature of Claude’s article versus GPT-5’s seamless transitions between sentences and sections.
“Sounds like a person talking.”
Exactly.
I’ve included samples from both articles so you can compare.
🤡 From The Claude Version
“Most restaurant owners wonder if Facebook still matters in 2025. Your Page can bring local diners through the door and boost direct orders without breaking your budget.
A strong Facebook presence builds real connections with customers who live nearby. But when you skip Facebook or let your Page sit empty, you're handing potential customers to competitors who show up consistently.”
“…Facebook Pages help restaurants reach the people who matter most: locals looking for their next meal. When someone in your neighborhood searches for dinner options, your active Page can appear in their results.
The platform gives you free tools to showcase your food and personality. You can post daily specials, share behind-the-scenes moments, and respond to customer questions without spending a dime.
Getting started takes less time than you might think. Facebook's setup process walks you through each step, from adding your hours to uploading your first photo.
Your Page works around the clock, even when your restaurant is closed. Customers can browse your menu, check your hours, and place orders at any time.”
🏆 From The GPT-5 Version
“Most people who hear about your place check Facebook before they visit. They look for menus, photos, and if you’re even open today. That means your Page can make you look inviting, or it can make you look closed or careless…”
“…a Facebook Page puts your restaurant in front of local diners who are already scrolling every day. It’s free to create, and most owners can set one up in under an hour. In our experience, a Page often turns into the first stop for new guests.
Pages also tend to build trust. Diners see updated hours, reviews, and specials, which helps them decide quickly. For restaurants, that can mean more full tables and more direct online orders.”
👀 Final Thoughts
The sheer volume of text is the most significant difference, but again, you can tweak that up and down by prompting for verbosity.
Neither version is bad (I spent many hours honing this prompt, so I’ll take credit for that), but as a pure stylistic comparison, I think it’s hard not to see the improvements made by GPT-5.
It’s not just shorter; it’s more succinct.
And it feels a lot more conversational than the version from Claude.
Humanization, I think, should be the priority in AI content writing. Talking to robots, or reading their output, seems to distress people disproportionately, but the potential to scale is too great to ignore.
You’ve got to be able to produce human-sounding content at scale to even play the greater game of digital marketing today.
And it seems GPT-5 is a quiet, but very real, contender for best model for writing…
⚙️ The Laboratory: Prompts & Automations
Dive deep into my prompt engineering process and watch as I improve a content prompt with the help of GPT-5 Thinking. Steal this step-by-step process to 10x your AI content in only a few minutes.
Here’s what I show in the video:
✅ A 100% Effective Prompting Hack: Eliminate AI “tells” and annoying patterns, ensure your rules are followed, and fine-tune your output with precision.
✅ How To Get Help From GPT-5: Ask ChatGPT to compare its content to some you’ve already improved and provide prompting recommendations to upgrade the prompt.
✅ How I’m Managing Version Control: I’ve started using git to manage version control for my prompts.
Streamline and scale with business automation and AI: schedule a free audit today!
🚀 AI / Marketing News
Google Rolls Out AI Mode to 180 New Countries

Everyone: Help Google, search is terrible. Here’s exactly what we want: Fewer ads. Better search results. Less Reddit.
Google: Best I can do is more AI.
Source: SearchEngineLand
Takeaways:
Google's AI Mode is now available in 180 new countries (English-only)
AI Mode is a new tab within Google Search with a conversational, chat-style interface
AI Mode lets you explore topics and get AI-generated answers
The move signals Google's intent to scale its AI offerings globally
This is a glorified beta test, and Google’s going to go nuts with experimentation and an almost Meta-esque fail-forward-fast approach.
Mark my words, this will be the wild west for a while.
Useful, though? I kind of doubt it. Despite coders loving Gemini, Google’s frontend tools for the masses (Gemini being force-fed into every Google tool, etc.) are consistently terrible.
Meta Hits Pause on AI Hiring After Blowing Billions

Wow. You don’t say.
After blowing billions on a (panicked) recruiting spree, Meta has turned off the faucet just as quickly. I suppose this start-stop routine is just Facebook’s fail-forward-fast vibe, but it doesn’t make them look like any more of a real contender in the race to “superintelligence”.
Source: MSN
Takeaways:
Meta has frozen hiring for its core AI infrastructure teams
In 2023 they spent nearly $40 billion, largely on AI hardware
The company’s focus is now shifting from hiring to optimizing
Insiders say the pause signals Meta slowing its AI arms race
Microsoft, Google, and others continue aggressive expansion
My Take:
Meta flung open its war chest in 2023, lighting cash on fire to build out AI infrastructure and homegrown models. Then they spent billions on a hiring spree.
Now they've slammed the brakes, which could signal a shift from 'YOLO' spending to 'Wait, did that work?' introspection.
Seems like Zuck tried to buy his way to relevance in the AI game, but a pile of GPUs doesn’t make you an innovator.
OpenAI's Sam Altman Sounds Alarm: Don’t Underestimate China’s AI Prowess

DeepSeek lives rent-free in Sam Altman’s head, right where Elon wishes he could be. The Chinese competitor dropped its outstanding R1 model last year and stole a lot of the wind from OpenAI’s sails.
Since then, Sam’s been pretty down with the anti-China rhetoric.
Source: Tesaaworld
Takeaways:
Sam Altman warns the US underestimates China's rapid AI progress
He says global cooperation is required to manage the risks of AI
Despite competition, Altman encourages openness over isolation
The U.S. is the leader in foundational models, but China’s catching up
My Take:
Sam Altman loves playing AI diplomat, reminding us that Chinese models are quietly becoming very serious contenders in the AI arms race.
While D.C. throws regulatory spaghetti at the wall, China’s laser-focused on cranking out large-scale models at blistering speed. Don’t assume the winner just yet.
For those of us building lean and mean with AI, this isn’t doomer news, but it’s a wake-up slap to watch what’s happening outside Silicon Valley and D.C.
The takeaway here isn’t fear, it’s strategy: learn from how fast China builds and deploys, and how they skip the endless committee meetings.
China’s DeepSeek Strikes Again, Rivals GPT-5 with Cheaper, Chip-Savvy LLM

Chinese models, DeepSeek in particular, stole a lot of OpenAI’s sunshine last year. DeepSeek R1 was impressively cheap to build (so they say) and very fast. But then it fell out of favor as quickly as it rose to fame.
Now they’ve got a new release nipping at the heels of GPT-5. But is their benchmark-friendly AI actually better, or just benchmaxxed?
Source: Fortune
Takeaways:
DeepSeek released V3.1, said to match GPT-5 on some benchmarks
V3.1 is optimized for Chinese-made chips and is cheaper to run
It uses a mixture-of-experts architecture with 685 billion parameters
But it activates only a fraction of those per token, which keeps compute burn down (see the sketch after this list)
V3.1 fuses instant answers with advanced reasoning in one model
U.S. export controls provoked China to develop homegrown AI tech
Sam Altman admitted Chinese models forced OpenAI’s hand on open-sourcing
Researchers warn of potential alignment issues with these models
Adoption outside China is growing but U.S. developers still hesitant
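That “fraction at a time” point is the whole mixture-of-experts trick: a small router picks a handful of expert sub-networks for each token, so most of the 685 billion parameters sit idle on any given forward pass. Here’s a toy sketch of the routing idea, with tiny made-up sizes that have nothing to do with DeepSeek’s actual code:

```python
# Toy illustration of mixture-of-experts routing: only the top-k experts run
# per token, so compute scales with k, not with the total parameter count.
# Sizes here are tiny and invented; this is not DeepSeek's architecture.
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS, TOP_K, D = 16, 2, 64          # 16 tiny experts, 2 active per token
experts = [rng.standard_normal((D, D)) for _ in range(N_EXPERTS)]
router = rng.standard_normal((D, N_EXPERTS))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a single token vector x through its top-k experts only."""
    scores = x @ router                              # router logits, one per expert
    top = np.argsort(scores)[-TOP_K:]                # indices of the k best experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                         # softmax over the chosen experts
    # Only k expert matmuls happen here; the other experts stay idle.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D)
out = moe_forward(token)
print(out.shape, f"({TOP_K}/{N_EXPERTS} experts active)")
```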
My Take:
DeepSeek just dropped a monster of a model, and it’s optimized to run on China’s own chips, even as China thumbs its nose at US export controls.
V3.1 is tuned for efficiency and costs less to operate than GPT-5, while apparently bench-pressing (or at least benchmaxxing) in the same weight class.
It fuses instant recall with step-by-step reasoning in one system, something even some closed models struggle with.
While the West flexes its LLMs with flashy PR, China’s quietly building bulletproof AI toys with local silicon and half the PR budget.
This isn’t just a model drop. It’s a signal flare in the AI arms race worth watching closely.

Nathan Binford
AI & Marketing Strategist
I hope you enjoyed this newsletter. Please, tell me what you like and what you don’t, and how to make this newsletter more valuable to you.
And if you need help with AI, marketing, or automation, grab time on my calendar for a quick chat and I’ll do my best to help!