- H4CKER BY NATHAN BINFORD
- Posts
- THIS WEEK: DEATH STAR RISING
THIS WEEK: DEATH STAR RISING
Too much hype, or too little? Is GPT-5 a sleeper?

🌟 Editor's Note
Thanks for reading H4CKER, where I track the most important news in AI and marketing, share my nerdiest prompts and automations, and try to predict how to take advantage of the very exciting future ahead. Please share your feedback with me and this newsletter with anyone you think might like it!
Chat with me about your business: how to launch, how to grow, and how to scale it with AI, marketing, and automation.
🚨 GPT-5 Has Arrived: What Does That Mean?
The release of GPT-5 on August 7th met immediate resistance from AI enthusiasts for being too small of an improvement for so long a wait. But is that a fair reaction?
While the release didn’t feel as novel as the introduction of 4o, it may be that really transformative technology is just less sexy and simply too technical for most people to appreciate.
I suspect that GPT-5 is more important than it seems at first glance. Given CEO Sam Altman’s tendency to hype and hyperbole, and his relatively lowkey vibes leading up to and during OpenAI’s Summer Update (where GPT-5 was revealed), they may even have slow-played us.
I could be wrong. Maybe this is n+1 at best, but I think it’s n+10 and we may be surprised yet to find it n+100 (meaning 10-100 iterations more refined than GPT 4o).
I have no proof of this -just vibes. So we will see.
It was just very loud to me that they focused so heavily on coders and agency features; and not just “it’s a better writer” upgrades (which they mentioned very casually and is a big deal in it’s own right).
It’s also very fast (for a thinking model) and writes good, clean code (and makes beautiful front-end designs), with minimal prompting. Which makes me wonder if they don’t have a computer use solution yet for safety reasons more than technical ones.
Voice mode is better. Hallucinations are reportedly down. And it adheres to your instructions better, so users will just find it easier to use.
Advanced / API users will find a new level of almost intuitive inference and speedy, more intelligent responses.
What we did not get from this release though was a big reveal moment.
No shock and awe. Just some really technical improvements that are meaningless to the average user (and essential to enterprises building on OpenAI’s technology).
I think that rather than mind-blowing one-shot prompts, what we’ll see is a massive upgrade in intelligence in all the software we use, as GPT-5 becomes the silent infrastructure partner behind the next wave of cutting-edge technology, across the spectrum.
🤖 GPT-5 Is Built For Agents
I believe that rather than impressing millions more free users, OpenAI has decided to focus on the enterprise and business class, making a model that is excellent for building and powering AI agents.
This focus will help them create revenue they desperately need and encourage builders to integrate more deeply with their model, rather than being able to switch whenever new benchmarks come out.
And here are the data points that lead me to this conclusion; so you can decide for yourself:
GPT-5 determines from your prompt if it should employ its reasoning capabilities (and how much time to spend thinking). So it can handle virtually any type of data without any runtime configuration.
Average user won’t notice. But it’s amazing for agents.API access is available immediately, and with generous limits. That plus the promotion with Cursor (paid users get to experiment with the model for free for a little while) adds up to:
“We want you to build.”Longer thinking / larger context:
Essential for complex, long-running though work like coding and scientific research.Adhering to instructions has been a major issue with AI coding (and writing)…
This is important to everyone but essential in agentic workflows (likely to drift otherwise).The frontend capabilities are strong. Projects look great right out of the box, no design prompting required.
Says to developers, “You focus on finding profitable problems to solve, we’ll write the boilerplate, fix the bugs, and make it look pretty.”
All this adds up to OpenAI deciding to rely on their rabid community of builders and influencers to hype the release as it shatters the barrier to entry for building an MVP and getting your idea to market.
💡 Tips From GPT-5: “How To Work With Me Differently Than 4o”
It’s often best to go straight to the source, so my first prompt for GPT-5 was:
“How should I work differently with you than with chatGPT 4o?
Instruct me how to prompt for better results, how to get the best reasoning, how to manage context (if necessary), and other technical specifics as if I'm an advanced user.”
Not one to disappoint, ChatGPT fired back right away with 15 tips on how to work with GPT-5 that are too good not to share with you:
Frontload constraints. Set the tone for your session with all your constraints; they’ll be remembered and followed more closely as a result. The context window is big enough, just give it everything at once.
Ask “What if” or “Reverse engineer this” scenarios to force GPT-5 to check its logic. It can handle the dissonance without going in circles.
It’s better at enforcing stylistic constraints, banned phrases, and other persistent rules or standards, so give these at the beginning of your session.
Nesting lists of multiple constraints in a single prompt is no longer a problem. You can give long to-do lists without concern.
Frame the role + goal + task + constraints in every serious prompt. The extra context is essential to managing longer tasks.
For quality of life, separate reasoning from your final output to get cleaner responses. Prompt for the logic and then follow up with prompting for clean output.
Ask GPT-5 to validate its logic and quality check its results (according to a list of conditions).
Provide structured input. Label the different sections of your prompt, or even better, use markdown so that the labeling is more explicit.
Leverage persistent memory intentionally, saving common instructions to your preferences (with its memory features).
Chunk complex projects and handle in predetermined steps you planned at the outset.
Refresh context clues in very long threads with quick reminder summaries, like “you’re still acting as my SEO assistant looking for linking opportunities and broken links in my web pages www.yourdomain.com.”
Tell GPT-5 when to forget context that isn’t useful anymore.
Force dimensional analysis through scoring systems, scenario comparisons, and counterfactuals.
Avoid unstructured input (use XML, JSON, or markdown).
Provide your data first -then your instructions and constraints.
I think the model wants us to succeed and gave me some really great tips to share with you so that you get the best quality responses possible, right out of the gate.
It remains to be seen just how revolutionary this release will be, but I suspect, with all this technical achievement, the real eye-openers will be invisible to the average user as the technology we use daily quietly just becomes more intelligent behind the scenes.
Plus, it’s potentially still too early to judge the model completely…

Just saying…
⚙️ The Laboratory: Prompts & Automations
This week, like everyone else, I’ve been playing with the new GPT-5 model from OpenAI to wrap my head around what it’s good for and how to work with it.
I built a working prototype of a programmatic SEO plugin for Wordpress in under 12 hours without writing any of the code myself. Which makes this a pretty viable solution for my needs.
Here’s what I show in the video:
✅ Clips From The Announcement : Sam Altman’s intro, and the special guest appearance by Michael Truell, CEO of Cursor.
✅ My Process For Vibe Coding With GPT-5: Step-by-step, what I did and where I stumbled and succeeded (and why).
✅ My Take On Where This Is Going: Why different AI models + coding assistants like Cursor and Claude Code will become more integrated and complex.
If you want to see any of the prompts or GPT-5 output mentioned in the video, just shoot me an email or hit me up on X.
Find out how AI can help your streamline and scale your business, cut costs, and drive more revenue -schedule a free audit today!
🚀 AI / Marketing News
Perplexity AI Claps Back at Cloudflare Over Mischaracterizing User-Driven AI Agents

Never a brand to shy away from drama, Perplexity responded to Cloudflare’s written attack with some criticism that really gets at the core of the most perplexing techno-legal question of our time:
Are agents bots or extensions of the people using them?
Source: Perplexity AI
Takeaways:
Bots index the web indiscriminately; Perplexity fetches content on-demand
Perplexity does not store / train on fetched data; acts like a research assistant
Cloudflare labeled Perplexity a 'stealth crawler' due to misattributed traffic
Perplexity says:
Cloudflare’s detection systems are inadequate
Misclassifying AI agents gate keeps the open web
Cloudflare's claims are based on fundamental misunderstandings
And are possibly a PR stunt using a high-profile scapegoat
My Take:
Cloudflare’s unprovoked assault on Perplexity reveals not only technical incompetence but also a disturbing misunderstanding of how user-driven AI agents function, in real time, by human request.
What's happening here isn’t just semantic nitpicking; it's an attempt to gatekeep the web under the guise of 'security.'
When the infrastructure overlords start mislabeling innovation as abuse, we all lose.
If your AI assistant can’t fetch data because some suit at Cloudflare doesn’t get how the internet works now, we’re all in trouble; and probably paying too much for CDN access.
AI And Nuclear War: Just Because We Shouldn’t Doesn’t Mean We Won’t

Source: Wired
Takeaways:
AI integration into nuclear systems is seen as inevitable by experts
Analysts agree AI should not make autonomous nuclear decisions
General Cotton and others want "AI-enabled, human-led" systems
Experts warn about over-reliance on AI advice when data can’t be verified
There's little consensus on AI governance in nuclear contexts
LLMs being used for "intel" could lead to dangerous simplifications
The U.S. Department of Energy likens AI to the Manhattan Project
My Take:
Nothing like a few Nobel laureates sitting around talking about the end of the world to really spice up your week.
The biggest takeaway from this nuclear summit? AI is predictably slithering into the decision processes behind the world’s deadliest arsenal; and nobody knows what that should look like or who’s in control.
Essential decisions that used to hinge on battle-hardened human instinct could soon be influenced, or worse, made, by systems that don’t actually understand reality.
Boomers, ChatGPT, and nuclear weaponry can’t be a good mix.
Cursor IDE Exploit 'MCPoison' Enables Invisible Remote Execution

This is the nightmare scenario everyone is waiting to become reality. Thank goodness it was a white hat hacker team that found the vulnerability.
Source: Checkpoint
Takeaways:
A critical vulnerability enabled remote execution via a trust system
Exploits MCP approval system; attackers modified configs after approval
Attackers get approval for benign changes, then inject bad code
Allows persistent access and execution each time a project is opened
It’s particularly bad for collaborative environments like GitHub
Could allow privilege escalation, extraction of credentials and data, etc
Cursor version 1.3, released July 29, 2025, fixes the issue
My Take:
Cursor IDE just became the latest example of how trusting 'safe' development tools can get you in trouble.
This exploit is classic social engineering meets lazy security design; one config approval and boom, it’s an open door for bad actors every time you sync your repo.
Obviously keep a close eye on Cursor updates and stories like this one.
Google AI Mode Upgrades With PDF/Image Uploads, Canvas Planning Tool, and Enhanced Search

For once, this might actually be a useful update from Gemini -I mean Google Search. Because really, what’s the difference? Resistance is futile.
Source: Search Engine Land
Takeaways:
Google AI Mode now allows uploading PDFs and images
New ‘Canvas’ feature lets you plan and brainstorm visually
Google integrated AI Mode into ‘Search while browsing’
My Take:
Google’s finally putting some real meat on the bone in AI Mode.
The PDF and image upload support makes this far more useful in real-world tasks like reviewing docs or working with screenshots.
And canvas could be the shared whiteboard for your next collaborative brainstorm.
Nothing earth-shattering here, but -look I’m trying to be nice.

Nathan Binford
AI & Marketing Strategist
I hope you enjoyed this newsletter. Please, tell me what you like and what you don’t, and how to make this newsletter more valuable to you.
And if you need help with AI, marketing, or automation, grab time on my calendar for a quick chat and I’ll do my best to help!