THIS WEEK: PROMPTS, SKILL ISSUES & GPT-5

I'll say it louder, for the people in the back. GPT-5 doesn't suck. Your prompts do.

🌟 Editor's Note
Thanks for reading H4CKER, where I mix the most important news in AI and marketing with what I’m personally cooking, and serve it up with my very best sauce. Please share your feedback with me and this newsletter with anyone you think might like it!

Want to grow your business? I can help.
Let’s talk about automating your sales pipeline with digital advertising.

🚨 GPT-5 Isn’t The Same ChatGPT You Knew

This past week has been a lesson in restraint as I worked side-by-side with GPT-5 every day, becoming more and more impressed with its capabilities.

All the while, the Internet around me was losing its collective shit complaining about how bad this model is — the same model that’s been blowing my mind all week.

It’s not the first time I’ve found myself on the island of misfit toys; it’s a pretty regular occurrence, honestly. But it is the first time I recall feeling so isolated from the AI community.

And not because I missed some cool new pro-tier feature (again) or fell behind. Quite the opposite. I’m getting great results over here and love the new model, while it seems like everyone else just decided to sit this one out.

Some people got strangely attached to the sycophantic personality of 4o.

I was very happy to see it go.

So I don’t share in any nostalgia for previous models (and I think that’s generally weird and unhealthy).

But for those of you who had simply built on top of 4o and had everything tuned just the way you liked it, I can sympathize with how much it sucks to suddenly have to rework all your prompts.

The good news is that OpenAI has already released a prompting guide, among other educational materials, for GPT-5 which will help you write better prompts and generate (much) better output. I’ve included some of my own tips as well.

And I’ll try to present these in less technical terms than the OpenAI blog post. Let’s see how I do:

✍️ New In GPT-5: More Context + Deeper Thinking

Through the chat interface (versus the API), the input context window (how much you can send to GPT-5 at once) is set at the same 32k tokens that 4o had.

However, the internal token budget is twice what 4o had to work with, which means substantially more space for:

  1. Tool-calling (searching the web, writing code, etc)

  2. Remembering the history of your chat

  3. Thinking (chain-of-thought reasoning)

This larger context window means GPT-5 has more “room to think”, which translates into deeper reasoning, better instruction-following, and better quality control.

In short, it means it will do a better job of producing the output you want.
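
To put that 32k figure in perspective, you can get a rough token count of your own prompts with OpenAI’s tiktoken library. Here’s a minimal sketch in Python, assuming GPT-5 tokenizes roughly like 4o’s o200k_base encoding (as far as I know, tiktoken doesn’t have an official GPT-5 mapping yet):

```python
# Rough token count for a prompt using OpenAI's tiktoken library.
# Assumption: GPT-5 is in the same tokenizer family as 4o (o200k_base).
import tiktoken

enc = tiktoken.get_encoding("o200k_base")

prompt = "Paste your full prompt here: instructions, context, examples, everything."
token_count = len(enc.encode(prompt))

print(f"~{token_count:,} tokens")
if token_count <= 32_000:
    print("Fits in the 32k chat input window.")
else:
    print("Too big for chat; trim the context or move to the API.")
```

Rule of thumb: a token is roughly three-quarters of an English word, so 32k tokens works out to somewhere around 24,000 words of input.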

Via the API, the situation is very different, demonstrating what I’ve said before: this model is for coders and builders first and foremost.

Let’s look at a breakdown of context window sizes between GPT 4o, o3, and 5:

Model / Mode           | Input Window | Total Context Window
GPT 4o (free)          | 8,000        | Unknown
GPT 4o (plus)          | 32,000       | 128,000
GPT o3 (plus only)     | 32,000*      | 60,000
GPT o3 API             | 200,000      | 300,000*
GPT 5 Flagship (free)  | 8,000        | Unknown
GPT 5 Flagship (plus)  | 32,000       | 128,000
GPT 5 Flagship API     | 272,000      | 400,000
GPT 5 Thinking (plus)  | 192,000*     | Unknown (maybe 400k)*
GPT 5 Thinking API     | 272,000      | 400,000

*These are difficult to verify but are well-reasoned estimates.

What’s really interesting here is that the overall context window sizes aren’t the major differentiating factor between GPT-5 and GPT-5 Thinking.

Instead, it’s the size of what GPT-5 calls its “internal scratchpad”.

GPT-5 “Takes Notes” While Thinking

The “scratchpad” is an allocation of its token budget that is dedicated to thinking / reasoning, rather than input, output, or session context. It’s like a notepad where GPT-5 jots down its thoughts (for its own benefit).

GPT-5 Flagship has a “scratchpad” too, but the window gets even larger when you use GPT-5 Thinking.

The more GPT-5 thinks, the more it needs to “take notes,” because of all the facts, instructions, and constraints it has to contextualize (hold in its mind) at once.

This “note-taking” ability gives us better instruction following, better tool handling, more detailed summaries, more persistent session memory, and many of GPT-5’s other quality-of-life upgrades.

What Does This Mean?

All this adds up to a more focused model that can perform longer processes requiring more reasoning without losing the plot.

So it’s not just best practice to prompt GPT-5 with as much context as possible; it’s an essential part of using the model effectively, particularly with GPT-5 Thinking.

The less context you provide, the more it will try to be intuitive and guess what you want (with varying degrees of accuracy).

The more context AND direction you can provide, the more that GPT-5 can put its big brain to work on your problems.

Thinking is both its strength (it’s powerfully intelligent) and its weakness (it’s easily distracted). The best prompts give it focus, constraining its big brain to hyper-specific functions that actually add value.

💡 Top Tips From The GPT-5 Prompting Guide

  1. Prompting the model to provide a summary of its thought process improves performance on high-intelligence tasks.

  2. Requesting tool-calling “preambles” that update you on task progress improves performance in AI agents.

  3. Metaprompting (asking the model for tips on improving prompts) is especially effective with GPT-5, which excels (much like o3 did) at engineering prompts for itself.

  4. When it comes to front-end development, this is the stack that GPT-5 has been trained on the most:

    1. Frameworks: Next.js (TypeScript), React, HTML

    2. Styling / UI: Tailwind CSS, shadcn/ui, Radix Themes

    3. Icons: Material Symbols, Heroicons, Lucide

    4. Animation: Motion

    5. Fonts: Sans Serif, Inter, Geist, Mona Sans, IBM Plex Sans, Manrope

  5. Include instructions for how you want your code (or writing, for that matter) formatted, styled, structured, and so on, and they’re much more likely to stick.

  6. Prompt for more reasoning by telling it to think longer and by giving it more complex output instructions, like comparisons, step-by-step guides, and calculations.

  7. Limit reasoning by telling it to respond more quickly or by allowing it to take an educated guess. Phrases like “don’t waste time,” “don’t overthink,” and “only consider X variables” are also worth exploring.

  8. GPT-5 is quite good at following instructions, especially step-by-step ones, so write prompts like detailed SOPs (see the sketch after this list).

    1. Break prompts into sections

    2. Make the structure obvious (markdown, xml, json)

    3. Use numbered lists

    4. Use detailed procedural guides

  9. Encourage self-reflection to improve the quality of very technical outputs. You can even instruct it to come up with its own rubric for judging its work, and to iteratively improve and examine the work until it meets that standard.

  10. Tune the “verbosity” of GPT-5’s output by prompting it to give more concise or more detailed responses.
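
To make the SOP idea from tip #8 concrete, here’s a minimal sketch using the official OpenAI Python SDK. The prompt structure is my own illustration, not something lifted from OpenAI’s guide, and the “gpt-5” model ID is an assumption worth checking against the current model list:

```python
# A minimal sketch of an SOP-style prompt: obvious structure, numbered steps,
# formatting rules, and reasoning/verbosity constraints baked into the text.
# Requires the official OpenAI Python SDK (pip install openai) and an API key.
from openai import OpenAI

client = OpenAI()

prompt = """\
# Role
You are a senior email marketer writing for a B2B SaaS audience.

# Task
1. Read the product notes below.
2. Draft three subject lines (under 50 characters each).
3. Write one 120-word body paragraph for the strongest subject line.

# Constraints
- Plain, conversational English. No buzzwords.
- Don't overthink it: pick the strongest angle and commit.
- Keep the response concise; no preamble or sign-off.
- Output as a markdown list: subject lines first, body paragraph last.

# Product notes
(paste your notes here)
"""

response = client.chat.completions.create(
    model="gpt-5",  # assumed model ID; check OpenAI's current model list
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Swap the Role, Task, and Constraints sections for your own use case; the point is the structure, not the marketing copy.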

GPT-5 continues to surprise me with really great output, especially when I take the time to really think through my prompts and even ask it to help me improve them.

It’s called prompt engineering because it requires tinkering…

I think the era of the “one-shot prompt” is ending.

⚙️ The Laboratory: Prompts & Automations

I’ve got a quick and dirty automation for you this week that is as simple as it is useful: connecting a CRM (ActiveCampaign in my case) with OpenPhone to send text messages to leads right away after they fill out a form on your website.

Here’s what I show in the video:

✅  Configuring ActiveCampaign In Zapier: Connecting to ActiveCampaign and setting up a tag-driven trigger that fires when a contact submits a form.

✅  Setting Up OpenPhone In Zapier: Step-by-step, how to configure Zapier to send automated texts with OpenPhone.

✅  A Bonus Workflow: I build another workflow that checks all incoming texts for potential leads, using AI to alert the team so there are no missed opportunities.
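
If you’d rather skip Zapier and wire this up yourself, the core of the flow is tiny: a form submission comes in, a text goes out. Here’s a rough Python sketch of that idea with a Flask webhook. Fair warning: the OpenPhone endpoint, the auth scheme, and the ActiveCampaign webhook field names are all assumptions for illustration, so check both platforms’ API docs before using anything like this.

```python
# Rough sketch of the same flow without Zapier: ActiveCampaign posts a webhook
# when a contact is tagged, and we fire off a text via OpenPhone's API.
# The OpenPhone endpoint/payload and the webhook field names are assumptions;
# verify them against the official docs before relying on this.
import os

import requests
from flask import Flask, request

app = Flask(__name__)

OPENPHONE_API_KEY = os.environ["OPENPHONE_API_KEY"]
FROM_NUMBER = os.environ["OPENPHONE_NUMBER"]  # your OpenPhone number


@app.route("/new-lead", methods=["POST"])  # point an ActiveCampaign webhook here
def new_lead():
    data = request.form  # ActiveCampaign webhooks post form-encoded data
    phone = data.get("contact[phone]")
    first_name = data.get("contact[first_name]", "there")

    if phone:
        requests.post(
            "https://api.openphone.com/v1/messages",  # assumed endpoint
            headers={"Authorization": OPENPHONE_API_KEY},  # assumed auth scheme
            json={
                "from": FROM_NUMBER,
                "to": [phone],
                "content": f"Hi {first_name}, thanks for reaching out! I'll follow up shortly to set up a time to chat.",
            },
        )
    return "", 204


if __name__ == "__main__":
    app.run(port=5000)
```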

Streamline and scale with business automation and AI: schedule a free audit today!

🚀 AI / Marketing News

Perplexity Fires A $34.5B Silver Bullet at Google Chrome

Freshly crowned champions of schadenfreude, Perplexity AI, an $18.5B company, made a surprise $34.5B offer to buy Google Chrome this week. There’s no telling if Google will even entertain the offer; they clearly don’t need the money.

It is very funny, though, because it both highlights Google’s antitrust cases, which center on the Chrome browser, and showcases Perplexity as a contender in the coming AI browser wars.

Well played…

Source: WTOP News

Takeaways:

  • Perplexity made a surprise $34.5B bid for Google Chrome

  • The offer appears symbolic or promotional; Google hasn't responded

  • Perplexity positions itself as an ad-free, AI-native alternative to Google

  • If it’s a PR stunt, it’s working to get headlines and drive venture capital interest

My Take:

Well, this one’s a headline grabber. Perplexity’s $34.5 BILLION ‘bid’ to buy Google Chrome is either a satirical F-U to Big Tech or the gutsiest marketing PR stunt this side of 2024.

I’m leaning stunt, but kudos to them…

More importantly, it signals just how broken traditional search feels to users and how AI-native players like Perplexity are pushing hard into territory once ruled by Google.

And it’s more evidence of the power of memes: Perplexity’s gaining potentially billions in value by mogging Google Chrome with Comet and flaunting it with this meme offer.

BRILLIANT.

Reddit Moves to Lock Down Its User Data Amid AI Gold Rush

Source: The Verge

Takeaways:

  • Reddit is aiming to implement stronger protections around its data

  • This move is driven by the demand for UGC to train AI models

  • Reddit wants to prevent 3rd parties from monetizing its content

  • The Google deal showed Reddit how to monetize its data directly

  • Platforms like Reddit want to cash in

My Take:

Reddit’s putting up fences around its digital pasture because it got lucky with Google and now believes it’s actually a good source of data.

Unfortunately, training on Reddit has already nearly ruined AI Overviews, and Reddit is increasingly full of spam, so it remains to be seen how valuable their data really is.

Reddit apparently also wants to become a search engine, so I think they’re just getting high on their own supply at this point.

Ex-Googler Mo Gawdat Wants To Replace Evil Politicians With Definitely-Not Evil AI

AI is smarter than the people who run the governments of the world, says former Googler Mo Gawdat, and that’s leading us down a dystopian path.

I’ll be awaiting his candidacy announcement breathlessly.

Takeaways:

  • Former Google X CBO says global decision-makers don't understand AI

  • He claims we’re headed into a 'short-term dystopia'

  • He argues that most corporate and government leaders are out of their depth

  • He predicts mass disruptions in the next 5 years

  • Rather than slow AI progress, he urges ethical development

  • His issue: the disconnect between AI’s power and human competence

My Take:

While I don’t disagree that politicians are dumber than ChatGPT 4o, I’m not signing up for whatever AI-driven technocracy this bro dreams up.

The average person (who’s not that smart) is smarter than the average politician too and it’s always been this way.

Those who can build things do so; those who can’t try to tell other people what they can and can’t do. In the end it doesn’t stop the march of progress.

Google's New 'World Model' Teaches Robots Using Virtual Warehouses

Google DeepMind dropped another wild update that allows 3D worlds to be rendered and walked around, generated from pictures and API data, but that’s not even the crazy part.

Wait until you hear about how they’re using it to train robots so they can replace us faster.

Source: The Guardian

Takeaways:

  • Google DeepMind unveiled a new 'world model' for training AI robots

  • These robots performed real-world tasks with a 90% success rate

  • The model acts like a game engine for reality, letting robots learn in simulation

  • The 'learn everything everywhere' approach may overcome current training limitations

  • These trained AI agents adapt quickly from simulation to real-world applications

  • Could totally automate warehouse and e-commerce logistics

My Take:

Google’s doing what it does best: building insanely powerful systems and then finding a way to make money off you interacting with them.

This 'world model' thing is essentially a Matrix for robots; it lets machines hyper-train in a virtual warehouse, then drop into the real world ready to lift, stack, and fetch like caffeinated forklifts.

The leap here isn’t just hype. Going from virtual to real with such high accuracy means serious implications for logistics and e-commerce (bye warehouse temps, hello droid overlords).

They’re here to take our jobs!

Nathan Binford
AI & Marketing Strategist

I hope you enjoyed this newsletter. Please, tell me what you like and what you don’t, and how to make this newsletter more valuable to you.

And if you need help with AI, marketing, or automation, grab time on my calendar for a quick chat and I’ll do my best to help!

Thanks For Reading H4CKER!