Run enough Meta accounts and you start noticing a pattern: the creative that performs best almost never tests well in pre-flight reviews. It's the one that made someone in the room raise an eyebrow.
Meta's delivery system is hunting for novelty. It rewards creatives that look different from everything else in feed, and it punishes creatives that look like the safe consensus pick. Your A/B testing process is biased toward the wrong winner.
01. Variance is the metric, not consensus.
When we score creative pre-launch, we score it on how unlike the rest of the set it is. Same hook angle as the others? -1. Same color palette as the brand book? -1. Hits a beat the audience hasn't seen this quarter? +2.
- Three creatives that look the same will compete for the same impression slot.
- Three creatives that look different will each unlock a new audience pocket.
- Bid against yourself, not the algorithm.
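The pre-launch rubric above can be sketched as a small scoring function. The attribute names, weights, and sample creatives below are illustrative assumptions, not a prescribed implementation — the point is only that "unlike the rest of the set" is something you can score mechanically.

```python
# Hypothetical sketch of the pre-launch variance score described above.
# Attribute names, weights, and the creative set are illustrative assumptions.

def variance_score(creative, creative_set, fresh_beats):
    """Score a creative on how unlike the rest of the set it is."""
    others = [c for c in creative_set if c is not creative]
    score = 0
    # Same hook angle as another creative in the set: -1
    if any(c["hook"] == creative["hook"] for c in others):
        score -= 1
    # Same color palette as the brand book: -1
    if creative["palette"] == "brand_book":
        score -= 1
    # Hits a beat the audience hasn't seen this quarter: +2
    if creative["beat"] in fresh_beats:
        score += 2
    return score

creatives = [
    {"hook": "problem_agitate", "palette": "brand_book", "beat": "testimonial"},
    {"hook": "problem_agitate", "palette": "clashing", "beat": "handwritten_note"},
    {"hook": "pattern_interrupt", "palette": "clashing", "beat": "diagram"},
]
fresh = {"handwritten_note", "diagram"}

for c in creatives:
    print(c["beat"], variance_score(c, creatives, fresh))
```

Run against a real set, the spread is the signal: three creatives that all score near zero are competing for the same impression slot.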
02. What 'weird' actually means in feed.
It's not gimmicks. It's format intersection. A document-style screenshot in a video-first feed. A handwritten note in a polished category. A diagram where everyone else has a model shot. The pattern interrupt is the entire game.
"If three people in your office can predict the winner before launch, the algorithm already discounted it." (internal note, 2024 retro)
03. How to ship weird without breaking brand.
Reserve 20% of your creative budget for variance. Fund it like R&D. Don't ask it to hit the same ROAS bar as the safe set — ask it to outperform on click-through and time-on-page. That's where novelty pays first.
Want the full playbook? "The 12 Growth Leaks burning your ad budget" is the same internal doc we hand to new clients on day one. One short form, no spam.