doomer - solana rug scanner
// rug postmortems // field notes

Anatomy of a Solana rug. Five ways tokens die in the first 24h

94% of Solana token failures we see at +24h fire the same launch-time pattern. The five categories of loss, and how often each one shows up.

5 min read · doomer

We follow every token we scan for thirty days, with checkpoints at 6h, 24h, 7d, and 30d. Of the 6,989 outcomes we've resolved at the +24h mark, 3,634 (52%) landed downside: rugged, dumped, or abandoned. Five mechanisms cover almost all of them, and they aren't evenly distributed. One pattern accounts for 94% of the failures.

Most rugs aren't accidents. They were already in the cap table when you got there.

If you want a primer on what the verdict words on each scan card actually mean before reading on, that's the verdicts piece. The page-level breakdown of how those verdicts are computed lives in the how-to-read piece. This article is what happens after a scan: the patterns those verdicts are trying to catch.

How we file failures

Every downside outcome lands in one of three categories.

Contract failures are when the token contract or its LP setup is itself the extraction mechanism. The deployer can pull liquidity, freeze accounts, or flip a tax through code. The malicious behavior lives in the bytecode.

Insider failures are when the people behind the launch coordinated the supply, or have a rug history. The contract is fine. The cap table isn't.

Structural failures are when no malicious signal fired but the market stopped trading anyway. Nothing went wrong. Nothing went right either.

Each of the five patterns below maps to one of these.

1. LP pull

If the LP isn't locked, somebody else is holding the off switch.

The deployer (or whoever holds the LP) yanks liquidity. Price collapses to near zero in a single block. The pool address still exists. It's drained.

What you see at +24h

  • Liquidity dropped to near zero versus scan time.
  • The LP was unlocked at scan, or the lock expired (or was released) inside the holding window.
  • No route remains to swap the token out.

This is a contract failure. The loud version (no lock at all) is easy to flag at scan. The quieter version (locked at scan, lock expires in 24 to 72 hours) is a class of late rug we can only catch by looking forward in time. Empirically rare: 0.5% of resolved downside outcomes file here as their primary pattern.
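The expiry-window check reduces to simple time arithmetic. A minimal sketch, assuming a `lock_expiry` timestamp can be recovered from the locker (the field names here are illustrative, not doomer's actual schema):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

def lp_lock_risk(scan_time: datetime, lock_expiry: Optional[datetime]) -> str:
    """Classify the LP lock state as of scan time."""
    if lock_expiry is None:
        return "unlocked"            # the loud version: no lock at all
    remaining = lock_expiry - scan_time
    if remaining <= timedelta(0):
        return "expired"
    if remaining <= timedelta(hours=72):
        return "expiring-soon"       # the quiet version: late-rug window
    return "locked"

scan = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(lp_lock_risk(scan, None))                          # unlocked
print(lp_lock_risk(scan, scan + timedelta(hours=48)))    # expiring-soon
print(lp_lock_risk(scan, scan + timedelta(days=30)))     # locked
```

The 72-hour cutoff mirrors the window described above; anything past it still deserves a re-scan before the lock runs out.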

2. Authority flip

The token behaves differently than it did at scan. Mint authority is back. Or freeze authority is back. Or a sell that simulated cleanly at scan now reverts (a honeypot: buys go through, sells don't).

Fingerprint

  • Mint authority returned to a non-null value.
  • Freeze authority resurrected. This is its own pattern: it lets the deployer freeze trader accounts without diluting supply.
  • A fresh sell simulation fails, or charges a tax that didn't exist at scan.

Token-2022 widens this surface. An active transfer-fee authority (a Token-2022 setting that lets the deployer change the sell tax post-launch) means the deployer can flip the sell tax to nearly anything once liquidity is in. We want to flag it before the flip happens, not after.

Also a contract failure, also rare: 0.9% of resolved downside outcomes.
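The authority checks in the fingerprint can be read straight out of the raw mint account bytes. A sketch against the classic 82-byte SPL Token mint layout (Token-2022 appends extension data after these bytes; parsing extensions such as the transfer-fee config is out of scope here):

```python
import struct

def decode_authorities(data: bytes):
    """Return (mint_authority, freeze_authority) as raw bytes or None.

    Offsets follow the published SPL Token Mint layout: a 4-byte
    COption tag, then the 32-byte pubkey, for each authority.
    """
    assert len(data) >= 82, "not a mint account"
    mint_opt = struct.unpack_from("<I", data, 0)[0]
    mint_auth = data[4:36] if mint_opt == 1 else None
    freeze_opt = struct.unpack_from("<I", data, 46)[0]
    freeze_auth = data[50:82] if freeze_opt == 1 else None
    return mint_auth, freeze_auth

def authority_flip(at_scan: bytes, at_24h: bytes) -> list:
    """List authorities that were null at scan but set at +24h."""
    flips = []
    for name, before, after in zip(("mint", "freeze"),
                                   decode_authorities(at_scan),
                                   decode_authorities(at_24h)):
        if before is None and after is not None:
            flips.append(name)
    return flips

# Synthetic accounts: renounced at scan, mint authority back at +24h.
renounced = bytes(82)
flipped = struct.pack("<I", 1) + b"\x11" * 32 + renounced[36:]
print(authority_flip(renounced, flipped))   # ['mint']
```

A fresh sell simulation is still needed on top of this; a honeypot can exist with both authorities null.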

3. Insider dump

The token didn't rug. The insiders just sold. Hard.

No malicious contract event, just coordinated supply hitting the bid. The contract is exactly what it claimed to be at scan. The people holding most of the float just decided they were done.

Fingerprint

  • Supply share held by sniper or bundle wallets at scan was meaningful, often over a quarter of supply.
  • The deployer's stack at scan is gone by +24h.
  • Volume and holder count drop sharply at the same time as snipers exit.

We classify the on-chain outcome as dumped, not rugged. From a holder's seat the loss looks identical.

Insider failure. When no premeditated-launch signal is present, it's the primary pattern for 2.8% of downside. By the looser overlap measure (any sniper or bundler concentration rule firing at scan), the share is 83%: the through-line of nearly every failure.
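The fingerprint composes into a single boolean at +24h. A sketch with illustrative thresholds: the quarter-of-supply line comes from this post, but the volume and holder cutoffs are assumptions, not doomer's published rule weights:

```python
def insider_dump_fingerprint(scan: dict, plus24h: dict) -> bool:
    """Rough +24h check for a coordinated insider exit."""
    # Meaningful insider share at scan: often over a quarter of supply.
    insider_share = scan["sniper_share"] + scan["bundle_share"] >= 0.25
    # The deployer's stack at scan is gone by +24h.
    deployer_exited = (scan["deployer_balance"] > 0
                       and plus24h["deployer_balance"] == 0)
    # Volume and holders drop sharply together (cutoffs assumed).
    collapse = (plus24h["volume"] < 0.2 * scan["volume"]
                and plus24h["holders"] < 0.7 * scan["holders"])
    return insider_share and deployer_exited and collapse

scan = {"sniper_share": 0.18, "bundle_share": 0.12,
        "deployer_balance": 5_000_000, "volume": 900_000, "holders": 1_200}
plus24h = {"deployer_balance": 0, "volume": 60_000, "holders": 400}
print(insider_dump_fingerprint(scan, plus24h))   # True
```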

4. Manufactured distribution

The launch itself was the rug. Sniper bundles bought the first block. Co-funded wallets stacked the top 20. The deployer was a serial rugger, or their funder was.

Fingerprint

  • Top holders trace back to a small set of funder wallets.
  • The deployer's funder matches a wallet we've seen funding rugs before.
  • The deployer themselves has a history of rugged tokens.
  • The launch was a Jito bundle (an atomic transaction group used to land sniper buys in the same block as pool creation), alongside a high bundle-wallet supply share.

By +24h the on-chain shape is the same as #3, but the difference matters: the rug was premeditated, not opportunistic. We can flag it before the first sell.

Insider failure, and the dominant pattern by a long stretch: 93.8% of resolved downside outcomes file here as their primary signal. If you read only one number from this post, read that one.
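The funder-tracing step reduces to grouping top holders by the wallet that first funded them. A sketch, assuming the holder-to-funder mapping has already been resolved from transfer history (which is the genuinely hard part):

```python
from collections import Counter

def funder_concentration(top_holders: list, funded_by: dict) -> float:
    """Share of top holders traceable to the single largest funder.

    Holders with no known funder count as their own cluster.
    """
    clusters = Counter(funded_by.get(h, h) for h in top_holders)
    return max(clusters.values()) / len(top_holders)

holders = [f"holder{i}" for i in range(20)]
# 12 of the top 20 funded by the same wallet: a manufactured cap table.
funded = {h: "funderA" for h in holders[:12]}
print(funder_concentration(holders, funded))   # 0.6
```

The same grouping, keyed on deployer funders instead of holder funders, covers the serial-rugger match.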

5. Slow bleed

No malicious event. No coordinated dump. The token just stops.

Volume dies. Holders drift. By +24h liquidity is below our abandoned threshold (about $500 of LP). Nothing went wrong, technically. Nothing happened at all.

Fingerprint

  • Deeply negative volume change.
  • Net-negative holder count change.
  • LP intact in absolute terms, no auth flip, no honeypot. Nobody is trading.
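A crude baseline for this fingerprint, using the roughly $500 abandoned threshold from this post (the requirement that no malicious signal fired is from the text; the exact comparison fields are assumptions):

```python
ABANDONED_LP_USD = 500  # approximate threshold from the post

def slow_bleed(scan: dict, plus24h: dict) -> bool:
    """Nothing fired; nobody is trading."""
    no_malicious = not plus24h["auth_flip"] and not plus24h["honeypot"]
    return (no_malicious
            and not plus24h["lp_pulled"]            # LP intact, just empty
            and plus24h["lp_usd"] < ABANDONED_LP_USD
            and plus24h["volume"] < scan["volume"]
            and plus24h["holders"] <= scan["holders"])

scan = {"volume": 40_000, "holders": 300}
plus24h = {"auth_flip": False, "honeypot": False, "lp_pulled": False,
           "lp_usd": 320, "volume": 150, "holders": 180}
print(slow_bleed(scan, plus24h))   # True
```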

Structural failure (or indeterminate when we can't attribute the deployer cleanly enough to rule out hidden coordination). 2.1% of resolved downside outcomes.

This is the outcome our rules catch least well. There's no single signal that flags "this token will be ignored." It's where most of our open scoring work is.

What the data says about share

Across 6,989 outcomes resolved at +24h, 3,634 landed downside (52%). Filed by primary pattern, with the most specific match winning:

  • Manufactured distribution (insiders): 93.8%
  • Insider dump, no premeditated signal (insiders): 2.8%
  • Slow bleed (structural / indeterminate): 2.1%
  • Authority flip (contract): 0.9%
  • LP pull (contract): 0.5%
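"Most specific match winning" implies a fixed precedence: concrete contract events first, premeditated insider signals next, the structural fallback last. The exact ordering isn't published; this sketch assumes one:

```python
# Assumed precedence, most specific first; first hit files the outcome.
PRECEDENCE = [
    ("lp_pull", "contract"),
    ("authority_flip", "contract"),
    ("manufactured_distribution", "insider"),
    ("insider_dump", "insider"),
    ("slow_bleed", "structural"),
]

def file_outcome(signals: dict):
    """Map a resolved downside outcome to its primary pattern."""
    for pattern, category in PRECEDENCE:
        if signals.get(pattern):
            return pattern, category
    return "unfiled", "indeterminate"

# A manufactured launch that also dumped files under the more
# specific premeditated pattern, not under plain insider dump.
print(file_outcome({"manufactured_distribution": True,
                    "insider_dump": True}))
```

This is why the manufactured-distribution share absorbs most dumps: whenever a premeditated-launch signal fired at scan, the outcome files there first.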

Why we publish this

Every scan produces an outcome record at each future checkpoint. When a checkpoint resolves, we already know which signals fired at scan, so we know which signals predicted what actually happened. Signals that fire often but don't predict downside lose weight. Signals that rarely fire but consistently predict downside gain it.

The public side of that loop is the metrics page. You can see how our verdicts hold up at every horizon, broken out by tier and by category, with the misses included. We don't quietly retire the bad calls.

Scan a token at doomer.wtf to see which of these five patterns it's most exposed to right now.

// faq

Which of the five patterns is most common?
Manufactured distribution by a wide margin. 93.8% of resolved +24h downside outcomes file here as their primary pattern: a deployer with a rug history, a funder we've seen funding rugs before, or coordinated wallet patterns showing the supply was preplaced. The other four patterns combined account for the remaining 6%.

"First 24 hours" from when?
From scan time, not from launch. For tokens scanned at launch the two converge. For older tokens it's a 24-hour holding window starting whenever the scan happened. The 6,989 outcomes the percentages above are drawn from include both pre-graduation tokens (still on a bonding curve, before migrating to an open AMM) and post-graduation.

Why is "abandoned" treated as a loss?
Because from a holder's seat an abandoned token (LP drained, pool unverifiable) is functionally identical to a rug. You can't exit the position. Excluding abandoned from the downside numerator would understate the true failure rate by a meaningful margin.

Where do you publish this data?
Aggregate splits and per-tier accuracy live at /metrics. Per-token timelines live on each token page.
