Headphone reviews need a reset.
Resolve explains why our new headphone ranking system gives lower scores than expected—and why that reset is necessary to raise industry standards.
We recently launched our new headphone scoring system and ranking lists, and judging by the reactions, a lot of you were confused. Some headphones you’ve loved for years aren’t ranking as highly as you expected. Some products that routinely receive 8/10 or 4 out of 5 star ratings elsewhere are landing closer to the middle of the pack on our list. So what’s going on?
It’s intentional. And it’s time to explain why.
The Problem With Most Rating Systems
Here’s a familiar scenario. You’re shopping for a new headphone or IEM. You already know the usual forum chaos isn’t going to help much, so you turn to review sites, which inevitably have a scoring system. The product you’re eyeing gets an 8/10, maybe even a 9/10, or four or five stars out of five.
That must mean it’s good, right? So you buy it. Eventually, you compare it to something better, or mess around with EQ, and suddenly the headphone that was getting 8s and 9s out of 10 reveals itself as not all that impressive.
This doesn’t happen every time. But it happens often enough. And the missing piece is that most rating scales are heavily compressed, using only a small fraction of the available range of judgment.
If almost everything scores between 7 and 9 out of 10, the rating isn’t meaningfully differentiating products: it’s a 3-point scale, not a 10-point one. This creates false confidence and reinforces confirmation bias. You want the thing you’re excited about to be good, the rating validates that feeling, and you buy based on that validation.
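To make that compression concrete, here’s a small illustrative sketch. The 7–9 band and the linear stretch are assumptions chosen for the example, not measured figures from any site; the point is only to show how little resolution a compressed band actually carries.

```python
# Illustrative only: stretch scores from an assumed compressed band
# (here 7-9 out of 10) back onto a full 0-10 scale, exposing how
# coarse the original ratings really are.

def decompress(score: float, low: float = 7.0, high: float = 9.0) -> float:
    """Linearly map a score from [low, high] onto [0, 10]."""
    return (score - low) / (high - low) * 10.0

# An "8/10" in a 7-9 world is really just mid-scale.
print(decompress(8.0))  # → 5.0
```

In other words, once almost everything lands in the same narrow band, an 8/10 tells you a product is merely average among the things being rated, not that it is objectively excellent.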
Even well-known review sites like SoundGuys and RTINGS, which still absolutely provide useful data, operate within a limited depiction of performance when it comes to their ratings. They don’t always consider the full theoretical range of what headphone sound quality could be, which means the scale homogenizes and compresses. Products that perhaps should fail often get a "soft pass" instead. That’s not good for brands, which would benefit from reviewers telling them how to improve their products, and more importantly, it’s not good for consumers.
“But Sound Is Subjective…”
Yes, there’s a subjective component to audio, but humans share broadly similar heads and ears. Acoustic realities matter, and a headphone that doesn’t remotely resemble a reasonable ear transfer function can’t always be justified as "a different flavor" that will work well for some tastes.
This isn’t about taste. It’s about performance ceilings and performance floors. Right now, the industry operates as though we’ve more or less reached the mountaintop. The truth? We’re not even close.
You can spend thousands on something like the HiFiMAN Susvara, the RAAL Immanis, or even the Sennheiser HE-1. We’ve all heard them, and none of us feel they represent “as good as it gets.” They all have meaningful compromises.
The Hard Truth: It’s About Compromises
If you take away one thing from this article, let it be this: Buying headphones today is less about achieving perfection, and more about deciding which compromises you’re willing to live with. The HiFiMAN Susvara has treble I personally love—but lean bass and mids that could be stronger. The Audeze LCD-4 delivers incredible bass texture—but has dark upper midrange and feels like strapping bricks to your head. The Sennheiser HD600 has gorgeous vocal timbre—but rolls off in the bass.
Every headphone that I love has trade-offs. I use EQ to mitigate many of the trade-offs—but the point is, we shouldn’t have to.
The sound quality you’re getting right now—even from the best gear in the world—could still be better. The fact that it isn’t simply reflects that the product has compromises, and we want to better reflect those compromises in our ratings.
We asked: Is it fair to score products only relative to what currently exists? Or should we score them relative to what’s possible?
We chose the latter.
That means the best headphone in the world doesn’t automatically get a 10/10. If it has weaknesses—and they all do—it loses points for those weaknesses. Yes, this results in lower overall scores than you’re used to seeing. That’s the point.
How Our System Works
We have our Sound score, which breaks down sound quality into four categories: Bass, Mids, Treble, and X-Factor. Each category is weighted based on reviewer priorities. For me, treble performance is a make-or-break factor, so it carries significant weight. Another reviewer might emphasize midrange or X-Factor more heavily.
These ratings are subjective listening impressions—not measurement-derived scores. Headphone performance varies from head to head, and no single graph can capture or communicate the totality of our experience.
“X-Factor” can be thought of as technical performance—or, as I sometimes frame it, a bullshit-tolerance coefficient. It’s the intangible quality that makes a headphone better (or worse) than the sum of its parts.
On top of the Sound score, we have an Overall score that includes Comfort and Value.
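As a rough illustration, the category breakdown described above can be sketched as a weighted average. Everything numeric below is a hypothetical assumption for the sake of the example: the weights, the X-Factor value, and the `weighted_score` helper are invented here and are not Resolve’s actual formula, so the result will not match any published score.

```python
# Hypothetical sketch of a per-reviewer weighted Sound score.
# Category names come from the article; the weights and the X-Factor
# value are illustrative assumptions, not the site's real numbers.

def weighted_score(scores: dict, weights: dict) -> float:
    """Weighted average of per-category scores (each out of 10)."""
    total_weight = sum(weights[c] for c in scores)
    return sum(scores[c] * weights[c] for c in scores) / total_weight

# One reviewer's (assumed) priorities: treble is make-or-break here,
# so it carries the most weight.
sound_weights = {"bass": 0.20, "mids": 0.25, "treble": 0.35, "x_factor": 0.20}

# Bass/mids/treble figures echo the Susvara example later in the
# article; the x_factor value is purely assumed.
sound = weighted_score(
    {"bass": 5.5, "mids": 6.0, "treble": 8.0, "x_factor": 7.0},
    sound_weights,
)
print(round(sound, 1))  # → 6.8 under these assumed weights
```

Because each reviewer sets their own weights, two reviewers can rate the same categories identically and still arrive at different Sound scores, which is exactly the "reviewer priorities" point above.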
What This Means in Practice
Take the HiFiMAN Susvara again. For me, it scores extremely well in treble (8/10), lower in mids (6/10), and lower again in bass (5.5/10). That doesn’t mean it’s bad. I love it. But it’s not perfect, and it shouldn’t be treated as if it is, which is why it gets a 7.1/10 for its Sound score.
Or look at the Audio-Technica ATH-R70x. It has some of the best mids I’ve heard (8/10). But the treble lacks precision and detail compared to higher-performing sets (4/10). So its overall ranking reflects that imbalance.
Even strong products land in the 6–8 range because we’re evaluating them against the full performance ceiling—not just against their immediate peers. Meanwhile, truly bad products will not quietly drift by with a 5/10 safety net. If something is bad, it will be treated as such.
A Higher Standard
If you’re a brand and you’re frustrated that your product isn’t getting a 9/10 anymore, understand this: you absolutely can earn that score. It just requires fewer compromises.
Our goal isn’t to tear products down. It’s to raise expectations.
The current media landscape often signals, “This is as good as it gets.” It isn’t.
There’s still enormous room for growth in headphones and IEMs. And if we collectively hold the industry to a higher standard, that growth becomes more likely.
This ranking system reflects that belief. If it feels stricter, that’s because it is.
