What Headphone Reviewers AREN'T Telling You
Ever wonder if there are any relevant pieces of information that most audio reviewers are leaving out of their reviews? The answer is yes, so Resolve is here to break down what exactly reviewers are leaving out of reviews, and why.
00:00 - Intro
01:20 - Reasons Why A Reviewer May Not Tell You Something
02:45 - Has the reviewer disclosed their preferences?
04:41 - Has the reviewer disclosed the bias of their measurement process?
07:17 - Has the reviewer disclosed their music choices/listening volume?
08:06 - Has the reviewer disclosed the limitations of their data collection/presentation?
10:10 - Has the reviewer disclosed any conflicts of interest or incentives to give a positive/negative review?
11:52 - Conclusion
Want to discuss this video? Start a discussion on our forum by clicking the "Start Discussion" button at the bottom of the page!
Transcription below:
Hello there, my name is Andrew and I'm a headphone reviewer. I sit and listen to headphones for a living. How thrilling. Today I want to talk about some of the information that headphone reviewers, including myself, aren't always sharing with the audience. You guys. And this applies whether it's the products being evaluated, the evaluative process itself, or the ongoing relationships these reviewers have with headphone brands and manufacturers. More generally, what is going on behind the scenes of a review. I think people would be helped by understanding the limitations of this format, as well as the perspectives their favorite reviewers may be coming from. So, without further ado, let's dive in.

As always, these videos are made possible by Headphones.com. If anything that I've said here today is interesting, entertaining, or valuable to you, definitely check out Headphones.com and consider them for your next audio purchase. In addition to that, that is also where we publish all of our written reviews and our educational deep dives. So if you want more information like this, that's all up on headphones.com as well.

Now as we get into this, I want to make it clear that this isn't a call-out of anyone. It's not drama, and it's not some sort of Resolve-versus-other-reviewers issue. Some of the things that I'm going to be mentioning here are issues that we at The Headphone Show struggle with as well. I just think it's important to pull back the curtain a little and clarify why headphone reviews end up the way they often do.

So the first thing I want to talk about is the reasons why a reviewer may not be telling you some of the things that they should tell you about the product or the review itself. Some reviewers certainly want to disclose every little detail and bit of information about a headphone that they can. But if we did that, these videos would be an hour or two long and nobody would want to watch them. So unless your favorite reviewer is regularly putting out 40-minute review videos, brevity and watchability are things the reviewer is likely trying to account for, and something we have to consider too. This may also explain why a review you're watching just isn't as long or thorough as you might like, but I'll be the first to admit that it's an incredibly hard balance to strike.

Another reason is that some reviewers may opt for a different style, something more focused on the casual experience of listening to headphones instead of nitpicking every tiny flaw they find. While this is totally fine, it's up to you, the viewer, to understand what kind of review you're watching, because not all reviews will be attempting to be equally detailed, thorough, or critical.

And lastly, the least common reason a reviewer wouldn't tell you something is that they are incentivized to hide important information about the product for one reason or another. And I want to be clear, this is not as common as people online think it is, but it does unfortunately happen. For that reason, it's worthwhile for viewers to mull over a reviewer's trustworthiness based on their track record of how they've discussed products in the past.

And with the why out of the way, let's talk about the what. The first thing that I've noticed many reviewers not disclosing that they probably should be is what their personal preferences actually are.
This is obviously important, because unless they're a robot, their personal preferences are likely to affect how every part of a review ends up. But something I've noticed is that for most reviewers, their personal preferences are both subject to change and, honestly, kind of an unknown. Obviously, we need to account for the fact that reviewers are human, and so it's totally fair for their preferences to change when exposed to new experiences. But it's worth keeping in mind that for most reviewers, their preference is potentially less concrete than you think.

Beyond that, while you might expect a headphone reviewer to have gone through the process of ironing out exactly what their sonic preference is, so they could judge every product against their ideal, the unfortunate reality is that most reviewers haven't done this with any sort of rigor, and preference doesn't necessarily work like that either. Most reviewers who've tried to quantify or express their own preferences are IEM reviewers. But because of the limitations of IEM measurements and target methodology until just recently, most reviewer targets are meaningfully flawed, and it's impossible to know how well these targets represent what the reviewer's most preferred sound would actually be. The problem here is: how do you know that their stated preference is actually their preference? This gets into the limitations of the most commonly available measurement equipment, and into the fact that we don't actually know the in-situ responses at reviewers' eardrums of the devices they're using. That's a deep and complex topic for another time, but I'll leave links to some of the videos where we've talked about this in the past, where you can get more information.

Additionally, we know from some of Dr. Olive's most recently published research that the same person can prefer two very different sound signatures equally. So it makes sense to allow for some level of deviation even for an individual's preference, because the truth is, different things can be preferred similarly. Now with that said, reviewers would do well to try and characterize their own preferences and relay that information to the viewers whenever convenient, because it's obviously a huge factor when it comes to the value judgments present in a typical review.

This brings me to the next topic reviewers aren't talking about as much as they should be, which is how subjective even the seemingly objective parts of the review process, like measurements, are. While it's probably obvious to a viewer that a person's judgment of how good a headphone sounds is going to be at least partly subjective, it's much less obvious how a measurement is subjective, especially when there's been so much inertia behind the idea that measurements are objective, or free from human fallibility. You probably think you have a good understanding of how a given audio product performs, but do you actually? We have to remember that the way the data is collected and presented is influenced by the biases of the person collecting and presenting it. This means that even though the measurement would ideally reflect the objective performance of that device as measured in whatever circumstance, more often than not it's much more complicated than that. And this isn't to say any of it is deliberate. It's just that different methodology can lead to very different outcomes.
For example, another reviewer and I could measure the exact same headphone, but if the way those measurements are collected and presented is different enough, and it likely will be, it's reasonable that the data itself, as well as the viewer's interpretation of the data, would vary significantly. Amir over at Audio Science Review has a methodology that shows a single seating of a headphone's response against the Harman target, which wasn't even devised for the ear that he's using. And I know what you're thinking: hold on, we all made that mistake at one point, because we didn't realize how massive of a variable that would be. By contrast, Mark Ryan at Super Review provides a raw visualization of an averaged three-seatings-per-side measurement against a tilted diffuse field target for his rig, which is different from Amir's methodology, and I would actually argue significantly better. And then our method is to diffuse-field compensate an average of ten seatings per side against preference bounds pulled from the Harman research, which is different still from either of those other two methods. All of these approaches to objective information are bound to leave viewers with different things to unpack.

Now again, the point here is not to throw shade at others for having a different methodology. That's a different conversation. In fact, I want to take a moment to shout out Super Review and recommend everyone watching this go subscribe to his channel. The point is rather to bring to light the fact that all of these measurements still come with a reviewer-specific value system attached to them. And for that reason, it's worth making clear to viewers that they need to be very careful about treating all of this data as objective, because it simply isn't. Not only are all these different rigs bound to produce different results, but these are also different operators with different data collection methodology, operating with different approaches to visualizations, targets, and normalization. So how the data gets captured, represented, and visualized is all still laden with the values of the reviewers.
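To make that concrete, here's a minimal sketch using entirely made-up placeholder curves rather than real measurement data, showing how two presentations of the exact same headphone can diverge once seating count and target compensation differ:

```python
import numpy as np

# Toy illustration: the *same* headphone, presented two different ways.
# Every curve below is a synthetic placeholder, not real measurement data.

rng = np.random.default_rng(0)
freqs = np.geomspace(20, 20_000, 200)        # Hz, log-spaced
true_response = 10 * np.sin(np.log(freqs))   # stand-in for the headphone's FR (dB)

def measure_one_seating():
    # each reseating perturbs the measured response, mostly above ~3 kHz
    treble_jitter = rng.normal(0, 3.0, freqs.size) * (freqs > 3_000)
    return true_response + treble_jitter + rng.normal(0, 0.5, freqs.size)

def averaged_measurement(n_seatings):
    return np.mean([measure_one_seating() for _ in range(n_seatings)], axis=0)

# two hypothetical compensation targets (again, placeholders)
harman_like_target = 8 * np.exp(-((np.log(freqs) - np.log(100)) ** 2))  # low-bass bump
tilted_df_target = -0.8 * np.log2(freqs / 1_000)                        # downward tilt

reviewer_a = averaged_measurement(1) - harman_like_target   # single seating
reviewer_b = averaged_measurement(10) - tilted_df_target    # 10-seating average

print(f"max gap between the two published curves: "
      f"{np.max(np.abs(reviewer_a - reviewer_b)):.1f} dB")
```

Same headphone, and the two published curves still disagree, partly from residual seating noise in the single measurement and partly because the compensation targets differ. That gap is methodology, not the headphone.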
Alright, on to the next topic. Reviewers rarely spend much time talking about the actual music they're listening to, what volume level they're typically listening at, and how that affects their preference for a headphone's sound. For example, I like to listen to serious audiophile music like jazz, and my music taste and typical listening level almost certainly have an effect on what kind of colorations I find tolerable, enjoyable, or neither. Someone who only listens to EDM at ear-splitting volumes will almost certainly have a different preference from someone only listening to classical music at much lower volumes. The spectral content of those two genres is vastly different, and the way that your ear and brain react to the different volume levels will differ as well. So keep in mind that if a reviewer hasn't mentioned what music they're listening to, or at what volume, your experience with a headphone is bound to differ based on how you and the reviewer differ in taste and typical listening volume.

The next part is more measurements-focused and has to do with how people measuring products, mostly IEMs specifically, are leaving out critical information. Talking about the differences that human anatomy is likely to impart to the experience with a product is helpful, but exceedingly few reviewers are actually showing and explaining the data that will drive this concept home for people. And this is something even we don't do sufficiently. For example, the treble of IEMs operates on a principle of modal behavior. Above 3 kHz, we get a pattern of resonances that is dependent on the length between the eardrum and the IEM driver. Whether due to anatomy, insertion depth, or eartip choice, the location and pattern of the treble peaks changes as well. Here you can see a visualization of that. But most people only show a single seating of the IEM instead of displaying the range of possibilities one could experience with a given product. This, along with the erroneous idea that measurements are objective, leads people to misinterpret the single data point as the truth about how this IEM will perform, when their experience will almost certainly differ in ways we can actually show. So what I'm saying here is not that their experience might differ; it is highly likely, if not guaranteed, to differ. Yet measurements are almost never read with that in mind.

And beyond that, while different eartips can make or break the comfort of certain IEMs, eartips also make a considerable acoustic difference, especially when it comes to this treble response. A swap of eartips alone can turn an IEM that sounds buzzy or harsh in the treble into something that's much more palatable, smooth, and easy to listen to. Again, essentially nobody, except for one or two people, is publishing comprehensive data on this. And even those who talk about their subjective preference for a certain kind of eartip often aren't showing how that eartip actually affects the sound. So in general, the treble of IEMs is typically shown as a single measurement, just one line, rather than the range I mentioned. And it's important to know that, especially in the treble, this can all change depending on the tips. So this is effectively a call-out to all of the IEM reviewers to do more of this, and I'm calling out myself in the process. We do this sometimes, but we don't do it enough. So I'm definitely in that category. I'm taking shots at myself.
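For a rough sense of why insertion depth and eartip choice move those treble peaks around, here's a minimal sketch. It approximates the residual ear canal between the IEM nozzle and the eardrum as a simple tube that's acoustically closed at both ends, which is a big simplification of real ear acoustics, and the lengths used are illustrative assumptions rather than measured values:

```python
# Toy model of IEM treble modal behavior: standing-wave resonances in the
# residual ear-canal length between the IEM driver and the eardrum,
# approximated as a closed-closed tube. Purely illustrative numbers.

SPEED_OF_SOUND_MM_PER_S = 343_000  # speed of sound in air, mm/s

def resonance_peaks_hz(residual_length_mm, count=3):
    """First `count` pressure resonances of a closed-closed tube:
    f_n = n * c / (2 * L). Shorter residual length -> higher-frequency peaks."""
    return [n * SPEED_OF_SOUND_MM_PER_S / (2 * residual_length_mm)
            for n in range(1, count + 1)]

# Hypothetical shallow vs. deep insertions (lengths are assumptions):
for label, length_mm in [("shallow insertion, ~20 mm to eardrum", 20.0),
                         ("deep insertion, ~12 mm to eardrum", 12.0)]:
    peaks = ", ".join(f"{f / 1000:.1f} kHz" for f in resonance_peaks_hz(length_mm))
    print(f"{label}: peaks near {peaks}")
```

Even in this crude model, a few millimeters of insertion depth shifts every treble peak by kilohertz, which is exactly why a single seating of an IEM can't represent the range of responses listeners will actually hear.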
Now, lastly, the big elephant in the room: some reviewers are incentivized to hype the positives and downplay the negatives about a headphone. Whether that's to keep relationships with brands intact, or keep the inflow of review units coming, or for literal financial reasons, this is something that does happen. I'm not saying all reviewers do this, or even that the majority of reviewers are doing this. The folks you guys are familiar with probably aren't. But for some reviewers, there's a very real incentive not to rock the boat. If you're too critical, brands might stop sending you headphones. And the reality is that access to gear means a whole lot when a reviewer needs headphones to even be a headphone reviewer. You need to have the stuff to be able to talk about it.

That's why transparency matters, and it's why we're incredibly clear in all of our videos that Headphones.com makes everything on this channel possible. They support us; they're the primary sponsor of what we do here. That includes getting us gear to review, while also having no input on the sentiment of what gets expressed about it. I bring this up because it provides an extra layer of security for us on the evaluative team: regardless of what we say, we'll always have access to headphones to talk about. And this is a security that most reviewers just don't have. There's no guarantee that if they say something negative about a product, that brand will send them something again in the future.

And beyond that, if someone's being sponsored, or if they're reviewing a free unit that they get to keep, or the unit was sent by a brand with the expectation of a positive review, that should be disclosed, but it rarely will be. This is stuff that happens, and we know it happens. So make sure that you examine the relationship between the reviewer and the company sending them the product to be reviewed, because this can provide a ton of context for the information they're giving you.

Okay, that's basically all I wanna say on this topic for now. Thanks for taking the time to watch this video. And if you guys wanna connect with me, you can do so at our forum at forum.headphones.com or our Discord, also linked below. Until next time, I'll see you guys later. Bye for now.