Our Review Principle and Score Guidelines

A look behind the curtain at our review principle and how we score our reviews.

Our Review Principle

As you may have picked up on, the Headphonesty scoring process strives for simplicity and clarity. There’s no value in a scoring system if it isn’t easily understood by both readers and reviewers. So, we define 3 as “average”, with better or worse positioned above or below this middle value.

We at Headphonesty have a responsibility both to the consumers who read our reviews and to the companies that make and provide the gear we review.

People often read our reviews in order to make a purchase decision. Are headphones A better than headphones B, and is either of them worth my hard-earned money? As fellow headphone enthusiasts and consumers, we acknowledge and own that responsibility.

We also owe the companies the truth. There is no gain in hyping inferior products, nor in dramatically overstating weaknesses. We’re here to provide a fair and honest assessment and to professionally present our experiences with a product. We don’t use inflammatory language. We don’t overly dramatize positives or negatives.

Companies can trust that we will treat them fairly and accurately portray the product. No more and no less.

We owe you the truth, and we want to earn your trust.

That doesn’t mean we always give glowing reviews to all products. Every review highlights both the positives and the negatives: the outstanding elements and the things that can be improved. The content of the article and the score reflect this balance.

Remember: the real value of the review is in the details contained in the written assessment and not simply the final assigned score.

You will notice that several words keep popping up throughout this article. Fair. Honest. Balanced. Truth. Trust. These are the fundamentals by which we define the Headphonesty review scoring process.

4 Types of Scoring Systems

At Headphonesty, we currently have 4 scoring systems:

  1. Oracle V1.0 🔮: For most audio gear reviews
  2. Sparrow V1.0 🐦: For true wireless (TWS) headphone reviews
  3. Falcon V1.0 🦅: For wireless headphone reviews
  4. Medusa V1.0 🐍: For wired headphone and wired IEM reviews

Let’s start with the most basic scoring system – Oracle.

Oracle Scoring System

We employ the Oracle scoring system as our general framework to evaluate most audio gear. In cases where a more specialized scoring methodology is not available, or if the reviewer is unable to adhere to a specific methodology, we default to the Oracle System.

The Oracle is currently at version 1.0.

The Oracle scoring system uses a 9-point scale, running from 1 to 5 in half-point increments, defined as follows:

| Rating Score | Score Meaning | Score Description |
|---|---|---|
| 1 | Very Poor | Very few to no redeeming virtues or elements. Complete failure on most or all essential aspects. Unusable. |
| 1.5 | Poor | Few redeeming qualities. Sound quality failure. |
| 2 | Substandard | The product has one or more serious flaws. |
| 2.5 | Below Average | Not quite up to expectations or the performance of other comparable products. Some aspects are not up to par, but it is still usable. |
| 3 | Average | A decent representative of value and sound quality at its price point. |
| 3.5 | Above Average | Acceptable sound quality and has one or more elements to distinguish itself from its direct competition. |
| 4 | Good | Better than average sound quality at its price point and represents a good value. |
| 4.5 | Very Good | An excellent sounding product that is almost perfect. |
| 5 | Outstanding | A stellar product that not only sounds fantastic (regardless of price) but is also far above expectations in all categories. Sets the standard of excellence for its price point. |
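For readers who think in code, here is a minimal sketch of the Oracle scale as a lookup table. The nine score values and their labels come straight from the table above; the names `ORACLE_SCALE` and `label_for` are our own illustration, not part of any official tooling.

```python
# Oracle V1.0: nine scores from 1.0 to 5.0 in half-point steps.
# Labels are taken verbatim from the table above; the names here
# (ORACLE_SCALE, label_for) are illustrative only.
ORACLE_SCALE = {
    1.0: "Very Poor",
    1.5: "Poor",
    2.0: "Substandard",
    2.5: "Below Average",
    3.0: "Average",
    3.5: "Above Average",
    4.0: "Good",
    4.5: "Very Good",
    5.0: "Outstanding",
}

def label_for(score: float) -> str:
    """Return the Oracle label for a score, rejecting off-scale values."""
    if score not in ORACLE_SCALE:
        raise ValueError(f"score must be one of {sorted(ORACLE_SCALE)}, got {score}")
    return ORACLE_SCALE[score]

print(label_for(3.0))  # Average -- the middle of the scale, by design
```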

Other Scoring Methodologies

Besides our general audio gear review criteria, we’ve developed more complex methodologies. These consist of multiple criteria specifically tailored for different types of audio gear, such as wireless headphones, true wireless headphones (TWS), and wired headphones and IEMs, addressing their unique characteristics and performance.

Frequently Asked Questions (FAQ)

Is a score of 3 bad?

No. A product rated 3 stars is literally defined as “a decent representative of value and sound quality at its price point.” This is where Headphonesty diverges from many other sites whose 5-point rating systems appear superficially similar. Some sites seldom, if ever, rate a product at 3 stars or below, so their 3 is very likely not defined the same way as ours (we define 3 as “Average”).

How does price impact the scoring process?

A fundamental scoring question is whether we should compare an item only to other products in the same general price range, or also to all products on a universal continuum (regardless of cost).

It is an interesting question. Should a $100 item be held to the same absolute scale as a $1000 item? Understandably, people have greater expectations for a more expensive product.

The relationship between cost and sound quality is non-linear.

That is, sound quality does not necessarily improve in direct proportion to the increase in cost. For example, a $100 item does not sound twice as good as a $50 item, a $500 item is not a tenfold improvement over it, and a $1000 item is not a twentyfold improvement.

Headphonesty scores products by comparing them to other products in the same general price tier, with sound quality serving as the ultimate influence in determining an item’s score.

We use 6 general Price Tiers, and the Cost Range of each level widens as the tiers increase in value. The expanding cost range is intended to address the non-linear function of price and performance.

Price tiers for comparison and scoring

| Price Tier | Product Cost | Cost Range |
|---|---|---|
| 1 | <$50 | $50 |
| 2 | $51-$150 | $100 |
| 3 | $151-$300 | $150 |
| 4 | $301-$500 | $200 |
| 5 | $501-$1000 | $500 |
| 6 | $1000+ | $1000+ |
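As a rough sketch of how a product’s cost maps to a tier, assuming simple upper-bound cutoffs taken from the table above (the `price_tier` helper is our own illustration):

```python
# Map a product's cost (in USD) to its Headphonesty price tier.
# Boundaries come from the table above; note how each tier's cost
# range widens, reflecting the non-linear price/performance curve.
TIER_UPPER_BOUNDS = [50, 150, 300, 500, 1000]  # tier 6 is open-ended

def price_tier(cost: float) -> int:
    """Return the price tier (1-6) for a given product cost."""
    for tier, upper in enumerate(TIER_UPPER_BOUNDS, start=1):
        if cost <= upper:
            return tier
    return 6  # $1000+ lands in the top tier

print(price_tier(99))    # 2 -- compared against other $51-$150 products
print(price_tier(1500))  # 6 -- compared against other $1000+ products
```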

What are the elements that we consider when reviewing?

When considering the performance of a product, we try to keep the following 4 review elements in mind.

  1. Sound Quality – Sound quality is the most important element and the most heavily weighted (a hypothetical weighting is sketched after this list). The review should factor in the purpose of the product and the target audience. If the product is clearly made for a purpose other than ‘ultimate fidelity’ (bone-conduction earphones, for instance), it is unfair to compare it directly to audiophile designs; it should instead be compared to other, more similar items.
  2. Build Quality – We must address the quality of materials and construction in every review. This discussion should include interior and exterior design and aesthetics. We should also mention if a product has an outstanding aspect or warranty (a lifetime warranty, for instance).
  3. Included Accessories – While the product’s quality is far more important than the packaging, the included accessories add to an item’s intrinsic worth or its flexibility for a variety of uses. Things like ear tips or pads can have a dramatic impact on sound quality.
  4. Comfort/Fit – What could be more subjective than the comfort or fit of an item? It depends on the size and shape of the individual’s head and ears. If an item is unusable due to comfort or fit issues, this should be clearly stated, and we should explore whether the issue is unique to the reviewer or a severe design flaw.
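To make “most heavily weighted” concrete, here is one way a weighted composite could be computed. The weights below are purely hypothetical, chosen only to illustrate sound quality dominating the result; Headphonesty does not publish a numeric weighting formula.

```python
# A hypothetical weighted blend of the four review elements.
# These weights are illustrative only -- not Headphonesty's actual
# formula; they simply encode "sound quality is weighted most heavily."
WEIGHTS = {
    "sound_quality": 0.55,
    "build_quality": 0.20,
    "comfort_fit": 0.15,
    "accessories": 0.10,
}

def composite_score(scores: dict) -> float:
    """Blend per-element scores (each on the 1-5 scale) into a single
    score, rounded to the nearest half point to stay on the 9-point scale."""
    raw = sum(WEIGHTS[element] * scores[element] for element in WEIGHTS)
    return round(raw * 2) / 2

example = {
    "sound_quality": 4.5,
    "build_quality": 4.0,
    "comfort_fit": 4.0,
    "accessories": 3.0,
}
print(composite_score(example))  # 4.0 -- "Good"
```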

What are the difficulties with achieving consistent scores?

Headphonesty consists of more than just one person. We have several contributing freelance reviewers, who are real, live people. And everybody has different preferences, expectations, and experiences, not to mention ears! Some of us have measurement gear, some don’t. Some have middle-aged ears, with perhaps greater experience, but also likely some high-frequency hearing loss.

All these differences lead us to that important term: subjectivity.

There are no truly objective reviews. It’s impossible.

But that doesn’t mean we shouldn’t strive for continual improvement in consistency across all reviewers. With multiple reviewers, we must calibrate our results by clearly defining expectations and updating those expectations when new questions or scenarios pop up.

The majority of our reviews are edited by a single person (the author of the article you are currently reading), which goes a long way toward consistency across all reviews. Part of the editing process is to ensure the assigned score is in line with what is written in the review.

As a reader, you should keep in mind that scores are most valuable when comparing the reviews done by a single author. Even so, that reviewer’s expectations and scoring will likely change over time.

You should strive to find reviewers whose tastes and experiences are similar to your own and place value on their opinions above others’.

Do manufacturers influence the review or score?

There is a lot of general distrust regarding the relationship between review sites and companies. Are reviewers simply shills for companies in return for advertising dollars or free stuff? If all scores are high, it certainly gives that impression.

Frankly, this is the slippery slope that erodes consumer and reader confidence in reviewers and manufacturers. If only positive reviews are ever published, it entirely undermines the industry. This is precisely the grand conspiracy that floats around online.

If a manufacturer is confident that they have produced a high-quality product, they should welcome, rather than fear, independent reviews. The value and power of reviews are contingent on their remaining independent. A company’s reputation is built on making great products and treating its customer base with respect.

One benefit of having freelance reviewers (that is, reviewers who may write for more than one site and have no ownership stake in Headphonesty) is that we are not compensated directly by the manufacturer. The reviewer is beholden only to Headphonesty management and the review editing and scoring process.

Headphonesty has fairly stringent standards as to the format and quality of reviews, but there is no requirement for a score beyond a simple word that encompasses the site’s philosophy: honesty.

Yes, the site name comes from “Headphone-Honesty” not “Headphone-Sty” although, depending on the reviewer’s desk (mine included), it’s understandable where the confusion might come from.

The reviewers are not the same people who manage the website rankings, SEO, advertising, and all that goes into maintaining, growing, or generating revenue from Headphonesty. This is an important distinction, and Headphonesty has no paid sponsors that receive special compensation or treatment in our reviews.

Why do the scores not average in the middle of the range?

A perfect bell curve does not improve the validity of the scores, but transparency regarding the review and scoring process does. However, that isn’t to say it’s right to never use the full scoring range and only score products between 3.5 and 4.5.

There are several limitations to be aware of. Headphonesty, like all other review sites, can’t possibly purchase every product it reviews, and some manufacturers do provide review samples (this is noted at the top of every review). Companies aren’t likely to submit many duds for review, so it’s likely we see more of their good performers than not. Every site sees only a small percentage of all the headphone products out there.

Also, products at the extreme low end of the scale, those we define as having “very few to no redeeming virtues or elements; complete failure on most or all essential aspects; unusable,” are unlikely to make it to market. Unsurprisingly, we don’t see many products worthy of this 1-star rating.

So yes, we encounter mainly products that can be considered ‘above average’ or ‘good’. That is, the product has “acceptable sound quality and has one or more elements to distinguish itself from its direct competition” or has “better than average sound quality at its price point and represents a good value.”

There’s nothing wrong here, and scores averaging between 3.5 and 4 are to be expected. However, on a valid rating scale, there should be outliers. A few products really are that good (or bad). It just makes sense.

Understanding the process is the key to trusting it.

Every site has its own way of doing things. Just because they are different doesn’t make them wrong. Trust the ones that take the time to explain their reasoning and processes, and make your own decisions. No process is perfect, but you shouldn’t have to accept a scoring process that isn’t clearly defined.