A word or two about the 100-point scale, so controversial these days among those who engage in the critical evaluation of wine.
Those opposed argue that it is impossible to assign a wine a numerical rating with such specificity. They mock the scores that accompany reviews in popular wine publications such as The Wine Advocate (Robert Parker's famous wine newsletter) and Wine Spectator.
They ignore the reality of the critic's task, regardless of the topic under review. Arriving at a wine recommendation involves, among other things, a deliberate winnowing of options until the field has been narrowed and clear preferences established.
My nationally syndicated Wine Talk column long ago embraced numerical ratings, aka scores. As a wine journalist, I once shunned the practice of putting a number to a wine. I believed, naively I now think, that my prose was the most important element of the wine review. That sort of thinking has a serious drawback, driving the reviewer to ever more outlandish inventions of adjectives and wine descriptors.
The question I got most frequently from readers boiled down to this: "Which wine did you like best?" My position on scores evolved over time as I realized consumers weren't all that interested in my wonderful vocabulary; they had to buy the wine for the dinner party that weekend and wanted to know which of the three Chardonnays reviewed that week was my favorite.
Well, duh. Soon a new philosophy was born, one I maintain to this day. The Wine Talk scores measure one thing: my enthusiasm for the wine being recommended. On rare occasions, I've given a wine the full 100 points. Such wines do exist.
Opponents of the 100-point scale, almost never consumers, argue that scores are not a valid measurement of wine quality because few reviewers can replicate the same score over and over for the same wine. I fail to see how a letter grade, such as an "A-," does a better job in the "repeatability" arena. Nor do weasel words such as "outstanding" or "exceptional," which are always open to interpretation.
My hunch is that those who stress the repeatability issue do so because they lack confidence in their own ability to nail a wine on consecutive passes in a blind tasting.
Assigning a score to a wine would pin them down forever in a way no weasel word ever could. And a letter grade always leaves a little wiggle room that a specific score wouldn't.
To be sure, repeatability is an important aspect of critical wine evaluation, even when you take into account bottle and temperature variation, which can alter perceptions in subtle ways.
It is important enough that judges at Concours Mondial de Bruxelles, the world's largest wine competition, are tested frequently for repeatability over the course of the three-day evaluation process. Judges whose scores on the same wine exhibit an unacceptable degree of fluctuation (more than a few points) are weeded out.
The fact that many of the judges are the same year after year is testimony to their ability to peg the quality level of a wine tasted blind twice over the course of a tasting session. Is that blind luck? I think not.
Those who think this can't be done probably believe that because they can't do it themselves.
What makes the discussion of the reliability and repeatability of scores particularly relevant today is the growing use of the 100-point scale among U.S. wine competitions. The Los Angeles International Wine Competition was the first to embrace scores a few years ago.
The object was to make it easier for consumers to interpret the meaning of a gold medal and to understand that some gold medals were more significant than others because they came with higher scores.
Other wine competitions have followed this lead, including the recent San Diego International, which I manage for the Social Service Auxiliary of San Diego, a non-profit charitable organization.
"Here in the tasting room, customers are loving the dual awards system," said Thrace Bromberger, who manages the tasting room on the Sonoma square for Charles Creek Vineyards. "And we love it because we got a 98 for one of our wines, and a couple of scores in the low 90s for other wines."
That, in a nutshell, is the crux of the issue. Consumers love scores. Consumers understand scores. Consumers couldn't care less how many adjectives a wine reviewer can summon in the basic, garden-variety, often interminably boring wine review.
To the extent that consumers read wine reviews for recommendations, they essentially want to know one thing: How much do you like this wine? That answer should be ever so simple, and numerical.
Follow Robert on Twitter at @wineguru.