Last week, Michigan's doyenne of wine writers, Sandra Silfven -- perhaps basking in the afterglow of an expense-paid junket to judge at the Riverside Wine Competition in California -- waxed rhapsodic about how such competitions "tell the real story of quality."
As opposed to reviews in "big glossy wine magazines" which, she suggests, rate wines on the basis of "the label or who advertises."
A grenade clearly lobbed in the direction of the Wine Spectator and Wine Enthusiast, neither of whose conflicts of interest I'd defend, either. But the collateral damage -- a message that medals are superior to reviews -- sprays shrapnel on every wine reviewer, from the matte-paper, commercial-free Robert Parker on down.
And it's plain wrong, especially where consumer value is concerned.
Let's start with the basics. Unlike consumer-oriented wine reviews, competitions are largely funded by and exist for the sake of wineries. In review-starved places like Michigan, medals supply an important currency for wineries to buy positive marketing buzz.
Nothing wrong with that. But it means that competitions structure the rules of the game to guarantee maximum payback with least downside risk to those who foot the bills. And that's not you or me.
For example, have you noticed that competitions have no losers -- only winners? Wines that don't get medals are quickly "disappeared."
At Riverside, wineries entered 1883 wines. Judges gave medals to 1247 (66%) of them; you'll find their names on the website.
What about the other 636 wines? As a consumer, wouldn't you like to be able to find out what they were -- if only to avoid them on the store shelf?
Sorry, you'll never know. The "Entered, no medal" list doesn't exist.
What else don't competitions readily tell you? How freewheeling or stingy they are with the hardware. Riverside handed out medals to 66% of its entries. Last year's Michigan Wine Competition tipped the scales at 73%. The All-Canadian Wine Championships, held last week in Windsor, limits awards to 30%.
Does this make a difference to consumers in the quality of wines that come home with medal bragging rights? You betcha.
Competitions only give consumers one bit of information: who won a medal. And even that's suspect.
Just ask any owner or winemaker. The first thing they'll tell you is that medals mostly represent the luck of the draw: which random table of judges happened to taste your wine, what other wines were on the table in the same flight, and when during the day they tasted it.
How else do you explain why judges at the Pacific Rim Competition created a special trophy for Best Rosé, just so they could award it to 45 North's 2008 Pinot Noir Rosé -- and two weeks later, the same wine came home with an also-ran silver medal from the Tasters Guild?
That's like Parker scoring a wine 96 and the Spectator 81 -- a split decision you rarely see among major critics. But it happens all the time at competitions.
One reason: time is a luxury that competitions deny their judges, who may taste 100+ wines in a day. Wham, bam, thank you ma'am -- grab a quick impression of the wine, cast your vote, and on to the next flight. Not to mention the fog of palate fatigue that can set in late in a long tasting day.
Another reason: with only four things they're allowed to say (gold, silver, bronze, no medal), judges need to shoehorn each wine into a size, whether it fits well or not. There's no room for subtlety or shading, as in a review, and one strong voice at a judging table can sway everyone else into substantially raising or lowering an award level. Every judge has seen this happen.
I'm tuned into this at the moment, because last Sunday MichWine's tasting panel sat down to taste a dozen Michigan rosés for upcoming reviews.
(For the record: we taste blind so labels don't come into play, and we don't take winery advertising. We do accept free review samples, since it's a stretch to imagine being ethically compromised by the chance to swirl and spit a group of $15 wines.)
At the end of the day, I'm much more satisfied with the panel's results than with those from a competition -- especially from a consumer perspective. Why? Here are three advantages a good review process offers consumers that no competition can match:
(1) Reviewers can taste more like consumers drink -- taking time to evaluate, let a wine air out, and retaste after it's been open a while. Last Sunday, we spent nearly an hour on each four-wine flight, commenting repeatedly on how the wines changed in the glass as we went along.
(2) Reviewers can differentiate among wines, not just slap labels on them. The tasting panel's three top rosés scored within a point of each other; at a competition, they might well have each earned the same medal. But each is completely different in character, flavor profile, and the foods it would pair best with. A good review tells you this; a medal can't -- and that helps you be a smarter wine consumer.
(3) Reviewers can help you train your own palate, because there's substance behind the score. You may agree or disagree with what we say about a wine, but the descriptions give you a jumping-off point for your own evaluation. You're never left to merely scratch your head and wonder, "Why on earth did they give a gold medal to THAT one?"
I just read this statement from Robert Parker on his website, which expresses some of my own sentiments about competitions:
A look at the results of tasting competitions sadly reveals that well-made mediocrities garner the top prizes, and thus blandness is elevated to the status of a virtue. Wines with great individuality and character never win a committee tasting because at least one taster will find something objectionable about the wine.