People who run wine competitions around the U.S. must wish they never heard the name Robert Hodgson.
Hodgson, a winery owner and retired professor from Cal State University (Humboldt), recently embarked on a new career: statistically debunking the reliability of wine competition medal awards.
Earlier this year, he published a study in the scholarly Journal of Wine Economics (who knew?). He concluded something that many winemakers and critics have long suspected: which medals wines get at competitions depends at least as much on who does the judging as on the wine itself.
Over four years, Hodgson tested 65 panels of judges at the California State Fair Wine Competition by slipping three samples from the same bottle into large judging flights.
The results? Fewer than half of the panels consistently gave the same wines the same medal. In one extreme case, a panel rejected two samples of a wine and awarded the third a double gold.
Individual judges performed far worse than the groups: just 10% of them consistently awarded the same medal to the same wine.
Last week, Hodgson dropped the other shoe. In a new article that's lighting up online discussion groups nationwide, he examined over 4000 wines entered in 13 major wine competitions. His devastating indictment:
> The probability of winning a Gold medal at one competition is stochastically independent of the probability of receiving a Gold at another competition, indicating that winning a Gold medal is greatly influenced by chance alone.
Among 2440 wines entered in more than three competitions, 47% (1142) took at least one gold medal. For those, like me, who frequently question the medal madness, that number alone calls the entire process into question.
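A quick sanity check shows why that 47% figure is consistent with pure chance. If every competition awarded gold independently with some fixed probability p, a wine entered in k competitions would win at least one gold with probability 1 - (1 - p)^k. The rate below (p = 0.14) is an illustrative assumption, not a figure from Hodgson's data:

```python
# Hedged sketch: golds awarded independently at an assumed rate p.
def p_at_least_one_gold(p: float, k: int) -> float:
    """Chance of winning >= 1 gold across k independent competitions."""
    return 1 - (1 - p) ** k

# A wine entered in 4 competitions, under the assumed 14% gold rate:
print(round(p_at_least_one_gold(0.14, 4), 3))  # -> 0.453
```

In other words, modest per-competition gold rates plus multiple entries are enough to put nearly half of all wines on a gold-medal podium somewhere, even if the judging were random.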
Wherever Hodgson looked, he found inconsistencies. Among them:
- 84% of the 1142 gold medalists received no medal at all in at least one other competition. In other words, one panel's top wine was rated sub-par by another.
- Not one wine that entered four or more competitions received gold medals at all of them.
- Of 375 wines that entered five competitions, 132 (35%) won a gold medal somewhere along the way. But just six took gold at three competitions, and none at four or more. And 98% of those gold medalists also got a bronze medal or no medal at all in at least one other competition.
- The median correlation between results at any two of the 13 competitions: 0.10. The highest: just 0.33.
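The pattern in the list above is exactly what random awards would produce. Here is a small Monte Carlo sketch in which medals are assigned purely at random and independently per competition; the gold and silver/bronze rates (10% and 30%) are assumptions for illustration, not Hodgson's published figures:

```python
import random

random.seed(0)

GOLD, OTHER_MEDAL, NO_MEDAL = "gold", "medal", "none"

def random_award():
    # Assumed rates: 10% gold, 30% other medal, 60% no medal.
    r = random.random()
    if r < 0.10:
        return GOLD
    if r < 0.40:
        return OTHER_MEDAL
    return NO_MEDAL

# 10,000 simulated wines, each entered in 5 competitions.
wines = [[random_award() for _ in range(5)] for _ in range(10_000)]
golds = [w for w in wines if GOLD in w]
no_medal_elsewhere = sum(NO_MEDAL in w for w in golds) / len(golds)

print(f"{len(golds) / len(wines):.0%} won at least one gold; "
      f"{no_medal_elsewhere:.0%} of those got no medal somewhere else")
```

Under these assumptions, roughly 40% of wines collect a gold somewhere, and nearly all of those gold medalists also go home empty-handed from at least one other competition, mirroring the kind of inconsistency Hodgson found without any difference in wine quality at all.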
Hodgson drew three conclusions:
- There is almost no consensus among the 13 wine competitions regarding wine quality.
- For wines receiving a Gold medal in one or more competitions, it is very likely that the same wine received no award at another.
- The likelihood of receiving a Gold medal can be statistically explained by chance alone.
Blogger Joe Roberts (1 Wine Dude) has since lobbed a few grenades in Hodgson's direction, calling his statistical analysis "pseudo-science" and "bordering on being totally irresponsible".
One major issue: pooling 13 competitions may reveal inconsistencies among them, but it can't tell us whether those inconsistencies arise because judging is genuinely random or simply because the study mixes competitions that do a great job picking top wines with others that don't.
Meanwhile, other wine writers, like Alder Yarrow at Vinography, are essentially saying "I told you so" when it comes to wine competitions.
Around here, the advice has always been: Drink what you enjoy, or get advice from folks whose palates you trust. Always keep in mind that most competitions exist for wineries to use as marketing tools, not for consumers to trust as buying guides.