Book Review: The Beer Trials
The Beer Trials
Seamus Campbell and Robin Goldstein
Fearless Critic Media, $14.95
It always feels like I'm the last to learn about cool beer events that happen around town (of the Associated Brotherhood of Portland Beer Bloggers local 503, I am the designated hermit), so it's no surprise that I didn't know Seamus Campbell and Robin Goldstein were quietly putting together a book based on a series of blind tastings conducted right here in River City. The idea sprang from The Wine Trials, a successful book Goldstein put out a couple of years ago, wherein wines were subjected to blind taste tests by a panel of judges and then rated on a ten-point scale. In The Beer Trials, the process was similar:
Each beer was tasted by a rotating subset of the larger tasting panel. At each tasting, we had a steward who was responsible for selecting beers for each of the evening's panel of three to six tasters.... Beers were grouped with other beers of the same family--and the same style when possible--and served in flights of three, six, or (occasionally) nine beers.

Niki Harrison Ganong (aka Suds Sister) was one of the tasters, and she expanded on the experience:
Though the beers were roughly grouped by style, we did not know anything about them (including the style). They were poured in another room and brought out on trays and were numbered. We had a form and a space to write comments for each beer. We also rated them numerically in categories ranging from hoppiness to mouthfeel to bitterness. No table talk until everyone was done writing.

The goal was pretty obvious: separated from their label, fame, and our memory of them, how would these beers stack up when tasted blindly? The results of the tasting "trials" form the main substance of the book.
Let's start with what worked. The book includes ratings for 250 beers (most of which came from Belmont Station, so they're available in Portland!), and they're rated within categories. I generally find the scores credible, and therefore surprising. The panels gave Rodenbach a 7 and Rodenbach Grand Cru a 9. Good! A number of world classics received the panel's highest scores: Saison Dupont, Paulaner Hefe-Weizen, Pliny the Elder. A number of well-regarded beers got middling scores (Full Sail Pale, Widmer Hefeweizen, Rochefort 8), while some poorly-regarded or lower-profile beers scored highly (Singha, 7)--exactly what you expect from a blind tasting. Broken Halo was awarded the highest score in the IPA category, which, given that category's competitiveness, was an eyebrow-raiser. To the extent that you can find overlooked beers that scored well, the book is quite handy.
Now to what didn't work. One big problem is the ratings, which aren't scaled. While the front matter says it's a ten-point scale, it's really only seven: no beer got a 1, 2, or 10. Worse, the median score is a 7, meaning that half the sample is bunched up at the top end (but not the very top!). I would really have liked to see a weighted sample that spread out over the full scale. Better still, it would have been nice to see the weighting applied to each "family" of beers (the book doesn't divide them up by style), so you don't end up with the problem they faced in the pale ale category, where the highest score was an 8 and 28 of the 44 beers got a seven or an eight.
Another problem is the pale lager category, which contains everything from Czechvar and Prima Pils to Pabst and Keystone Light. It's the largest category, with 81 entries--most of them industrial lagers. While there's something vaguely interesting in seeing Pabst (6) beat out Miller High Life (5) and Bud (4), it hardly bears giving each of these beers a one-page treatment. And when you devote 81 of your 250 slots to crap beer, that leaves fewer slots for the beers we really care about. Finally, there is the age-old problem of rating beers within categories. While Pabst may well be a "6" as light lagers go, how are we supposed to compare it to other sixes like Lindemans Framboise, Full Sail Pale, or Samuel Smith's Oatmeal Stout? The casual reader may not know.
And this is really where I begin to wonder: who's the audience for this book? The Wine Trials made a lot of sense. When you stand in front of a wall of wines at the grocery store, you see some for $8, some for $25, and some for $60. These prices may not reflect actual quality--a well-regarded vintner may have had an off-year, while an obscure Chilean vintner had a stellar one. Unless you drink a huge amount of wine, you can't know which vintages and wineries to choose. Beer is different. If you've had a sixer of Deschutes Mirror Pond (Beer Trials rating: 7), you know what it tastes like. You don't have to wonder if it still tastes the same. And even if you're standing there looking at a beer you haven't tried, say Boddington's Cream Ale (Beer Trials rating: 4), you figure that at three bucks a can, you're not really out anything if you try it. And is anyone standing there in front of the 36-packs of Icehouse thinking, "Gee, I wonder what this got in Beer Trials?"
I hope the book sells, because I hope all beer books sell. I would like to write a few, and it would be cool to see a robust market developing. But I'm not sure who will pick up this book and decide they need it in their collection. It's a worthy experiment, but the execution strikes me as having a few beta-version bugs. My suggestions: weight the beers, ditch the macros, and expand the whole selection to include a thousand beers or more. That's a book I want.