Winemaker Robert Paul responds to Tom Carson’s letter defending the Australian wine show system.
Tom Carson makes a valiant attempt to defend the Australian wine show system as it exists today (letters to the editor, 5 October 2024).
However, he makes the mistake of conflating change with improvement.
This is a highly questionable assumption, and he offers no evidence of how the changes he and his colleagues have introduced have improved the quality of wine show judging.
Perhaps Mr Carson only intended to demonstrate that judging panels have now become more diverse.
If that was his aim, then he has succeeded.
If, however, he wishes to convince the reader that wine show judging itself has improved because of these changes, then he needs to go further.
Of course, it is reasonable to assume that a more diverse judging panel may give a better result.
However, it is just as reasonable to assume that a varied group of judges will simply give a more varied set of results.
Who does this help? Certainly not the confused exhibitor.
Nowhere in Mr Carson’s letter can I find any reference to exhibitors, without whom there would be no show system.
People enter wine shows for several reasons and with several outcomes in mind, but the most common complaint I hear from exhibitors (apart from the expense) is the inconsistency of results from show to show.
I suggest one reason for this is inadvertently stated by Mr Carson when he says proudly that nearly fifty people have been involved in judging at Brisbane in the last three years.
Why is this good? How does it help to improve the standard of judging? Where is the evidence?
There is no mention in Mr Carson’s letter of how important it is to select good judges, by which I mean those who can judge reliably, consistently and with repeatability. This is, after all, what exhibitors want.
If wine shows have abandoned one of their most important traditional roles, that of ‘improving the breed’, and become simply another marketing device, then the selection of quality judges is not important, and judging panels can be as diverse as one likes if that makes for a better story to pass on to marketing departments.
However, unless wine shows are to descend into relativism, wine show committees need to ensure that the judges they select are doing their job properly.
Yes, the AWRI has the excellent AWAC system for evaluating potential judges, but I suggest that individual shows also have a responsibility to assess their chosen judges in a formal way, rather than simply through discussion between panel chairs and the chief judge.
This is a difficult, time-consuming and potentially embarrassing project but one that can provide exhibitors with more confidence in the show judging system.
With entry numbers in most shows on the decline, this is more important than ever.
I have to agree with Robert Paul’s sentiments that Tom Carson’s defence of the Australian wine show system, while well-intentioned, falls short of providing concrete evidence that the changes implemented have truly improved the quality of wine show judging.
Tom’s letter primarily focuses on increased diversity and rotation policies, but fails to address the core issues of judging quality and consistency that are crucial for exhibitors.
Tom emphasises the increased diversity in judging panels, highlighting the shift from a system dominated by “old white men” to one that includes a broader range of industry professionals. While diversity is generally positive, it does not inherently guarantee improved judging quality. The assumption that a more varied group of judges will produce better results is questionable without supporting evidence. For example, having sommeliers on judging panels has created a scenario where fashion and style are being lauded over technical quality.
The implementation of a rotation policy, as described by Carson, has certainly increased the number of individuals involved in judging. However, the frequent turnover of judges raises concerns about consistency and expertise development. The proud statement that “nearly 50 different people have judged at Brisbane” over three years does not necessarily translate to improved judging standards. It just means that 50 different opinions end up going through the gatekeeper, who makes the final call anyway.
As Robert notes, a glaring omission in Tom’s letter is any mention of the exhibitors, who are the backbone of the wine show system. The primary concern for many exhibitors is the inconsistency of results across different shows. This is only exacerbated when very similar, if not identical, judging panels at different shows come up with absurdly different results within a three-week time span.
Tom fails to address the critical aspect of judge selection and evaluation. The focus should be on choosing judges who can consistently and reliably assess wines, rather than simply diversifying the panel. Implementing formal assessment methods for judges, beyond the Len Evans Tutorial and the Australian Wine Research Institute’s Advanced Wine Assessment Course, could provide exhibitors with greater confidence in the judging process.
There is a concern that wine shows may be shifting away from their traditional role of ‘improving the breed’ towards becoming marketing tools. If this is the case, the emphasis on judge quality becomes less important, and diversity can be prioritised for marketing purposes. However, this approach risks undermining the credibility and value of wine shows for exhibitors.
While Tom’s letter highlights positive changes in terms of diversity and rotation within the Australian wine show system, it fails to provide evidence that these changes have improved the quality of judging. To truly defend and improve the system, there needs to be a greater focus on judge selection, evaluation, and consistency of results. Only by addressing these core issues can the wine show system maintain its relevance and value for exhibitors in an era of declining entry numbers.
Quality comment. The judging system reached the limits of human ability decades ago, as improving the breed by removing technical faults petered out. However, what is judged each year, and how the results are tabulated, is of immense value in promoting Australia globally. How to use all the annual results should be the direction of thinking. To those who imagine a perfect system, in which a gold in one show would be duplicated in all others, I suggest you think again. There is no final taste standard to which all drinkers aspire. Why? Because people’s tastes vary, so a one-taste-suits-all approach ultimately leads to a giant factory. All tastings can do is sort wines: this group of wines gives drinkers better value than that group, or at least that’s what we think. So be thankful for the odd trophy that can be used in marketing.