Chris Hodder looks into how to assess the value of advice.
With COVID-19, understandably, never out of the news, journalists have been struggling to find new angles rather than simply regurgitating statistics. One subject raised repeatedly in recent weeks has been the quality and independence of the advice the government has been receiving from the mysterious “SAGE”.
SAGE stands for the Scientific Advisory Group for Emergencies and, despite the social media fury, was not created solely for this pandemic. In fact, it goes back several years, and its membership is reconstituted to cover the expertise needed for each event. For instance, when it convened to discuss the 2015 Nepal earthquake, it included several members from the British Geological Survey, and when it discussed the Fukushima nuclear accident, experts from the National Nuclear Laboratory were present.
Although there has been much scrutiny over who was in which room and when, the key to any advice is whether it is any good and, annoyingly, who was involved does matter here. There are several things to consider:

1. How complete is the information the advisors are working from?
2. Do they have experience of the problem at hand?
3. Is that experience still relevant in the current context?
4. Is the range of expertise wide enough?
5. Do the advisors have a stake in the advice being right?
Point 1 is the most challenging in any situation. Nobody is omniscient, so there will always be gaps in your knowledge. The challenge is being aware of what you do and don’t know and being able to assess the potential impact of both: if you have flawed or incomplete information on key points, the quality of the advice won’t really matter. A group is likely to hold more information collectively than any individual, but then the challenge is not to become overconfident that your information is complete. Admitting what you don’t yet know is the most valuable thing anyone can do in an advisory situation.
Points 2 and 3 are interesting, as experience of the exact same problem is rare. Context can be an important factor, and it is critical not to assume that because a problem someone faced 20 years ago looks like the same problem now, the same solutions will work. Assessing the contextual and particular differences is a necessary step, and some advisors are unwilling to admit that their expertise may be less relevant now than it once was. For example, the French government assumed that declaring war on Germany in 1939 would be much like 1914, but aeroplanes, tanks and radio communication (amongst a myriad of other social and technological developments) meant that the front with Germany ended up being the English Channel and not the Maginot Line.
Point 4 is often forgotten. We automatically think that a room full of doctors is the best way to stop a pandemic, because doctors know medicine. In fact, the problem is not only treating patients but also preventing the spread of the disease. Here, we need input from behavioural experts who can advise on the best way of achieving the behaviour change needed to stop people touching and coughing on each other. The breadth of expertise also matters, because cultural input can have a dramatic effect, and wider experience can surface factors that people with similar backgrounds might not think of. There is plenty of literature, for example, on how serious and important statistics, usually proposed and compiled by men, often neglect fairly fundamental aspects of women’s lives, such as the contribution of unpaid childcare to the economy.
The final point is about “having skin in the game”. In SAGE, you would assume that all participants want to lower the death toll and are keen to ensure their advice helps rather than hinders. However, not all advisors are advising on global pandemics. Pensions advice, for example, pays off for the individual over decades but for the advisor almost immediately, so short-term gains may be favoured over long-term ones.
So, you’ve put together a diverse range of experienced experts with a vested interest in giving you good advice; now what? Now they have to produce some recommendations for action. You have to be “guided by the experts”, as the government keeps repeating, but here things get a bit woolly. Advisors may be keen to give great advice, but the reality is that good advice has to be doable. Advising everyone to travel back 12 weeks and sell their airline shares is great, but not very helpful. This is why it is critical that the people enacting the advice are involved in drafting it. They may not be experts in what to do, but they are experts in how to deliver it. There may be hard choices about priorities for the use of your resources or about which actions to take first, but with input from both the subject experts and the practitioners, you can make an informed and practical choice.
The penultimate stage is enacting the advice; the final stage is reviewing it in light of the experience of delivering it. Here, the gaps in knowledge quickly become evident, and you can revise your recommendations and change your plans accordingly. Keeping the experts in the loop at the delivery stage is very important and increases the chances of them feeling invested in the outcome.
Great advice that is doable, context-specific and covers all the angles is hard to come by. However, the most important element of any advice is understanding that you need it, and a great politician is one who listens to advice. As a meeting of minds between a diverse range of subject experts and practitioners working on the best available information, SAGE seems to be doing the right things.