Safety regulations are proliferating at a great rate in society today, attempting to protect us from hundreds of known and unknown dangers. It is the unknown dangers that Aaron Wildavsky addresses here. Do restrictive regulations really reduce risk?
‘Trial and error’ is the cornerstone of science. Without trial there can be no error, but without error there is no learning. Regulations that prohibit even the testing of new technology, nuclear power technology for example, force innovators to guarantee before they start that there will be no errors. Even the most benign technology would never have been established if it first had to demonstrate that it did no harm.
Avoiding risk by trying to regulate it away has a surprising implication: making innovation more costly means there will be fewer departures from past practice, and this very lack of change may increase risk.
(Extract)
Consequences of risk aversion
The direct implication of trial without error is obvious: if you do nothing without knowing first how it will turn out, you cannot do anything at all. Risking and living are inseparable. Almost any act may stand convicted when judged by the rule of no trial without prior guarantees against error. The indirect implication is not at all intuitive: if trying new things is made more costly, there will be fewer departures from past practice, and this lack of change may increase risk.
Recent environmental impact statements in America may be read as saying that all values being protected should remain constant: the environment is to remain inviolable in all its parts. No problem, or at least not much of one; but when one adds health, safety, employment, inflation, urban, rural, and other impact statements, the world of public policy comes to be comprised of nothing but constants, with no variables.
Safety depends on learning and learning depends on error. Safety features found effective in one area may be adopted in others to their mutual betterment. Reliability is enhanced when, as in a submarine, there are numerous systems capable of replacing those that break down (Landau, 1969). This duplication depends on having sufficient resources to install additional units and sufficient diversity to create new approaches. Increasing interdependence without increasing the diversity of parts, by contrast, inhibits the parts from either accumulating resources or trying out alternatives. When none is allowed to suffer a first-order effect, none takes on the task of accommodating to changing circumstance. The worst case is when the whole contributes uniformity and the parts rigidity. Then the scope for passing learning on to others, for discovering new configurations, or for responding to the unforeseen is diminished.

Relative safety is not a static but a dynamic product of learning from error over time. Pioneers pay the costs of premature development. First models are rarely reliable; as experience accumulates, bugs are eliminated and incompatibilities alleviated. Were history halted and development deterred, risks for innovators would be markedly increased. The fewer the trials, the fewer the mistakes to learn from, and the more error remains uncorrected. As development continues into the second and succeeding generations, moreover, the costs of error detection and correction are shared to some extent with future practitioners, and the benefits are passed on. If there were to be no tomorrow, few would be willing to start up something new today. Needless to say, the second generation cannot learn from the first if there isn't one. Who, then, will want to be the first to face risks?
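The redundancy point drawn from Landau (1969) has a simple quantitative core, which the sketch below illustrates; the function and its numbers are my own illustration, not from the text. If each of n redundant units fails independently with probability p, the whole system fails only when all of them do, with probability p to the power n, which is why duplication buys reliability so cheaply. The independence assumption is exactly where the essay's diversity argument bites: identical parts exposed to the same shock fail together, and the exponent collapses.

```python
def system_failure_probability(p: float, n: int) -> float:
    """Probability that all n redundant units fail, assuming each
    fails independently with probability p (illustrative model)."""
    return p ** n

# One unit that fails 1 time in 10 is unreliable on its own...
print(system_failure_probability(0.1, 1))              # 0.1
# ...but three independent backups all fail together about 1 time in 1000.
print(round(system_failure_probability(0.1, 3), 6))    # 0.001
```

Note that the gain comes entirely from independence: three copies of the same flawed design, sharing the same failure mode, behave like n = 1, which is the essay's case for diversity as well as duplication.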
A risk of guarding against all conceivable risks is that costs are raised to such a high level that the ability of small-scale units to compete declines, and with it the rate of innovation. Rules and regulations designed to provide protection also increase the cost, and hence the size, necessary to carry on the activity in question. Thus it is possible to kill an industry or a sector of activity with regulation, the one effect reinforcing the other.
A preoccupation with rejecting risk leads to large-scale organisation and centralisation of power in order to mobilise massive resources against possible evils. The probability that any known danger will occur declines because of risk-averse measures; but the probability that, if the unexpected happens, it will prove catastrophic increases, because the resources required for response have been used up in advance.
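The trade-off in that paragraph can be made concrete with a toy expected-loss model; every name and number below is hypothetical, a sketch of the reasoning rather than anything in the source. The idea is that a fixed budget split between prevention and reserve pulls in opposite directions: prevention spending shrinks the chance of the known danger, while the reserve left over stands in for the capacity to respond when a surprise arrives.

```python
def loss_components(budget: float, prevention: float,
                    p_known: float, p_surprise: float,
                    known_damage: float, surprise_damage: float):
    """Toy model, entirely illustrative: prevention spending scales
    down the probability of the known danger; the reserve left over
    scales down the damage a surprise can inflict (response capacity).
    Returns (expected loss from known danger, expected loss from surprise)."""
    reserve = budget - prevention
    known_loss = p_known * (1 - prevention / budget) * known_damage
    surprise_loss = p_surprise * surprise_damage * (1 - reserve / budget)
    return known_loss, surprise_loss

# Spend nothing on prevention: the known danger is unmitigated,
# but a full reserve absorbs any surprise.
print(loss_components(100, 0, 0.2, 0.05, 1000, 1000))    # (200.0, 0.0)
# Spend everything on prevention: the known danger is eliminated,
# but the surprise now hits at full force.
print(loss_components(100, 100, 0.2, 0.05, 1000, 1000))  # (0.0, 50.0)
```

The model mirrors the paragraph's claim exactly: as prevention spending rises, the known-danger term falls toward zero while the surprise term climbs toward its maximum, because the resources required for response have been used up in advance.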
By devaluing experience, the doctrine of 'trial without error' simultaneously increases the importance of theory and of theorists. Given the desire to avoid experience, the only way to know what to avoid, other than prohibiting all new developments, is to theorise about the possible effects of proposed new technology. (Come to think of it, since it is often not clear what is new and what is not, hence the conflict over patent rights, extensions of old technology may also be so interpreted.) Theorising is a highly specialised activity, even more so when its purpose is precisely to eliminate all risk. Not only would government grow larger in an effort to ward off danger, but there would be larger organisations in the private sector to meet the riskless criteria, and all of these organisations would have to have large numbers of theorists to argue their cases. This new breed of intellectuals, scientists-cum-lawyers, might not exactly be a modern form of Dostoevsky's Grand Inquisitor, but they would be in a strong position to issue compelling commands, for this safety directorate could claim to make authoritative pronouncements on doing this or forbidding that to protect our lives. If society asks, 'Who ought to allocate safety?', the only answer can be the experts on risk aversion. To the traditional unanswerable imperatives (there is no money, there is no time, God is opposed, and it is unnatural) (Wein and Wildavsky, 1978), there would be added a new one: there shall be no trial without prior guarantee against error.
On the one hand, risk aversion increases the demands for coordination among organisations: in order to prevent evils from occurring, coordinated action across a wide range of possibilities is necessary. Otherwise, what is done in one area, say replacing a suspect chemical carcinogen, will hurt another, say spoiling timber. Yet the growing size of organisations is bound to reduce their flexibility. The larger they are, the more they operate by rule, the less quickly they can move. The spur to change, of course, is error, which comes in the form of feedback, i.e. divergences between what is desired and what occurs. Without tolerance for error, feedback must be reduced or eliminated. It is, after all, uniformity of condition that is desired ('feed forward', so to speak) rather than feedback, so as to avoid danger to people and to the physical environment. To uniformity, therefore, the principle of 'trial without error' adds inflexibility. How, then, will these large and inflexible organisations deal with the change (or the challenges) that must occur despite their best efforts?
A policy of 'no trial without prior guarantees against error' decreases safety by increasing vulnerability. The wealth used up in safeguards against the possibility of error, i.e. in defensive moves, decreases the surplus available for innovation. The decline in innovativeness reduces variety, which in turn exposes society to surprise. It is not merely that the lack of variety makes surprise more likely; it also enhances the damage done by the unexpected when it occurs.