Volume 3, Number 4—December 1997
Identifying and Anticipating New Foodborne Microbial Hazards
Communicating Foodborne Disease Risk
The food industry, like many others, has a risk communication problem. That problem is manifested in the public's desire to know the truth about outbreaks of foodborne diseases; ongoing concern about the safety of foods, additives, and food-processing procedures; and continued apathy regarding aspects of routine food hygiene. If these concerns are addressed in a coherent and trustworthy way, the public will have better and cheaper food. However, sloppy risk communication can itself cause public health damage. Because citizens are ill-equipped to discriminate among information sources, the food industry as a whole bears responsibility for the successes and failures of its individual members. We review risk communication research and practice for their application to the food industry.
The guardians of the world's food supply face a communication challenge of extraordinary complexity. They need to be ready at short notice to deal with various crises, often involving baffling combinations of foods, pathogens, handling and distribution practices, dietary norms, and interactions with medical conditions and medications. Their response to this challenge may have important health, economic, and even political implications. Conflicting pressures may come from groups that bear the cost when the public health response is too swift or too slow. Quick, confident explanations are expected after outbreaks that may never be fully understood. When consumers (and producers) need information, they cannot wait for more research. Consumers can read between the lines, especially when they perceive their lives or livelihoods at risk. If they misread messages, the communicators may still be held responsible. Moreover, consumers know that silence is also a form of communication.
At the same time, the guardians of the food supply must wage a continuing struggle to improve the handling of food. In the United States, campaigns are under way for cooking beef more thoroughly, separating raw meat from salad ingredients, and improving the sanitation of food handlers (e.g., Operation Clean Hands). To some extent, these campaigns are the incarnations of old messages that have not been communicated effectively. At the same time, the campaigns are responses to changes in the food supply that have increased the risks associated with conventional practices. For example, as the incidence or severity of foodborne disease pathogens increases, the effectiveness of customary food-handling practices decreases.
This article briefly reviews risk perception and communication research as a possible resource for better understanding (and perhaps meeting) the public's needs (1-3). Communication research provides a set of general tools and theories, as well as a body of results, showing a complex picture of strengths and weaknesses in lay understanding of risk. We explore here the implications for anticipating public response to emerging foodborne pathogens and offer a proposal for how an effective communication campaign might be organized.
Although risk communication research does not directly address emerging foodborne pathogens, it is compatible with the model of risk assessment that the food industry seems to be adopting (4). Drawn from the National Research Council's (5) volume, Improving Risk Communication, the model involves overlapping processes of assessing the magnitude of risks (through analytical procedures), managing their level (through practical measures), and communicating with the public about them.
Like many other risks, emerging foodborne pathogens are of primary concern to some specialists but just one more thing for ordinary citizens to worry about. The thought processes that people rely on for making decisions are the focus of much research (Table; 6).
An overarching theme of risk communication is that people understand risks that draw their attention and are presented comprehensibly. Whether the public's attention is aroused spontaneously or as a result of a message, the opportunity must be seized. The right information must be selected and communicated appropriately (1,16,17).
The hallmarks of effective communication should be used:
- Match the audience's level of technical sophistication. Do not talk down. Clarify terms that are used in everyday speech, whether with a precise technical meaning (e.g., virus) or imprecisely (e.g., risk).
- Organize information. Provide the audience with a quick logical overview, and make the desired level of detail easy to find.
- Use numbers to communicate quantities. Avoid ambiguous verbal quantifiers, such as "rare" or "likely."
- Ensure source credibility. Realize that messengers are a part of the message and essential to its interpretation; use knowledgeable sources that will not misrepresent the message.
- Avoid risk comparisons with rhetorical implications. Comparing one uncontrollable accident risk with another, more familiar one (e.g., half as likely as being injured by lightning) can be useful; however, people dislike comparisons implying that they should accept one risk because they accept another, e.g., comparing the risks of nuclear power with those of eating peanut butter (from aflatoxin).
However useful communication research may be, there is no substitute for empirical testing of messages. With heterogeneous audiences, any fixed message will work better for some people than for others. In such cases, universal understanding may require providing the opportunity for the public to ask questions through public information sessions, agricultural extension services, science teachers, or toll-free numbers.
The effort to communicate is wasted if the information is not worth communicating, either because people already know it or because it makes no difference to them. Indeed, communication can backfire if consumers think that their time is being wasted with useless messages while they are being denied pertinent information. An analytical effort to determine what is worth knowing and a coordinated empirical effort to determine what people know already are required. These efforts take different forms in situations where consumers face well-formulated decisions and need only a few quantitative estimates before making choices, and in situations where consumers are trying to understand the processes creating and controlling a risk, in order to follow public discussion, devise decision options, or understand quantitative estimates.
Identifying Relevant Estimates
The tools of decision analysis provide ways to determine how sensitive well-structured choices are to uncertainty in different decision parameters (18,19). The more sensitive parameters should receive more attention, unless consumers know them already (and need no reminder). If conditions do not permit sensitivity analyses for individual decision makers, one can model the information needs of a population similar to the intended audience. Merz et al. (20) demonstrated this approach for communicating to carotid endarterectomy candidates. Scraping out the main artery to the brain reduces the probability of stroke for patients with arteriosclerosis. However, the procedure can cause many problems, including strokes. Decision analysis computed the attractiveness of surgery for a hypothetical population of patients, with a distribution of physical states (e.g., stroke risks) and values (e.g., time horizons). The analysis found that three of the potential complications (stroke, facial paralysis, and persistent headaches) posed sufficient risk that learning about them should dissuade about 30% of candidates from surgery. Learning about the other side effects should affect few additional patients. Therefore, physicians trying to secure informed consent should (while not hiding other information) make sure that patients understand the risks of these three complications.
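The logic of the Merz et al. analysis can be sketched as a small simulation. All numbers below are hypothetical placeholders (the actual probabilities, disutilities, and patient distributions come from the cited study and are not reproduced here); the sketch only shows how disclosing one complication can flip the preferred choice for some fraction of a heterogeneous population:

```python
import random

random.seed(1)

P_STROKE_SURGERY = 0.05  # hypothetical residual stroke risk after surgery

# Hypothetical complications: (probability, disutility relative to a stroke)
COMPLICATIONS = {
    "perioperative_stroke": (0.03, 1.00),
    "facial_paralysis":     (0.02, 0.40),
    "persistent_headache":  (0.04, 0.15),
}

def dissuaded_fraction(complication, n=10_000):
    """Fraction of a hypothetical patient population whose preferred choice
    flips from surgery to no surgery once the complication is disclosed."""
    p, disutility = COMPLICATIONS[complication]
    flips = 0
    for _ in range(n):
        # Patients differ in their baseline (untreated) stroke risk.
        p_untreated = random.uniform(0.03, 0.12)
        prefers_surgery_before = P_STROKE_SURGERY < p_untreated
        # Disclosure adds the complication's expected disutility to surgery's side.
        prefers_surgery_after = P_STROKE_SURGERY + p * disutility < p_untreated
        flips += prefers_surgery_before and not prefers_surgery_after
    return flips / n

for name in COMPLICATIONS:
    print(f"{name}: {dissuaded_fraction(name):.1%} dissuaded")
```

Complications whose disclosure flips a substantial fraction of simulated patients are the ones a disclosure standard should emphasize, which is the core of the Merz et al. argument.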
Identifying Relevant Processes
Risk analysis provides one way to identify the critical processes in creating and controlling risks. Figure 1 shows a simple model for the risks of foodborne pathogens. It uses the formalism of the influence diagram (21,22). Such a model can be used both to assess risks and to characterize the comprehensiveness of lay understanding. In this model, people incur food-related risks as a result of decisions, which possibly lead to actions or exposures. These decisions concern such actions as eating a bite of suspicious food, choosing a particular diet, or opting for school (or home) lunch. Those decisions depend, in part, on the perceived risks of those actions as well as other nonrisk factors (i.e., other costs and benefits). Exposure may follow, if a pathogen is actually present; it can lead, in turn, to transmission of the pathogen and to changed health states, depending on the resistance to disease that the person's health provides.
Figure 2 elaborates on this model. It shows that food pathogenicity depends on both the prevalence of pathogens in the environment and the quality of food handling. A person's own health influences risk perceptions through the intermediate variable of perceptions of health, which in turn is influenced by the person's history of food consumption (or avoidance). Actual pathogenicity influences risk perceptions through awareness, a variable that communicators might affect. Nonrisk factors include visceral factors (e.g., hunger), external social factors (e.g., social pressure to eat any food offered by a host), norms (e.g., not eating dog), and the expected benefits of consumption (e.g., taste, texture, and other gustatorial pleasures). Computing risks with this model would require specifying each variable and estimating the contingencies by using statistical sources or expert judgment. For risk communication purposes, even a qualitative model can define the universe of discourse and allow approximate estimates of the most important relationships (23).
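Even without quantitative estimates, such a qualitative model can be encoded and queried. The sketch below paraphrases the variables and arrows described in the text as a directed graph; the node names and exact edges are an illustrative reading of Figures 1 and 2, not the authors' actual model specification:

```python
# Qualitative influence diagram: each key lists the variables it directly
# influences. An illustrative encoding of the relationships described in
# the text, not the authors' actual model.
INFLUENCES = {
    "pathogen prevalence":   ["food pathogenicity"],
    "food handling quality": ["food pathogenicity"],
    "food pathogenicity":    ["awareness", "exposure"],
    "awareness":             ["perceived risk"],
    "consumption history":   ["perceived health"],
    "own health":            ["perceived health", "resistance"],
    "perceived health":      ["perceived risk"],
    "perceived risk":        ["decision"],
    "nonrisk factors":       ["decision"],
    "decision":              ["exposure"],
    "exposure":              ["transmission", "health state"],
    "resistance":            ["health state"],
}

def upstream(node):
    """All variables that directly or indirectly influence `node` --
    i.e., everything a communicator might address to change it."""
    found, stack = set(), [node]
    while stack:
        current = stack.pop()
        for parent, children in INFLUENCES.items():
            if current in children and parent not in found:
                found.add(parent)
                stack.append(parent)
    return found
```

For instance, `upstream("perceived risk")` traces back through awareness and perceived health to pathogen prevalence, food handling, and consumption history, defining the universe of discourse for a message aimed at changing risk perceptions.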
Identifying Current Knowledge
Determining what people already know about quantitative estimates is relatively straightforward, although there are various pitfalls (24,25). Eliciting knowledge of processes is more difficult. Respondents should be given the focus of the problem and maximum freedom to express their ideas and reveal which of the processes in Figure 2 are on their minds. Studies using open-ended techniques often find that people speak the language of risk without understanding its terms. For example, in a study about radon, we found that respondents often knew that it was a colorless, odorless, radioactive gas that caused lung cancer. However, when pressed, respondents often revealed inappropriate notions of radioactivity, believing that anything radioactive would permanently contaminate their homes. Some told us that they would not test for radon because there was nothing that they could do if they found a problem (26). In studies with adolescents, we found other forms of false fluency; for example, teens used terms such as "safe sex" and "clean needles" without understanding them (23). Without open-ended probing, we miss misconceptions that a technical expert never would have imagined, or we use language that does not communicate effectively with our audience (27).
Technical experts (in any industry) generally want to get the facts in order before saying anything. Although that is an appropriate norm within the scientific community, refusing to address a concerned public can evoke mistrust and anger, as can failing to arouse an apathetic public. To steer an appropriate course, communicators need an explicit policy that balances the risks of saying too much with the risks of saying too little. The policy must consider both what to say and when to say it. From a decision theory perspective, citizens need information critical to identifying actions that will help them achieve personal goals. As a result, any recommendations should reflect both scientific knowledge and citizens' values. That is, consumers need to know what is the best gamble, given the trade-offs between, for example, the risks of throwing out good food and the risks of eating food that might make them sick.
At times, there may be a temptation not to tell it like it is. For example, one might argue that risks should be exaggerated when a frank report would leave people unduly apathetic, as judged by their own standards. That is, people should say "Thanks for getting my attention" once the grounds for the overstatement were made clear. Such gratitude requires a public that not only recognizes the limits of its own understanding, but also accepts paternalistic and manipulative authorities. That acceptance seems more likely for misrepresentations intended to get a complacent public moving than for ones intended to allay a hysterical public's fears. In either case, once the secret is out, all future communications may be subjected to second guessing ("How seriously should we take them this time?").
One situation in which paternalistic authority is needed arises when a single message must be sent to a heterogeneous audience—for example, when officials must decide whether to declare a particular food "safe." Safety is a continuous variable, and any cut-off represents a value judgment. For any given food, different groups may face different risks, derive different benefits, and want to make different trade-offs. For example, a few people are strongly allergic to sulfur dioxide, used as a preservative in dried foods. Marketing such foods signals their safety to all. Labels that declare preservatives in foods allow consumers to customize their risk levels, but only if they know their own risks (i.e., whether they are strongly allergic, which they may learn only through a bad reaction whose source they identify).
The Food and Drug Administration faces a similar challenge in its effort to standardize risk labels for over-the-counter drugs. For example, other things being equal, producing bilingual labels will require either reducing print size or omitting information about some side effects. These modifications would, in turn, increase the risks for consumers with limited vision (e.g., some of the elderly) or those particularly sensitive to the omitted effects. Whatever labeling, warning, or communication strategy is chosen will leave some residual risk, with an uneven distribution depending on the heterogeneous sensitivities of the audience. Thus, the strategy reflects the authorities' notion of the "acceptable level of misunderstanding" (28).
What that acceptable level should be is a political and ethical question, which could be resolved by properly constituted public or private groups, and a scientific question, partially resolvable by research of the sort described here. Rigorous empirical testing is needed to determine whether communications fulfill the hopes placed in them (27). Emerging foodborne pathogens provide a particular challenge to safety communications—and a particular need for evaluation. Their novelty and ability to produce outbreaks in diverse places in the world and the food chain encourage treating them as unique. If a communication strategy is improvised only when a crisis hits, or as it evolves, the chances for a misstep increase. Those chances are especially large if the outbreak is the first major risk problem for the health authorities involved (16).
As a result, communications about these unique situations should be routine. A standard format for reporting risk information should be adopted. Funtowicz and Ravetz (29) propose a notation that includes a best-guess risk estimate (expressed in standard units), a measure of variability, and a "pedigree" (indicating the quality of the research). Although new, such notation might become familiar, much as degrees Fahrenheit, miles per gallon, probability of precipitation, and recommended daily allowance have become familiar.
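The proposed reporting format is concrete enough to sketch as a data structure. The field names and the pedigree grading scale below are assumptions for illustration; Funtowicz and Ravetz's actual notation differs in its details:

```python
from dataclasses import dataclass

# Sketch of a standard risk-report record in the spirit of Funtowicz and
# Ravetz: a best-guess estimate in standard units, a measure of variability,
# and a "pedigree" grading the quality of the underlying research.
# Field names and the 0-4 pedigree scale are illustrative assumptions.
@dataclass
class RiskReport:
    best_guess: float   # best-guess risk estimate
    units: str          # standard units, e.g. "illnesses per 100,000 servings"
    spread: tuple       # measure of variability, e.g. a plausible range
    pedigree: int       # research quality, graded here from 0 (weak) to 4 (strong)

    def summary(self):
        low, high = self.spread
        return (f"{self.best_guess:g} {self.units} "
                f"(range {low:g}-{high:g}; evidence grade {self.pedigree}/4)")

report = RiskReport(2.5, "illnesses per 100,000 servings", (0.8, 7.0), 2)
print(report.summary())
```

A fixed record like this would let repeated reports be compared at a glance, which is how units such as probability of precipitation became familiar.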
Another part of communication planning is to adopt standard scripts for reporting complex procedural information regarding what citizens should do and what food specialists are doing. The adoption process should include empirically testing the comprehensibility of concrete messages with an audience like the intended audience. Influence diagrams offer one template for organizing procedural information. Risk analyses provide one way to identify the crises most likely to occur and may allow not only testing the most likely messages, but also identifying the persons most likely to do the communicating and preparing them accordingly. The chemical industry's Community Awareness and Emergency Response program might provide some useful lessons in how to organize for unlikely events, although the challenges of dealing with the relatively identifiable community surrounding a chemical plant are different from those presented by dealing with the diffuse national (or even international) audience concerned about a food. The chemical industry's experience may also provide guidance on how to achieve voluntary industry compliance with a set of communication principles. Public goodwill is eroded every time an industry spokesperson violates the public trust by misrepresenting, or just explaining inadequately, the state of affairs. Reducing misrepresentation requires institutional discipline; reducing inadequate communication requires a scientific approach to communication.
Our research was supported in part by the National Institute for Allergy and Infectious Diseases, the National Institute of Alcohol Abuse and Alcoholism, and the National Science Foundation.
- Fischhoff B, Bostrom A, Quadrel MJ. Risk perception and communication. In: Detels R, McEwen J, Omenn G, editors. Oxford textbook of public health. London: Oxford University Press; 1997. p. 987-1002.
- Krimsky S, Golding D, eds. Social theories of risk. Westport (CT): Praeger; 1992.
- Slovic P. Perception of risk. Science. 1987;236:280–5.
- Lammerding AM. Quantitative microbial risk assessment. Emerg Infect Dis. 1997. In press.
- National Research Council. Improving risk communication. Washington (DC): National Academy Press; 1989.
- Carter MW, ed. Radionuclides in the food chain. Berlin: Springer-Verlag; 1988.
- Kahneman D, Slovic P, Tversky A, eds. Judgment under uncertainty: heuristics and biases. New York: Cambridge University Press; 1982.
- Gilovich T, ed. How we know what isn't so. New York: Free Press; 1993.
- Hasher L, Zacks RT. Automatic and effortful processes in memory. J Exp Psychol Gen. 1979;108:356–88.
- Peterson CR, Beach LR. Man as an intuitive statistician. Psychol Bull. 1967;68:29–46.
- Fischhoff B, Slovic P, Lichtenstein S. Fault trees: sensitivity of assessed failure probabilities to problem representation. J Exp Psychol Hum Percept Perform. 1978;4:330–44.
- Crouch EAC, Wilson R, eds. Risk/benefit analysis. Cambridge (MA): Ballinger; 1981.
- Fischhoff B, Watson S, Hope C. Defining risk. Policy Sci. 1984;17:123–39.
- Slovic P, Fischhoff B, Lichtenstein S. Rating the risks. Environment. 1979;21:14–20, 36–9.
- Fischhoff B, Svenson O. Perceived risk of radionuclides: understanding public understanding. In: Harley JH, Schmidt GD, Silini G, editors. Radionuclides in the food chain. Berlin: Springer-Verlag; 1988. p. 453-71.
- Fischhoff B. Risk perception and communication unplugged: twenty years of process. Risk Anal. 1995;15:137–45.
- Schriver KA. Evaluating text quality: the continuum from text-focused to reader-focused methods. IEEE Trans Prof Commun. 1989;32:238–55.
- Bursztajn HJ, Feinbloom RI, Hamm RA, Brodsky A, eds. Medical choices, medical chances: how patients, families, and physicians can cope with uncertainty. New York: Routledge; 1990.
- Raiffa H, ed. Decision analysis: introductory lectures on choices under uncertainty. Reading (MA): Addison-Wesley; 1968.
- Merz J, Fischhoff B, Mazur DJ, Fischbeck PS. Decision-analytic approach to developing standards of disclosure for medical informed consent. Journal of Toxics and Liability. 1993;15:191–215.
- Burns WJ, Clemen RT. Covariance structure models and influence diagrams. Manage Sci. 1993;39:816–34.
- Howard RA. Knowledge maps. Manage Sci. 1989;35:903–22.
- Fischhoff B, Downs J. Accentuate the relevant. Psychol Sci. 1997;8:154–8.
- Morgan MG, Henrion M, eds. Uncertainty. New York: Cambridge University Press; 1990.
- Poulton EC, ed. Bias in quantifying judgment. Hillsdale (NJ): Lawrence Erlbaum; 1989.
- Bostrom A, Fischhoff B, Morgan MG. Characterizing mental models of hazardous processes: a methodology and an application to radon. J Soc Issues. 1992;48:85–100.
- Bruhn CM. Consumer education. Emerg Infect Dis. 1997. In press.
- Fischhoff B, Riley D, Kovacs D, Small M. What information belongs in a warning label? Psychol Mark. 1998. In press.
- Funtowicz S, Ravetz J, eds. Uncertainty and quality in science for policy. London: Kluwer; 1990.
Suggested citation: Fischhoff B, Downs JS. Communicating Foodborne Disease Risk. Emerg Infect Dis [serial on the Internet]. 1997, Dec [date cited]. Available from http://wwwnc.cdc.gov/eid/article/3/4/97-0412
Address for correspondence:
Baruch Fischhoff, Department of Social and Decision Sciences, Department of Engineering and Public Policy, Carnegie Mellon University, Pittsburgh, PA 15213, USA; fax: 412-268-6938
The conclusions, findings, and opinions expressed by authors contributing to this journal do not necessarily reflect the official position of the U.S. Department of Health and Human Services, the Public Health Service, the Centers for Disease Control and Prevention, or the authors' affiliated institutions. Use of trade names is for identification only and does not imply endorsement by any of the groups named above.