Sorry, Wrong Number

Why Media Polls on Gun Control are so Often Unreliable


Gary A. Mauser
and
David B. Kopel

Originally published in 9 Political Communication and Persuasion 69-91 (no. 2, April-June 1992). This reprint of a shorter version is from volume 6 of the Journal on Firearms & Public Policy.


I. Introduction

This article examines the quality of the surveys used by the media to investigate the controversial public policy issue of gun control. The central question is the scientific quality of the methods the media use in their public opinion polls. After discussing the methodological problems in the media polls, this article suggests that many media polls are methodologically flawed and unreliable. In some cases, the polls are so flawed as to suggest that they are not science but "sagecraft," which Tonso defines as quasi-scientific data manufactured to bolster a predetermined ideological position.1

The questions of whether gun control is itself a good public policy, and whether opinion polls form a sound basis for public policy, are not addressed. The focus is only on the reliability of polling about the gun issue.

This paper deals only with "media polls"--that is, polls which are produced either by or for the media, and intended mainly to run as news stories. Examples of "media polls" include the polls commissioned by Time magazine, CNN, major urban newspapers, and other major media organizations. "Media polls" also refers to polls conducted by the Gallup and Harris organizations, because many Gallup and Harris polls are marketed for newspaper syndication, and because they suffer from many of the same methodological flaws as the polls conducted by the media themselves.

This paper does not deal with in-depth analytical surveys conducted by reputable survey research organizations--academic or professional. In contrast to the "media polls," the analytical polls use lengthy question series to fully assess the public's complex attitudes on sensitive issues such as gun control. The analytical polls are typically commissioned by a customer with a strong interest in the gun policy debate, and hence a strong interest in obtaining quality results. A client that will use a poll to formulate political strategy has strong reasons to insist that the poll be methodologically solid.

Analytical polls may be paid for by anti-gun organizations, such as the Center for the Study and Prevention of Handgun Violence, or by pro-gun organizations, such as the National Rifle Association. Although it might be expected that the agendas of the customers would bias the polls, the analytical polls conducted for both sides of the gun debate appear to be reliable. Analytical polls paid for by anti-gun organizations achieve results remarkably similar to polls paid for by pro-gun organizations.2 The similarity suggests that there is a real "public opinion" about the gun control issue, and that scientifically conducted polls can measure that opinion with reasonable accuracy.

Media polls, on the other hand, often report "findings" that are incorrect. In 1976 in Massachusetts and 1982 in California, handgun prohibition referenda questions were on the ballot. Media polls conducted before the election reported a tight race, but the prohibition measures were defeated in landslides. (In Massachusetts, 69.2% voted against prohibition, as did 63% in California.)3

One of the better-known media polls which was clearly in error was Time magazine's survey of American gun owners. That survey claimed that four million Americans possessed fully automatic machine guns, a number approximately 10 times higher than the number estimated by any criminologist, and 20 times higher than the number of legally registered machine guns.4

This article attempts to identify and explore some of the methodological flaws that may lead to errors in the media polls. While other researchers have criticized the ideological bias of media polls, this paper attempts to evaluate the impact of the methodological flaws upon newspaper poll results.5

Public attitudes are complex, particularly with respect to contentious issues such as gun control, abortion, or capital punishment. It would not be surprising if the public were to hold complex or even contradictory views on controversial issues such as gun control. Not only are many people still thinking through their feelings on many controversial issues, but such issues inherently involve competing values, so that people are required to make difficult trade-offs between two or more highly desirable values before they can comfortably support one side or another in the debate. Very little of this complexity is found in the polls reported by the media.

If issues are complex, then subtle differences in question wording could yield quite different results. Unfortunately, many subtleties are not recognized by the researcher until after the fact. Conscientious researchers deal with this problem by probing the issue area with a variety of questions. In contrast, questions in media polls are often highly selective and occasionally quite biased. Questions about firearms touch sensitive issues and need to be approached with the highest standards of sampling and interviewing quality. Too often this is not the case.

Typically--perhaps due to cost constraints--the media use methods that at best skirt the border of minimally acceptable standards.6 Such an approach may be penny-wise, but it risks introducing biases, which can be particularly severe in dealing with sensitive issues such as gun control.

Moreover, the interviewer effect is, this paper suggests, a greater problem in media polls on gun control than has been previously recognized. While interviewer effects have been known to exist for other public policy issues, the importance of such differences for researching gun control is less well known. Interviewer bias is a potential problem whenever there are significant social or cultural differences between the interviewer and respondent. The classic problem is race, but examples have been found for sex, social class, and age differences.7

The social and cultural differences between interviewers and respondents are fertile ground for potential distortion. This article identifies these problems and empirically analyzes the interviewer effects in a survey conducted recently in the United States and Canada.

II. Accuracy and Reported Accuracy of Polls

A. Sampling Error

The scientific accuracy of polls is typically exaggerated by the media. Almost all media reports of polls include a one-sentence paragraph explaining that "the survey is considered accurate within 2.5 [or 3 or 5] percentage points," and giving the sample size.8

This standard caveat refers to sampling error, that is, the statistical error introduced theoretically by using a sample rather than a complete census of the target population. The "explanation" gives the erroneous impression that the stated error is the maximum error contained in the poll, when in reality it is the minimum. The explanation assumes that no errors or biases exist in either the sampling or interviewing methods. Such perfection is highly unlikely even in the best of surveys. This paper argues below that scientific flaws in the media polls introduce additional error at least as large as, and sometimes many times larger than, the sampling error.9

In addition, the stated error refers only to the sample population as a whole, and not to subgroups within the sample population. In many cases, the results from the subgroups are considered more important than the result from the national sample. For example, in polling about "waiting periods" for handgun purchasers, the fact that most Americans support a waiting period may be less newsworthy than the fact that most handgun owners support a waiting period. But the results for handgun owners are far less reliable than the results for the nation as a whole, since the handgun-owning sample is so much smaller.10
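
The arithmetic behind both points--that the stated margin is a minimum, and that subgroup margins are much larger--is straightforward. The sketch below (in Python; the sample size and the 25% handgun-ownership share are hypothetical round numbers, not figures from any particular poll) computes the conservative 95% sampling error for a full sample and for a subgroup:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Conservative 95% margin of error for a proportion under simple
    random sampling. p = 0.5 is the worst case, and the figure reflects
    sampling error only -- non-sampling errors are not included."""
    return z * math.sqrt(p * (1 - p) / n)

full_sample = 1500                    # hypothetical national sample
subgroup = int(full_sample * 0.25)    # hypothetical handgun-owner share

print(f"Full sample (n={full_sample}): +/- {margin_of_error(full_sample):.1%}")
print(f"Handgun owners (n={subgroup}): +/- {margin_of_error(subgroup):.1%}")
# Full sample (n=1500): +/- 2.5%
# Handgun owners (n=375): +/- 5.1%
```

On these assumptions, the error for the handgun-owning subgroup is roughly double the headline figure, before any non-sampling error is even considered.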

The media claim that they are required to give only the briefest treatment of methodological questions because of the low level of expertise of the audience or the reporter. This argument is not very convincing. Such over-simplification, at the very least, enhances readers' perception of the survey's quality and, arguably, may even augment the impact of the poll because it exaggerates the "scientific" nature of the poll. But keeping the methodology secret can have much more serious consequences, such as enabling cost-conscious media managers to cut corners in quality that could jeopardize the accuracy of the results.

Particularly in magazines--where gun control articles often run for thousands of words--it would not be difficult for articles discussing media polls to include a paragraph noting that the stated sampling error is valid only for the population as a whole, and not for any subsets. Nor would it be difficult to note that inaccuracies may exist in addition to the sampling error.

B. Additional Errors

If sampling error were the only error--as the media pollsters imply--then media polls on the same subject at the same time would rarely report results farther apart than the sum of the sampling errors of the two polls. In fact, simultaneous or near-simultaneous media polls on the same subject often yield results farther apart than the sum of the sampling errors in each poll. For example, in early 1991, a CNN poll found 45% opposed to using nuclear weapons against Iraq. Yet a Gallup poll found 71% in favor of using the weapons.11

That media polls contain errors far larger than the statistical error also became apparent during the 1984 race for the Democratic Presidential nomination. National polls for different news organizations were conducted within days of each other, and often simultaneously. The polling results showed differences far larger than the statistical error. One poll might report Hart ahead nationally by 8 points, while another poll, taken at the same time, might show Mondale ahead by 5 points.
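
The comparison involved is simple arithmetic: if two simultaneous polls of the same question differ by more than the sum of their sampling errors, something beyond sampling error must be at work. A minimal sketch in Python (the 1,000-person sample sizes, and the treatment of the two results as answers to one common question, are our illustrative assumptions):

```python
import math

def moe(n, z=1.96):
    # Conservative 95% sampling error for a proportion (p = 0.5).
    return z * math.sqrt(0.25 / n)

def beyond_sampling_error(p1, n1, p2, n2):
    """True if two polls of the same question differ by more than the
    sum of their sampling errors, signaling non-sampling error."""
    return abs(p1 - p2) > moe(n1) + moe(n2)

# Stylized numbers patterned on the 26-point CNN/Gallup gap, assuming
# 1,000 respondents per poll (about +/- 3.1% each, +/- 6.2% combined).
print(beyond_sampling_error(0.45, 1000, 0.71, 1000))  # True
```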

And of course clear examples of errors larger than the sampling error were seen in the polls of the 1976 Massachusetts and 1982 California handgun ban referenda, where pollsters predicted a close election, but the prohibition referenda were defeated in landslides.

Survey researchers call any error other than sampling error a "non-sampling error." Three of the most important types of non-sampling error are coverage error, non-response error, and measurement error. "Coverage error" means failing to give some persons in the "target" population any chance of being selected in the survey; "non-response error" arises from failing to collect data on all persons in the sample; and "measurement error" is any problem with getting the "true answer" from a respondent. The first potential source of bias to be discussed is problems with question wording, which is one of the most important sources of measurement error.

III. Questions

Small changes in wording can create large changes in results. In the polls regarding use of nuclear weapons against Iraq (discussed above), CNN simply asked whether such weapons should be used, whereas Gallup asked about "tactical" nuclear weapons, and hypothesized that such use could save American lives. While CNN had found Americans closely divided on the nuclear weapons issue, Gallup reported a large majority of 71% in favor of use of such weapons.

This section examines the questions that are often used in gun control polling, and argues that a variety of flaws in the questions exaggerates public support for severe gun laws.

Gallup's question about nuclear weapons use was what this paper calls an "argumentative question"--a question that presents explicit or implicit facts or arguments in favor of one of the results the respondent is evaluating. By modifying the wording of a question, "You can come up with any result you want," says Peter Hart, pollster for the Dukakis campaign.

An example of an argumentative gun control question is Gallup's query about waiting periods, which is posed in a way that assumes the waiting period really would help the police keep guns away from illegitimate owners: "Would you favor or oppose a national law requiring a seven-day waiting period before a handgun could be purchased, in order to determine whether the prospective buyer has been convicted of a felony or is mentally ill?" Contrary to Gallup's hypothesis, even criminologists who favor gun control have concluded that criminal and mental records are frequently not accurate enough to allow the police "to determine whether the prospective buyer has been convicted of a felony or is mentally ill." Nor are they accurate enough to allow such a check to be completed within seven days. Indeed, the debate in Congress over a waiting period focused heavily on whether state criminal records were good enough for a waiting period to be implemented right away, or whether it would be better to first spend several years improving the quality of existing records. Gallup, however, told his respondents to assume the very point that was at the heart of the controversy. With one of the most central disputed facts assumed away, it was not surprising that Gallup found a huge majority in favor of a waiting period.

How much effect does the argumentative question have on gun control responses? In 1977, Schuman and Presser investigated that issue. They asked about support for requiring a police permit before a person could obtain a gun; one question was neutral, and the other question (asked of a different sample) was argumentative against gun control. The anti-control argumentative question resulted in drops of 1.7% to 6.4% in stated support for control, depending on the year. By symmetry, then, argumentative pro-control questions can be expected to increase the stated level of support for control by roughly 4% on average.

Sometimes a question itself may be neutral, but the meaning of the question may be distorted by the media pollster, as when the results of a question about one issue are announced as indicating the public's attitude on an entirely different issue. For example, in 1975 Gallup asked Americans: "In Massachusetts a law requires that a person who carries a gun outside his home must have a license to do so. Would you approve or disapprove of having such a law in your state?" Seventy-seven percent approved of such a law in their state.

The result was to be expected, for the vast majority of Americans as of 1975 lived in a state that had similar legislation. Requiring a permit for carry outside the home was not unique to Massachusetts. Over three-quarters of the states required either a permit to carry a gun or a permit to carry a concealed gun.

Yet Gallup did not announce "Most People Support Existing Gun Carry Laws," even though that was all his survey had shown. What Gallup claimed in his article was that the public supported "a specific plan...based on a law now in effect in Massachusetts." Gallup explained that the new Bartley-Fox law in Massachusetts requires that "Anyone who is convicted of carrying a gun without a license is given a mandatory sentence of one year in jail."

The one year mandatory sentence was, of course, precisely what made the Massachusetts law different from every other state's. Gallup claimed that the American public supported the stern mandatory sentencing law; but Gallup had never asked them about it. Gallup's actual question merely mentioned the (ubiquitous) permit to carry provision and omitted the (controversial) mandatory sentence provision.

How many Americans actually do favor a one-year mandatory minimum? The analytical poll conducted by Cambridge Research Associates in 1978 asked that question, and found 55% support. Fifty-five percent is still a majority, but far from the overwhelming consensus falsely reported by Gallup. Gallup's misreporting had exaggerated public support for the mandatory sentence by 22 percentage points.

A. Questions about "Assault Weapons"

On the "assault weapon" controversy, most of the questions in media polls were argumentative, or were cited for a policy that had never been included in the polls, or both. Pollsters often asked questions about guns that were very different from the subject of the legislative controversy.

Before looking at the polls, it is necessary to examine precisely what guns the controversy involved.

The Department of Defense's Defense Intelligence Agency has long had a simple definition of "assault rifle": an intermediate caliber rifle or carbine capable of selective fire. In other words, the kind of rifle a soldier carries, capable of fully automatic fire. Examples would be the U.S. Army M-16, and the Soviet Army AK-47.

Only a few hundred AK-47s have ever been imported into the United States. Ever since the National Firearms Act of 1934, possession of automatics, such as assault rifles, has required an FBI background check, a $200 tax, and a six-month wait. Even stricter legislation was enacted in 1986 concerning civilian ownership of military assault rifles (and other full automatics).

The intense national controversy over "assault weapons" that occurred in 1989, and lingers to some degree today, had nothing to do with those fully automatic guns. Automatics, such as assault rifles, fire automatically. If the shooter squeezes the trigger, bullets will fire automatically and continuously until the trigger is released. It would not be surprising to find that most people favor strict controls over such rapid-fire guns.

In contrast, semiautomatic firearms cannot fire automatically. If the shooter squeezes the trigger, only one shot is fired. Each shot requires an additional trigger squeeze. A semiautomatic's rate of effective fire is nearly the same as that of most other guns (about 1/10th of a second faster per shot). Old-fashioned and common guns such as bolt action, lever action, pump action, and revolver all fire at essentially the same rate as a semiautomatic. All of the common gun types (semiautomatic, bolt action, lever action, pump action, and revolver) fire much slower than full automatics.

Some semiautomatics have brown, natural wood stocks attached. Other semiautomatics come with black, futuristic plastic stocks. Functionally, the guns are identical, since their internal parts operate on exactly the same principle. Some of the black-stock semiautomatics look like assault rifles (which are automatics). For example, the semiautomatic AKS looks like the fully automatic AK-47, but does not function like the AK-47. According to the Bureau of Alcohol, Tobacco and Firearms, guns like the (semi-automatic) AKS are functionally identical to common and well-known sporting guns such as Remington hunting rifles. The guns that were the subject of the anti-gun lobby's push for weapons control were all semiautomatics. Examples were the Colt AR-15 Sporter, and the Norinco AKS. The guns looked like assault rifles, and had names similar to assault rifles. But they were not fully automatic.

Nevertheless, pollsters often asked questions about full automatics. Questions were frequently asked about a ban on the "AK-47," which is a full automatic. It is impossible to know what respondents thought when they were asked about rapid-fire guns, which the semi-automatic is not. For example, Gallup asked about banning "semi-automatic assault guns, such as the AK-47," even though the AK-47 is a full automatic.

The polls sometimes hypothesized a degree of legal regulation of full automatics far less strict than the law in effect since 1934. The Texas Poll asked whether, if the sale of "assault weapons" remains legal, there should be "a mandatory seven-day waiting period to purchase a high-caliber, fast-firing assault rifle." Ever since 1934, there has been not a "seven-day waiting period," but a six-month transfer application period. Thus, the Texas Poll found 89% of Texans in favor of something far less strict than the existing federal law--one that had been in place for 56 years. Yet the Texas Poll was used to promote prohibition of semiautomatics--which the question had not even asked about.

Further, the Texas Poll incorrectly described the guns as "high caliber." In contrast, the Defense Intelligence Agency definition of "assault rifle" includes only guns that are intermediate in caliber or stopping power. Real assault rifle ammunition is "intermediate" in stopping power between handgun ammunition and full battle-rifle ammunition (such as for a Browning Automatic Rifle).

Semiautomatics which look like real assault rifles are also intermediate in caliber. They, like true assault rifles, fire intermediate rifle calibers like the .223 Remington. Many traditional big game weapons fire larger calibers, such as .378 Weatherby. Most people's common sense would suggest that larger calibers are more deadly, and medical research confirms this intuition.

Thus, it would be expected that persons asked a question about controls on "high caliber" guns in particular would be more supportive than they might be about gun control in general. The results from the "high caliber" gun question were touted in legislatures to promote laws that did not regulate high caliber guns, but instead applied to intermediate caliber arms.

In 1990, the anti-gun lobby Handgun Control, Inc. circulated a report listing the results of sixteen organizations' national and state polls, all claiming huge majorities in favor of strict controls. Fourteen of the sixteen pollsters had factual errors in their questions of the type detailed above. Several of the questions were also argumentative.

Unfortunately, the pollsters whose questions did not contain factual errors did not ask about prohibition of semi-automatics, but instead asked about lesser controls. Therefore, it is difficult to guess what percentage of the population actually does favor prohibition of some or all semi-automatics.

The two (non-flawed) questions did, however, reveal public support for treating semi-automatics with approximately the same strictness that the public supports for handguns. Virginia's Mason-Dixon Poll found 81% in favor of requiring "a permit in order to purchase a semi-automatic firearm." This percentage is very similar to what the public favors for handguns; Caddell's survey found 82% in favor of a "permit or license to purchase."

The Wisconsin Policy Research Institute found 91% in favor of "requiring the owners of semi-automatic rifles to register their weapons with the state."12 Again, the result was within the sampling error range of Caddell's result for handguns. He had found 84% in favor of registering handguns at the time of transfer.13

The 14 pollsters, including Gallup and Harris, who asked factually incorrect (and sometimes argumentative) questions found gigantic majorities in favor of complete prohibition. The results were probably an accurate gauge of public opinion too--for what they asked about. They asked about the kind of guns a soldier carries, such as an "AK-47" or an "M-16," or about an "assault rifle" or "assault weapon." Thus, the public seems to favor the current federal policy, which bans all automatics manufactured after 1986.

Unfortunately, the 14 polls finding large public support for prohibition of military guns were misused to promote controls on very different guns.

Most states that studied the semiautomatic issue considered proposals for a total ban. Total bans were rejected in over two dozen states, and enacted in two (California and New Jersey). Despite the rhetoric of organizations such as Handgun Control, Inc., the rejection of the bans in most places was not necessarily contrary to popular will. The pollsters (Gallup, for example) had found that bans were supported nationally by 72% of the population, from a low of 57% (Georgia) to a high of 78% (Massachusetts)--but they had asked about guns that were already banned (like the AK-47 and M-16). Sloppy question construction essentially ruined the utility of the polls conducted by 14 of the 16 organizations. They found, unsurprisingly, a large majority in favor of a gun ban that had been on the books since 1986. The public's affirmation of a law (about automatics) already in effect was misused to promote passage of entirely different laws, having nothing to do with "high caliber" "assault weapons" like the "M-16" and "AK-47."

In Virginia and Maryland, the legislatures passed laws making some semiautomatics subject to the same police background check as handguns.14 The Virginia and Maryland legislatures seemed to come closer to what the two non-flawed polls actually showed the larger segment of the public to want (approximately the same controls as are applied to handguns).

Because media reports of the polls do not reprint the actual question that was asked, readers are often prevented from even attempting to evaluate the distorting effect of misleading questions.15

B. Sloppy Questions

The propensity for sloppy questioning leading to results of little practical value is not confined to questions about "assault weapons." At least in regard to "assault" guns, pollsters might be excused for making a technical error in describing guns, a subject with which their question-writers apparently had little practical familiarity.

Other sloppy questions do not stem from technical mistakes. One of the most opaque gun questions was asked by Harris in January 1969, a few months after Congress had passed the first comprehensive national gun control law, the Gun Control Act of 1968: "Specifically, how would you rate the job Congress has done on not passing gun control legislation--excellent, pretty good, only fair or poor?"16 A strong opponent of gun control would have to answer "only fair" or "poor," because Congress had just enacted gun control legislation; a person favoring an absolutist interpretation of the right to bear arms could hardly say that a Congress which had just enacted the most sweeping federal gun control law in American history had done an "excellent" or "pretty good" job in "not passing gun control legislation."

Harris found that 59% of the country gave Congress low marks on the job of "not passing gun control legislation." While the results would seem to indicate opposition to federal gun control, the results were claimed to show public support for strict gun laws. And it is true that a gun prohibitionist, feeling that the Gun Control Act of 1968 did not go nearly far enough, might also give Congress low marks on "not passing gun control legislation." Thus, persons who thought there was too much federal gun control, and persons who thought there was too little, might both answer that Congress had done a "fair" or "poor" job of "not passing gun control legislation."

The question was, accordingly, worthless for research purposes. The most that the question revealed was that Harris himself apparently did not know about the 1968 Act, or thought that it did not go nearly far enough.

C. Overly General Questions

Another type of question which may lead to misleading results is the overly general question. Since at least 1975, Gallup has been asking if "the laws covering the sale of firearms should be more strict, less strict, or kept as they are now."17 Yankelovich asks the same question.18

The question is a useful policy guide only if the public actually knows what the current "laws covering the sale of firearms" are. Unfortunately, Wright, Rossi, and Daly found "a substantial degree of misinformation on the matter." As a result, public "opinion that the existing measures should be made tougher is rather difficult to interpret meaningfully."19 For example, one Missouri newspaper excitedly headlined "Voters Back Gun Control." Yet analysis of the question indicated that a majority opposed laws as strict as those in effect in Missouri for 65 years (apparently unbeknownst to most of its population).20

A question about whether present gun laws should be made stricter will be even less meaningful if the interviewer creates the incorrect impression that present laws are much weaker than they actually are. Gallup's first question in 1989 and 1990 asked about banning the "AK-47." The question assumed that the AK-47 (banned since 1986) was legal. If respondents believed that the first question was factually accurate (that AK-47s were legal), then they would be likely to favor making laws stricter than "they are now." Not surprisingly, most of Gallup's respondents thought that gun laws should be stricter than "they are now."

Of all social science work conducted, media polling such as Gallup's is near the top in influence on public policy. Unfortunately, it may be that Gallup's staff is not in touch with the world of academic social science. If the Gallup staff were even peripherally in touch with academic social science, Gallup might stop asking flawed questions such as whether gun laws should be stricter "than they are now." It is unfortunate that the Gallup Poll apparently never became aware of the Wright, Rossi & Daly critique of the type of question Gallup uses.

Even questions which are slightly more specific may involve huge ambiguities. For example, a pollster may ask about requiring a "license" to own a gun. Do the "yes" respondents favor a system like that in Illinois, where everyone who is not insane or a criminal is readily granted a license, or like that in New York City, where virtually none of the applicants get a license?21

D. Questions that are Never Asked

One gun control question which has been conspicuous by its absence is the "instant check" for handgun buyers. Since 1988, there has been no disagreement about the issue of federally mandated pre-purchase screening of gun buyers. Both the largest anti-gun lobby, Handgun Control, Inc., and the largest pro-gun lobby, the National Rifle Association, have agreed that the federal government should encourage states to check handgun purchasers for records of criminal convictions. The two organizations have sharply disagreed, however, about the mechanism for the check. Handgun Control, Inc. favors a seven day waiting period, during which police officials would have the option of conducting a background check. The National Rifle Association prefers an instant telephone check, whereby a gun dealer calls a criminal justice records number to verify a prospective purchaser's eligibility, much as the dealer already calls a credit card hotline to verify a purchaser's credit card.

Because for the last four years the debate has been "waiting period" vs. "instant check," it might be expected that media pollsters would have repeatedly queried the public about which screening mechanism the public endorsed. But in fact, no media pollster asked such a question. Instead, the pollsters asked only whether a waiting period was a good idea, and not whether an instant check was an even better idea. Regarding the waiting period in isolation, Gallup (using an argumentative question) found 95% support, and other pollsters reported similar numbers. Handgun Control, Inc. insisted that polls proved that the public favored the Handgun Control waiting period.

In May 1991, Lawrence Research conducted an analytical poll, and found that while 85% of the public liked the idea of a waiting period, only 33% liked a bill with the features of Handgun Control's bill (such as making the check optional, and allowing lawsuits against police for an insufficiently thorough check).22 When presented with a choice between the instant telephone check and the waiting period, the public preferred the instant check 78% to 14%. It is unfortunate that during a four year period when Congress was debating the merits of a waiting period vs. an instant telephone check, no media pollster bothered to survey public attitudes on the question.

Would it be cynical to suggest that the reason the question was never asked was that the media pollsters (correctly) feared that the answer would be an overwhelming preference for an instant check?

A person who read only media polls might develop the impression that a substantial tightening of gun laws is a very important public policy objective for a very large majority of the American people. But when pollsters ask the general open-ended question, "What should be done about violent crime?", the percentage of people who answer "gun control" is often smaller than the sampling error.23 Reports of such polls do not, however, run under headlines such as "Few See Gun Control as Solution to Crime." If, on the other hand, the open-ended questions did find large spontaneous support for gun control, we expect that the media reports would focus on "Record Support for Gun Control."

In sum, the formulation and reporting of gun control questions in media polls is seriously flawed. The flaws are so pervasive as to significantly undermine the reliability of the pro-control numbers reported by the media pollsters.

The next section of the paper examines serious practical problems in conducting representative samples that threaten the accuracy of media polls. Two of the most serious errors are coverage errors and non-response errors.

IV. Sampling Biases
 
All too often media polls exhibit methodological limitations in sampling which can compromise their accuracy. These methodological problems are not unique to gun control polls, but they may have a particularly large impact on sensitive questions such as gun ownership and attitudes towards gun control. Some errors exaggerate support for gun control while others work in the opposite direction. This section examines "coverage errors" as well as "non-response errors." After discussing the various sources of error, we conclude that the net result artificially exaggerates popular support for gun control measures in media polls.

A. Coverage Error
 
"Coverage error" means failing to give any chance of being selected in the survey to some persons in the "target" population. The theoretical principles of sampling have long been known to statisticians, but the practical problems are still quite daunting--and quite expensive.
24 This section looks at problems that can invalidate a sample's accuracy over and above the purely statistical limitations of sampling, known as sampling error. Since almost all public surveys are conducted over the telephone, this discussion will be limited to telephone surveys.25

Conducting a sample of the general population involves two distinct stages: [1] selecting a random sample of households, and [2] selecting respondents randomly from within households. In selecting a sample of households, the first consideration is to get a complete list of the telephone numbers for the target population. If the target population has been defined as "all households in the state," failure to include households without telephones is a "coverage error." (Failure to collect data from people who are not at home, or who refuse to participate when the interviewer calls, is treated below as non-response error.) Such errors are critical because the resulting sample may not reflect all segments of the target population.

The proliferation of computers during the past few decades has meant a tremendous increase in the use of "random digit dialing" [RDD] methods.26 Only the smallest and least sophisticated media polls still depend upon directory-based sampling methods. Polls using directory-based methods simply cannot be relied upon. Nevertheless, despite the widespread use of RDD, it is still not possible to guarantee coverage of all households in a state. This might seem surprising given the high penetration of telephones in the 1990s, but it is true. Across the United States, the percentage of households that do not have telephones varies from 4% to 7%. More importantly, the likelihood of telephone ownership varies with family income, race, age, and region.27
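
For readers unfamiliar with the technique, RDD amounts to appending random digits to known working prefixes, so unlisted numbers are reached as easily as listed ones. A minimal sketch follows (the area-code/exchange prefixes are invented for illustration; real sampling frames are built from telephone company data):

```python
import random

# Hypothetical working area-code + exchange prefixes for a target region.
PREFIXES = ["314-231", "314-776", "417-862"]

def rdd_number(rng=random):
    """One random-digit-dialed number: a known working prefix plus a
    uniformly random four-digit suffix. Unlisted numbers can be reached
    this way, but households with no telephone at all never can be."""
    return f"{rng.choice(PREFIXES)}-{rng.randint(0, 9999):04d}"

print([rdd_number() for _ in range(5)])
```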

These coverage biases are well known to most professional pollsters, who attempt to correct for them by weighting the sample to achieve the regional, sex, or race distributions that they judge best. Unfortunately, studies have shown that weighting only partially corrects for these biases and occasionally exacerbates them.28 Households without telephones differ from those with telephones, so no amount of weighting can replace the use of proper sampling methods.
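
The weighting referred to here is typically post-stratification: each respondent is weighted by the ratio of a group's population share to its share of the completed sample. A small sketch with invented urban/rural numbers shows both the correction and its limit:

```python
# Hypothetical: rural households are 30% of the population but, because
# of lower telephone coverage, only 20% of the completed sample.
population_share = {"urban": 0.70, "rural": 0.30}
sample_share     = {"urban": 0.80, "rural": 0.20}
support          = {"urban": 0.75, "rural": 0.45}   # hypothetical opinion

weights = {g: population_share[g] / sample_share[g] for g in support}

unweighted = sum(sample_share[g] * support[g] for g in support)
weighted   = sum(sample_share[g] * weights[g] * support[g] for g in support)
print(f"unweighted: {unweighted:.1%}, weighted: {weighted:.1%}")
# unweighted: 69.0%, weighted: 66.0%
```

The weighted figure is accurate only if the rural households that were reached resemble the rural households that were not; if telephone-less households differ, no weight can recover their opinions--which is precisely the point above.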

The extent of bias in media polls that derives from the use of improper sampling methods remains to be systematically evaluated, but it is non-trivial, particularly in media polls done for regional papers. Attitudes towards gun control vary across social groups, so differential sampling across these groups is bound to have an impact upon the relative support that is reported.29 However, it is difficult to assess the effects of sampling upon gun ownership or attitudes towards gun control legislation because reported gun ownership is negatively associated with some of these factors and positively associated with others.30 The likelihood of both firearm and telephone ownership increases with family income, but firearms ownership is higher in the West and the South, where telephone coverage is at its lowest.31 Telephone ownership is higher among whites than non-whites, and whites are more likely to report owning firearms.32 Finally, telephone ownership is lowest in rural areas--precisely where firearms ownership is highest.33 While it is difficult to assess fully, the net impact of coverage errors is probably in favor of gun ownership and thus would tend to exaggerate the anti-control opinions.

Proper sampling involves more than merely finding a representative sample of households. A more difficult challenge is to successfully interview respondents from within a household. The most important problem associated with within-household sampling is called "non-response error," which consists of two intertwined problems: finding respondents at home and getting them to participate in the survey. The key to reducing non-response error is the number of "callbacks."34 If budgets are tight, it can be tempting to reduce the number of callbacks that are made to a household. Callbacks are expensive, and it costs less for interviewers to simply call the next number on the list than to schedule time to pursue "incompletes." Top quality survey firms make as many as 20 callback attempts to reach the targeted respondent, while low-quality survey houses may make none.35

Callbacks are critical because the people who are easiest to contact differ from those who are more difficult to find, and who therefore require more callbacks to reach. Unemployed people, retirees, and housewives, for example, are easier to find at home than are employed people (particularly men) or poorer people.36 In a 1984 pre-election survey, Traugott found that repeated callbacks increased Reagan's plurality over Mondale from 3% to 12%.37
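
A stylized simulation makes the mechanism concrete. All the probabilities below are invented for illustration; the point is only the direction of the bias when callbacks are skipped:

```python
import random

random.seed(1)

def estimate(max_attempts, n=20000):
    """Estimated % support among respondents actually reached, when
    being at home and holding a given opinion are correlated."""
    hits = reached = 0
    for _ in range(n):
        often_home = random.random() < 0.40        # retirees, homemakers...
        p_home = 0.80 if often_home else 0.25      # ...vs. employed people
        supports = random.random() < (0.70 if often_home else 0.50)
        if any(random.random() < p_home for _ in range(max_attempts)):
            reached += 1
            hits += supports
    return hits / reached

print(f"1 attempt:   {estimate(1):.1%}")    # skews toward the often-at-home
print(f"10 attempts: {estimate(10):.1%}")   # near the true figure of 58%
```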

Refusals pose an even more important problem than not-at-homes. Not only are estimates of the refusal rate in commercial media polls over 25%,38 but, in addition, the demographics of the refusers are related to characteristics which correlate with owning or not owning guns. In a wide variety of studies, refusers have been found to be older males, people with a high-school education or less, and non-whites.39 Since education level and gender have been found to be associated with opposition to stricter gun-control legislation,40 the net result is a small bias against those groups who oppose stricter gun-control laws.

Refusal rates are even higher for surveys dealing with sensitive issues such as gun ownership. In a survey of Louisiana automobile owners, so few blacks participated that the survey had to be limited to whites only.41 Our knowledge of gun ownership is limited by how willing people are to report that they own a gun. If some groups are more reluctant than others to admit to owning a firearm, then what we believe about the social patterns of gun ownership may be seriously skewed. As well, if people who own firearms are more likely to refuse to participate in surveys, then their opinions would be systematically under-represented in polls.

Non-response bias in media polls would appear to exaggerate support for stricter gun-control legislation. This follows primarily because non-response errors (i.e., refusals and not-at-homes) are larger than coverage errors. In short, the demographics that favor stricter gun legislation tend to be over-represented in media polls.

B. Fraud and Lies

Another potential problem with sampling is fraud on the part of the sampler. The media organizations which conduct media polls have no financial incentive to spend resources guarding against errors or fraud; the media have even less incentive to monitor closely the quality of the polling conducted by Gallup or Harris.

Accordingly, evidence of fraud is uncovered only sporadically, if ever. In 1968, the New York Times hired Gallup to conduct a survey of Harlem residents. The results were so interesting that an editor sent a reporter and photographer to do follow-up stories of some of the Harlem residents who had been interviewed. But at 7 of the 23 addresses that Gallup gave the Times, there was no dwelling. At 5 more, there was a dwelling, but the person allegedly polled did not live there, and the people who did live there had never heard of the person. Even the respondents who did actually exist turned out to be somewhat different from what Gallup had reported. One "respondent" was a composite of four people playing cards.42

In one Harris poll, the employee selecting the sample cities chose them not for methodological reasons, but because they had interesting names.43 In another Harris poll (this one conducted for a private organization), the initial round of polling omitted some questions that the organization had wanted asked. At the client's insistence, Harris sent follow-up mail questionnaires to all the persons who had been interviewed. Twenty-five percent of second surveys sent to the original "respondents" were returned as undeliverable, because there was no such address, or no such person.44

The anecdotes illustrate one reason why analytical polls may be more accurate than the media polls: the client has a larger financial stake in quality control.

It is impossible to conclude from anecdotes that fraud is rife within media polling. It is also impossible to exclude the possibility, particularly since the persons hired to actually conduct the interviews tend to be young, and poorly attached to the labor force.

If fraud were not uncommon, it might be expected that the fraudulent poll-takers would be more likely to make up anti-gun responses from their non-existent interviewees, since the population segment most commonly hired as poll-takers, urban females, is also the most anti-gun segment.

Interviewers are, of course, not the only people who lie. Respondents may lie too. It has been suggested that the 1973 Gallup and Harris surveys on the impeachment of President Nixon underestimated the breadth of support for impeachment because respondents were afraid that a pro-impeachment answer might expose them to retaliation from the government.45 Whether the respondents' fears were realistic is irrelevant to the question of whether the fear induced the respondents to lie.

In regard to gun control polling, there could be potential for fearful respondents to lie. During the 1989-90 controversy over "assault weapons," many politicians were calling for the confiscation of all such weapons in private hands. If a respondent owned an "assault weapon" and feared confiscation, he might also fear that a strong pro-gun response would mark him as a likely "assault weapon" owner. Again, whether the fear was realistic does not matter.

The possibility that some gun owners may lie to "protect" themselves relates to the larger problem of respondent refusal to answer. Questions about firearms ownership are highly reactive, much like questions about personal income or sexual or criminal activities. Hence, these questions have a higher rate of refusal than other, less reactive questions, so that people who answer such questions may differ from the full sample. It would certainly be consistent with the stereotype of gun owners for the typical (slightly paranoid) gun owner to refuse to discuss guns with a stranger over the phone. But because media polls typically do not report raw figures, the drop-off is invisible.

While survey methods have vastly improved since Gallup first introduced polls in the 1930s, polling is still a challenging endeavor fraught with many perils. There are still many ways in which errors or biases can be introduced, in part because top quality survey methods are very expensive. In the case of media polls, where budgets are so tight, top quality methods may too frequently be sacrificed. Additionally, it may be easy to sacrifice quality because it is so difficult for readers to discover what methods were actually employed and whether any shortcuts were adopted. In sum, the problems with sampling are understood theoretically, but too often--due to budget constraints or lack of concern--they are ignored by the media pollsters.

V. Interviewer Effects
 
In addition to proper questions and scientific sampling methods, an accurate poll requires professional interviewing, a challenge as complex as sampling but less well understood theoretically. Interviewing refers to the general problem of questioning people to elicit their opinions or beliefs. Since the objective is to discover what the respondent believes, the interviewer should not introduce his or her own opinions.
 
An interview is a conversation between two people, and, despite the best training, all of the complexities of human interaction come into play. At the very start of a telephone interview, respondents can identify the interviewer's gender. Within a few minutes, a respondent can often identify an interviewer's race, social class, and where he or she grew up. Even if respondents are mistaken, their guesses still influence their answers.
 
Interviewer effects are a potential problem in all survey research studies, but their importance is substantially larger in surveys dealing with sensitive questions.46 Interviewer effects have been found to be more important in telephone polls than in face-to-face surveys.47 This follows because each interviewer is responsible for a larger number of respondents in telephone surveys than in other kinds of surveys.
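
The workload argument can be made precise with a standard survey-methodology result, Kish's design effect for interviewer clustering (the illustrative numbers below are ours, not drawn from the cited studies):

$$\mathrm{deff} = 1 + (m - 1)\,\rho_{\mathrm{int}}$$

where $m$ is the average number of interviews per interviewer and $\rho_{\mathrm{int}}$ is the intraclass correlation of answers within one interviewer's workload. Even a modest $\rho_{\mathrm{int}} = 0.02$ implies $\mathrm{deff} = 1 + 49(0.02) \approx 1.98$ at a telephone workload of $m = 50$ interviews--nearly double the response variance--versus only about $1.18$ at a face-to-face workload of $m = 10$.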

Awareness of interviewer effects is not new. Early survey researchers found that respondents were influenced by the race of the interviewer.48 The effect of race appears to be limited to race-related questions and has been found to exist for both white and black interviewers.49 Race is not the only interviewer characteristic that has been shown to influence respondents. Religion, age, social class, and sex differences have all been found to be important.50 In questions about abortion, female interviewers receive more "pro-choice" responses, and male interviewers receive more "pro-life" responses.51

The predominant explanation is that many respondents tailor their responses to conform to expectations or to perceived social norms.52 Respondents exaggerate their degree of schooling, and refuse to acknowledge their (private) support for political candidates considered extremist and unpopular (such as Barry Goldwater in 1964 and George Wallace in 1968). After President Kennedy was assassinated, few respondents admitted voting against him. During Watergate, a majority of Californians claimed to have voted for George McGovern--even though Nixon carried the state by 1,126,249 votes!53

In many cases, interviewer effects (or more precisely, respondents' desire to give responses which they think will please the interviewer) have been found to be substantially larger than sampling error.54 Two studies estimate that interviewer effects account for about 5% to 7% of total response variance.55 However, these studies averaged effects across both sensitive and non-sensitive questions.

Unfortunately, too many media polls ignore the problem of interviewer bias or assume it away. While it is true that no researcher can successfully deal with all possible problems in any given survey, it is necessary to address the major problems or else one is forced to accept a low level of quality.

Interviewer effects can be expected to cause problems in surveys on firearms issues not only because of the sensitivity of questions dealing with firearms, but also because of the social differences between interviewers and the respondents most likely to be pro-gun. Historically, urban-rural differences are one of the most important cleavages in politics, along with race, religion, and social class.56 In understanding public attitudes towards gun control, the most important variable is (not surprisingly) firearms ownership.57 Gun owners tend disproportionately to be rural and small-town middle-class males.58 In contrast, telephone survey interviewers tend to be urban females, who are unlikely to own guns and are likely to support severe gun control. Accordingly, it is entirely possible that respondents answering questions about gun control posed by urban females might be inclined to give more pro-control answers.

VI. Conclusion and Discussion

For many years, academics have been discussing the significance of the large pro-control sentiment reported by the media polls. Hazel Erskine complained "It is difficult to imagine any other issue on which Congress has been less responsive to public sentiment for a longer period of time." More recently, Douglas Jeffe and Sherry Bebitch Jeffe examined gun control polling, and predicted that a wave of prohibitions on "semiautomatic assault weapons" would "reduce the NRA to a voice in the wilderness."59

Jeffe and Jeffe made their prediction in an article discussing the results of a 1989 CNN/Los Angeles Times poll. Their prediction turned out to be inaccurate. Only one state besides California enacted an "assault weapon" law, and in that state (New Jersey) the legislature voted to rescind much of the law the next year. Jeffe and Jeffe were not, however, guilty of a unique error. Media poll reports of huge pro-control majorities have often been accompanied by incorrect predictions of the enactment of stricter gun laws.

Various hypotheses have been offered as to why there is such a large discrepancy between public opinion polls and legislative action. One theory has been that opponents of control hold their views with more intensity than do proponents of control, and hence exert greater influence.

One way to test the intensity hypothesis has been to ask respondents how strongly they feel about the gun control issue. In the polls conducted by Schuman and Presser, and the polling conducted by Caddell (none of these polls being "media polls"), it was the proponents of control who were more likely to describe the gun issue as important to themselves, and as an important basis for their votes.60 If the polling results about intensity are accurate, the gap between public opinion and legislative action becomes even more difficult to explain, since gun control advocates would be not only more numerous, but also more inclined to take political action on their beliefs.

But actual behavior does not comport with the polling results. The largest pro-gun organization, the National Rifle Association, has 2.5 million members, while the largest anti-gun organization, Handgun Control, Inc., has only 200,000. The mail received by most legislative offices generally runs at least 12:1 in favor of the pro-gun side. (In rural districts, the pro-gun advantage may be 100:1.) Pro-gun rallies at state capitols routinely draw hundreds, and sometimes tens of thousands of people. Few anti-gun organizations have enough grassroots strength to even schedule a rally. In short, if "intensity" is measured by visible involvement in public policy questions, the pro-gun forces are far more intense.

Thus, the "intensity of activity" hypothesis seems plausible, and may explain much of the discrepancy between public opinion polls and actual political results. The fact that so many respondents claim to have intense feelings in favor of gun control, but very few of those persons seems to act on those stated "intense" feelings, may be further support for an interviewer effect in gun control polling. Some respondents may feel that they are pleasing the interviewer by claiming that the anti-gun position is one of their most important political beliefs.

Besides the intensity hypothesis, another explanation for why legislative results are so divergent from media poll results about gun control may be that the polls themselves substantially overstate public support for gun control. The only research done thus far on that question was conducted by Schuman and Presser, who found that variations in question formulation accounted for a 1.7% to 6.4% variance in survey results. Schuman and Presser rejected the hypothesis that polling results are far out of line with actual public opinion; the authors noted, correctly, that even with a 6.4% variance, the reported level of support for gun control was still quite high.

Schuman and Presser, however, analyzed only one of the factors potentially causing inaccuracy in polling--the impact of argumentative question wording. This paper has attempted to survey more broadly several factors which, cumulatively, could sharply skew the results in public opinion polls on gun control. These factors are most commonly present in the hastily-conducted surveys which we have dubbed "media polls."

Our review shows that media polls typically exhibit numerous problems. The poll questions themselves suffer from myriad flaws, which give an unduly simplified view of public opinion. Slanted, loaded, or technically incompetent questions are common, and results are often claimed to support a position that was never queried. For example, questions asking about bans on particular models of fully-automatic firearms are used as evidence of public support for prohibition of semiautomatics. Short question series fail to explore respondents' opinions adequately. Questions about whether gun laws should be made stricter do not examine whether the respondents understand what the present laws are. The media typically only ask about the desirability of gun control and not about the right to bear arms or about alternative strategies for dealing with violent crime.61

A second major possible distorting factor in media polls stems from the lack of outside quality controls on media polls, and the propensity to produce results in a hurry. Methodological limitations arise which may over-emphasize the views of relatively anti-gun segments of the population (urban females) and under-emphasize the views of pro-gun segments (males with full-time jobs). In addition, the most militant gun owners may have reasons of their own for refusing to communicate their militancy to strangers on the telephone. Lastly, since most interviewers are urban females (an anti-gun group), it has been suggested that interviewer effect might produce relatively more anti-gun answers. Preliminary research suggests that the interviewer effect relating to gender may change the intensity of an answer, but not the basic position expressed.

In sum, media polls on gun control are often not scientific and should be interpreted with caution. Media polls may tend to exaggerate popular support for sterner gun control measures.


ENDNOTES

1 David Bordua, "Gun Control and Opinion Measurement: Adversary Polling and the Construction of Social Meaning," in Don B. Kates, Jr. (ed.), Firearms and Violence: Issues of Public Policy (San Francisco: Pacific Institute for Public Policy Research, 1984); William R. Tonso, "Social Problems and Sagecraft: Gun Control and the Social Scientific Enterprise," in Don B. Kates, Jr. (ed.), ibid.; William R. Tonso, Gun and Society: The Social and Existential Roots of the American Attachment to Firearms (Washington, DC: University Press of America, 1982).

2 Cambridge Reports, An Analysis of Public Attitudes Toward Handgun Control (Cambridge, Mass., June 1978). Decision Making Information, Attitudes of the American Electorate Towards Gun Control 1978 (Santa Ana, California, 1978).

Both surveys reported consistent findings that about 40-50% of U.S. households owned some kind of gun, and about half of those households owned a handgun. The two surveys agreed that about 7% of adults carry a gun on their person, and that 40% of handgun owners bought their weapon mainly for self-protection. About 15% of all registered voters or their families had used a gun in self-defense (including by brandishing it). Caddell reported that 2% of all adults had personally fired a handgun in self-defense; DMI found that 6% of all registered voters or their families had fired a gun in self-defense. The incidence of firearms accidents was about equal to the incidence of firearms use for self-defense.

The two surveys also produced similar results about gun control. Regarding mandatory prison sentences for criminals who use a gun, Caddell found 83% support, and DMI found 93% support. Requiring detailed record-keeping by gun dealers was favored by 54% of the DMI respondents, and 49% of Caddell's. Caddell found about 62% of the population against a ban on handgun ownership, while DMI found 83% opposed. Each survey found 40-50% agreeing that stricter gun controls would reduce crime. In Caddell's sample, 78% thought that gun control laws affect only law-abiding citizens; 85-91% of DMI's sample thought registration would not prevent criminals from acquiring handguns. About half of the DMI and Caddell samples agreed that national gun registration might eventually lead to total firearms confiscation.

To the extent the surveys seemed to differ, it was usually because the pollsters had asked different questions. For example, 87% of DMI's respondents thought that the Constitution guaranteed an individual right to own a gun, and 53% of Caddell's thought handgun licensing was Constitutional. The results were consistent, in that the majority may have felt that the Constitution guarantees a right to own a gun, but that handgun licensing does not violate that right.

Thus, Wright, Rossi and Daly concluded:

Despite the occasionally sharp differences in emphasis and interpretation...the actual empirical findings from the two surveys are remarkably similar. Results from comparable (even roughly comparable) items rarely differ between the two surveys by more than 10 percentage points, well within the "allowable" limits given the initial differences in sampling frame and the usual margin of survey error....[O]n virtually all points where a direct comparison is possible, the evidence from each survey says essentially the same thing. [p. 240].

In short, except for the fact that the two surveys came from different sides of the gun control debate and highlighted different aspects of their results, they were nearly identical. For a detailed comparison of these two polls, see Chapter 11, James D. Wright, Peter H. Rossi and Kathleen Daly. Under the Gun: Weapons, Crime and Violence in America. (New York: Aldine, 1983).

3 Don B. Kates, "Bigotry, Symbolism and Ideology in the Battle over Gun Control," Public Interest Law Review (Carolina Academic Press: 1992): 31-46.

4 "Under Fire," Time Magazine, January 29, 1990.

5 Gollin criticizes media polls for poor quality, but he does not specify which errors he is concerned about. Albert E. Gollin. "Polling and the News Media," Public Opinion Quarterly, 51 (no. 1, pt. 2), 1987, 86-94.

6 Cynthia Crossen, a journalist for the Wall Street Journal, has argued that media polls are increasingly underfunded. See Globe and Mail, December 7, 1991, D5. For example, The Vancouver Sun, the largest daily in British Columbia, with a paid circulation of over 250,000, has a reputation for being cheap. For the past decade, the paper has relied upon the pollster with the worst record in the province because he was the low bidder. Since few editors were familiar with survey methods, the pollster had a free hand in conducting his polls, and because he was interested in profit, he had every reason to cut corners methodologically.

7 See Seymour Sudman and Norman M. Bradburn, Response Effects in Surveys: A Review and Synthesis (Chicago: Aldine, 1974); Norman M. Bradburn, "Response Effects," in Peter H. Rossi, James D. Wright and Andy B. Anderson, eds., Handbook of Survey Research (New York: Academic Press, 1983); G.E. Lenski and J.C. Leggett, "Caste, Class, and Deference in the Research Interview," American Journal of Sociology (1960), vol. 65: 463-467.

8 For example, Associated Press, "Polls Shows Majority Favor Ban on Assault Weapons," Rocky Mountain News, March 19, 1989, p. 46: Los Angeles Times survey of "1,158 people...has a margin of error plus or minus 3 or 4 percentage points." For a Newsweek survey, "The telephone pole [sic] of 756 adults...has a margin of error of plus or minus 4 percentage points."

9 For a very readable account of the practical problems involved in conducting a modern survey study, and the kinds of potential errors, see Robert M. Groves, Survey Errors and Survey Costs. (New York: Wiley and Sons, 1989).

10 About 1/4 of American households contain a handgun, which means that about 1/4 of a nationwide sample of households would be expected to own a handgun. Because sampling error varies inversely with the square root of sample size, a subsample one-quarter the size of the full sample has double the sampling error. Thus, if the sampling error for the full sample is ± 2.5 percentage points, then for questions asked only of that one-quarter of the sample the sampling error increases to ± 5 percentage points.
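
To illustrate the square-root relationship, here is a minimal sketch (our illustration, not the pollsters' method; the sample size of 1,537 is a hypothetical round figure) computing the standard 95% margin of error for a full sample and for a one-quarter subsample:

```python
# A minimal sketch (our illustration) of the standard 95% margin-of-error
# formula, MOE = 1.96 * sqrt(p*(1-p)/n), using the conservative p = 0.5.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a sample of size n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

full_sample = 1537                    # hypothetical national sample
handgun_owners = full_sample // 4     # roughly 1/4 of households own a handgun

print(f"Full sample (n={full_sample}):    +/- {margin_of_error(full_sample):.1f} points")
print(f"Handgun owners (n={handgun_owners}): +/- {margin_of_error(handgun_owners):.1f} points")
```

Run as written, the sketch prints ± 2.5 points for the full sample and ± 5.0 points for the quarter-sized subsample.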

11 Robert S. Greenberg and John J. Fialka, "Calls by Some on GOP Right to Consider a Nuclear Strike Spark Heated Debate," Wall Street Journal, 1991.

12 Sept. 4-7, 1990.

13 Wright et al., supra, p. 223.

14 Maryland has a 14-day waiting period, and Virginia an instant telephone check. The Virginia check seems to have a lower error rate.

15 E.g., Associated Press, "Polls Shows Majority Favor Ban on Assault Weapons," Rocky Mountain News, March 19, 1989, p. 46 (reporting Los Angeles Times and Newsweek polls).

16 Reproduced in Hazel Erskine, "The Polls: Gun Control," Public Opinion Quarterly (1972), vol. 36: 469.

17 "In general, do you feel the laws regarding covering the sale of firearms should be made more strict, less strict, or kept as they are now." Gallup Polls, Sept. 10-11, 1990, reporting results for 1980, 1986, 1989, and 1990. Should the laws covering the sale of handguns be made more strict. Gallup, 1975, discussed in Don B. Kates, Jr. "Toward a History of Handgun Prohibition in the United States," in Don B. Kates, Jr. ed., Restricting Handguns: The Liberal Skeptics Speak Out (North River Press, 1979), p. 27.

18 Douglas Jeffe and Sherry Bebitch Jeffe, "Gun Control: A Silent Majority Raises Its Voice," Public Opinion, May/June 1989, p. 9 (survey for CNN/Los Angeles Times).

19 Wright et al., supra, p. 232, citing 1975 DMI poll for evidence of public misunderstanding of current laws. George Gallup, "Gun Control Plan Favored," supra, 1975.

20 Kates, "Toward a History," supra, p. 27.

21 Kates, ibid., p. 28; Franklin E. Zimring, "Firearms, Violence and Public Policy," Scientific American, November 1991, 265[5]: 48-54.

22 Dr. Gary C. Lawrence, "Results of a National Telephone Survey of Registered Voters on Waiting Period and Immediate Check Legislation," May 1991. The poll was commissioned by the National Rifle Association. As discussed above, the fact that an organization with an interest in the result has paid for analytical polling does not make the poll invalid. Polls paid for by the anti-gun Center for the Prevention of Handgun Violence and by the pro-gun National Rifle Association have found strikingly similar results. Apparently, if an organization is willing to pay the fee for thorough analytical polling by a professional polling firm, the results come out the same no matter who writes the check. Handgun Control, Inc., Assault Weapons: Polling Data (1990).

23 Bordua, supra, pp. 347-348.

24 The principle is that all elements of the target population must have a known--often equal--probability of being selected.
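
As a rough illustration of why the selection probabilities must be known, consider the following sketch (ours, with invented numbers): when some groups are sampled at a higher rate, weighting each response by the inverse of its selection probability recovers an unbiased estimate.

```python
# A minimal sketch (our illustration, invented numbers) of inverse-probability
# weighting: respondents sampled at a higher rate count for less, so known
# selection probabilities let the analyst undo the imbalance.
def weighted_proportion(responses, probs):
    """Estimate a population proportion from (0/1 answer, selection prob) pairs."""
    numerator = sum(r / p for r, p in zip(responses, probs))
    denominator = sum(1 / p for p in probs)
    return numerator / denominator

# Suppose urban respondents (prob 0.02) were sampled at twice the rate of
# rural respondents (prob 0.01); the 1/p weights correct for that.
answers = [1, 1, 0, 0, 1]
probs = [0.02, 0.02, 0.02, 0.01, 0.01]
print(f"Weighted estimate: {weighted_proportion(answers, probs):.2f}")  # ~0.57
```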

25 A potential source of error is that not everyone in the general population has a telephone. In the mid-1980s, it was estimated that approximately 93% of all households in the US had a telephone. Owen T. Thornberry, Jr. and James T. Massey, "Trends in United States Telephone Coverage Across Time and Subgroups," in Robert M. Groves, Paul P. Biemer, Lars E. Lyberg, James T. Massey, William L. Nicholls II, and Joseph Waksberg, eds., Telephone Survey Methodology (New York: Wiley & Sons, 1988), p. 29.

26 See James M. Lepkowski, "Telephone Sampling Methods in the United States," in Groves et al., supra, p. 73. The use of RDD in almost all state-wide polls has eliminated the problem of unlisted numbers that plagues directory-based sampling methods. In RDD, telephone numbers are created randomly by computer, based on the working prefixes for the target area. Thus, unlisted numbers--whether new listings or private numbers--can be generated, so that the researcher is not dependent upon published lists of telephone numbers.
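
A minimal sketch of the technique (our illustration, not Lepkowski's procedure; the prefixes are invented) might look like this:

```python
# A minimal sketch (our illustration; prefixes are invented) of random digit
# dialing: random four-digit suffixes are appended to known working area-code/
# exchange prefixes, so unlisted numbers can be reached without a directory.
import random

PREFIXES = ["212-555", "303-555", "415-555"]  # hypothetical target-area prefixes

def rdd_numbers(k, seed=None):
    """Generate k candidate telephone numbers for an RDD sample."""
    rng = random.Random(seed)
    return [f"{rng.choice(PREFIXES)}-{rng.randrange(10000):04d}" for _ in range(k)]

print(rdd_numbers(5, seed=42))
```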

27 Thornberry and Massey, in Groves et al., supra, p. 37.

28 James T. Massey and Steven L. Botman, "Weighting Adjustments for Random Digit Dialed Surveys," in Groves et al., supra, p. 143.
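
As a rough sketch of two such adjustments (ours, not Massey and Botman's actual procedure): households with several telephone lines are more likely to be dialed and so are weighted down, while a respondent drawn at random from a household with several adults stands in for all of them and is weighted up.

```python
# A minimal sketch (our illustration, not Massey and Botman's procedure) of
# two common RDD weighting adjustments: multiple phone lines raise a
# household's chance of selection; interviewing one adult per household
# lowers each adult's chance of selection in larger households.
def selection_weight(phone_lines, adults):
    """Weight proportional to 1 / P(selection) for an RDD respondent."""
    p_selection = phone_lines * (1 / adults)
    return 1 / p_selection

print(selection_weight(phone_lines=1, adults=3))  # 3.0 -- stands in for three adults
print(selection_weight(phone_lines=2, adults=1))  # 0.5 -- two chances to be dialed
```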

29 See Gary A. Mauser and Michael Margolis. "The Politics of Gun Control: Comparing Canadian and American Patterns," Presented to the American Political Science Association, San Francisco, 1990, and Arthur L. Stinchcombe, Rebecca Adams, Carol A. Heimer, Kim L. Scheppele, Tom W. Smith and D. Garth Taylor. Crime and Punishment--Changing Attitudes in America (San Francisco: Jossey Bass, 1980).

30 Another problem is that the willingness of people to admit that they own a firearm may vary across social categories. This is discussed in the next section.

31 The percentage of households without telephones is highest in the South (10%), followed by the West (7%) and the Midwest (6%), and is lowest in the Northeast (4%). Thornberry and Massey, supra, p. 29.

32 The percentage of households without telephones is 6% for whites, 16% for blacks, and 11% for other non-whites. Ibid., p. 30.

33 The percentage of households without telephones is 8% in urban areas and 5% on rural farms, while rural non-farm areas average 10%.

34 Robert M. Groves and Robert L. Kahn, Surveys by Telephone (New York: Academic Press, 1979), p. 55.

35 Personal communication, Bruce Campbell, President, Campbell and Associates, Vancouver, BC.

36 Michael W. Traugott, "Persistence in Respondent Selection," Public Opinion Quarterly (Spring 1987), p. 53.

37 Ibid., p. 54.

38 Groves, supra, p. 155.

39 Ibid., pp. 205-207.

40 Mauser and Margolis, supra, p. 17.

41 Bankston, Carol Y. Thompson, Quentin A.L. Jenkins, and Craig Forsyth, "The Influence of Fear of Crime, Gender, and Southern Culture on Carrying Firearms for Protection," Sociological Quarterly, 31[2]: 287-305, p. 302.

42 Michael Wheeler. Lies, Damn Lies, and Statistics: The Manipulation of Public Opinion in America (New York: W.W. Norton, 1976; republished by Dell), p. 111.

43 Ibid., p. 95.

44 Ibid., p. 112.

45 Ibid., p. 117.

46 Robert M. Groves and L.J. Magilavy, "Estimates of Interviewer Variance in Telephone Surveys," Proceedings of the Survey Research Methods Section, American Statistical Association (1980), 622-62.

47 Groves and Magilavy, supra.

48 Daniel Katz, "Do Interviewers Bias Poll Results?" Public Opinion Quarterly (1942), vol. 6: 248-268; Hadley Cantril, Gauging Public Opinion (Princeton: Princeton University Press, 1944).

49 Howard Schuman and Jean M. Converse, "The Effect of Black and White Interviewers on Black Responses in 1968," Public Opinion Quarterly (1971), vol. 35: 44-68.

50 H.H. Hyman, W.J. Cobb, J.J. Feldman, and C.H. Stember. Interviewing in Social Research (Chicago: University of Chicago Press, 1954); David Riesman, "Orbits of Tolerance, Interviewers and Elites," Public Opinion Quarterly (1956), vol. 20: 49-73; G.E. Lenski and J.C. Leggett. "Caste, Class, and Deference in the Research Interview," American Journal of Sociology, (1960) vol. 65: 463-467; Michael D. Grimes and Gary L. Hansen, "Response Bias in Sex-Role Attitude Measurement," Sex Roles, (1984), vol. 10 (nos. 1 & 2): 67-72.

51 In a 1990 Eagleton Institute poll, women gave 84% pro-choice responses to female interviewers, and 64% pro-choice responses to identical questions from male interviewers. Men gave 77% pro-choice responses to female interviewers, and 70% pro-choice responses to male interviewers. Ellen Goodman, "Lies, More Lies, and Then There's Good Ol' Pillow Talk," Rocky Mountain News, May 14, 1990, p. 142.

52 Seymour Sudman and Norman M. Bradburn. Response Effects in Surveys: A Review and Synthesis, (Chicago: Aldine, 1974).

53 Wheeler, supra, pp. 116-17.

54 Groves and Kahn, supra; and Groves and Magilavy, supra.

55 R.H. Hanson and E.S. Marks, "Influence of the Interviewer on the Accuracy of Survey Results," Journal of the American Statistical Association (1958), vol. 53: 635-655; Seymour Sudman, Norman M. Bradburn, Ed Blair, and Carol Stocking, "Modest Expectations: The Effects of Interviewers' Prior Expectations on Responses," Sociological Methods and Research (1977), vol. 6 (no. 2): 171-182.

56 Seymour M. Lipset, Political Man: The Social Bases of Politics (Garden City, NY: Doubleday, 1964).

57 Stinchcombe et al, supra; Mauser and Margolis, supra.

58 Wright et al., supra, p. 107.

59 Jeffe and Jeffe, supra, p. 57.

60 It is true that the small number of respondents who said that gun control was "one of the most important" issues to them comprised a larger proportion of the pro-gun respondents (7.3-8.7%) than of the anti-gun respondents (3.1-4.0%). But since the anti-gun respondents (those in favor of requiring a police permit to buy a gun) outnumbered the pro-gun respondents about 2:1, the absolute numbers of pro-gun and anti-gun persons calling the issue "one of the most important" would be approximately equal. Schuman and Presser, supra, pp. 432-35.
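
A quick arithmetic check (ours, using midpoints of the note's figures) shows why the absolute numbers come out roughly equal:

```python
# A quick arithmetic check (our illustration, using midpoints of the figures
# in this note) that the absolute numbers are approximately equal when
# anti-gun respondents outnumber pro-gun respondents about 2:1.
pro_share, anti_share = 1 / 3, 2 / 3      # ~2:1 ratio of anti-gun to pro-gun
pro_intense = 0.080 * pro_share           # midpoint of 7.3-8.7% of pro-gun group
anti_intense = 0.035 * anti_share         # midpoint of 3.1-4.0% of anti-gun group

print(f"Pro-gun 'most important':  {pro_intense:.1%} of all respondents")   # ~2.7%
print(f"Anti-gun 'most important': {anti_intense:.1%} of all respondents")  # ~2.3%
```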

61 A commendable exception was the Yankelovich poll for CNN/Los Angeles Times, which asked a long battery of questions about firearms attitudes and found that 84% of respondents believed they had a right to own a gun. See Jeffe & Jeffe, supra, p. 10.
