US Election 1948: The First Great Controversy about Polls
USINFO | 2013-09-04 15:52
The 1948 Failure of the U.S. Polls of Electoral Standing
 
In the 1948 presidential election in the United States, the Gallup, Roper, and Crossley polls had created a widespread certainty of Truman’s defeat. Some papers, including the Chicago Tribune, a subscriber to the Gallup poll, even announced Truman’s loss in front-page headlines before the returns were in. Instead of facilitating a transition in government, the polls had misled the presidential candidates and all other politicians, the Washington bureaucrats, the media, and the public.
 
The failure did not affect foreign polling operations nearly as much, but even European pollsters were occasionally accused of engaging in an intellectual scam. More important, academic survey researchers in the United States became upset as their research method was publicly questioned. Rensis Likert (1948) rushed into print in Scientific American and argued that no matter how wrong Gallup, Roper, and Crossley had been in the election, "it would be as foolish to abandon this field as it would be to give up any scientific inquiry which, because of faulty methods and analysis, produced inaccurate results." The shock caused an intellectual crisis for polling and a flurry of reflections. The failure of the polls in 1948 brought the academic and private survey researchers together.
 
The Social Science Research Council organized a "Committee on Polls and Election Forecasts." It was composed of academics from mathematical statistics (Frederick Mosteller of Harvard University, chairman of the committee), political science, social psychology, sociology, history, and other fields. The pollsters gave the committee all the source material from their pre-election studies and full access to their staffs. The committee engaged a technical team that worked on and re-tabulated the data for three weeks.
 
In their 400-page report (Mosteller et al. 1949), the committee concluded that the pollsters had overreached their ability to pick the winner of the 1948 election; the possibility of a close election was evident from their own data and from the level of measurement error in past elections. A supposition of the committee was that in the last two weeks of the campaign, after Crossley, Gallup, and Roper had completed their interviewing, there had been a net shift to Truman of two to three percentage points. The pollsters had missed the necessity of measuring preferences just before the election. The committee, however, would not flatly say that Crossley and Gallup had been right two weeks before the election. (Roper was too far off; he could not reasonably have been right a fortnight before.) The committee’s conclusion on the last weeks’ shift was "tentative."

The committee voiced the suspicion that the pollsters’ use of quota sampling rather than probability sampling had allowed interviewers to select somewhat more educated and well-off people within their assigned quotas. This biased their samples against Truman, who appealed more to the lower classes than Dewey did.
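The mechanism the committee suspected can be illustrated with a small simulation. The sketch below is entirely hypothetical: the population shares and preference rates are invented for illustration, not the actual 1948 figures. It only shows how an interviewer-side tilt toward better-off respondents within quotas shifts an estimate away from the true split, while an equal-probability sample does not.

```python
import random

random.seed(48)

# Hypothetical population: education/affluence correlates with candidate
# preference. All figures are illustrative, not the 1948 electorate.
def make_voter():
    educated = random.random() < 0.35           # assumed 35% better-off stratum
    p_dewey = 0.60 if educated else 0.40        # assumed preference rates
    return educated, random.random() < p_dewey  # (educated, prefers_dewey)

population = [make_voter() for _ in range(100_000)]

# Probability sample: every member has an equal chance of selection.
prob_sample = random.sample(population, 1_000)

# Quota-style sample: interviewers fill quotas but, within them, lean toward
# easier-to-reach, better-off respondents (modeled here as a 3:1 tilt).
weights = [3 if educated else 1 for educated, _ in population]
quota_sample = random.choices(population, weights=weights, k=1_000)

def dewey_share(sample):
    return sum(prefers for _, prefers in sample) / len(sample)

print(f"True Dewey share:   {dewey_share(population):.1%}")
print(f"Probability sample: {dewey_share(prob_sample):.1%}")
print(f"Quota-style sample: {dewey_share(quota_sample):.1%}")
```

Under these assumed numbers, the quota-style sample overstates Dewey by several points, which is the direction of error the committee described.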

The pollsters had assumed that the undecided at the time of the interview would vote in the same way as those who had already made up their minds. This was an unproven assumption, and may not have been the case in the 1948 electorate.
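How much that assumption matters is easy to see with a back-of-the-envelope calculation. The percentages below are illustrative only, not actual 1948 poll figures: allocating the undecided in proportion to the decided can call the race one way, while a late break toward one candidate flips it.

```python
# Illustrative poll percentages (not real 1948 figures).
decided_dewey = 44.0
decided_truman = 41.0
undecided = 15.0

# The pollsters' assumption: undecideds split like the decided.
total = decided_dewey + decided_truman
proportional = (decided_dewey + undecided * decided_dewey / total,
                decided_truman + undecided * decided_truman / total)

# Alternative assumption: undecideds break 2:1 for Truman late on.
breaking = (decided_dewey + undecided / 3,
            decided_truman + 2 * undecided / 3)

print("Proportional allocation: Dewey %.1f, Truman %.1f" % proportional)
# Proportional allocation: Dewey 51.8, Truman 48.2
print("Late break to Truman:    Dewey %.1f, Truman %.1f" % breaking)
# Late break to Truman:    Dewey 49.0, Truman 51.0
```

With these invented numbers the two allocation rules name different winners, which is why the assumption could not safely be left untested.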

The pollsters had no certain way of deciding who would stay home on Election Day and who would go and cast a vote.
 
These errors were not new; the committee found previous instances in which they had been documented. In all, the conclusions from the committee were grim reading for Gallup, Roper, and Crossley.
 
After the publication of the conclusions from the Social Science Research Council, a conference on the topic was held in February 1949 at the University of Iowa under the title Polls and Public Opinion (Meyer and Saunders 1949; all page references in the present paper from conference contributions are from this work). At that time the full staff report was still not available to the participants, although many of them were aware of at least some of its findings. The session chairmen were mostly from the University of Iowa; for example, Robert Sears of the Psychology Department, Director of the famous Iowa Child Welfare Research Station. Some prominent outsiders were also invited to chair sessions; thus F. Stuart Chapin, head of the Sociology Department at the University of Minnesota, chaired the session on future trends in opinion sampling.
 
There were many editors and several people from schools of journalism at the conference. Gideon Seymour, vice president and executive editor of the Minneapolis Star and Tribune, was the main spokesman for journalism. Somewhat surprisingly, Mr. Seymour proved to be a satisfied client of the polls. True, his paper had misled its readers by publishing the national polls on the presidential race. But the local poll had been correct! And, as all newspaper editors know, the local audience is bread and butter, while the national audience is a luxury. Mr. Seymour liked the fact that the news section of his paper had presented a state poll showing that Minnesota’s voters preferred the winning Hubert Humphrey (Democrat) for senator, while his editorial page had endorsed the re-election of Senator Joe Ball (Republican). Polls had solved a common dilemma for an editor-in-chief: editorial opinion must at times be allowed to differ from the readers’ opinion, and, thanks to the polls, the paper had an easy way to publish both.

Francis H. Russell from the State Department and Morris S. Hansen of the Bureau of the Census represented the federal bureaucracy. Dr. Hansen was active in the discussions; here spoke a future top statistician. Although he represented the census with its complete enumeration of the population, he spoke glowingly about samples. But he underlined the need to use probability sampling, which the pollsters had not done.
 
Well-known university survey researchers were invited. Bernard Berelson of the University of Chicago demonstrated the usefulness of looking at historical events when interpreting opinions, an implied criticism of both journalists and pollsters who focused on the situation of the day. Stuart C. Dodd, Director of the Public Opinion Laboratory at the University of Washington in Seattle, and some of his coworkers presented relevant suggestions from experiments in their statewide surveys. Hadley Cantril of Princeton University had a paper read on his behalf. Paul F. Lazarsfeld of Columbia University participated with many spirited comments. He brought most of the laughter to the proceedings, once by recounting a recent trip to Europe:
 
I just came back from Europe and I spent election week in Norway and election week end in Sweden, and the way things are discussed there is so: "Do you have a ‘Gallup’ yourself?" "Has Crossley’s ‘Gallup’ been better than Roper’s ‘Gallup’?" (laughter). And I tried to understand what they meant by "Crossley’s ‘Gallup,’" and then I found out that their notion of the word "Gallup" is the American word for "polls." What I need is a "Lazarsfeld’s Gallup." I have definitely lost status by claiming that I know Mr. George Gallup, who does excellent polls, because no one really believed me. (p.194)
 
Lazarsfeld held that the SSRC report was not particularly constructive as a basis for future work.
 
The head of NORC, Clyde W. Hart, was active at the conference. He took a sympathetic stand toward polling for the media, and emphasized that academic researchers also had a responsibility to advance the opinion research used in journalism. Clyde Hart also said, "Our samples are always samples of discrete individuals in a population; they are not samples of a public" (p. 28). This seemingly innocuous remark pointed to a serious gap between academic theory and survey practice. Public opinion, as conceived by social and political philosophers, including the fathers of democracy, was different. As specified by early students of public opinion such as Lowell (1913) and Lippmann (1922), public opinions emerge in lasting and functioning networks, "publics." These authors assumed that a public had such a density that the participants could talk and argue about a common issue so that everyone's view became known and influenced by everyone else's view. They called the result of this process "public opinion," not necessarily unanimous, but one that emerged after all sides had been heard and considered.
 
Hart emphasized the need for better integration of theory and research in this field, and many academic interventions at the symposium called for a better theory of public opinion. Such integration of theory and survey research may have been more relevant for polling on issues; since the problem before the conference was a poll of electoral standing, Hart’s theoretical concerns tended to be bypassed without conclusions.
 
No one from the Survey Research Center in Michigan came to the Iowa conference. The SRC let it be known that it had had an unpublished survey with a probability sample in the field at the time of the 1948 election. It had shown Truman in the lead.

Nor did Elmo Roper, a big promoter of polling, come to the conference; he had been further off the mark than the other pollsters in the 1948 election. Stevens J. Stock of the Opinion Research Corporation in Princeton attended from the polling industry, as, of course, did Archibald Crossley and George Gallup.
 
The pollsters at the conference graciously accepted most of the committee’s criticism but apparently smarted under its know-it-all tone. Dr. Gallup said:
 
Up to this point we in the field of public opinion research have had to carry the ball ourselves with comparatively little help, but with plenty of criticism, from the social scientists.
 
The Social Science Research Council has laid the groundwork for a healthy and continuing partnership. I for one am ready at all times to work with any social scientist or any group of social scientists on any problem of public opinion research (p. 223).
 
Professor Samuel A. Stouffer of Harvard University represented the Social Science Research Council Committee on Polls and Election Forecasts. He was one of the co-authors of the report; the main author, Frederick Mosteller, could not attend. Stouffer was a stellar figure in the field in those days, working on a summary, evaluation, and methodological lessons from over 500 survey studies of American soldiers in World War II (Stouffer et al. 1949).
 
Samuel Stouffer did not backtrack one iota from the committee’s conclusions that the 1948 polls were not up to the standards of science.
 
However, Stouffer added:
 
I may say that the more I worked on this report, the more I felt the debt that we owe to these men who have been willing to risk their own money in going out and trying to learn something about American political behavior. The pollsters have been ahead of the universities; the universities have been tagging along behind. This is no time for the universities to say, "Oh, well, this is something we don't want any part of." This is the time, it seems to me, for the universities to say that in this device, this new invention that we have, lies one of the great opportunities for developing an effective social science. And we have a responsibility, we in the universities, to do our best to help improve these techniques. With the improvement of these techniques I think we can have every confidence that we are going to improve social science, and I think we are going to help our country, because I believe in this work as an instrument of democracy. (p. 214)
 
We may summarize the evaluation of the polling experience in the 1948 presidential election by separating George H. Gallup’s scientific application of survey sampling and interviewing from his social invention of polling the public, on issues defined by the public, for the benefit of the public. The scientific application was on the right track but needed improvement. The social invention was a brilliant and promising instrument for democracy.