Media pundits are having a tough time explaining why national polls seem to be fluctuating widely. Let's cut through the manure surrounding polls.
The polls you read about or see on television are designed to be easy to assimilate. Good campaign polling is far more complex, with many levels of questions that allow pollsters to construct useful profiles of voters and their momentary preferences. They test assumptions. When discussing polls, most journalists focus on who's ahead, as if that were any kind of prediction of who will win months down the road.
To understand any poll, you need to understand just a little bit about mathematics and methodology. Methodology describes how a poll is designed, constructed and implemented in the real world. Always remember that polling only works if a truly random pool of people has been surveyed. Unlisted phone numbers, time of day or night called and many other factors can warp who's in and who's out of the survey pool. Most people already grasp that once the voter is contacted, the way questions are worded and how they're asked can affect the answers given. Good pollsters account for these variables, and are willing to discuss their methods with the press.
Here's some math. I'm skipping the complex formulae, but please understand that millennia of bearded men rolling dice and figuring probabilities have worked this stuff out well. You need to know that most political polls are designed to have a "Confidence Level" of 95 percent. Roughly speaking, this means there's a 95 percent chance that the poll comes acceptably close to the truth.
Yes, you read right. One time in 20, even a well-designed, unbiased poll can just miss the mark. When one South Carolina poll showed McCain far ahead of Bush this spring, the press jumped all over it and almost changed the course of the election. But this one poll was just off. Surprise.
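That one-in-20 failure rate is easy to see for yourself. Here's a quick Python sketch under assumed conditions — a hypothetical dead-even race and 400 respondents per poll, with the plus-or-minus 4.9-point margin that sample size carries:

```python
import random

def one_poll(n=400, true_support=0.50):
    """Survey n randomly chosen voters; return the share measured for our candidate."""
    return sum(random.random() < true_support for _ in range(n)) / n

random.seed(2000)  # fixed seed so the sketch is repeatable
trials = 5000
moe = 0.049  # the +/- 4.9-point margin for a 400-person poll
misses = sum(abs(one_poll() - 0.50) > moe for _ in range(trials))
print(f"{misses} of {trials} well-run polls still missed the mark")
```

Run it and roughly 5 percent of the simulated polls land outside the margin of error through no fault of the pollster. That's the South Carolina surprise in miniature.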
If we conducted the same survey 20 times, then about 19 of those times every answer would fall within a certain range of the true value. That range, plus or minus, is the "margin of error," and its size depends on the number of people polled. If 400 people are polled, then 19 times out of 20 the answers will fall within 4.9 percentage points, higher or lower, of the number shown. If 1,200 people are polled, the margin of error shrinks to 2.8 percentage points, plus or minus. If you analyze smaller subgroups, such as comparing men to women or upstate to downstate, the margin of error is larger for each small group than for the entire polled sample.
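For the curious, the bearded men's formula fits in a few lines. At 95 percent confidence, the margin of error is 1.96 standard errors, and published polls conventionally quote the worst case, a 50/50 split. A Python sketch:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a poll of n people.
    p=0.5 is the worst case that published polls conventionally assume."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (400, 1200):
    print(f"{n} people polled: +/- {margin_of_error(n) * 100:.1f} points")
# 400 people polled: +/- 4.9 points
# 1200 people polled: +/- 2.8 points
```

The square root in the denominator is also why subgroup numbers are mushier: cut the sample into small groups and each group's n shrinks, so its margin swells.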
So if a particular tracking poll of at least 400 people shows Gore leading Bush 43 to 41 percent, and three days later it shows Bush leading 43 to 41 percent, then nothing has happened. Don't obsess over these small fluctuations. All four major national tracking polls have been bouncing around, but practically always within their margins of error.
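A common rule of thumb — a simplification, since the margin on a lead is really its own calculation — is that two numbers whose error ranges overlap can't be told apart:

```python
moe = 4.9  # points, for a 400-person tracking poll
gore, bush = 43, 41
# 43 +/- 4.9 spans 38.1-47.9; 41 +/- 4.9 spans 36.1-45.9 -- they overlap
tied = abs(gore - bush) <= 2 * moe
print("statistical tie" if tied else "real movement")  # -> statistical tie
```

By this yardstick, a two-point "lead" from a 400-person sample is noise, not news.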
If a commentator blabbers about "a sudden change" or "an unexpected tumble" when a tracking poll has only moved a few points, then he's a meatball. Turn the channel and vote on Election Day, the only poll in which the margin of error is zero.