Today marks the release of the initial results of the first University of Texas/Texas Tribune Poll of 2013. The poll focuses on public policy issues confronted by the 83rd Legislature, along with several recurring subjects we regularly monitor. It also introduces a few methodological changes, and it marks the beginning of an attempt to make more and better use of the considerable body of data we have accumulated since the poll began in October 2009.
The primary technical change is an increase in our sample size from 800 to 1,200 respondents, beginning with this survey. While this may seem like a large increase at first blush — 50 percent to be exact — statistically, it won’t make as large a difference in our margin of error as one might expect. Sample size increases have a diminishing marginal return, which is why the standard nationwide poll usually surveys approximately 1,000 people. The reality is that increasing a survey pool much beyond 1,000 doesn’t appreciably change the margin of error. For example, in Texas, if we are interested in surveying registered voters, a 1,000-person sample has a margin of error of 3.1 percentage points. A 1,200-person sample only reduces our margin of error to 2.83 percentage points.
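The diminishing return comes straight from the standard margin-of-error formula, which scales with the square root of the sample size. A short sketch reproduces the two figures above, using the conventional 95 percent confidence level and the conservative 50/50 assumption about the underlying proportion:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval for a proportion,
    using the conservative p = 0.5 assumption that maximizes error."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(100 * margin_of_error(1000), 2))  # 3.1
print(round(100 * margin_of_error(1200), 2))  # 2.83
```

Because the error shrinks with the square root of n, cutting the margin of error in half requires quadrupling the sample, which is why pollsters rarely push far past 1,000 respondents.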
If the payoff in reducing our overall margin of error isn’t large, why do this? One major reason is to have larger samples of the subgroups that are becoming increasingly important to understanding the dynamics of Texas politics. For example, we have good reason to be interested in likely Republican primary voters, Latinos, and parents with children in the public school system. But each distinction we make within the overall survey sample effectively shrinks the sample we are working with, and a smaller subgroup means a higher margin of error and less reliable estimates for that subgroup. By increasing the overall sample size of our survey, we hope to be able to provide more statistically reliable, in-depth analyses of the groups that are most relevant for particular policy debates and for elections.
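The subgroup arithmetic makes the motivation concrete. As an illustration (the 25 percent subgroup share here is hypothetical, not a figure from the poll), compare the margin of error for a subgroup under the old and new sample sizes:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # conservative 95% margin of error for a simple random sample
    return z * math.sqrt(p * (1 - p) / n)

# a hypothetical subgroup making up 25% of respondents
for total in (800, 1200):
    sub = total // 4
    print(f"total n={total}: subgroup n={sub}, "
          f"subgroup MOE = {100 * margin_of_error(sub):.1f} pts")
```

With 800 respondents, a quarter-of-the-sample subgroup of 200 carries a margin of error near 6.9 points; at 1,200 respondents the same subgroup grows to 300 and the margin of error falls to roughly 5.7 points — a bigger practical gain than the modest improvement in the topline number.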
Second, in the methodological documentation that accompanies every poll, in addition to the standard calculation, we will now provide an alternative calculation of the margin of error. This alternative takes into consideration the impact that weighting our data has in an increasingly difficult environment for polling (we’ll say more on this in another post later this week).
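The post doesn’t spell out the exact adjustment, but one common way to account for weighting is Kish’s approximation: unequal weights reduce the “effective” sample size, and the margin of error is recalculated from that smaller number. A minimal sketch, with purely illustrative weights:

```python
import math

def effective_sample_size(weights):
    """Kish's approximation: effective n = (sum w)^2 / sum(w^2).
    Equal weights give back the raw n; unequal weights give less."""
    s = sum(weights)
    return s * s / sum(w * w for w in weights)

def weighted_margin_of_error(weights, p=0.5, z=1.96):
    n_eff = effective_sample_size(weights)
    return z * math.sqrt(p * (1 - p) / n_eff)

# illustrative (not actual) weights for 1,200 respondents:
# a third downweighted, a third untouched, a third upweighted
weights = [0.5] * 400 + [1.0] * 400 + [1.5] * 400
print(f"effective n: {effective_sample_size(weights):.0f}")
print(f"adjusted MOE: {100 * weighted_margin_of_error(weights):.2f} pts")
```

Under these made-up weights, 1,200 interviews behave like roughly 1,029 equally weighted ones, pushing the margin of error from 2.83 back up toward 3.1 points — which is the kind of honesty about weighting the alternative calculation is meant to convey.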
Finally, we will be discussing the results of our current and past surveys more deeply and more frequently in the Polling Center blog. The purpose is to provide timely, cogent analyses of political events and public policy issues based on public opinion culled from the data in the University of Texas/Texas Tribune polling archive. We’ve amassed a considerable body of data during the life of the poll, and we want to do more to connect it to discussions of policy and politics on an ongoing basis — not just when the most recent poll is “news.”