Between the July assassination attempt on Donald Trump, President Biden dropping out of the race, and Kamala Harris becoming the Democratic nominee, this past summer was unlike any other period in the 2024 presidential race. But the faculty, staff, and student fellows of the Penn Program on Opinion Research and Election Studies (PORES) have been quick to adjust.
At the same time, they continue to contend with shifts in the field of public opinion research that preceded this summer, and they continue with their own research. William Marble, for example, has been studying the reasons for the educational realignment of white voters during the past 40 years.
Marble, director of data science at PORES and a consultant for the NBC News Decision Desk, talked to Penn Today about how PORES has adjusted to changes over the summer, the rapidly changing field of public opinion research, and his desire for voters and news media to focus not only on horse race coverage but also on polling about the issues.
There's been a lot of change in the 2024 presidential race. How did that impact the staff of PORES and the work you're doing?
We have been closely monitoring all these changes. I have been closely tracking movements in polling, not just to try to predict who will win but also to understand broader political phenomena and which issues are important to voters.
A lot of what I did over the summer with my PORES research fellows was to dig into movements in polling across demographic subgroups. We were trying to figure out what influence the change in nominee would have on different subgroups and what we can learn about the important issues to these subgroups as a result. One thing we've seen is a pretty big increase in the share of young people who say they're going to vote for the Democratic candidate.
More generally, and this is not really affected by the upheaval, we've been doing a lot of prep work in anticipation of the election to understand changes in election laws and to understand how candidates are campaigning.
We have a huge data collection effort, led by my colleague Stephen Pettigrew, where something like 30 Penn undergrads will be working with us on election night to understand election results at a fine-grained geographic level. That's been a huge lift for my colleagues and for a lot of the Penn students we work with, and so throughout the summer we've used various primary elections as a testing ground to refine this process and make sure that it goes smoothly in November.
To what extent is data collected in the spring and early summer still useful and relevant for voters and the media?
I think it's useful in many ways. The topline horse race is interesting, and news consumers care a lot about this, but as a political scientist I approach this from the perspective of trying to understand the issue priorities of the electorate. From that lens, the polls that were conducted earlier in the campaign, in the spring and early summer, are still very relevant. We saw that, for instance, Biden's support was flagging due to his age, perceptions that he oversaw a weak economy and inflation, and concerns about foreign policy and the conflict in Gaza. All of that still is informative about the issue priorities of voters even though the candidates have flipped.
I think a key question for generalizing from that earlier polling to the current election with Kamala Harris as the Democratic nominee is the extent to which she can distinguish herself as different from Biden and his administration.
What do you think are some of the biggest and most impactful misconceptions among the public about polling and election forecasts?
I think a lot of the news media coverage of horse race polling and of election forecasts that aggregate many polls together creates a false sense of precision, as if we actually know what's going to happen. Ultimately, we have to wait until the election to know who's going to win, and I really wish that people would focus more on the substance of political campaigns rather than the horse race coverage.
I think public discourse would benefit from a de-emphasis on the horse race and deeper engagement with policy issues, both the ones the candidates are putting forward and the ones that voters find important.
Are the issues that mattered to people when Biden was the nominee still the same issues people are thinking about now?
If you ask people which issues are most important to them, they will almost always say the economy, and this makes a lot of sense. The amount of money in your bank account is perennially salient, and, to the extent that presidential candidates have any influence over the direction of the economy, it's going to be important to voters.
But there are some other issues I think the Biden campaign tried to emphasize that it seems like the Harris campaign is not emphasizing quite as much. For instance, the Biden campaign was really emphasizing democracy as an issue that should motivate voters, and some research has come out over the summer suggesting that campaign messages focusing on Jan. 6 or false claims of election fraud don't seem to be all that persuasive to voters. Kamala Harris has tried to play up her prosecutorial image to some extent, but she hasn't campaigned as vigorously on this idea of protecting democracy as Biden did.
I think we saw during the Democratic National Convention that the Democratic Party and Kamala Harris in particular have been trying to put abortion rights and reproductive rights more generally at the center of the election. When you look at surveys to see how many people say abortion rights is their most important issue, it isn't a huge group of people, but in a close election you don't need to persuade all that many people to vote for you to win.
For years now, there's been a lot of discussion among polling organizations and news media about whether to run polls and how to run them, especially in light of 2016. What is the nature of those conversations, and have they been shaped by recent events?
I think there are two pieces to this. One is that we learn a lot from each election. We've seen how pollsters have gotten it wrong in past elections, and we hope that we can learn from that and adjust our methods. We saw, for instance, that pollsters didn't adjust enough for education in 2016, so now everybody's adjusting for education.
A second related point is that one of the things that we've learned is that the types of non-response we see in political polls are dynamic. After things like political conventions or after positive news events, people who are sympathetic to the parties that are getting positive news coverage are more enthusiastic about taking polls. That has generated some of this push for adjusting for things like party registration, party identification, or past vote choice.
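To make the kind of adjustment Marble describes concrete, here is a minimal sketch of raking (iterative proportional fitting), a standard technique pollsters use to reweight a sample so its margins match known population targets for variables like education and party identification. The respondent data and target shares below are hypothetical, invented purely for illustration; this is not PORES's or any particular pollster's actual procedure.

```python
# Minimal raking sketch: rescale survey weights until the weighted sample
# matches hypothetical population margins for education and party ID.
import pandas as pd

# Hypothetical raw sample: each row is one respondent.
sample = pd.DataFrame({
    "educ":  ["college", "college", "no_college", "no_college", "college", "no_college"],
    "party": ["dem", "rep", "dem", "rep", "ind", "ind"],
})
sample["weight"] = 1.0  # start every respondent at equal weight

# Hypothetical population targets (in practice, from the Census, voter
# files, or party registration records).
targets = {
    "educ":  {"college": 0.40, "no_college": 0.60},
    "party": {"dem": 0.33, "rep": 0.33, "ind": 0.34},
}

# Rake: cycle through the variables, rescaling weights so each variable's
# weighted shares match its target margin, until the weights stabilize.
for _ in range(50):
    for var, margin in targets.items():
        shares = sample.groupby(var)["weight"].sum() / sample["weight"].sum()
        ratios = {cat: margin[cat] / shares[cat] for cat in margin}
        sample["weight"] *= sample[var].map(ratios)

# After raking, both weighted margins should sit at the targets.
print(sample.groupby("educ")["weight"].sum() / sample["weight"].sum())
print(sample.groupby("party")["weight"].sum() / sample["weight"].sum())
```

The dynamic non-response Marble mentions is exactly why the choice of raking variables matters: if, say, Democrats are temporarily more enthusiastic about answering polls after a good news cycle, weighting only on demographics will not correct the resulting tilt, which is what motivates adding party identification or past vote to the targets.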
We use surveys to learn about lots of things in the world. We use surveys to estimate the unemployment rate, to estimate drug use, and to estimate public opinion. We don't have ground truth data to know how wrong our survey estimates are, and elections are one of the few opportunities to calibrate our survey methods. I think political polling gets a lot of attention because it's falsifiable. The survey methodology we develop to improve election polling should ideally help us understand who takes surveys more generally, so we can apply the lessons we learn across many different domains.
What's the latest on how polling organizations reach Gen Z and young voters, and how do you adjust for nonresponse in that situation?
It's tough. We're in a landscape where people are inundated with spam phone calls. Nobody wants to pick up their phone, young people especially. This has been really detrimental to survey research because for decades phone polling was the gold standard we used to gauge public opinion.
There are a ton of new methods that have been developed, and these are rapidly shifting. That includes text-to-web surveys, where you get a text message asking you to take a poll. Other survey firms try to recruit people into a standing survey panel, where panelists get small incentives for taking surveys. Pollsters like YouGov have been doing this for a long time.
All these methods are evolving rapidly, which is exciting from the perspective of a survey methodologist like myself, but I think also frustrating for people in the public because what was the gold standard four years ago may be out of date at this point.
You're teaching a course on political polling this semester. How has it evolved over time, and what are you focusing on now?
It's very fun to teach because it's a very hands-on introduction to survey research. The course generally begins with classic survey methodology, where we think about drawing random samples from the population and calculating margins of error. Then it covers the more realistic departures from those idealized versions of survey research, so we get into cutting-edge survey methods for adjusting for non-response and estimating public opinion in small geographic subgroups.
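As a quick illustration of that classic starting point, here is the textbook margin-of-error calculation for a sample proportion under simple random sampling. The poll numbers are made up for the example; real polls depart from these idealized assumptions, which is exactly what the rest of the course covers.

```python
# Textbook 95% margin of error for a sample proportion under simple
# random sampling: z * sqrt(p * (1 - p) / n), with z ~ 1.96 for 95%.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 1,000 respondents, candidate at 48%.
moe = margin_of_error(0.48, 1000)
print(f"48% +/- {moe:.1%}")  # roughly +/- 3.1 percentage points
```

This is where the precision question from earlier comes back: a three-point margin of error on each candidate's share is often wider than the gap between the candidates, which is one reason treating topline polls as predictions overstates what they can tell us.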
I have the students design their own survey and write their final paper based on the data they collect using the questions that they write. It's often the first opportunity students have to conduct real-world research analyzing data that they themselves collected, and so that's a very rewarding experience for me.