We begin a series of post-mortems of the UP verdict by the CSDS team. The first part turns the searchlight within: what went wrong with the pollsters?
Instead of mounting her now-famous verbal assault on the ‘Manuwadi’ media, Mayawati preferred sarcasm in her opening statement at the first press conference after the election. Feigning an apology for being inaccessible during the campaign, she said she did not wish to “disturb” the media while it was busy making all kinds of projections. Mayawati did not mention opinion and exit polls separately, but there is little doubt that these were her principal targets.
After the Lok Sabha poll of 2004, the Uttar Pradesh assembly election is clearly another instance of the collective failure of opinion and exit polls. In relative terms, The Indian Express-CNN-IBN-CSDS post-poll survey can definitely claim to have been the best of the lot. Our forecast captured the trend accurately, projected the BSP way ahead of its rivals (we projected 152-168 seats for it), forewarned that the BSP could “do better than projected here” and said that the BJP would finish a poor third. Yet, as my colleague Rajeeva Karandikar put it, this was a case of the best not being good enough. The fact is that none of the polls, including ours, came close to suggesting a clear majority for the BSP.
SITTING PRETTY: Mayawati has led BSP to a decisive victory in UP polls
This was a clear instance of the triumph of old-style political journalism over newfangled number-crunching. Political reporters may not have talked about a clear majority for the BSP, but they did capture the hawa. If you read the despatches by Manini Chatterjee, the reportage by Vidya Subrahmaniam, the articles by Mahesh Rangarajan or the field stories in papers like Jansatta and Amar Ujala, you got a clear sense of a decisive victory for the BSP.
There are three reasons why the polls were so off the mark. First, there was a sampling error, especially in the exit polls. In an exit poll, you don’t choose the person you wish to interview. The voter chooses to walk to or walk away from the investigator stationed outside the polling station. Unless systematic precautions are taken, chances are that any exit poll will over-represent the well-off and upper caste and under-represent the poor and lower caste. This has resulted in systematic under-estimation of the BSP and over-estimation of the BJP over the last decade.
Accuracy of various forecasts
Indian Express-CNN-IBN-CSDS: 78
Star News-AC Nielsen: 68
Times Now-HANSA: 60
India TV-CVoter: 66
The accuracy rate of a forecast is computed as follows: for each of the three leading parties/alliances, take the deviation between the actual result and the mid-point of the projected seat range. The sum of these deviations, expressed as a percentage of the total seats in the Assembly, is the error; the accuracy rate is 100 minus the error.
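This computation can be sketched in a few lines of Python. The BSP range (152-168) and the final tallies are taken from this article and the declared results; the projected ranges shown for the SP and the BJP are hypothetical placeholders used only to illustrate the arithmetic, and 403 is the number of elected seats in the UP Assembly.

```python
def accuracy_rate(projections, actuals, total_seats=403):
    """Accuracy of a seat forecast, as defined above:
    error = sum, over the leading parties, of
            |actual seats - mid-point of projected range|,
            expressed as a percentage of total Assembly seats;
    accuracy = 100 - error."""
    error_seats = sum(abs(actuals[party] - (lo + hi) / 2.0)
                      for party, (lo, hi) in projections.items())
    return 100.0 - 100.0 * error_seats / total_seats

# BSP range from the article; SP and BJP ranges are hypothetical
projections = {"BSP": (152, 168), "SP": (110, 130), "BJP": (60, 80)}
actuals = {"BSP": 206, "SP": 97, "BJP": 51}
print(round(accuracy_rate(projections, actuals), 1))
```

With these illustrative ranges, the deviations are 46, 23 and 19 seats, an error of 88 out of 403 seats, and hence an accuracy rate of about 78.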
Secondly, there was a response bias. Those who voted for the BSP were less likely to say so to an outsider. Non-dalits who voted for the BSP may have been unwilling to admit it even to themselves. This very unusual situation led to an over-reporting of votes for the SP and the BJP.
Finally, the vote-seat equation in UP turned out to be very skewed this time. For every one per cent of its votes, the BSP won 6.8 seats, compared to 3.8 for the SP and 3.0 for the BJP. This meant that the BSP won a large number of seats by very small margins. Even if you could foresee the exact vote share of the BSP, it was difficult to forecast its tally of seats.
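The skew is easy to see by dividing each party's seat tally by its vote share. The seat counts below are the declared results; the vote-share percentages are approximate published figures, quoted here only to reproduce the ratios in the text.

```python
# Final seat tallies and approximate vote shares (%), UP 2007
results = {
    "BSP": (206, 30.4),
    "SP":  (97,  25.4),
    "BJP": (51,  17.0),
}
for party, (seats, vote_pct) in results.items():
    # seats won per percentage point of the vote
    print(party, round(seats / vote_pct, 1))
```

The same percentage point of the vote was worth more than twice as many seats to the BSP as to the BJP, which is why an accurate vote-share estimate could still yield a badly wrong seat forecast.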
The trouble was that all these three factors — sampling error, response bias and vote-seat distortion — operated in the same direction: all these led to under-estimating the BSP and over-estimating the BJP. Pollsters usually hope that the different errors they make will cancel each other; this time they reinforced each other.
We were acutely conscious — something we said in every article — that all the polls in UP have always under-estimated the BSP. That is why The Indian Express-CNN-IBN-CSDS poll went for a post-poll survey rather than an exit poll. We chose our respondents by drawing a lottery from the voters’ list and ensured that we captured the poor and dalit voters in the correct proportion. We also corrected for response bias, which was perhaps bigger in our case than in the exit polls. Thus our estimate of the BSP and the SP votes was within one percentage point of the actual vote share. We did warn our readers and viewers that converting votes into seats in a four-way contest is very tricky, and that is where we slipped. We could not assess that a five-point lead would translate into an advantage of more than 100 seats.
In this sense, UP was the ultimate nightmare for election forecasting. But it would be folly to treat this as a one-off exception and simply forget about it, as the pollsters did after the 2004 Lok Sabha elections. Such collective amnesia leads to a public loss of faith in all forms of polling, besides leaving room for uninformed charges of political bias and petty professional turf wars between pollsters and journalists, or between different methodologies in the social sciences.
By recognising this failure as a failure, we begin to admit that election forecasting in India has a long way to go, that there is a big gap between what polls promise and are expected to deliver on the one hand and what they are capable of doing on the other. This allows us to acknowledge that the art of polling and forecasting has not seen any major methodological innovations since the path-breaking research by Prannoy Roy and the India Today-MARG team in the 1980s. Above all, this allows us to respond to Behenji’s call to the media for atmachintan. It’s time pollsters got together in some Atmachintan Workshops.