Comments 1 - 3 of 3
Yeah, it seems that truth wasn't their goal. Accuracy was a secondary requirement to the narrative. Incompetence also played a role. These polling companies need to let my team build their fucking models.
Yeah.
Polling companies justify these decisions with their models. Assuming they believe their own BS, they should have at least known the models are broken. The world has changed. Support for Trump can't be modeled using historical Republican demographics. It's as if they ignored Brexit. A fool with a tool is still a fool.
Geesh, how is it NOT obvious how the polls failed? Nobody has said the obvious: the reason the polls were one-sided was that they polled the Northeast, South Florida, and California.
That's if they actually polled at all.
Second, there's the way our election process is set up, with voting districts and the electoral college, and the gerrymandering both Libs and Reps have done for hundreds of years, meddling with and fudging the district lines.
It's impossible to get an accurate poll without basically holding a pre-election and polling most of the whole country. They don't match results from Kentucky against California and track the EC votes as they would pertain to the popular vote they are polling. They are reporting the popular vote of a few regions.
Also, I'm certain Nate Silver was only polling blue counties in the individual states he polled.
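To illustrate the electoral-college point above: with winner-take-all states, a national popular-vote tally can point the opposite way from the electoral-vote tally. The sketch below uses invented state names and numbers purely to show the mechanism, not real results.

```python
# Hypothetical state results: (Dem votes in millions, Rep votes in millions,
# electoral votes). All figures are made up to illustrate the mechanism.
states = {
    "BigBlueState": (10.0, 4.0, 55),  # one lopsided Dem landslide
    "SwingStateA":  (2.0,  2.1, 20),  # three narrow Rep wins
    "SwingStateB":  (1.9,  2.0, 20),
    "SwingStateC":  (1.8,  1.9, 20),
}

dem_pop = sum(d for d, r, ec in states.values())
rep_pop = sum(r for d, r, ec in states.values())

# Winner-take-all: each state's electors go to whoever carried that state.
dem_ec = sum(ec for d, r, ec in states.values() if d > r)
rep_ec = sum(ec for d, r, ec in states.values() if r > d)

print(f"Popular vote:    Dem {dem_pop:.1f}M, Rep {rep_pop:.1f}M")  # Dem leads 15.7 to 10.0
print(f"Electoral votes: Dem {dem_ec}, Rep {rep_ec}")              # Rep wins 60 to 55
```

A national poll that samples heavily from the big blue state would nail the popular vote and still call the outcome wrong; only state-by-state polling mapped onto electoral votes captures the actual result.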
http://home.ubernerdz.com/index.php/2016/11/15/bad-election-day-forecasts-deal-blow-to-data-science/
It seems the Wall Street Journal has learned to write headlines from the tabloid news networks, which shall here remain unnamed. You picture some evil giant called Big Data getting deservedly pummeled by a young, handsome hero. To quote a favorite phrase of the president-elect: “Wrong.”
A couple of reasons why:
1) A fool with a tool is still a fool. This election was unprecedented in its negative rhetoric. Real issues facing the American people were brushed aside in favor of each candidate explaining why the other was unsuited for the job. Issue-based algorithms were at a disadvantage. The best approach for issue-based predictive analysis would have been to step back and admit that a fresh look was required. Of course some analysts did just that, but the majority were committed to a particular software package and were unable or unwilling to make radical modifications.
2) A few forecasters had better results. The USC/LA Times Daybreak poll and the IBD/TIPP presidential tracking poll both got it right. Obviously they were doing something different from the large group that got it wrong. But was it better? It is tempting to evaluate predictors by how well a particular prediction performed. Tempting, and wrong. Let’s compare solving a puzzle to making a prediction. A puzzle has a solution, and if you find it, you got it right. A prediction (like many decisions in life) has an associated probability rather than a clear solution. A forecaster could say that a particular candidate has an 80% chance of winning. Even if that prediction is accurate, there is still a one-in-five chance that the underdog will win. If you only ever see the one unhappy result, it is simple to condemn the predictor and whatever technique was being used. Too simple, and unsound.
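To make the one-in-five point concrete, here is a minimal simulation (an illustration, not any forecaster's actual model): a perfectly calibrated forecaster who gives the favorite an 80% win probability will still watch the underdog win about a fifth of the time.

```python
import random

TRIALS = 100_000
FAVORITE_WIN_PROB = 0.80  # the forecaster's stated probability, assumed exactly right

# Simulate many elections in which the favorite truly wins 80% of the time.
upsets = sum(random.random() >= FAVORITE_WIN_PROB for _ in range(TRIALS))

print(f"Underdog won {upsets / TRIALS:.1%} of {TRIALS:,} simulated elections")
# Prints roughly 20.0% -- even though the forecast was exactly right.
# A single observed upset therefore says almost nothing about whether
# the 80% figure, or the technique behind it, was sound.
```

Judging a forecaster properly takes many predictions and a calibration check (did the 80% calls come true about 80% of the time?), not a verdict rendered on one election.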
The WSJ’s title was clearly attention-seeking. I guess that sells newspapers. The article itself was fine and did finally get to a solid point, albeit not the one alluded to in the headline. The polling questions being asked around the country seemed to give misleading results. How do you ask a question so as to get more accurate answers? That is a puzzle for the psychologists to solve, not the data scientists.