
Earlier we learned that Markos predicted the results of the closest 2012 presidential races with more accuracy than Nate Silver at 538, who was the best of the polling aggregators and/or modelers.

But guess what—there's actually a free-range, locally produced, antibiotic-free, (relatively) simple model that, to my everlasting surprise, does better than both of them. It's based on the idea that most of the time, the polls are off by a fairly predictable amount related to the partisan lean of the state. I'll call it DRM (Dreaminonempty's Regression Model), 'cause I gotta call it something.

Returning to Markos' post, here's the table he posted with DRM predictions and polling averages for the final 10 days added in (the closest predictions for each state are highlighted):

Dreaminonempty's predictions in close Presidential races were better than polling averages, Markos, or Nate Silver
And, to expand on the comparison, here are the Senate numbers (poll averages include the entire month of October in some cases; see below the fold for details):
Dreaminonempty's predictions in close Senate races were better than polling averages, Markos, or Nate Silver
The average error of DRM predictions is lower for both close Senate and presidential races. Not only that, DRM predictions came closest to the actual result in 12 races, while Markos grabbed the honors in nine races, and the polling average was best three times. It looks like we can add another successful, reality-based, Daily-Kos-originated prediction model to the collection!

And the best news? It's easy to make a DRM prediction. See below the fold for simple instructions and analysis of additional races.

A simple explanation: (updated from the comments)
Polling averages often don't predict the election margin (%D-%R) correctly, even if they do predict the winner. Usually, they underestimate the Democrat's performance in Blue states, and underestimate the Republican's performance in Red states. This is likely because, as LNK put it, people tend to conform to their surroundings.

I attempted to correct for this in a very simple manner, in hopes that, on average, the new predictions would be more accurate than the polls alone. The correction involved adding a number to the polling margin in each state based on how 'red' or 'blue' it was.

In the end, the corrected polling numbers were more accurate than the polling numbers alone. The method worked.

Instructions for DRM
1. Look at polls for about the previous month (Oct. 1 onward for a final prediction). Are there more than three polls in the past 10 days? If yes, go to Step 2. If no, average the Democratic margins (%D-%R) for all polls in the month, and go to Step 3.

2. Do you see a trend in the polls over the past month? If you see a trend, average the Democratic margins (%D-%R) for the polls for the previous 10 days only. Otherwise, average the Democratic margins (%D-%R) for all polls in the month.

3. Find your state on this table. Add the DRM factor you see to your polling average for the margin estimate.

4. Check for red flags. If there are only one or two polls total, or a third party candidate is drawing more than 5 percent of the vote, the estimate could be off by a substantial amount. Also, states with a federal/local party mismatch, like West Virginia, could run into trouble.

Example: In my first post on this subject, I used Elizabeth Warren's Senate race in Massachusetts as an example, so let's return to that race.

The polls for this race from Oct. 1 onward are pretty steady. There are 17 polls, with an average margin of +3.8 points in Warren's favor.

The state table shows Massachusetts with a DRM factor of +4.1 points, yielding an estimated margin of 7.9 points in Warren's favor. The actual result was Warren +7.4 points.
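
To make the arithmetic easy to repeat, here is a minimal Python sketch of Steps 1-3 applied to this race. The 2-point cutoff standing in for the Step 2 trend judgment is my own placeholder, and the poll list simply reproduces the +3.8 average rather than the actual 17 Massachusetts polls.

    def drm_prediction(month_margins, last10_margins, drm_factor, trend_threshold=2.0):
        month_avg = sum(month_margins) / len(month_margins)
        # Steps 1-2: use the 10-day average only if there are more than three
        # recent polls and they have moved noticeably off the monthly average.
        if len(last10_margins) > 3:
            recent_avg = sum(last10_margins) / len(last10_margins)
            if abs(recent_avg - month_avg) > trend_threshold:
                return recent_avg + drm_factor     # Step 3: add the state's DRM factor
        return month_avg + drm_factor              # Step 3: no trend, or too few recent polls

    # Warren example: polls averaging +3.8, Massachusetts factor +4.1 -> about +7.9
    warren_polls = [3.8] * 17                      # placeholder values with the right average
    print(round(drm_prediction(warren_polls, warren_polls[-5:], drm_factor=4.1), 1))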

How does the DRM work?
The basic idea is that in red states, Republicans do a little better than the polls say they will, while in blue states, Democrats tend to do a little better than the polls say they will. You can see the relationship here:

Polling errors are a function of Obama's 2008 performance
The regression line is used to generate the DRM factors in the linked table. This regression will be updated after 2012 results are finalized.
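
As a rough illustration of the approach (not the actual regression behind the linked table), here is how such a fit could be done in Python: regress each race's polling error, the actual margin minus the poll-average margin, on Obama's 2008 vote share in the state, then read a state's DRM factor off the fitted line. The data points below are invented.

    # Hypothetical sketch of generating DRM-style factors from a regression of
    # polling error on state partisan lean (here, Obama's 2008 vote share).
    import numpy as np

    obama_2008_share = np.array([38.0, 45.0, 50.0, 55.0, 62.0])   # Obama's 2008 vote %, invented
    polling_error    = np.array([-3.5, -1.0,  0.5,  2.0,  4.5])   # actual minus poll-average margin

    slope, intercept = np.polyfit(obama_2008_share, polling_error, 1)

    def drm_factor(obama_share_2008):
        """Points to add to a state's poll-average margin (%D - %R)."""
        return slope * obama_share_2008 + intercept

    # A deep-blue state (Obama near 62% in 2008) gets a positive correction:
    print(round(drm_factor(62.0), 1))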

How well does this method work?
In my final pre-election post, I said I would consider the model a success if the average error is lower for the predictions than for polling averages alone. As shown above, the DRM model was indeed successful by this measure in close presidential and Senate races. What about the rest?

Below I link to all the predictions and errors, including 538 predictions for comparison to the best of the well-known models. Please note that counting is not complete in many states, so some of these numbers could change. Also, I had a second set of predictions, based not on the regression but on the prior performance of polls in each individual state in either presidential races or Senate and Governor races. This I will call DSE (Dreaminonempty's State Errors).

Here are links to predictions and errors for the three sets of races:

President - Predictions and Errors
Senate - Predictions and Errors
Governor - Predictions and Errors

Note that some changes were made after these predictions were originally posted, as data entry errors were found and corrected.

Generally speaking, the worst DRM predictions were for states with few polls and for races with third-party candidates drawing more than 5 percent of the vote.

The first way to test the predictions is to ask which prediction was best most often. By this measure, DRM was by far the best, with the closest prediction more than half the time:

DRM model is most accurate most often in Senate and Presidential races
Another, geekier, way to look at errors is the Root Mean Square Error (RMSE). It's basically a measure of accuracy—and all you need to know is that lower numbers are better.
RMSE errors lowest for DRM for Presidential and Senate races, but not Governors
DRM performs best in presidential and Senate races, but worse than the polling average in gubernatorial races.
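
For anyone who wants to check the comparison against the linked spreadsheets, RMSE takes only a few lines of Python; the margins below are invented, not values from the tables.

    # Root Mean Square Error: square the errors, average them, take the square root.
    def rmse(predicted, actual):
        return (sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)) ** 0.5

    # Invented example: three races, predicted vs. actual margins.
    print(round(rmse([7.9, 3.0, -2.0], [7.4, 1.5, -4.0]), 2))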

Out of curiosity, I redid the above table, but only included states with 10 or more polls, and excluded states with third parties >5 percent. The accuracy of these predictions was clearly better, to no great surprise. The relative performance of the different prediction methods remained about the same.

Indeed, the errors of all methods increase as the number of polls in a state decreases. Here's an example of the presidential numbers for DRM:
Errors increase as the number of polls in a state decreases

Questions to address
With all the new 2012 data, I hope to look into the following issues:

Can we do anything to sort out races with third parties?

Is there a reason why sometimes DSE is better than DRM?

Is there a better measure for state partisanship than Obama's 2008 vote share?

Is there an issue with using this method for gubernatorial races?

Hopefully these questions will lead to improvement without sacrificing simplicity.

Final thoughts: (updated from the comments)
This method of prediction is nowhere close to the same level of sophistication as 538. The intent was to try to find something as simple as possible that was better than the polls alone. This is about as simple as you can get - it's only got two inputs. By keeping it simple, it is easy for many people to replicate on their own and put to their own uses.

Originally posted to Daily Kos Elections on Thu Nov 15, 2012 at 09:00 AM PST.

Also republished by Daily Kos.

  •  Sorry you are not comparing (0+ / 0-)

    apples to apples .

    "Drop the name-calling." Meteor Blades 2/4/11

    by indycam on Thu Nov 15, 2012 at 09:40:35 AM PST

    •  Sure (s)he is -- predictions really were that good (5+ / 0-)
      Recommended by:
      GoUBears, MichaelNY, ZoBai, IM, AoT

      I was (understandably) skeptical, and looked closely at dreaminonempty's morning o' Election Day prediction.  And dreamin' is right.  Just look at the tables under "regression prediction".  Them are the numbers.  Dreamin's predictions were really good, and beat kos and Nate.

      Dreamin' is like that great indie band who (so far) hasn't got a record contract but is making amazingly great music.

      I could be wrong.  Maybe I'm missing something -- I don't know precisely how Dreamin' arrived at their prediction, and maybe some of it is luck.  Can it be reproduced?  That's the big question, and it's fair to ask of any method -- especially non-quantitative ones like kos's, BTW.  But it's obvious from the post linked above that Dreamin's prediction was really fucking accurate.

      And that should be recognized, not dismissed in a one-liner, at the top of the comments, without explanation.

      Damn fine job, Dreamin'.  

      "Happiness is the only good. The place to be happy is here. The time to be happy is now. The way to be happy is to make others so." - Robert Ingersoll

      by dackmont on Fri Nov 16, 2012 at 07:05:25 PM PST

      [ Parent ]

      •  P.S. Better to describe kos's approach as (2+ / 0-)
        Recommended by:
        MichaelNY, AoT

        .... involving a subjective step, i.e. a judgement call, in processing quantitative data.  This obviously has both an upside and a downside.  It's not a great idea in physics, but in political "science" there are so many variables that an educated judgement call can be a great idea.  It's just, obviously, going to lead to different results for different observers.  And it very likely won't be as reproducible.  Nate's model has no subjectivity built into it (although he hasn't released the "source code", so others can't replicate the results).

        "Happiness is the only good. The place to be happy is here. The time to be happy is now. The way to be happy is to make others so." - Robert Ingersoll

        by dackmont on Fri Nov 16, 2012 at 07:25:04 PM PST

        [ Parent ]

    •  How so? (0+ / 0-)

      Your argument fails to convince.

  •  Red Bean is quite impressed (0+ / 0-)
  •  How do you get that VA leans Democratic by 2.7%? (1+ / 0-)
    Recommended by:
    IM

    I realize we have now won the last 3 Senate races, and we had a pretty good run with 2 Democratic Governors, but this seems so counter-intuitive. It's a purple state at best in statewide races, but leaning red, not blue.

    You might be giving too much credit to the Senate victories, 2 of which were due in no small part to George Allen being the GOP candidate. If you just look at VA officeholders right now, it's a very red state from top to bottom, except for those 2 US Senators. At the presidential level, Obama may have been helped by a large black population, 7% above the national average. That helped your model this year, but I wouldn't count on it in 4 years.

    Coming Soon -- to an Internet connection near you: Armisticeproject.org

    by FischFry on Thu Nov 15, 2012 at 10:56:09 AM PST

    •  The first table? (0+ / 0-)

      If you're talking about the first table, that's the predicted Democratic margin (Obama-Romney). It's not the same thing as the partisan lean of the state. The model says a Democratic candidate should end up with a margin about 1.0 point better than polls say - close to no effect.  The average of previous elections back to 2004 is about -0.5, again close to no effect. This means Virginia polls do not consistently favor one party or the other, but are generally close to the final outcome, as befits a purple state.

      •  With respect to Virginia (1+ / 0-)
        Recommended by:
        MichaelNY

        I think there has been so much change since 2004 I really don't know how useful that data is.

        The bitter truth of deep inequality has been disguised by an era of cheap imported goods and the anyone-can-make-it celebrity myth - Polly Toynbee

        by fladem on Sat Nov 17, 2012 at 02:15:47 PM PST

        [ Parent ]

  •  This is cool and useful, but not operating on (1+ / 0-)
    Recommended by:
    MichaelNY

    quite the same level as 538. In the end, a main feature of full modeling is that you get full probability distributions for everything. For a topline number, this is fine. But once you're asking deeper questions about a race, you'll need something like 538 -- or something more advanced than 538.

    •  Certainly not! (4+ / 0-)

      Goodness no, this is nowhere close to the same level as 538. The intent was to try to find something as simple as possible that was better than the polls alone. This is about as simple as you can get - it's only got two inputs. By keeping it simple, it is easy for many people to replicate on their own and put to their own uses.

    •  What 538 did (1+ / 0-)
      Recommended by:
      MichaelNY

      in this cycle, and I think it did it well, was to weight pollsters by prior performance, something that I did not do.  Chris Bowers and I had looked at this and concluded the averages were more accurate with Rasmussen in rather than out.  But our approach was binary - and as a result not as good in this election.  I think over a number of elections ours would still be a better approach - but 538 really does deserve credit for performing the hardest work in this analysis - picking which pollster is right or wrong.  

      The bitter truth of deep inequality has been disguised by an era of cheap imported goods and the anyone-can-make-it celebrity myth - Polly Toynbee

      by fladem on Sat Nov 17, 2012 at 02:18:45 PM PST

      [ Parent ]

  •  This diary went right over my head. (0+ / 0-)

    Beginner version might help, alas.

    •  OK (3+ / 0-)
      Recommended by:
      LNK, MichaelNY, Alice Olson

      Polling averages often don't predict the election margin correctly, even if they do predict the winner. Usually, they underestimate the Democrat's performance in Blue states, and underestimate the Republican's performance in Red states.

      I attempted to correct for this in a very simple manner, in hopes that, on average, the new predictions would be more accurate than the polls alone. The correction involved adding a number to the polling margin in each state based on how 'red' or 'blue' it was.

      In the end, the corrected polling numbers were more accurate than the polling numbers alone. The method worked.

      Did that help any?

      •  Yes, this helps but.... (0+ / 0-)

        I'm replacing "margin" with  "margin of error" and "corrected polling numbers" with "adjusted polling numbers" in my imagination. So, now I think I get it.

        My summary would therefore be (I'm a picture person):

        When adjusted for how many sparks were flying out of voters heads about this election, the polling numbers were surprisingly accurate.
        I'm also adding some thoughts about human beings tending to conform to their surroundings......

        Thank you for your kind help.

        •  Ah! (4+ / 0-)
          Recommended by:
          MichaelNY, Alice Olson, LNK, jhop7

          'Margin' is the difference between the Democrat and the Republican.

          Example: Elizabeth Warren won her Senate race with 53.7% of the vote to  46.3% for Brown. Her margin was 7.4 percentage points (53.7-46.3).

          However, the polls said she was ahead by only 3.8 percentage points. The polls were off.

          My simple model says "Massachusetts is a Very Blue state. Voters there end up being more Democratic than the polls indicate. Add 4.1 points to Warren's margin in the polls."

          My model predicted Warren would win by 7.9 points. This is much closer to the actual result (7.4 points) than the polls were. So I consider it a success.

          Your summary, human beings tend to conform to their surroundings, is exactly what the model says.

  •  I used the 7eleven coffee cup science for (0+ / 0-)

    my scientific analysis. By jingo, it worked! No need for graphs.

  •  Congrats man!!! Impressive!!! (1+ / 0-)
    Recommended by:
    MichaelNY
  •  Undecided margin and PVI (3+ / 0-)
    Recommended by:
    MichaelNY, seriously70, IM

    Great work!  A good implementation of a straightforward idea (those are always the best ones to rely on!).

    I wonder if it might be possible to simplify even more, and still get a little more accurate.  But I defer to your impressive number crunching for whether it would work.

    Your underlying assumption (which Markos made explicit when he was doing his "manual adjustments") is that undecided or wavering voters will tend to break toward their natural partisan inclinations.  More undecideds will break red (or this year, Libertarian) in an Indiana Senate race, more blue in Massachusetts.

    So why not focus on the average percentage of undecided voters in the polling results?  Yes, different pollsters have different methods for pushing leaners (and a few make the error of leaving out undecideds entirely, oy), so a sophisticated version would account for that.  But still, assuming a random mix of pollsters in each state, you should get a solid number.  The lower the total of fully committed voters, the stronger the effect that you're identifying should be.

    And then, instead of having to do your own regression analysis, and update it after each election, why not piggyback on others' work, and just use PVI?  Allocate the undecideds based on how that jurisdiction leans (which means the method would work on downballot races too, not just statewide).

    And finally -- I assume you've sent this to Nate, so he can improve his methods?  I'll bet he loves this.
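
    A rough Python sketch of this allocation idea (invented numbers and a crude lean-based split; this illustrates the commenter's suggestion, not the DRM method):

        def lean_adjusted_margin(dem_pct, rep_pct, pvi):
            """Adjusted margin (%D - %R); pvi is e.g. +5 for D+5, -5 for R+5."""
            undecided = 100.0 - dem_pct - rep_pct
            dem_share = min(max(0.5 + pvi / 100.0, 0.0), 1.0)  # crude allocation rule
            return (dem_pct + undecided * dem_share) - (rep_pct + undecided * (1.0 - dem_share))

        # Invented example: a 48-45 poll with 7 percent undecided in a D+6 state.
        print(round(lean_adjusted_margin(48.0, 45.0, 6.0), 1))   # about +3.8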

    •  I think you may have hit upon something. (2+ / 0-)
      Recommended by:
      MichaelNY, seriously70

      Looking into the undecideds in polls that do not offer 3rd party candidate options may be a better measure of who will actually vote 3rd party than looking at the results of polls that explicitly include 3rd party candidates. I will look at the numbers. Thanks for the idea!

      PVI - see Woody's comment below. Also, PVI or other measures only take a few minutes to calculate, so it's easy to play around with lots of different variations.

  •  PVI is a lousy measure (4+ / 0-)
    Recommended by:
    Garrett, GoUBears, seriously70, IM

    You ask,

    Is there a better measure for state partisanship than Obama's 2008 vote share?
    The advantage of the PVI is that someone has already done it, now anyone can do it, and it is constantly cited.

    But there needs to be a better measure. The presidential race figures depend too much on the individual candidates and other special factors not present in the basic lean of a state's vote.

    A few years back someone suggested that the better measure of a state's partisan lean is to avoid the top-of-the-ticket marquee races. Instead, calculate the average margin of downballot races like State Treasurer, Attorney General, Secretary of State, etc. That is, nobody votes for Joe Superstar for State Auditor, they all vote for the Democratic or Republican candidate for Auditor.

    But someone would have to do a lot of work to calculate such a better measure of state-level partisanship.
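
    If someone did compile those downballot results, the measure itself would be trivial to compute. A hypothetical sketch, with invented race names and margins:

        # Hypothetical sketch: estimate a state's partisan lean from the average
        # margin of its downballot statewide races rather than the presidential vote.
        downballot_margins = {
            "Attorney General": -4.0,      # %D - %R, invented
            "Secretary of State": -2.5,
            "State Treasurer": -3.0,
            "State Auditor": -5.5,
        }

        lean = sum(downballot_margins.values()) / len(downballot_margins)
        print(round(lean, 1))   # negative means a Republican-leaning state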

    •  Thanks (4+ / 0-)
      Recommended by:
      MichaelNY, jhop7, Scientician, seriously70

      That's a good idea. I think I will give it a trial run with one or two states and see if it works at all. One would guess it would work better for Governor's races. But maybe not with President - cause then you have the superstar factor overwhelming things.

    •  I'm not sure PVI is so bad (5+ / 0-)

      Because what you're measuring is the performance of a presidential candidate relative to that candidate's average performance, nationwide. So the average margin of victory or defeat for that candidate is disregarded.

      I think your idea is a good one, though, as an adjunct to PVI. However, I'd caution that there are some states like North Dakota, Montana, and especially West Virginia and Kentucky, where people are more likely to vote Democratic downballot than for president, and others where the situation is reversed, like Pennsylvania.

      Formerly Pan on Swing State Project

      by MichaelNY on Fri Nov 16, 2012 at 01:47:14 PM PST

      [ Parent ]

  •  This is awesome (10+ / 0-)

    and actually provides support for one of my intuitions about polling and final results -- that the undecided vote will go to the candidate who best reflects his or her state's partisan lean.

    I never sweat Massachusetts, because I never expected the undecided vote to go to Brown. Had Mourdock kept his mouth shut, he would've won Indiana easily.

    North Dakota was the one state that bucked the trend, but Heitkamp was close enough to 50, and the state was small enough that retail politics actually mattered.

    •  Thanks! (7+ / 0-)

      Your intuition is supported because intuition is just another word for pattern recognition, really (albeit one that is often used to belittle those without formal training).

      I very much wish we had an exit poll from North Dakota. What I find most intriguing is that Nate's model didn't do any better than the polling average in Senate races. There's something interesting going on there, and I don't know what it is.

    •  The other thing about North Dakota (5+ / 0-)

      is that it's one of the states that has the strongest tradition of ticket-splitting. In the Dakotas, voting Republican for President and Democratic for at least one Congressional race is the rule, rather than the exception. There are a decreasing number of states like that, but they still have to be accounted for in any predictive model.

      By the way, I'm unconvinced Mourdock would have won Indiana "easily" if he had avoided his rape comment. There were already a lot of extreme things he said that were being used against him. According to poll results, Donnelly was a slight underdog, not a clear blowout loser. If you want to take things all the way back and suggest an alternate world in which Mourdock said no extremist things ever, how would he have had a chance to defeat Lugar in a primary? His whole platform for defeating Lugar was that Lugar was essentially a wimp and he was the real conservative.

      Formerly Pan on Swing State Project

      by MichaelNY on Fri Nov 16, 2012 at 01:52:08 PM PST

      [ Parent ]

      •  Donnelly (1+ / 0-)
        Recommended by:
        MichaelNY

        was stuck in the low 40s. Getting to a majority was going to be extremely difficult. My education was formed in the 2004 Senate race in Oklahoma, where I thought Brad Carson had a fighting chance against crazy-talking Tom Coburn. It was 44-44! A tied race!

        Carson lost 42-53. Donnelly was headed toward the same path, irrespective of all of Mourdock's craziness. It just turned out that rape was a step too far.

        •  I don't think it's so clear that was going to be (1+ / 0-)
          Recommended by:
          dackmont

          Donnelly's fate. His electoral fate depended on the hard-to-predict choices of Lugar voters. That was a separate dynamic from the Oklahoma race you discuss.

          Formerly Pan on Swing State Project

          by MichaelNY on Fri Nov 16, 2012 at 11:25:55 PM PST

          [ Parent ]

        •  By the way, I do accept that Donnelly was (0+ / 0-)

          a bit behind in polling before the rape remarks, and you could be right that he was heading for a substantial loss, but I thought in advance of that debate that he had at least a 40% chance of winning. That could be naiveté on my part, but I do believe that non-extremist Lugar voters were already upset with Mourdock and concerned that he wasn't the kind of levelheaded person they wanted representing their state.

          Formerly Pan on Swing State Project

          by MichaelNY on Sat Nov 17, 2012 at 12:44:42 AM PST

          [ Parent ]

  •  This is the most blatant case (1+ / 0-)
    Recommended by:
    Bill Evans at Mariposa

    of overfitting I've ever seen. You need to read Nate's book.

  •  The Egg of Columbus (0+ / 0-)

    In the story, Christopher Columbus attends a dinner that a Spanish gentleman had given in his honor. Columbus asks the gentlemen in attendance to make an egg stand on end. After the gentlemen tried repeatedly and failed, they agreed that it was impossible. Columbus then placed the egg's small end on the table, breaking the shell a bit, so that it could stand upright. Columbus then said that it was "the simplest thing in the world. Anybody can do it, after he has been shown how!"

    "Beer is living proof that God loves us and wants us to be happy." -Benjamin Franklin

    by hotdamn on Fri Nov 16, 2012 at 06:12:07 PM PST

    •  This kind of aspersion (1+ / 0-)
      Recommended by:
      MichaelNY

      doesn't add anything to a reasoned discussion.  It's a nice story for dissing, say, Paul Ryan-ish lies or climate change pseudoscience, but in a case like this where a correct prediction has been made in a reasonable way, it's not useful.  If you know something that would help the educated reader understand your conclusion, I wish you'd share it.

      "Happiness is the only good. The place to be happy is here. The time to be happy is now. The way to be happy is to make others so." - Robert Ingersoll

      by dackmont on Fri Nov 16, 2012 at 08:38:56 PM PST

      [ Parent ]

  •  2 vodka martinis on a Friday night after a long da (0+ / 0-)

    ...y, and y'know what? I'm done with the poll-a-minute analysis, for at least, um, ah, for at least, so.... can we please not... How shall I say? ....I woud like to be poll- free, living in that pure non- statistical/demographic/trending state at least until, maybe....next Thursday?

    "So, am I right or what?"

    by itzik shpitzik on Fri Nov 16, 2012 at 06:15:38 PM PST

  •  Bravo! (1+ / 0-)
    Recommended by:
    MichaelNY

    Well done!

    Just to take proof-of-concept to the next level, would you consider comparing 2008 as well?  I'd be fascinated to take a look at those stats too!

    All your Supremes are belong to us. For Great Justices!

    by thenekkidtruth on Fri Nov 16, 2012 at 06:17:39 PM PST

  •  R-Square is kinda low... (2+ / 0-)
    Recommended by:
    se portland, dackmont

    at 0.56 per the image shack pic.

    "Detective, if ignorance was a drug, you'd be high all the time." Sam Tyler, 'Life on Mars'

    by Kokomo for Obama on Fri Nov 16, 2012 at 06:19:52 PM PST

  •  State Fundamentals (2+ / 0-)

    I would like to note that Nate includes something very similar to this in his forecast models: For each state, there is a "State Fundamentals" bias, which is one of the inputs he uses to adjust the aggregated state polling.

    Those who ignore the future are condemned to repeat it.

    by enigmamf on Fri Nov 16, 2012 at 06:23:11 PM PST

  •  Loved your posts (1+ / 0-)
    Recommended by:
    dackmont

    At Openleft, glad to see you're still at it.

  •  Thanks for posting this diary! :-) nt (1+ / 0-)
    Recommended by:
    dackmont
  •  OMG, you are Such a Geek (1+ / 0-)
    Recommended by:
    dackmont

    Nice job :)

  •  When is the official end of KOS poll gloating? (0+ / 0-)

    Hopefully soon:)

  •  I would suggest we all hide these (2+ / 0-)
    Recommended by:
    dackmont, happymisanthropy

    wonderful methodologies so the republicans can't use them, but then I realized the republicans will still make all their predictions by studying the entrails of dismembered American Girl dolls so it doesn't matter.

  •  Omg! Who cares?? (0+ / 0-)

    Really?  How will we ever get any work done with our heads cranked over our shoulders like this?

  •  Asdf (0+ / 0-)

    Very impressive.....

    But let me share a method that I used in my prediction, which was right on the money. Here is my final prediction, posted the morning of the election (scroll to the bottom of the diary to see it):

    FINAL PREDICTION

    Many acquaintances asked me how I was so sure about Florida. My answer?

    1. I looked at the TREND of polls over the final weeks of the campaign.

    2. To break the "tie", I considered factors that CANNOT be captured in polls, the most obvious one being what I suspected would be the overwhelming anger of those whose votes were suppressed by the GOP in Florida, and I assumed that MORE Democrats would turn out to vote as a result.

    Sadly, everything Communism said about itself was a lie. Even more sadly, everything Communism said about Capitalism was the truth.

    by GayIthacan on Fri Nov 16, 2012 at 06:53:51 PM PST

    •  But how to quantify the effect of (0+ / 0-)

      vote-suppression by Rick Scott et al.?  Voters are complaining about voter rolls being pruned etc. -- apparently R's tried but it wasn't enough.

      "Happiness is the only good. The place to be happy is here. The time to be happy is now. The way to be happy is to make others so." - Robert Ingersoll

      by dackmont on Fri Nov 16, 2012 at 08:43:34 PM PST

      [ Parent ]

      •  An electoral-unfairness image index (0+ / 0-)

        applicable where one of the two state parties has broad control over the voting regulations?

        •  Sounds like a good starting point (0+ / 0-)

          How many the FL Rethugs kept from voting this year, or any year, is hard to guess.  Nate estimated something on the order of a 1% swing as a general rule of thumb for restrictive voting laws.  At least it's a "known unknown".

          "Happiness is the only good. The place to be happy is here. The time to be happy is now. The way to be happy is to make others so." - Robert Ingersoll

          by dackmont on Fri Nov 16, 2012 at 10:07:06 PM PST

          [ Parent ]

  •  Really, enough with election predictions ... (0+ / 0-)

    ... you were all right and Obama won; they were wrong and Romney's campaign sank with all hands aboard. That's all I need to know.

  •  The new age of data analysis (1+ / 0-)
    Recommended by:
    dackmont

    Believe it or not, there is a huge amount of data being collected. More than ever before. Techniques to process this data are becoming more sophisticated. Despite what you might think, Nielsen and polls are the old models, which are on their way out.

    The Obama campaign used set top box information from millions of cable users to pinpoint advertising.

    From the Washington Post.

    Obama campaign took unorthodox approach to ad buying

    The team bought detailed data on TV viewing by millions of cable subscribers, showing which channels they were watching, sometimes on a second-by-second basis. The information — which is collected from set-top cable boxes and sold by a company called Rentrak — doesn’t show who was watching, but the campaign used a third-party company to match viewing data to its own internal list of voters and poll responses.

    Davidsen said the campaign sought to reach two broad categories of voters: people who were still on the fence and Obama supporters who were sporadic voters.

    The team’s calculations showed that it would get the most bang for its buck in some strange places: the Family Channel, the Food Network and the Hallmark Channel, among others. On broadcast TV, the campaign went for more daytime programs and late-night entertainment shows than Republican nominee Mitt Romney did.

    Heisenberg was near here.

    Polls are built on models and people telling the truth, which they don't always do when they are being watched. If you could watch them in the wild, so to speak, you would get a more accurate picture.

    And you can...

    Or I suppose you could ... maybe if you used statistics, matrix reduction, other math and overlay demographics... maybe you could use all that data to draw conclusions ... maybe you could do better than the polls. I am just ... speculating on this of course.

    We have reached a new age. We have tons, and I mean tons, of data. More than ever in the history of the world. We have super computers that use GPUs to process the data in parallel. And they are starting to do something with it.

    It is just the beginning of a brave new world, I assure you. Stay tuned.

    It is possible to read the history of this country as one long struggle to extend the liberties established in our Constitution to everyone in America. - Molly Ivins

    by se portland on Fri Nov 16, 2012 at 07:19:59 PM PST

  •  Next election cycle I'm starting a blog (4+ / 0-)

    called "The539" which will aggregate all the poll aggregators.  

  •  One of these three people won't tell you exactly (1+ / 0-)
    Recommended by:
    MichaelNY

    what's in his "model."

    Nate Silver could be consulting Miss Cleo for his unpublished special adjustments, so far as we know.

    Ok, so I read the polls.

    by andgarden on Fri Nov 16, 2012 at 09:48:40 PM PST

  •  As I have written here before (1+ / 0-)
    Recommended by:
    MichaelNY

    the data sample from 2004 to 2010 is misleading.  My own model missed in Florida - in part because I excluded online pollsters - who proved very accurate.

    All of these models based on 2004 - 2010 data are going to come apart at the seams at some point - because they aren't going back far enough.

    I wrote the day before the election that there were signs of movement to Obama - though I did not attempt to predict its size. I have a momentum model - but I only use it for primaries.  

    I will say this: my state model projected an Obama national margin of 3.06 - and I was right about the higher undecided in the West suggesting Obama was going to win the popular vote.

    But I did get Florida wrong.  

    http://www.dailykos.com/...

    The bitter truth of deep inequality has been disguised by an era of cheap imported goods and the anyone-can-make-it celebrity myth - Polly Toynbee

    by fladem on Sat Nov 17, 2012 at 02:14:43 PM PST

    •  Late nod of appreciation (0+ / 0-)

      Yours was the last comment and I missed being able to rec it, so just a note here for posterity.  I looked at the post you linked to and gotta say, we have some talented poll-interpreters at this site.  You and Dreamin really impress me.  Kudos.

      "Happiness is the only good. The place to be happy is here. The time to be happy is now. The way to be happy is to make others so." - Robert Ingersoll

      by dackmont on Wed Nov 21, 2012 at 07:59:39 AM PST

      [ Parent ]
