538's 2018 midterm HOR battleground prognostications vis-à-vis the actual results

Discussion in 'Elections & Campaigns' started by Statistikhengst, Dec 8, 2018.

  1. Statistikhengst

    It's been a long while since I have been here, and since there are so many weird, unseemly characters out there, I probably won't be here much. But the data from the 2018 midterm elections, especially in the HOR (where the Democrats are currently leading by 9.7 million votes nationwide, a margin of D +8.6%, surely the largest percentage margin since 1932), is too tantalizing not to pass along. So, here I am.

    As a preface, and as a help to many people, I have created an enormous set of Excel tables comparing the exact results of the 2018 midterms to the 2014 midterms. This is the logical comparison (midterm to midterm), as the top of the ticket is similar in those elections. That being said, the turnout in 2018, especially on the Democratic side, came closer to that of a presidential election year than to that of a midterm year. 2018 could almost be considered a hybrid between a midterm election and a congressional election within a presidential election cycle.

    You can read all the tables here:

    https://docs.google.com/spreadsheets/d/1DBdjJubA8xrZWogrmqPoCoFmao7a0CZCkdbsTIzN22g/edit?usp=sharing

    Once you get past the national table, the state tables are in descending order of number of representatives, starting with California and ending with the 'at-large' states.

    At the very end of all the tables is a table that compares the final data provided by Nate Silver's 538 (in conjunction with ABC) on the day of the midterm elections to the actual results themselves.

    Nate used three models for making a polling aggregate for all 435 HOR races; I included two of those three models in the table linked above.


    The table hyperlinks to his site, so you can check the veracity of the numbers for yourself. I followed them quite closely for weeks on end.

    I included 65 CDs in the table, though only 64 were of real interest. I added OH-01 only as a sort of canary in the coal mine: the result in OH-01 was never in doubt, but polling in that CD has historically been very good and very stable, so I used it as a "test strip." Of all the CDs expected to be potential battlegrounds or "flips", I included all except the newly drawn PA-14, which was an expected GOP landslide pick-up (and it was).

    So, a little primer on how to read the table and discern the numbers:

    [Image: 538 comparison to actual results - learning example]

    In column A: if a CD is shaded in purple, this means that I added it AFTER election night; it was, essentially, a surprise - for instance, CA-21 or OK-05.

    Column B is the type of race - the designation is self-evident.

    Column C indicates which party held the seat in the 115th Congress.

    Columns D to F contain the data from Silver's deluxe prognostication model, and columns H to J the data from his classic model, which was generally less restrictive. The two models were similar to each other, but not identical: in 4 cases, they did not agree.

    Columns K-L indicate the winner and the actual winning margin.

    Column N states whether Silver's prognostication was correct (Y) or not (N). Where the two models did not agree with each other - which happened in four CDs (IL-06, NC-09, TX-07, VA-07) - the indication is "Y/N".
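
    (A side note for anyone who would rather work with the data programmatically than eyeball the screenshots: here is a minimal sketch of how one row of the table could be represented. The field names are my own invention, not labels from the actual spreadsheet, and margins are signed - D positive, R negative.)

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DistrictRow:
        """One row of the comparison table (hypothetical field names)."""
        cd: str                     # e.g. "AZ-02"
        added_after_election: bool  # column A: purple = added after election night
        race_type: str              # column B
        holder_115th: str           # column C: "D" or "R"
        deluxe_margin: float        # columns D-F: deluxe model forecast margin
        classic_margin: float       # columns H-J: classic model forecast margin
        actual_margin: float        # columns K-L: actual winning margin

        def call(self) -> Optional[bool]:
            """Column N: True = Y, False = N, None = Y/N (the models split)."""
            deluxe_d, classic_d = self.deluxe_margin > 0, self.classic_margin > 0
            if deluxe_d != classic_d:
                return None  # IL-06, NC-09, TX-07, VA-07 fell in this bucket
            return deluxe_d == (self.actual_margin > 0)
    ```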

    So, let's take AZ-02 as a learning example. It was a GOP CD. 538, in both models, predicted that the Democrat, former Representative Ann Kirkpatrick, was going to win and therefore "flip" the seat for the Democratic Party, and indeed, she did. 538 saw a winning margin of between D +10.6 (deluxe) and D +11.4 (classic), which averages to +11. Kirkpatrick won by D +9.5, so she just slightly underperformed the polling aggregate, by 1.5 points, well within the standard MoE.

    Conversely, in CA-21, a GOP-held seat, 538 saw David Valadao (R-inc) holding that seat by R +7.6 (deluxe) and R +6.4 (classic), which averages to R +7, but TJ Cox won by D +0.8. Cox did almost 8 points better than the end polling showed, meaning that end polling in CA-21 was skewed almost 8 points to the Right.
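
    (The whole comparison boils down to one piece of arithmetic: average the two model margins into a consensus, then subtract the consensus from the actual result. A quick sketch of that calculation - my own helper, nothing from 538 - using the two examples above:)

    ```python
    # Signed margins: positive = Democratic margin, negative = Republican margin.
    def consensus_and_miss(deluxe: float, classic: float, actual: float):
        """Average the two 538 model margins; miss = actual - consensus.

        A positive miss means the end polling was skewed to the Right
        (it understated the Democrat); a negative miss means skewed Left.
        """
        consensus = (deluxe + classic) / 2
        return consensus, actual - consensus

    # AZ-02: D +10.6 (deluxe), D +11.4 (classic); Kirkpatrick won by D +9.5
    print(consensus_and_miss(10.6, 11.4, 9.5))    # (11.0, -1.5): under by 1.5
    # CA-21: R +7.6 (deluxe), R +6.4 (classic); Cox won by D +0.8
    print(consensus_and_miss(-7.6, -6.4, 0.8))    # (-7.0, 7.8): ~8 pts Right-skew
    ```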

    Of the 65 CDs here, 538 nailed it in 50 (77%), missed it in 10 (15%), and in 4 CDs the two models disagreed, so the prognostication was inconclusive.
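
    (The percentages are just those counts over 65; a trivial check:)

    ```python
    correct, missed, split = 50, 10, 4   # counts as given above
    total = 65
    print(f"correct: {correct / total:.0%}, missed: {missed / total:.0%}")
    # -> correct: 77%, missed: 15%
    ```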

    So, with all that blah blah blah in mind, here is the table, first in alphabetical order:

    [Images: 538 comparison to actual results, alphabetical order, parts 1-3]
    And, the same table, in descending order of probability (deluxe mode):

    [Images: 538 comparison to actual results, in descending order of deluxe probability, parts 1-3]


    Now, let's go to the misfires. In the 10 CDs where 538's call was wrong, 8 were CDs that had been called for the Republican but ended up being a D pick-up: CA-21, GA-06, NM-02, NY-11, OK-05, SC-01, TX-32 and VA-02. That being said, in GA-06, NM-02 and VA-02, 538 was seeing a 2-point race or less (essentially, a toss-up zone). There were 2 CDs where 538 prognosticated a Democrat winning, but the Republican won: MN-01 and KS-02. Both of those were GOP holds, and in both cases a 2-point race or less. So, in 5 of the 10 miscalls, those races were true tossups.

    In the four races where the two 538 models were not in agreement with each other, three of the four ended up being the nailbiters that Nate predicted:

    IL-06: the deluxe model showed Casten (D) by D +2, the classic model showed Roskam (R-inc) by R +0.2, average = Casten D +0.9. Casten won by D +7.1, so the aggregate polling was 6 points to the Right of his actual win. D +7 is not even a close race, not even remotely. In the case of IL-06, the deluxe model was the more correct of the two.

    But in TX-07, it was the opposite: the deluxe model showed Culberson (R-inc) by R +0.8 while the classic model showed Fletcher (D) by D +0.4 - true tossup numbers, the average of which would be Culberson R +0.2, about as close to a perfect tie as you can get. But Fletcher won by D +5.1, so the aggregate polling was just a little more than 5 points to the Right of her actual win.

    In VA-07, the deluxe model showed Spanberger (D) D +0.3 while the classic model showed Brat (R-inc) R +0.9, the average of which would be Brat R +0.3. Spanberger upset the incumbent by D +1.9, so she outperformed the polling by 2 points - all of this being well within the MoE.

    NC-09, NC-09, NC-09: hmmmmm... 538 showed McCready (D) D +0.6 (deluxe) and Harris (R) R +0.4 (classic); the aggregate would be McCready D +0.1 - statistically, an absolute tie. Unofficially, it's Harris R +0.3:

    [Image: 538 comparison to actual results - NC-09]

    Those are micro-margins, so deep within the MoE that, with numbers like these, anything can happen. However, since the state BOE - including the Republicans on the BOE - has twice refused to certify this race because of verifiable election fraud on the part of people working for Harris (an Evangelical minister, just to note), this race will either get a redo, or the incoming House under Speaker Nancy Pelosi can refuse to seat Harris. This has historically happened and is a very real possibility.
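
    (Pulling the four split-model CDs together: with the same consensus arithmetic as in the sketch above, and the margins as quoted in this post, the picture looks like this:)

    ```python
    # The four CDs where the two 538 models disagreed.
    # (deluxe, classic, actual) as signed margins: D positive, R negative.
    split_cds = {
        "IL-06": ( 2.0, -0.2,  7.1),   # Casten beat the consensus by ~6
        "TX-07": (-0.8,  0.4,  5.1),   # Fletcher beat it by ~5
        "VA-07": ( 0.3, -0.9,  1.9),   # Spanberger, ~2 pts, within the MoE
        "NC-09": ( 0.6, -0.4, -0.3),   # unofficial Harris margin, a statistical tie
    }
    for cd, (deluxe, classic, actual) in split_cds.items():
        consensus = (deluxe + classic) / 2
        print(f"{cd}: consensus {consensus:+.1f}, actual {actual:+.1f}, "
              f"miss {actual - consensus:+.1f}")
    ```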

    The long and short of it is, 538's prognostications were excellent: the really close races that were identified as close races a long time ago were indeed close races. The wide margins that 538 saw a long time ago (PA-05, VA-10, CA-49, for instance) were indeed wide margins.

    And OH-01? Well, the polling showed Chabot at R +5.3 and R +5.2, respectively. He won by R +4.4, within 1 point of the prognostications. So, yeah: canary in the coal mine, indeed.

    The races that suddenly appeared on the radar screen on election night were races where the aggregate and the electoral history of the CD showed no sign of the CD flipping. But as in every cycle, there are usually 3 or 4 surprises, and so it was this time around as well.

    In most, but not all, cases the end polling had a slight mathematical bias to the Right, but in most cases it was pretty close to the MoE.
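
    (Quantifying that bias is just the mean of the signed misses across all 65 CDs. A sketch of the calculation - the two rows here are only the examples worked above; the full per-CD margins live in the linked spreadsheet:)

    ```python
    from statistics import mean

    # Signed miss per CD: actual minus consensus; positive = polling skewed Right.
    misses = {
        "AZ-02": 9.5 - 11.0,      # -1.5: polling slightly favored the Democrat
        "CA-21": 0.8 - (-7.0),    # +7.8: polling heavily favored the Republican
    }
    print(f"mean signed miss: {mean(misses.values()):+.1f} pts")
    ```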

    On election day, 538 gave the Democratic Party an 85.8% (6 in 7) chance in the deluxe model and an 87.9% (7 in 8) chance in the classic model of taking the House, with a net seat gain of between +36 and +39. The Ds now have a net 40-seat gain, so 538 was not only on the mark, it was exquisitely on the mark. So, the next time that uninformed people who are ignorant of facts spew nonsense about Nate Silver and 538, you can just point them right to this thread....

    In 2018, we saw more congressional (House of Representatives) polling than ever before, and it was scary-good, to say the least - especially the NYT/Siena live polling. They did good, solid, well-grounded work, worth noting and praising.


    -Stat
     
  2. An Old Guy

    Nice to 'see' you... and no, not everyone here is crazy ;). Thanks for posting this; forecasting and polling can be difficult at best, but Nate is damned smart and has a pretty darn good crew. I don't need the intense data, but I can definitely see the need for it in political circles. This past midterm cycle was a massive win for the Dems, an absolute rejection of Trumpism, and doesn't bode well for Individual-1 and the Republicans in 2020, assuming Individual-1 gets that far.....:D
     
  3. Statistikhengst


    Ahhh, that good old Individual-1. I hear he's drinking Covfefe coffee whilst dreaming about the heroic delicious chocolate cake slaughter at Bowling Green and writing lots of kooky stuff in all CAPS because Jeebus or something.
     
  4. Spooky

    538 gave Donald Trump a 2% chance to win the nomination.

    They were so far off the mark that Nate Silver had to issue a public apology for his poor performance.

    It's the methodology they use, and it's unreliable.

    It will get you very close to the mark or so far off it's laughable.

    That's why no reputable polling outfits use it.
     
  5. The Don

    ....and once again you fail to understand how 538 works and/or deliberately misrepresent its results.

    538 merely runs the current polls through its model and gives a forecast of the range of outcomes based on those polls - it isn't, and doesn't pretend to be, a crystal ball predicting unlikely future trends. At the time that 2% forecast was made, that's what the polls were showing. Later forecasts would have put Donald Trump's likelihood of winning the nomination much higher.
     
  6. MrTLegal

    You keep bringing up one prediction from early 2015 as a justification for ignoring the massive accuracy of his predictions in 2018.
     
  7. Durandal

    Chances are she'll do it again, too.

    Trolololo.
     
  8. drluggit

    So, do we suppose that the real reason Democrats believe in interference in 2016 is that they had invested so much in the Silver method and the "results" it delivers - forecasts that map so closely to the output of the voting machines - that the Russians must somehow have broken that tight relationship to the actual output of the machines, which should have cemented a Hillary coronation in 2016?

    Now that's some wild shyte, huh. I and others have offered up trips to any Vegas casino of Nate's choice to see his method make us oodles of money at the games of chance. Obviously, so far Nate has never taken any of us up on those offers. The best question is why? Is it because there isn't an actual expectation that the predictive models can be used in real life? Likely. And probably the casino security folks would figure it out and offer Nate a long stay in the basement.

    The most interesting question is: when you put a candidate against Mr. Trump in 2020, how closely will those models predict the outcomes... like 2016?
     
  9. Spooky

    Yeah, because it shows how wrong he can be.
     
  10. MrTLegal

    Great. Predicting the future of human behavior is hard.

    This thread is specifically about how correct he can be as well - and about the reasons (end polling that was overly positive for the Republicans) he could have been even more correct.
     
