• TropicalDingdong@lemmy.world
    6 months ago

    I think the right answer is to do what you described, in the aggregate. Don’t do it on a pollster-to-pollster basis; do it at the state level, across all polls. You don’t do this as a pollster because that isn’t really what you are trying to model with a poll, and polls being wrong or uncertain is just a part of the game.

    So it’s important to not conflate polling with the meta-analysis of polling.

    I’m not so much interested in polls or polling but in being able to use them as a source of data to model outcomes that individually they may not be able to predict. Ultimately a poll needs to be based on the data it samples from to be valid. If there is something fundamentally flawed in the assumptions that form the basis of this, there isn’t that much you can do to fix it with updates to methods.

    The -4, 8 spread is the prior I’m walking into this election year with. That, in spite of pollsters’ best efforts to come up with an unbiased sample, they can’t predict the election outcome is fine. We can deal with that in the aggregate. This is very similar to Nate Silver’s approach.
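
    One way to sketch that "deal with it in the aggregate" idea: pool the polls in a state, then widen the uncertainty with a prior on systematic polling error. Everything in this example is made up for illustration (the poll margins, sample sizes, and the reading of the -4/+8 band as roughly mean +2, sd 3); it's not anyone's actual model.

```python
import numpy as np

# Hypothetical state polls: candidate margins in percentage points,
# with their sample sizes. These numbers are invented for illustration.
polls = np.array([1.0, -2.0, 3.0, 0.5])
n = np.array([800, 1200, 600, 1000])

# Rough sampling standard error for a margin: ~100/sqrt(n) points.
se = 100.0 * np.sqrt(1.0 / n)

# Inverse-variance weighted aggregate of the polls.
w = 1.0 / se**2
mean = np.sum(w * polls) / np.sum(w)
agg_se = np.sqrt(1.0 / np.sum(w))

# Treat historical polling error as an extra variance term shared by all
# polls in the state -- one loose reading of a -4/+8 spread as a bias
# prior centered at +2 with sd ~3 (an assumption, not a fitted value).
bias_mean, bias_sd = 2.0, 3.0
post_mean = mean + bias_mean
post_sd = np.sqrt(agg_se**2 + bias_sd**2)

print(round(post_mean, 2), round(post_sd, 2))
```

    The point of the sketch is that the shared bias term barely moves with more polls: sampling error shrinks as you add surveys, but the systematic-error variance stays put, which is exactly why the aggregate, not any single pollster, is the right place to handle it.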

    • mozz@mbin.grits.dev
      6 months ago

      If there is something fundamentally flawed in the assumptions that form the basis of this, there isn’t that much you can do to fix it with updates to methods.

      On this, we 100% agree.