• fine_sandy_bottom@discuss.tchncs.de
      6 months ago

      I think this statement is far too broad.

      It might be good to have AI review imaging someone has had done for some particular ailment.

      It’s definitely not good to have an LLM review conversations with my GP and send me targeted marketing for “potential” ailments.

      • person420@lemmynsfw.com
        6 months ago

        It is; my comment was a bit terse, you’re right.

        A potentially fantastic use of AI is scanning a person’s medical records against the vast medical knowledge humans have gathered over the past century or so, to help doctors identify problems more quickly and accurately.

        While the general-purpose AIs we use today can’t be trusted to diagnose anything (though I’d argue they can assist a competent doctor), a future special-purpose AI tailored to that task could revolutionize diagnosis. And at the rate AI is going (even if people like Sam Altman are stretching the truth), that future isn’t too distant.

    • xmunk@sh.itjust.works
      6 months ago

      Also worth mentioning.

      Ask for fucking consent.

      AI model training is off the fucking rails right now and we really need laws and lawsuits to punish assholes.

        • xmunk@sh.itjust.works
          6 months ago

          Sorry, maybe I should clarify that to “informed consent”. If someone shoves an eighty-page ToS in front of you to use the services of their private hospital, you may technically be giving consent, but it’s not informed consent.