• Tylerdurdon@lemmy.world · 5 months ago

    They should provide that instantly if the patient wants it (once the scan is developed). Add whatever disclaimers and waivers you want, but I wouldn’t mind an instant answer.

    • hitmyspot@aussie.zone · 5 months ago

      Or, just have it as part of the x-ray software.

      Analysis determines this could be X; here’s a link to more info on this rare condition. Please confirm diagnosis and report.

      We don’t need AI to make a diagnosis. It’s a tool. The health professional can be trained in its use, just like they are for any other test.

      • Rednax@lemmy.world · 5 months ago

        If you tell a professional that the answer is “B” while the professional had “A” in mind, you will have to convince them why “B” is the correct answer, or they will ignore your suggestion. I think a good LLM should be able to tell which features it valued most in its reasoning. It would be much easier to get used to as a tool that way.

        • hitmyspot@aussie.zone · 5 months ago

          I agree, at least while they are sceptical. However, research data over time should show sensitivity and specificity, just like any other test.
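
          For anyone unfamiliar, sensitivity and specificity are just ratios from the confusion-matrix counts. A minimal sketch (the counts below are made up for illustration, not from any real study):

          ```python
          # Sensitivity and specificity from confusion-matrix counts.
          def sensitivity(tp: int, fn: int) -> float:
              """True positive rate: share of actual positives the test catches."""
              return tp / (tp + fn)

          def specificity(tn: int, fp: int) -> float:
              """True negative rate: share of actual negatives the test clears."""
              return tn / (tn + fp)

          # Hypothetical counts: 90 true positives, 10 false negatives,
          # 80 true negatives, 20 false positives.
          print(sensitivity(90, 10))  # 0.9
          print(specificity(80, 20))  # 0.8
          ```

          Track those two numbers for the model across studies and clinicians can weigh its suggestions the same way they weigh any other diagnostic test.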