• BolexForSoup@kbin.social
    4 months ago

    I don’t mind so long as all results are vetted by someone qualified. Zero tolerance for unfiltered AI in this kind of context.

    • Skua@kbin.social
      4 months ago

      If you need someone qualified to examine the case anyway, what’s the point of the AI?

        • Skua@kbin.social
          4 months ago

          In the example you provided, you’re doing it by hand afterwards anyway. How is a doctor going to vet the work of the AI without examining the case in as much detail as they would have without the AI?

        • Skua@kbin.social
          4 months ago

          In the test here, it literally only handled text. Doctors can do that. And if you need a doctor to check its work in every case, it has saved zero hours of work for doctors.

          • BolexForSoup@kbin.social
            4 months ago

            I’m really having a hard time believing you can’t imagine how high-powered computers running AI/LLMs can assist in a lab and/or hospital environment, when computers have already been useful in those settings for decades. This is kind of like the logic we see from older doctors rejecting EMRs because “it was fine the way it was.”

            I am also tired of AI evangelists promising the moon and more while showing paper-thin skin whenever any critique is mentioned. But AI is clearly useful here.