• Xander707@lemmy.world · 2 days ago

    This is an asinine position to take because AI will never, ever make these decisions in a vacuum, and it’s really important in this new age of AI that people fully understand that.

    It could be the case that an accurate, well-informed AI would do a much better job of diagnosing patients and recommending the best surgeries. However, when there's a profit incentive and a business involved, you can be sure the AI will be mangled through the appropriate IT, lobbyist, and congressional channels to make sure it modifies its decision-making in the interests of the for-profit parties.

    • Corkyskog@sh.itjust.works · 2 days ago

      They'll just add a simple flowchart afterward: if the AI denies the thing, accept the decision; if the AI approves the thing, send it to a human to deny.
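      The flow being described reduces to a couple of lines of code. A minimal sketch (the function name and decision labels are hypothetical, and the point is satirical):

```python
def route_claim(ai_decision: str) -> str:
    # "deny"/"approve" are hypothetical labels for the AI's output.
    if ai_decision == "deny":
        return "denied"  # an AI denial is treated as final
    # An AI approval gets escalated so a human can overrule it
    return "escalated to human reviewer"

print(route_claim("deny"))     # denied
print(route_claim("approve"))  # escalated to human reviewer
```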

    • fodor@lemmy.zip · 2 days ago

      I think your hypothetical is just false; we can't even give AI that much potential credit. And that becomes incredibly obvious once you ask about transparency, reliability, and accountability.

      For example, it may be possible to come up with a weighted formula that looks at various symptoms and possible treatments and produces a suggestion for what to do with a patient in a particular situation. That's not artificial intelligence; that's just a basic use of formulas and statistics.
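      The kind of transparent, non-AI formula being described might look like this sketch (the weights, symptom names, and threshold are made up for illustration; a real system would fit them from data):

```python
# Hypothetical symptom weights -- plain numbers, not a learned model.
WEIGHTS = {"fever": 0.4, "cough": 0.3, "fatigue": 0.2}

def risk_score(symptoms):
    """Weighted sum of symptom severities, each severity in [0, 1]."""
    return sum(WEIGHTS.get(name, 0.0) * severity
               for name, severity in symptoms.items())

def suggestion(symptoms, threshold=0.5):
    # A fixed cutoff turns the score into an auditable recommendation.
    return "refer for testing" if risk_score(symptoms) >= threshold else "monitor"
```

      Every term in that score can be inspected and explained, which is exactly the transparency a black box lacks.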

      So where is the AI? I think the AI only really enters in the black-box situations, where you want to throw a PDF or an Excel file at your server and get back a simple answer. And then what happens when you want clarity on why that's the answer? There's no real reply and no truthful one; it's just a black box that doesn't understand what it's doing, and you can't believe any of the explanations it gives anyway.

      • Xander707@lemmy.world · 8 hours ago

        I’m going to have to disagree with your reply.

        AI is capable of doing a better and more efficient job of diagnosing patients and recommending surgeries than humans, or even human-created algorithms.

        Think about chess. When computers were in their infancy, there was much skepticism that a computer could ever master the game and reliably beat the world's best players. Eventually we built chess engines that were very strong by feeding them tons of data and chess theory, essentially giving them algorithms that let them contend with top players. These engines performed well because they played at the level of top players but without the human tendency to make natural errors. They could beat grandmasters, but it wasn't a sure victory.

        Enter AI. New chess engines were built with neural networks, and rather than being fed tons of chess data and theory, they were simply given the rules of the game and set to play against themselves, learning with the goal of increasing their win rate. These AI engines were able to far surpass the previous conventional algorithmic engines because they were self-learning and defied conventional chess theory, discovering new ways to play and win and showing humans winning variations and positions never considered before.
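        The self-play idea scales down to a toy example. This is a loose sketch of Monte Carlo self-play on 1-3 Nim (take 1 to 3 stones per turn; taking the last stone wins), where the agent gets only the rules and builds its own value estimates by playing itself; it is an illustration of the learning loop, not how AlphaZero actually works:

```python
import random

def train(pile=10, episodes=20000, seed=0):
    """Self-play training: no strategy is given, only the rules."""
    rng = random.Random(seed)
    q = {}  # (stones_left, move) -> value estimate for the player to move

    def legal(n):
        return [m for m in (1, 2, 3) if m <= n]

    for _ in range(episodes):
        n, history = pile, []
        while n > 0:
            moves = legal(n)
            if rng.random() < 0.2:                          # explore
                m = rng.choice(moves)
            else:                                           # exploit
                m = max(moves, key=lambda m: q.get((n, m), 0.0))
            history.append((n, m))
            n -= m
        # Whoever made the last move won; credit moves backwards,
        # flipping the sign as the perspective alternates between players.
        reward = 1.0
        for state, move in reversed(history):
            old = q.get((state, move), 0.0)
            q[(state, move)] = old + 0.1 * (reward - old)
            reward = -reward
    return q

def best_move(q, n):
    return max((m for m in (1, 2, 3) if m <= n),
               key=lambda m: q.get((n, m), 0.0))
```

        After training, the agent reliably finds the immediate winning moves (e.g. taking all 3 stones when 3 remain) without ever being told any Nim strategy.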

        In a similar way, AI could do the same with healthcare, and basically anything else. If the AI is advanced enough and given the goal of maximizing survival rate and quality of life in diagnosis and surgery, it will do so more efficiently than any human or basic algorithm, because it will see patterns and possibilities that today's best doctors and surgeons do not. A sufficiently advanced AI would diagnose you and recommend the correct, best surgery more accurately and more efficiently than even the world's best possible team of professionals or any non-learning algorithm.

        But the issue is that the insurance companies will never instruct the AI that the best survival rate and quality of life is the "checkmate", but rather whatever outcomes lead to the highest profit with the least legal or litigation risk.