• NutWrench@lemmy.world · 15 hours ago

    Chatbots should never give medical advice. Chatbots dispense basic, standalone factoids, like “aspirin is a pain reliever,” but they don’t know or care about dosages, comorbid conditions, or whether you live or die, so they won’t ask follow-up questions.

  • Zink@programming.dev · 15 hours ago

    I’m a human being, and I’m pretty sure I’m already not allowed to give legal or medical advice to anybody in New York or any other state.

  • melfie@lemy.lol · 18 hours ago

    In the US especially, medical professionals are overworked and simply don’t have the time and energy to properly diagnose. If you have a more complex, chronic issue, there’s a good chance you’ll be waiting months at a time to see various specialists who will only spend about 10 distracted minutes thinking about your case and might not have any useful insights, or who might misdiagnose you and make your condition worse. You basically have to do your own research and show them studies. If you’re a person of color or a woman, etc., there’s a good chance you won’t even be taken seriously. In an ideal world, it would work like it does on TV, but in the real world, it’s all about maximizing profits and the patients be damned. Sure, LLMs are unreliable, but they do at least provide ideas to research.

    • SaveTheTuaHawk@lemmy.ca · 16 hours ago

      That’s not why people are using chatbots; they’re using chatbots because they can’t afford healthcare.

      And before we get out the tiny violins for MDs: they gatekeep the system to keep their salaries high.

      Bad news, folks: MDs are using ChatGPT on the sly.

      • melfie@lemy.lol · 16 hours ago

        they are using Chatbots because they can’t afford healthcare

        Even if they do spend their limited resources on healthcare, there’s a good chance it’s going to be a waste of money.

        before we get out the tiny violins for MDs

        A lot of MDs are pretty useless in the first place, and that’s a big part of the problem. Maximizing the patient load doesn’t help anything. Just because someone can memorize and regurgitate information well, that doesn’t mean they’re going to be effective at their job. It’s often necessary to shop around to find someone who doesn’t suck, which is especially difficult for anyone who can’t afford it.

  • willington@lemmy.dbzer0.com · 1 day ago

    1. Make laws against chatbots.
    2. Demand proof you are not a chatbot.
    3. Surveillance capitalism.

    The real target here is population control.

    The lawmakers, who take billionaire money by the ton and who HAVE NEVER given a shit, suddenly, NOW, want to protect the vulnerable. Abso-fucking-lutely laughable on its face.

  • chunes@lemmy.world · 1 day ago

    Fuck the hell out of this.

    My brothers in christ, I’m not going to drink bleach because the chat bot tells me to. I’m trying to come up with diagnostic ideas to discuss with my doctors, and it’s invaluable for that.

  • moroninahurry@piefed.social · 2 days ago

    Laws like this are great for these companies. This is how they will justify removing access to useful information and putting it behind paywalls. But oh, you need a prescription, so now the insurance companies are involved (spoiler: they already are), and you don’t even have the option to pay out the nose for medical information.

    Then when Google search has been completely replaced with AI, you won’t even be able to search for medical information.

    Healthcare companies aren’t about to provide anything for free.

    • Soup@lemmy.world · 1 day ago

      LLMs and chatbots should not be giving medical advice. You are afraid of the private healthcare system, not the lack of access to the most janky bandaid fix for its failures.

      • moroninahurry@piefed.social · 1 day ago

        Neither should Wikipedia or Google. So I guess by your logic nobody should search or learn about medical conditions on a computer.

        • SaveTheTuaHawk@lemmy.ca · 17 hours ago

          I guess by your logic nobody should search or learn about medical conditions on a computer.

          How else would we know the TRUTH about 5G vaccines and ivermectin? Or the cures of apple cider vinegar?

        • Soup@lemmy.world · 1 day ago

          You know damn well there’s an important difference related to the confidence of a bot that has been a key problem since this whole thing started.

      • douglasg14b@lemmy.world · 1 day ago

        The line between medical advice and personal research is pretty freaking gray, so if we ban medical advice, does that also ban talking to LLMs about anything that is medical-adjacent?

        Does medical-adjacent mean personal disabilities? Drug-related interests? Pet health? Stretches? Pain support?

        Anything that falls under “Health, Wellness, and Fitness”?

        …etc

        It’s a slippery slope and we don’t need to be sliding down it

        • moroninahurry@piefed.social · 1 day ago

          People are so vicious over this tech they would rather have disabled poor people with cancer suffer and die under inadequate care than do anything about the inadequate care. Ban the tech, but let this all go on.

          If you are perfectly able and well, you can ignore all advice that isn’t perfect.

          The perspective they seem to lack is frightening. The empathy they refuse to extend is massive. This is ableism.

          Tech companies are bad, but use of tech will cure and ease cancer, HIV, and chronic disease. Bring on the downvotes.

          • badgermurphy@lemmy.world · 14 hours ago

            I think you may be falling into a false dichotomy. Not only is the choice being presented a bad one, it ignores real solutions to the root problem, leaving us to argue over the crappy “band-aid” solution to it.

            I believe that people needing health care should have no reason to ask a chat bot about their symptoms because they can ask a helpful doctor instead. The fact that they can’t do that is the problem, not their access or lack of it to the chat bot.

          • Soup@lemmy.world · 1 day ago

            “Would rather have disabled people with cancer suffer and die…”

            My guy, that’s not a lack of LLM access, it’s a completely fucked US healthcare system that forces people onto the internet because they can’t get what they need from the state, you goofy-ass weirdo.

            • douglasg14b@lemmy.world · 1 day ago

              Well, yes, of course, but restricting access to information machines doesn’t exactly help much either.

              • Soup@lemmy.world · 19 hours ago

                Do hallucinating LLMs, which have done such things as convince a child to commit suicide, really count as “information machines”? The Mayo Clinic website might take a single whole other braincell to read through, but at least it’ll be written properly.

                I mean, the fact that you consider these programs to have enough credibility to be called “information machines” is exactly why they’re so potentially dangerous.

              • SLVRDRGN@lemmy.world · 17 hours ago

                I hate to break it to you but… they’re not really “information machines”. Google search is a better information machine.

    • Routhinator@startrek.website · 1 day ago

      Most of the medical information coming up these days is garbage and you should be going to a known, reputable site and searching their database. LLMs have been trained on absolute garbage. There is nothing of value being kept from anyone here.

      • presoak@lazysoci.al · 16 hours ago

        LLMs have been trained on absolute garbage

        It depends on the LLM, actually.

        Specialized medical LLMs are very accurate.

        • badgermurphy@lemmy.world · 12 hours ago

          I’m sure the quality of the LLM output does vary a lot based on the size of the scope it covers and the training data set.

          However, I believe that if it were possible to get an LLM to be “quite accurate” in any context, that would make it easy to find a path to profitability for that tool, but I don’t think we have seen that materialize anywhere.

          I believe that the best they can get is “more accurate” than the mean, but still not accurate enough to reliably make anyone money*.

          *Nvidia notwithstanding

          • Routhinator@startrek.website · 6 hours ago

            Moreover, until you can get the same output from the same input from an LLM consistently, the entire tech is unreliable garbage.

  • d3adpaul77@lemmy.org · 2 days ago

    we don’t want the plebs getting around our carefully constructed cartels…

    • Burninator05@lemmy.world · 2 days ago

      Isn’t this just trading one cartel for another? The difference being that doctors and lawyers can be held accountable for their errors, while an LLM can’t, because no one actually stands behind it.

      • architect@thelemmy.club · 2 days ago

        Maybe but it’s trading one cartel for one that’s not as bad.

        Which is really saying something considering how bad these companies are.

        But imagine being gatekept from life because you don’t have enough money for it. Imagine going to the doctor over and over and over again, never able to find fucking shit, yet managing to get charged hundreds upon hundreds of dollars every fucking time. Until finally, over a decade later, one just randomly says oh, you need this super simple drug to take for a week to clear it. Thousands upon thousands of dollars, years of suffering, and not one of them could figure it the fuck out? Until one doctor took one look at my skin and knew? But the others were still owed a paycheck for it? So yeah, it is trading one cartel for another, but fuck the healthcare cartel. What the fuck did we expect to happen?

        If you don’t save people’s lives and you don’t give them a way to find healthcare, then you deserve what you fucking get, and we are all going to suffer for this. We are all going to suffer for allowing these quacks all over the place. Selling bullshit all over the place. Telling us vaccines don’t work. Yeah, it’s trading one for another, but at least one isn’t going to charge us a fucking car just to tell us to go fucking home and pass the dead baby by ourselves and to come back if it doesn’t work out, so they can get another car out of me to save my life.

        Yeah, I got some fucking beef with Healthcare professionals.

        • chunes@lemmy.world · 1 day ago

          The idiocy I see in these comments makes me weep for humanity. Most people have NO IDEA what it’s like to need and receive medical help. So much so that they want to deny tools to the very people who need them most.

      • chaotic_ugly@lemmy.zip · 2 days ago

        Maybe. LLMs are free(ish), meanwhile a single trip to the ER can leave a person destitute. Maybe that’s not so bad (it is) if the ER visit is for something actually urgent, but somewhere between 27% and 40% of ER visits are non-urgent and most are treatable by a PCP. But… ERs have to treat you while, in the US, a primary care physician can look you right in the eyes and turn you away because you have no money.

        People don’t want to admit that AI does some good because the companies that own these LLMs are as corrupt as any other and the implications of the corruption of this tech are horrifying. But for health care, including mental health, LLMs are an unexpected godsend.

        Uscher-Pines, L., Pines, J., Kellermann, A., Gillen, E., & Mehrotra, A. (2013). Emergency Department Visits for Nonurgent Conditions: Systematic Literature Review. American Journal of Managed Care. https://pmc.ncbi.nlm.nih.gov/articles/PMC4156292/

        Raven, M. C., et al. (2024). Emergency Department Visits That Could Be Managed at Other Care Sites. JAMA Network Open. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2813806

      • d3adpaul77@lemmy.org · 2 days ago

        Theoretically they can be, but in practice it’s not always so easy. I prefer options. There have already been dozens of cases of AI getting things right when doctors got it wrong. All trades should get the same competition.

  • deathbird@mander.xyz · 2 days ago

    If implemented, that would just ban chatbots that use large language models. It’s not a terrible idea.

    What would actually happen is that so-called AI chatbot systems would try to detect if someone is from New York and then try to exclude them from receiving medical or legal advice, fail, and then get sued and then pay a small fine, over and over again forever.

    • architect@thelemmy.club · 2 days ago

      This is a really bad idea.

      First because healthcare is clearly being gatekept from people.

      Second, because even if you go to a healthcare professional nowadays, there is no guarantee that that person is not a fucking idiot who doesn’t believe in vaccines. I can’t believe I have to actually ask people whether they believe in vaccines before they touch me, and then tell them not to come back into my room if they answer that they don’t believe in science. But that has happened, it has happened to the people I’ve taken care of, and because of this, healthcare can’t be trusted now.

      The LLM is not any worse than that. In fact, I would say that it’s already too cautious. No way the model is ever going to tell me vaccines are bad. It’s not going to tell me to take a poison to clear Covid. It’s not going to tell me to drink bleach like the president did. It’s literally not any worse than the bullshit we are dealing with all day every fucking day.

      And I’m getting to the point that if you’re a full-grown human fucking being and you’re going to believe a bot when it tells you to drink fucking bleach or swallow a fucking lightbulb, then that’s nature saying something about you.

      • Doomsider@lemmy.world · 2 days ago

        Naw, completely disagree. If you had a calculator you knew was defective, you would ban doctors and lawyers from using it.

        You also seem to think that an LLM is going to be inherently more accurate than an expert human. We can see with Grok how easy it is to manipulate an AI into saying racist white-nationalist garbage. So we are not just trusting the technology but also a layer of unpredictable corporate meddling.

        Why does the LLM recommend this drug but not the other one? We quickly see how a corporation could favor a certain medication due to behind the scene deals or even push a medication.

        You can’t trust a black box you are not allowed to look into. Trust in a LLM at this point is pure folly.

        • Lfrith@lemmy.ca · 1 day ago

          Funny thing is, LLMs are bad calculators too; I’ve seen them get simple multiplication wrong.

          They’re capable of generating content but unable to verify or know whether it is correct. A lot of people don’t realize that, because the less they know about a subject, the smarter it seems to them, forgetting that it’s, well… a language model. As in, just outputting what can be complete gibberish.

          • raldone01@lemmy.world · 1 day ago

            Some of the SOTA models like Gemini 3 Pro are getting quite good at ballpark estimations. I have fed them multiple complex formulas from my studies along with some values. The end result is often quite close and similar in accuracy to an estimation I would do myself. (It is usually more accurate than my own.)

            Now, I don’t argue there is any consciousness or magic going on, but I think the generalization is quite something! I have trained AI models for various robot control and computer vision tasks. Compared to older machine-learning approaches, transformers are very impressive, computationally accessible, and easy to use. (In my limited experience.)

            • Lfrith@lemmy.ca · 1 day ago

              I find it okay for writing programs since you can verify it to see if the output is correct.

              But actual analysis, not so much, since what comes out isn’t completely reliable even for things it should be, like numbers. The numbers might be close, but still off.

              Abstract stuff might be fine, but it’s still not something to entirely trust for analysis because of errors. There’s a lot of double-checking that needs to go on.

  • TheObviousSolution@lemmy.ca · 2 days ago

    Just have them add a disclaimer, or make the hosts liable for what their chatbots say. Stop adding bureaucracy that’s just asking to be selectively prosecuted and abused.

    • deathbird@mander.xyz · 2 days ago

      Section 230 of the Communications Decency Act is designed to allow platforms to exist because people can say whatever the fuck they want. But nobody should make a machine that says things they can’t control, and if you do, you need to be disciplined for such irresponsibility.

  • iegod@lemmy.zip · 2 days ago

    I don’t see how you police/enforce this. The technology is out of the bag, and people will find ways to access it. Do we need age/location verification for this now too? What if I’m running a local agent? I don’t agree with this.

    • cmnybo@discuss.tchncs.de · 2 days ago

      The law would allow you to sue whoever is running the chatbot. If you run your own LLM locally and take bad advice from it, then it’s your own fault.

      • iegod@lemmy.zip · 2 days ago

        Walk me through how a company not based or operating in New York would be subject to any actions from this lawsuit.

        • altkey (he\him)@lemmy.dbzer0.com · 2 days ago

          I do agree it’s limited to a small scope of New York-based smaller LLMs, but if you read the news you know exactly why this bill occurred: just now Mamdani gave up on a useless chatbot made with the local budget by his predecessor Adams: https://www.thecity.nyc/2026/01/30/mamdani-unusable-ai-chatbot-budget/ It was indeed giving inaccurate legal recommendations on the city’s website. I think the best result that can come from this bill is it becoming a trend across cities and states since, I suspect, the New York administration wasn’t the only one falling for this scam.

      • how_we_burned@lemmy.zip · 2 days ago

        So who gets sued? The guy who put the chatbot on the server and runs it, or the chatbot software developer?

        Or both?

  • Katherine 🪴@piefed.social · 2 days ago

    This bill gave us the “best” interaction:

    https://bsky.app/profile/badmedicaltakes.bsky.social/post/3mghyg5eufk2m

    A Bluesky skeet from @badmedicaltakes.bsky.social:

    "Twitter user eoghan:

    How dare poor people get free medical advice

    <quote tweet from Twitter user Polymarket: BREAKING: New York bill would ban AI from answering questions related to medicine, law, dentistry, nursing, psychology, social work, engineering, & more.>

    Twitter user YBrogard79094:
    JUST MAKE HEALTHCARE ACCESSIBLE

    Twitter user eoghan:

    AI is literally free healthcare. Being a communist must be exhausting"

    • Hiro8811@lemmy.world · 2 days ago

      You can google your symptoms, and there probably are some reliable sites, but a hallucinating chatbot is a bad idea. Not to mention some people suggested treating COVID with chlorine, vinegar, etc.

  • tinkermeister@lemmy.world · 2 days ago

    I may have become too cynical but, as is often the case when you dig deeper, this sounds like the result of lobbyists trying to protect licensing rather than people.

    We can be dumb, but we’ve been doing web searches for legal and medical advice for ages because it is too damned expensive and time consuming to go to professionals for every little thing. Not to mention, doctors have so little time for you that it is hard to get them to listen to the whole story to make connections between symptoms.

    The LLMs already tell you that they aren’t licensed professionals and, for many, provide citations for their sources (miles better than your typical health website).

    As a personal anecdote, my son was having stomach pain but was planning to tough it out. He checked with ChatGPT and it recommended he go to the ER. He did, and if he hadn’t, he would likely be dead now. He spent 3 days in the hospital having his bowels unobstructed through a tube in his nose.

    There is value in people having that kind of information at their fingertips.

    Regulation is absolutely needed, but I would rather they focus on protecting us from AI being used for military purposes, mass surveillance, etc. rather than protecting citizens from ourselves.

    • tempest@lemmy.ca · 2 days ago

      Are you in the US? My takeaway here is that American healthcare is bad, but we’re treating the symptom, not the disease.

      • tinkermeister@lemmy.world · 2 days ago

        Yeah, I’m in the US and I agree. Though it is going to take some serious change to treat the problem. In the meantime, this is at least a stopgap solution for people who don’t have a lot of options.

    • MinnesotaGoddam@lemmy.world · 2 days ago

      Wait, he thought he could sit through that pain at home? Your son is tough as nails. Give him a hug for me and everyone else who’s had that four-day NG-tube delight.

      • tinkermeister@lemmy.world · 2 days ago

        Yeah, he is pretty tough. I wish I could hug him, he is about a 10 hour drive from me. That tube was nightmarish from what he’s told me.

        • MinnesotaGoddam@lemmy.world · 2 days ago

          If I were his parent, I would be giving him gentle reminders to drink more water, after teasing him for eating way too much corn or broccoli or whatever bastard fiber caused his obstruction (assuming he’s in a mental place where he can handle the teasing).

  • ArbitraryValue@sh.itjust.works · 2 days ago

    If you don’t want legal or medical advice from an AI, you can already simply not ask the AI for legal or medical advice. But I don’t want your paternalistic restrictions on what I may ask.

    • moroninahurry@piefed.social · 2 days ago

      Sir, did you pay for that medical advice, though? That’s what these laws will eventually enforce: prescription advice.