Across the world, schools are wedging AI between students and their learning materials; in some countries, more than half of all schools have already adopted it (often an “edu” version of a model like ChatGPT or Gemini). This is usually done in the name of preparing kids for the future, despite the fact that no consensus exists on what preparing them for an AI future actually means.

Some educators believe AI is not that different from previous cutting-edge technologies like the personal computer and the smartphone, and that we need to push the “robots in front of the kids so they can learn to dance with them” (paraphrasing Harvard professor Houman Harouni). This framing ignores the obvious fact that AI is by far the most disruptive technology we have yet developed. Any technology that has experts and developers alike (including Sam Altman a couple of years ago) warning of the need for serious regulation to avoid potentially catastrophic consequences probably isn’t something we should take lightly. In very important ways, AI isn’t comparable to the technologies that came before it.

The reasoning we’re hearing from educators in favor of AI adoption doesn’t offer solid arguments for rushing to include it broadly in virtually all classrooms, rather than offering something like optional college courses in AI for those interested. It also doesn’t sound like the academic rigor and careful vetting many of us would expect from the institutions tasked with the important responsibility of educating our kids.

ChatGPT was released roughly three years ago. Anyone who uses AI generally recognizes that its actual usefulness is highly subjective. And as much as it might feel like it’s been around forever, three years is hardly enough time to have a firm grasp on what something this complex actually means for society or education. It’s a stretch to say AI has had enough time to establish its value as an educational tool, even if we had clear and consistent standards for its use, which we don’t. We’re still scrambling and debating over how to use it in general. We’re still in the AI wild west: untamed and largely lawless.

The bottom line is that the benefits of AI to education are anything but proven at this point. The same can be said of the vague notion that every classroom must have it right now to prevent children from falling behind. Falling behind how, exactly? What assumptions are being made here? Are they founded on solid, factual evidence or merely speculation?

The benefits to Big Tech companies like OpenAI and Google, however, seem fairly obvious. They get their products into the hands of customers while they’re young, cultivating brand attachment early. They get a wealth of highly valuable data on them. They may even get to experiment on them, as they have been caught doing before. And they reinforce the corporate narrative behind AI: that it should be everywhere, a part of everything we do.

While some may want to assume these companies are doing this as a public service, their track record reveals a consistent pattern of actions focused on market share, commodification, and the bottom line.

Meanwhile, educators are contending with documented problems in their classrooms, as many children seem to be performing worse and learning less.

The way people of all ages use AI has been shown to encourage “offloading” thinking onto it, which seems close to the opposite of learning. Even before AI, test scores and other measures of student performance were plummeting. This seems like a terrible time to risk making our children guinea pigs in a broad experiment with poorly defined goals and unregulated, unproven technologies that, in their current form, may be more of an impediment to learning than an aid.

This approach has the potential to leave children even less prepared for the unique and accelerating challenges our world presents, challenges that will require the very critical thinking skills currently being eroded (in adults and children alike) by the technologies being pushed as learning tools.

This is one of the many situations today that terrify me when I try to imagine the world we might be creating for ourselves and future generations, particularly given my personal experiences and what I’ve heard from others. One quick look at the state of society will tell you that even we adults are increasingly unable to determine what’s real anymore, in large part because of the way our technologies influence our thinking. Our attention spans are shrinking, and our ability to think critically is deteriorating along with our creativity.

I’m not personally against AI. I sometimes use open-source models, and I believe there is a place for it if done correctly and responsibly. But we are not regulating it even remotely adequately. Instead, we’re hastily shoving it into every classroom, refrigerator, toaster, and pair of socks in the name of making it all smart, while we ourselves grow ever dumber and less sane in response. Anyone else here worried that we might end up digitally lobotomizing our kids?

  • SoftestSapphic@lemmy.world · 15 days ago

    AI highlights a problem with universities that we have been ignoring for decades already: learning is not the point of education. The point is to get a degree with as little effort as possible, because that’s the only valuable thing to take away from education in our current society.

    • ThomasWilliams@lemmy.world · 14 days ago

      The rot really began with Google and the goal of “professionalism” in teaching.

      Textbooks were thrown out, in favour of “flexible” teaching models, and Google allowed lazy teachers to just set assignments rather than teach lessons (prior to Google, the lack of resources in a normal school made assignments difficult to complete to any sort of acceptable standard).

      The continual demand for “professionalism” also drove this trend - “we have to have these vast, long winded assignments because that’s what is done at university”.

      AI has rendered this method of pedagogy void, but the teaching profession refuses to abandon their aim for “professionalism”.

    • T156@lemmy.world · 14 days ago

      I’d argue schooling in general. Instead of being something you do because you want to and enjoy it, it’s a thing you have to do, either because you don’t have the qualifications for a promotion or because you need them for an entry-level position.

      People who are there because they enjoy study, or want to learn more, are arguably something of a minority.

      Naturally, if you’re there because you have to be, you’re not going to put much, if any, effort in, and will look to take what shortcuts you can.

  • tehn00bi@lemmy.world · 15 days ago

    I just keep seeing in my head when John Connor says “we’re not going to make it, are we?”

  • lechekaflan@lemmy.world · 14 days ago

    Through AI as glorified meme generators, the oligarchies are now steering millions of people to become… cows.

  • jpreston2005@lemmy.world · 15 days ago

    I gotta be honest. Whenever I find out that someone uses any of these LLMs or AI chatbots, hell, even Alexa or Siri, my respect for them instantly plummets. What these things are doing to our minds is akin to how your diet and cooking habits change once you start using DoorDash extensively.

    I say this with full understanding that I’m coming off as just some luddite, but I don’t care. A tool is only as useful as it improves your life, and off-loading critical thinking does not improve your life. It actively harms your brain’s higher functions, making you a much easier target for propaganda and conspiratorial thinking. Letting children use this is exponentially worse than letting them use social media, and we all know how devastating the effects of that are… This would be catastrophically worse.

    But hey, good thing we dismantled the Department of Education! Wouldn’t want kids to be educated! Just make sure they know how to write a good AI prompt, because that will be so fucking useful.

    • Modern_medicine_isnt@lemmy.world · 15 days ago

      That sounds like a form of prejudice. I mean, even Siri and Alexa? I don’t use them, for different reasons… but a lot of people use them as voice-activated controls for lights, music, and such. I can’t see how they are different from the Clapper. As for the LLMs… they don’t do any critical thinking, so no one is offloading their critical thinking to them. If anything, using them requires more critical thinking, because everyone who has ever used them knows how often they are flat-out wrong.

      • jpreston2005@lemmy.world · 15 days ago

        Voice-activated light switches that constantly spy on you, harvesting your data for third parties?

        Claiming that using AI requires more critical thinking than not is a wild take, bro. Gonna have to disagree hard with all of what you said.

        • Modern_medicine_isnt@lemmy.world · 15 days ago

          You hit on why I don’t use them. But some people don’t care about that, for a variety of reasons. That doesn’t make them less than.

          Anyone who tries to use AI without applying critical thinking fails at their task, because AI is just wrong so often. So they either stop using it, or they apply critical thinking to figure out when the results are usable. But we don’t have to agree on that.

          • jpreston2005@lemmy.world · 14 days ago

            I don’t think using an inaccurate tool gives you extra insight into anything. If I asked you to measure the size of objects around your house, and gave you a tape measure that was not correctly calibrated, would that make you better at measuring things? We learn by asking questions and getting answers. If the answers given are wrong, then you haven’t learned anything. It, in fact, makes you dumber.

            People who rely on AI are dumber, because using the tool makes them dumber. QED?

            • Modern_medicine_isnt@lemmy.world · 13 days ago

              How about this. I think it is pretty well known that pilots and astronauts are trained in simulations where some of the information they get from “tools” or gauges is wrong. On the surface it is just simulating failures. But the larger purpose is to improve critical thinking. They are trained to take each piece of information in context and, if it doesn’t fit, question it. Sound familiar?

              AI spits out lots of information with every response. Much of it will be accurate. But sometimes there will be a faulty basis that causes one or more parts of the information to be wrong. The wrongness almost always follows a pattern, though. In context, the information is usually obviously wrong. And if you learn to spot the faulty basis, you can even suss out which information is still good. Or you can just tell it where it went wrong, and it will often come back with the correct answer.

              Talking to people isn’t all that different. There is a whole sub on Reddit for people who are confidently wrong. But spotting when a person is wrong is often harder, because the depth of their faulty basis can be so much deeper than an AI’s. And, they are people, so you often can’t politely question the accuracy of what they are saying. Or they are just a podcast… I think you get where I am going.

              • jpreston2005@lemmy.world · 13 days ago

                You are really reaching to justify this stuff; it’s wild. No. I disagree. Using a flawed tool doesn’t increase your critical thinking skills. All it will do is confuse and ill-inform the vast majority of people. Not everybody is an astronaut.

                • Modern_medicine_isnt@lemmy.world · 12 days ago

                  I didn’t need to reach at all. I brought it down to several simple examples. You just aren’t willing to open your mind and consider it.
                  I 100% agree that it confuses and ill-informs many adults. That is why I think it is so important that kids be exposed to it and taught to think critically about what it tells them. It isn’t going to go away. And who knows, they might learn to apply that same critical thinking to what the talking heads on the internet tell them. But even if not, it would be worth it.

        • BananaIsABerry@lemmy.zip · 14 days ago

          If AI has significant limitations and is often wrong (which it definitely is), wouldn’t it take more critical thinking to determine when it’s done something wrong?

          • jpreston2005@lemmy.world · 14 days ago

            If I were to give you a calculator that was programmed to give the wrong answers, would that be a useful tool? Would you be better off for having used it?

            • BananaIsABerry@lemmy.zip · 14 days ago

              Does a calculator do a significant amount of statistical analysis and base its output on the most probable result from a massive data set?

              No. That would be stupid.

              People taking the response from LLMs at face value is a problem, which is the point of the discussion, but disregarding it entirely would be equally dumb. Critical thinking would include knowing when and where to use a specific tool instead of trying to force one to be universal.

              • jpreston2005@lemmy.world · 14 days ago

                But that’s the problem. AI people are pushing it as a universal tool. The huge push we saw to put AI in everything is kind of proof of that.

                People taking the response from LLMs at face value is a problem

                So we can’t trust it. But in addition, we also can’t trust people on TV, or people writing articles for official-sounding websites, or the White House, or pretty much anything anymore. And that’s the real problem. We’ve cultivated an environment where facts and realities are twisted to fit a narrative, and then demanded equal air time and consideration for literal false information peddled by hucksters. These LLMs probably wouldn’t be so bad if we didn’t feed them the same derivative, nonsensical BS we consume on a daily basis. But at this point we’ve introduced, and are now relying on, a flawed tool that bases its knowledge on flawed information, and it just creates a positive feedback loop of bullshit. People are using AI to write BS articles that are then referenced by AI. It won’t ever get better; it will only get worse.

              • bthest@lemmy.world · 13 days ago

                Does a calculator do a significant amount of statistical analysis and base its output on the most probable result from a massive data set?

                Well, they will be very soon. And they will probably require a monthly subscription fee as well. They will stop at nothing to make sure that every digital technological orifice is filled with a giant black AI dildo.

                May God have mercy on us.

        • Modern_medicine_isnt@lemmy.world · 14 days ago

          Read the word. Prejudice… pre-judice… pre-judgment. Judging someone on limited information that isn’t adequate to form a reasonable opinion. Hearing that someone uses Siri and thinking less of them on that tiny fact alone is prejudice. For all you know, Siri is part of how they make a living. Or any of a thousand reasons someone may use it and still be a good, intelligent person.

          • CXORA@aussie.zone · 14 days ago

            It’s not prejudgment if you know their actions… that’s what that means. At that point it’s just judgment.

            You can consider it unfair, unjust, narrow-minded, or any number of other terms, but absolutely not prejudice.

            • Modern_medicine_isnt@lemmy.world · 14 days ago

              But he doesn’t actually know their actions. He knows they “use” Siri. But he knows absolutely nothing about how. If they explained in detail how they use Siri, then it would not be prejudice. But just the phrase “I use Siri” is far from knowing their actions. It’s not like “I use an ice pick,” which has one generally understood use.

        • Modern_medicine_isnt@lemmy.world · 13 days ago

          Did you even read the comment I responded to? “Whenever I find out that someone uses any of these LLMs, or AI chatbots, hell even Alexa or Siri, my respect for them instantly plummets.”

          They are literally judging someone before they know any details other than that they use any form of AI at all. It could be a cybersecurity researcher for all the commenter knows.

  • SnarkoPolo@lemmy.world · 14 days ago

    People who can’t think critically tend to vote Conservative.

    Coincidence? I think not.

    • Tollana1234567@lemmy.today · 14 days ago

      That’s why conservative governments are all-in on adopting AI: because conservatives can’t tell the difference between an AI video and a real one. Just look at how many videos on Reddit are accused of being AI when they’re not.

  • SocialMediaRefugee@lemmy.world · 13 days ago

    Previous tech presented information, made it faster and more available. It also just processed information. AI however claims to do the creativity and decision making for you. Once you’ve done that you’ve removed humans from any part of the equation except as passive consumers unneeded for any production.

    How you plan on running an economy based on that structure remains to be seen.

    • E_coli42@lemmy.world · 15 days ago

      Old man yells at cloud.

      I remember the “ban calculators” crowd back in the day: “Kids won’t be able to learn math if the calculator does all the calculations for them!”

      The solution to almost anything disruptive is regulation, not a ban. Use AI when it can be a learning tool, and redesign school to be resilient to AI when it would not enhance learning. Have more open discussions in class, for a start, instead of handing kids a sheet of homework that can be done by AI when the kid gets home.

      • Chulk@lemmy.ml · 15 days ago

        Can’t remember the last time a calculator told me the best way to kill myself.

      • Jason2357@lemmy.ca · 15 days ago

        Offloading onto technology always atrophies the skill it replaces. Calculators offloaded, very specifically, basic arithmetic. However, math ≠ arithmetic. I used calculators and cannot do mental multiplication and division as fast or as well as older generations, but I spent that time learning to apply math to problems, understand number theory, and gain mastery of more complex operations, including writing source code to do math-related things. It was always a trade-off.

        In Aristotle’s time, people spent their entire education memorizing literature, and the written word offloaded that skill. This isn’t a new problem, but there needs to be something of value to be educated in that replaces what was offloaded. I think scholars are much better trained today, now that they don’t have to spend years memorizing passages word for word.

        AI replaces thinking. That’s a bomb between the ears for students.

        • E_coli42@lemmy.world · 9 days ago

          It doesn’t have to replace thinking if used properly. This is what schools should focus on instead of banning AI and pretending that kids are not going to use it behind closed doors.

          For example, I almost exclusively use Gen AI to help me find sources or as a jumping-off point to researching various topics, rather than as a source of truth itself (because it is not one). This is super useful as it automates away the tedious parts of finding the right research papers to start learning something and gives me more time to focus on my actual literature review.

          If we ban AI in schools instead of embracing it with caution, students won’t learn the skills to use it effectively. They’ll just offload their thinking to AI when doing homework.

      • lemmy_outta_here@lemmy.world · 15 days ago

        I remember the “ban calculators” back in the day

        US math scores have hit a low point in history, and calculators are partially to blame. Calculators are good to use if you already have an excellent understanding of the operations. If you start learning math with a calculator in your hand, though, you may be prevented from developing a good understanding of numbers. There are ‘shortcut’ methods for basic operations that are obvious if you are good with numbers. When I used to teach math, I had students who couldn’t tell me what 9 * 25 is without a calculator. They never developed the intuition that 10 * 25 is dead easy to find in your head, and that 9 * 25 = (10-1) * 25 = 250-25.
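        For anyone who wants the trick spelled out, that distributive shortcut can be sketched in a couple of lines of Python (purely illustrative; the function name is made up):

```python
# Mental-math shortcut: 9 * n = (10 - 1) * n = 10*n - n.
# Multiplying by 10 and subtracting once is far easier in your
# head than multiplying by 9 directly.
def times_nine(n):
    return 10 * n - n

print(times_nine(25))  # 250 - 25 = 225, same as 9 * 25
```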

        • E_coli42@lemmy.world · 9 days ago

          Interesting. The US is definitely not doing a good job at this, then, and needs to revamp its education system. Your example didn’t convince me that calculators are bad for students, but rather that the US schooling system is really bad if it introduces calculators so early that students never develop the intuition that 9 * 25 = (10-1) * 25 = 250-25.

  • mechoman444@lemmy.world · 15 days ago

    This is interesting. This entire post reads like a hot take from the poster themselves, unsupported by any actual article. While there are some linked sources, the author fails to specify what kind of AI is being discussed or how it is being used in the classroom. Overall, the post appears to be little more than anti-AI ragebait. More telling is that commenters attempting to inject nuance or level-headed discussion are being downvoted simply because they are not explicitly anti-AI. Frankly, the anti-AI rhetoric on this platform is becoming incoherent, nonsensical, and increasingly idiotic. Many of the loudest critics clearly have no understanding of what it is they claim to dislike.

    • Disillusionist@piefed.worldOP · 15 days ago

      While there are some linked sources, the author fails to specify what kind of AI is being discussed or how it is being used in the classroom.

      One of the important points is that there are no consistent standards or approaches toward AI in the classroom. There are almost as many variations as there are classrooms. It isn’t reasonable to expect a comprehensive list of all of them, and it’s neither the point nor the scope of the discussion.

      I welcome specific and informed counterarguments to anything presented in this discussion, I believe many of us would. I frankly find it ironic how lacking in “nuance or level-headed discussion” your own comment seems.

      • mechoman444@lemmy.world · 15 days ago

        I don’t disagree that there’s no single, unified standard for AI use in classrooms. That’s obvious and not controversial. But that point doesn’t actually address the criticism being made.

        “No consistent standards” is not a license to be vague. You don’t need an exhaustive list of every classroom implementation to name which AI tools you’re talking about, how they’re being used, or what specific harms you’re alleging. Minimum specificity is not the same thing as total coverage, and pretending otherwise is a dodge.

        Appealing to “scope” here also feels convenient. Scope is a choice made by the author. If the scope of an argument can’t tolerate basic clarification, then the argument itself is underdeveloped. Complexity does not excuse imprecision.

        As for the irony comment, asking for clarity, definitions, and informed counterarguments is nuance. What’s missing from this discussion isn’t level-headedness; it’s commitment to concrete claims. Abstract complaints about “AI in the classroom” without operational detail aren’t thoughtful critiques; they’re nothing more than feelings.

        You’ve offered nothing with your response except a visceral reaction. Do you have anything to add to the conversation aside from the fact that you obviously don’t like AI?

        • Disillusionist@piefed.worldOP · 14 days ago

          Your engagement on this issue is still clearly in bad faith. It reads like a common troll play where they attempt to draw a mark down a rabbit hole.

          Understand that I don’t play these games. This is me leaving you to your checkerboard. Take care.

          [Edited for grammar and brevity]

          • mechoman444@lemmy.world · 14 days ago

            So you’re not going to address anything I’ve said, aside from the fact that you don’t like it.

            Good day to you, sir.

    • Jason2357@lemmy.ca · 15 days ago

      Are you familiar with a social media site where it’s common to post well-researched and cited position papers? A rant is about what I expect in a place like this. The goal, I think, is to start a discussion, which is where your commenters injecting nuance or level-headed opinions come in. I personally don’t know what the solution is, but students using AI is an incredible experiment being conducted on the next generation. No one has anything but an opinion, because there’s no outcome data yet. My opinion is that it is scary as hell.

        • mechoman444@lemmy.world · 14 days ago

          There is nothing nuanced or level-headed about his response.

          Don’t get all salty because I negatively critiqued your post.

      • mechoman444@lemmy.world · 14 days ago

        I agree that Lemmy isn’t a venue for peer reviewed position papers, and I’m not asking for one. But “it’s a rant” doesn’t exempt an argument from basic clarity. Informal discussion still benefits from naming what you’re actually worried about.

        Calling this an “experiment” on the next generation is fair. Saying it’s “scary as hell” is also fair. What’s missing, and what people are reacting to, is the why and how. Is the concern skill atrophy, academic integrity, surveillance, equity, or something else entirely? Those distinctions matter if the goal is discussion rather than venting.

        Also, “no one has anything but an opinion” isn’t quite true. We don’t have long-term outcome data, but we do have analogs: calculators, spellcheck, search engines, LMS tools, and early AI pilots. That context doesn’t settle the debate, but it does constrain it.

        I’m not dismissing fear or uncertainty. I’m pushing back on the idea that vagueness is a virtue. If nuance is welcome in the comments, as you say it is, then the original framing should at least give people something concrete to engage with. Otherwise, the discussion predictably devolves into vibes and outrage, which helps no one.

  • StitchInTime@piefed.social · 15 days ago

    When I was in school I was fortunate enough that I had educators who strongly emphasized critical thinking. I don’t think “AI” would be an issue if it were viewed as a research tool (with a grain of salt), backed by interactive activities that showcased how to validate what you’re getting.

    The unfortunate part is that instructors’ hands are more often than not tied, and the temptation on the part of the student to just “finish the work” quickly is real. Then again, I had a few rather attractive girls flirt with me to copy my work, and they didn’t exactly get far in life, so I have to wonder how much has truly changed.

    • CXORA@aussie.zone · 14 days ago

      Using AI for research is absolutely insane to me. Isn’t an important part of research being able to cite sources?

    • Disillusionist@piefed.worldOP (+12) · 15 days ago

      This is also the kind of thing that scares me. I think people need to seriously consider that we’re bringing up the next wave of professionals who will be in all these critical roles. These are the stakes we’re gambling with.

    • jpreston2005@lemmy.world (+8) · 15 days ago

      When I was in medical school, the one thing that surprised me the most was how often a doctor will see a patient, get their history/work-up, and then step outside into the hallway to google symptoms. It was alarming.

      Of course, the doctor is far more aware of ailments, and his googling is more sophisticated than just typing in whatever the patient says (you have to know what info is important in the pt. history, because patients will include/leave out all sorts of info), but still. It was unnerving.

      I also saw a study way back when that found that hanging a decision-tree flowchart in emergency rooms and having nurses work through all the steps drastically improved patient care. Additionally, new programs can spot a cancerous mass on a radiograph/CT scan far before the human eye can discern it, and that’s great, but… we still need educated and experienced doctors, because a lot of stuff looks like other stuff, and sometimes the best way to tell them apart is through weird tricks like “smell the wound: does it smell fruity? Then it’s this. Does it smell earthy? Then it’s this.”

  • Modern_medicine_isnt@lemmy.world (+2/−11) · 15 days ago

    I couldn’t even finish the article. The mental gymnastics it would take to write it could only come from someone who never learned how to use AI. If anything, the article is a testament to how our children and everyone should be taught how to use AI effectively.

  • Comrade_Squid@lemmy.ml (+13) · 14 days ago

    I’ve been working on forming a socialist students’ society. Our first and current campaign is fighting back against AI in the local college, and the reaction from students has been electric. Students don’t want this: they know they are being deskilled, and they know who profits.

    • Madzielle@lemmy.dbzer0.com (+4) · 14 days ago

      My brother-in-law is in college for engineering. His mom was telling me he uses AI for his assignments and just edits the responses. She is writing a book, and said she uses AI all the time for it.

      It makes me want to scream. We weren’t even allowed to use SparkNotes when I was a student, and yet the schools today seem to be pushing this tech on them. The mother used my SparkNotes example as an excuse: “see, kids have always looked for ways to make their work easier.” It’s not the same, lady…

      While I’m glad to hear your school cohort is enthusiastic and informed, I’m not so sure it’s the general consensus among college students.

      • Comrade_Squid@lemmy.ml (+1) · 13 days ago

        I’ve met some in this campaign who like AI, but they don’t take much convincing once the material conditions are explained. I’m sure most of those students will continue to use AI, but 90% of the students I’ve spoken with so far agree. I’d imagine the educational institutions are getting some form of kickback or guidance from AI firms/partners; we saw similar with PCs entering education, something Silicon Valley opts its own children out of.

    • adr1an@programming.dev (+12/−1) · 15 days ago

      This. (Offline too.)

      Which generation did we really teach critical thinking to? In general, those “thinkers,” or people with strong research skills (e.g. reading comprehension and other traits), were always a minority within each generation. And I agree there will be fewer now with AI. But we have no polls or measurements, so the title reads a little clickbaity, resonating with the generalized discomfort towards a new technology that schools haven’t accommodated yet (e.g. all kinds of solutions are seen in the wild).

      I reckon it was the same with arithmetic and calculators in the past. We were able to deal with that! (So the proportion of people graduating with arithmetic skills didn’t shrink “too much” with each generation.)

      If we are considering possible scenarios, let’s be optimistic too.

      AI (discounting other problems like its ecological footprint) may not be that bad for our educational systems once we adjust…

  • ZILtoid1991@lemmy.world (+26/−1) · 15 days ago

    The very same people who called me stupid for thinking typing would be a more important skill than “pretty writing” now think art education is obsolete, because you can just ask a machine for an image.

    AI stands for “anti-intellectualism”.

    • dukemirage@lemmy.world (+8) · 15 days ago

      Handwriting’s still important. “Pretty” usually means legible, too, and the point of art education is not to be able to confidently produce pictures.

        • dukemirage@lemmy.world (+6) · 15 days ago

          Never left a note for someone in that time? Jotted a few quick thoughts before you forget? A list, some date, when a device is out of reach/battery/service?

    • Disillusionist@piefed.worldOP (+12) · 15 days ago

      One of Big Tech’s pitches about AI is the “great equalizer” idea. It reminds me of their pitch about social media being the “great democratizer”. Now we’ve got algorithms, disinformation, deepfakes, and people telling machines to think for them and potentially also their kids.

    • termaxima@slrpnk.net (+8) · edited · 15 days ago

      It seems writing things by hand is better for memorization, and it certainly feels more personal and versatile in presentation.

      I write lots of things by hand. Having physical papers is helpful, I find, to see lots of things at once, reorganise, etc. I also like being able to highlight, draw on things, structure documents non-linearly…

      I’m a computer scientist, so I do value typing immensely too. But I find it too constraining for many reasoning tasks, especially for learning or creativity.