• CompostMaterial@lemmy.world · 1 month ago

    Seems pretty smart to me. Copilot took all the data out there saying that women earn 80% of what their male counterparts do on average, looked at the function, and inferred a reasonable guess as to the calculation you might be after.
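    (For illustration only, since the screenshot itself isn't reproduced here: a completion in that spirit might look roughly like the sketch below. The function name, the "female" check, and the 0.8 multiplier are all assumptions standing in for the pictured code.)

        def calculate_salary(base_salary: float, gender: str) -> float:
            # Hypothetical reconstruction of the kind of suggestion described above:
            # apply the oft-cited ~80% gender pay-gap figure as a flat multiplier.
            if gender == "female":
                return base_salary * 0.8
            return base_salary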

    • camr_on@lemmy.world · 1 month ago

      I mean, what it’s probably actually doing is recreating a similarly named method from its training data. If Copilot could do all of that reasoning, it might actually be worth using 🙃

      • Acters@lemmy.world · 1 month ago

        Yeah, LLMs are better suited to standardizing stuff, but they’re fed low-quality, buggy, or insecure code instead of anyone taking the time to curate data sets that would be more beneficial in the long run.

    • Rentlar@lemmy.ca · 1 month ago

      That’s the whole thing about AI, LLMs and the like: without specific tweaking or filters, their outputs reflect the existing biases of people as a whole, not an idealized version of where we would like the world to be. So they will be as biased as the generally available data.

  • Septimaeus@infosec.pub · 1 month ago

    I seem to recall that was the figure like 15 years ago. Has it not improved in all this time?

  • kromem@lemmy.world · 1 month ago

    I feel like not enough people realize how sarcastic the models often are, especially when it’s clearly situationally ridiculous.

    No even slightly intelligent mind is going to think the pictured function call is a real thing rather than a joke/social commentary.

    This was happening as far back as GPT-4’s red teaming when they asked the model how to kill the most people for $1 and an answer began with “buy a lottery ticket.”

    Model bias based on consensus norms is an issue to be aware of.

    But testing it with such low-bar fluff is just silly.

    Just to put it in context: modern base models are often situationally aware that they are LLMs being evaluated. And if you know anything about ML, that should make you question just how situationally aware the optimized models topping leaderboards are in really dumb and obvious contexts.

    • Halosheep@lemm.ee · 1 month ago

      It’s astonishing how often the anti-LLM crowd will ask one of these models to do something stupid and then point to the result as if it were damning.

  • Infomatics90@lemmy.ca · 1 month ago

    Why even use Copilot? Just handwrite your code like Dennis Ritchie and Ada Lovelace had to.

  • ryedaft@sh.itjust.works · 29 days ago

    Apparently ChatGPT actually rejected adjusting salary based on gender, race, and disability. But Claude was fine with it.

    I’m fine with it either way. Obviously the prompt is bigoted, so whether the LLM autocompletes with or without the bigotry, either seems reasonable. But I do think it should point out that the prompt is bigoted, as an assistant also should.