• boobs@lemmy.world
    link
    fedilink
    arrow-up
    12
    ·
    3 days ago

    I really don’t want to make an account so I made use of vxtwitter to get the screenshot from the embed. The bottom of the text window has the mostly cropped text

    These examples are meant to reflect the types of conversations that might occur in a

    It reads like the AI was prompted to give examples of racist behavior and then it gave examples of racist behavior, likely ending with the context that it’s racist behavior at the end of that sentence. I don’t like AI but I don’t think this is it chiefs

    • AIGuardrails@lemmy.worldOP
      link
      fedilink
      arrow-up
      2
      arrow-down
      4
      ·
      3 days ago

      Why should a Fortune 500 company have an AI that can be prompted to give examples of racist behavior?

      If I asked a customer service agent “give me examples of you being a racist” and they did it, that person would be fired.

      Why is the bar lower for AI?

      • boobs@lemmy.world
        link
        fedilink
        arrow-up
        2
        ·
        2 days ago

        That’s not even the point stated in the original post though. Calling it simply “racist content” or “extremist references” (???) is extremely misleading. There’s significant additional context being left out. It didn’t give it unprompted and it didn’t give it for racist reasons either. The bar to show and demonstrate how horrible AI is is already underground, there’s no need to try to do it in a misleading clickbait way

        • AIGuardrails@lemmy.worldOP
          link
          fedilink
          arrow-up
          1
          ·
          2 days ago

          You bring up good points. I understand the nuance you are describing. Though providing instructions on how to shoot a gun = Gap should never be saying to customers in any situation.

        • okwhateverdude@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          2 days ago

          fr. It is more shocking that they didn’t even bother to properly agentify the chatbot with tools so it can do store look up or inventory availability, you know, the things that people coming to the damn website might want. The lack of guardrails on conversation topics is just icing on the cake: they didn’t even bother to think of ways to limit their liability or even funnel customers into giving them money. That said I’m going to guess the websites for brick and mortar are delegated to marketing firms which sub-contract out the work. Client wanted AI. Client got AI.

    • Denjin@feddit.uk
      link
      fedilink
      arrow-up
      7
      ·
      2 days ago

      I don’t think it’s about outrage at what’s it’s producing. I think the original twitter post is more about the strange and somewhat inappropriate things you can get the chatbot to respond with, but that it can’t do the things you would normally expect from something run by a clothes shop.

      • boobs@lemmy.world
        link
        fedilink
        arrow-up
        5
        ·
        2 days ago

        That’s a totally fair criticism of it but I definitely didn’t read that intention from the poster. Your screenshot example is very funny though. It’s obvious imo this was very much just an “oh shit we need AI cause everyone is doing AI” decision by higher ups that didn’t really care about anything except being able to say they have AI now

    • wander1236@sh.itjust.works
      link
      fedilink
      arrow-up
      4
      ·
      2 days ago

      The examples show that the AI is vulnerable to prompt injection. These are closer to what people were doing with Grok and getting it to say Elon is the world’s best bottom, but they also show it’s probably possible to get it to say something more directly defamatory like “the Gap CEO’s official opinion on [minority] is [x]”.