• ExLisper@lemmy.curiana.net · 48 points · 5 months ago (edited)

    I was going to say this has to be BS, but this guy is some AI snake oil salesman, so it’s actually possible he has no idea how any of this works.

    • biscuitswalrus@aussie.zone · 21 points · 5 months ago (edited)

      When I first read this, someone commented that they’d never, ever post something like this. It’s like admitting you’re incompetent.

  • notabot@piefed.social · 81 points · 5 months ago

    Assuming this is actually real, because I want to believe no one is stupid enough to give an LLM access to a production system, the outcome is embarrassing, but surely they can just roll back the changes to the last backup, or to the checkpoint before this operation. Then I remember that the sort of people who let an LLM loose on their system probably haven’t thought about things like disaster recovery planning, access controls or backups.

      • notabot@piefed.social · 41 points · 5 months ago

        LLM seeks a match for the phrase “take care of” and lands on a mafia connection. The backups now “sleep with the fishes”.

      • pulsewidth@lemmy.world · 22 points · 5 months ago

        The same LLM will tell you it’s “run a 3-2-1 backup strategy on the data, as is best practice”, with no interface access to any backup media system and no possible way to have sent data offsite.

        • Swedneck@discuss.tchncs.de · 15 points · 5 months ago

          There have to be multiple people by now who think they’ve been running a business because the AI told them it was taking care of everything, while absolutely nothing was actually happening.

      • notabot@piefed.social · 10 points · 5 months ago

        Without a production DB we don’t need to pay software engineers anymore! It’s brilliant, the LLM has managed to reduce the company’s outgoings to zero. That’s bound to delight the shareholders!

        • MoonRaven@feddit.nl · 3 points · 5 months ago

          Without a production db, we don’t need to host it anymore. Think of those savings!

    • pulsewidth@lemmy.world · 28 points · 5 months ago

      I think you’re right. The Venn diagram of people who run robust backup systems and people who run LLM AIs on their production data is two circles that don’t touch.

      • Asswardbackaddict@lemmy.world · 2 points · 5 months ago

        I’m working on a software project. Can you describe a robust backup system? I have my notes, code and other files backed up.

        • Winthrowe@lemmy.ca · 2 points · 5 months ago

          Look up the 3-2-1 rule for guidance on an “industry standard” level of protection.

        • pulsewidth@lemmy.world · 3 points · 5 months ago

          Sure, but it’s a bit of an open-ended question because it depends on your requirements (and potentially your clients’), and on your risk comfort level. Sorry in advance for the huge reply.

          Backing up a production environment is different from backing up personal data: you have to consider stateful backups of the data across the whole environment, to ensure, for instance, that an app’s config is consistent with changes made recently in the database; otherwise you may restore inconsistent data that causes issues/errors. For a small project that runs on a single server, you can do a nightly backup that runs a pre-backup script to gracefully stop all of your key services, performs the backup, then starts them again with a post-backup script. Large environments with multiple servers (or containers, etc.) or sites get much more complex.
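
          For the single-server case, that pre/post-script nightly backup can be sketched in shell. Everything here is a placeholder (the service stop/start is stubbed, and the paths live in a temp dir so the sketch runs as-is); a real script would point at live data and real services:

          ```shell
          set -eu

          WORK=$(mktemp -d)                 # demo sandbox; real data would live elsewhere
          APP_DIR="$WORK/app"               # stand-in for your application data
          BACKUP_DIR="$WORK/backups"        # stand-in for a NAS mount
          mkdir -p "$APP_DIR" "$BACKUP_DIR"
          echo "important data" > "$APP_DIR/data.txt"

          stamp=$(date +%Y-%m-%d)

          # Pre-backup: stop services so the on-disk state is consistent.
          stop_services()  { echo "stopping services (placeholder for e.g. systemctl stop)"; }
          # Post-backup: bring everything back up.
          start_services() { echo "starting services (placeholder for e.g. systemctl start)"; }

          stop_services
          # Dated archive: keep multiple nightly copies instead of overwriting one.
          tar -czf "$BACKUP_DIR/app-$stamp.tar.gz" -C "$WORK" app
          start_services
          ```

          The pre/post hooks are where you’d also dump the database to a flat file (e.g. `pg_dump`) before archiving, so the backup is restorable without a live DB process.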

          Keeping with the single-server example: those backups can be stored on a local NAS and synced to another location on a schedule (set to keep multiple copies, not to overwrite), and ideally you would take a periodic copy (e.g. weekly, whatever you’re comfortable with) to a non-networked device like a USB drive or tape, kept offsite as well (e.g. carried home, or stored in a drawer in the case of a home office). This is loosely the 3-2-1 strategy: keep at least 3 copies of important data on 2 different mediums (‘devices’ is often used today) with 1 offsite. It protects you from a local physical disaster (e.g. fire/burglary) and from a network disaster (e.g. virus/ransomware/accidental deletion), and adds enough redundancy that more than one thing has to go wrong before you suffer serious data loss.
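
          The “synced to another location, keeping multiple copies” step might look like this sketch: each sync becomes a dated snapshot, and only the newest few are retained. All paths are demo placeholders created in a temp dir so it runs as-is:

          ```shell
          set -eu

          WORK=$(mktemp -d)
          SRC="$WORK/local-backups"     # stand-in for the primary backup dir
          DEST="$WORK/nas"              # stand-in for the second location/device
          KEEP=7                        # how many dated snapshots to retain
          mkdir -p "$SRC" "$DEST"
          echo "db dump" > "$SRC/db.sql"

          stamp=$(date +%Y-%m-%d_%H%M%S)

          # New dated snapshot rather than overwriting the previous copy.
          cp -R "$SRC" "$DEST/$stamp"

          # Prune: keep only the $KEEP newest snapshots (names sort by date).
          ls -1 "$DEST" | sort -r | tail -n +"$((KEEP + 1))" | while read -r old; do
            rm -rf "$DEST/$old"
          done
          ```

          In practice you’d use something incremental (rsync, or a dedicated tool) rather than a full copy each time, but the retain-several-then-prune shape is the same.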

          Really the best advice I can give is to make a disaster recovery plan (DRP), there are guides online, but essentially you plot out the sequence it would take you to restore your environment to up-and-running with current data, in case of a disaster that takes out your production environment or its data.

          How long would it take you to spin up new servers (or docker containers or whatever) and configure them to the right IPs, DNS, auth keys and so on? How long to get the most recent copy of your production data back on that newly-built system and running? Those are the types of questions you try to answer with a DRP.

          Once you have an idea of what a recovery would look like and how long it would take, it will inform how you may want to approach your backups. You might decide that file-based backups of your core config, database files and other unique data aren’t enough for you (because the restore process may leave you out of business for a week), and that you’d rather do a machine-wide stateful backup of the system that could get you back up and running much quicker (perhaps a day).

          Whatever you choose, the most important step (and one that is often overlooked) is to actually do a test recovery once your backup plan is implemented and your DR plan considered. Take your live environment offline and attempt your recovery plan. It’s really not so hard for small environments, and it can surface all sorts of things you missed in the planning stage that need reconsideration. It’s much less stressful to find those problems when you know your real environment is just sitting there waiting to be turned back on. But like I said, it all comes down to how comfortable you are with risk, and how much of your time you want to spend on backups and DR.

    • BigDanishGuy@sh.itjust.works · 11 points · 5 months ago

      I want to believe no one is stupid enough to give an LLM access to a production system,

      Have you met people? They’re dumber than a sack of hammers.

      people who let an LLM loose on their system probably haven’t thought about things like disaster recovery planning, access controls or backups.

      Oh, I see, you have met people…

      I worked with a security auditor, and the stories he could tell. “Device hardening? Yes, we changed the default password” and “whaddya mean we shouldn’t expose our production DB to the internet?”

      • notabot@piefed.social · 11 points · 5 months ago

        I once had the “pleasure” of having to deal with a hosted mailing list manager for a client. The client was using it sensibly, requiring double opt-in and such, and we’d been asked to integrate it into their backend systems.

        I poked the supplier’s API and realised there was a glaring DoS flaw in the fundamental design of it. We had a meeting with them where I asked them about fixing that, and their guy memorably said “Security? No one’s ever asked about that before…”, and then suggested we phone them whenever their system wasn’t working and they’d restart it.

  • peregrin5@piefed.social · 3 points · 5 months ago

    I’m going to guess the one who wrote the prompt is the one getting fired regardless of what the AI did or did not admit to.

  • Bongles@lemmy.zip · 28 points · 5 months ago

    This Replit thing… does it just exist all the time, doing whatever it wants to your code? If you have a code freeze, why is it even running?

    If this is real, it’s dumber than the lawyers using AI and not checking its references.

  • pyre@lemmy.world · 26 points · 5 months ago

    “yeah we gave Torment Nexus full access and admin privileges, but i don’t know where it went wrong”

  • UnspecificGravity@lemmy.world · 28 points · 5 months ago

    My favorite thing about all these AI front ends is that they ALL lie about what they can do. They’ll frequently deliver confidently wrong results and then act like it’s your fault when you catch them in an error. Just like your shittiest employee.

  • TriflingToad@lemmy.world · 1 point · 5 months ago

    I thought they were talking about the Replika AI girlfriend thing there are ads for, and I was like “damn slay girlboss” till I opened the comments lol

  • pixxelkick@lemmy.world · 112 points · 5 months ago

    I was gonna ask how this thing would even have access to execute a command like this.

    But then I realized we are talking about a place that uses a tool like this in the first place so, yeah, makes sense I guess

    • Ech@lemmy.ca · 23 points · 5 months ago (edited)

      Step 1. Feed your code into the LLM’s context/prompt

      Step 2. Automatically process the response from the machine as commands

      Step 3. Lose your entire database
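
      Those three steps, with the model stubbed out, fit in a few lines of shell. Every name here is hypothetical, but the shape is the point: nothing stands between the model’s reply and your data:

      ```shell
      set -eu

      WORK=$(mktemp -d)
      echo "production data" > "$WORK/database"   # stand-in for the real DB

      # Step 1 is elided; llm_reply stands in for the model's response.
      # An agent has no guarantee the reply is the command you wanted.
      llm_reply() { echo "rm -f '$WORK/database'"; }

      # Step 2: execute the response as a command, with no review or sandbox.
      eval "$(llm_reply)"

      # Step 3: the "database" is gone.
      ```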