• rozodru@pie.andmc.ca

    Not with any of the current models, none of them are concerned with security or scaling.

  • onlinepersona@programming.dev

    I tried using AI in my Rust project and gave up on letting it write code. It does quite alright in Python, but Rust is still too niche for it. Imagine trying to write Zig or Haskell; it would make a terrible mess of it.

    Security is an afterthought in 99.99% of code. AI barely has anything to learn from.

    • funkless_eck@sh.itjust.works

      Even in Python you have to keep it siloed. You have to drip-feed it pieces, because if you give it the whole script it’ll eat comments and straight up chop out pieces, so you end up with something like

       def myFunction():
           # ...start of your function here...


      replacing actual code.

    • wiegell@feddit.dk

      Mitchell Hashimoto writes a lot of Zig with AI (and this interview is almost a year old), see: https://www.youtube.com/watch?v=YQnz7L6x068&t=490s

      How long since you last tried the tools? I think there has been some pretty astounding progress during the last couple of months. Until recently I did not use it daily, but now I just can’t ignore the efficiency boost it gives me.

      There are definitely security concerns, and at this point you should not trust code that you do not read and understand. But tbh I’m starting to believe that AI might, at least in the short term, free up resources to patch stuff and implement security features that were not prioritised before due to the focus on feature development. What it does to the IT sector in the long run, who knows…

      • onlinepersona@programming.dev

        That video showed him saying that it’s good for autocomplete. But speaking from experience testing it on Rust, Python, JS, HTML and CSS, it performed the worst on Rust. It wrote tests well, but sucked at features or refactoring. Whether the problem is between the chair and the screen, I don’t know.

        Whether AI will be able to write secure code, I dunno, I haven’t tried. Security could be put into the rules: have it add security-related tests, or add an adversarial agent that tries to find exploitable flaws in the code. That could probably do more than a developer who has no time assigned for testing, much less security.
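        The adversarial-testing idea can be sketched in miniature: security-focused tests that throw hostile inputs at a function and assert the bad cases never get through. Everything here (the `sanitize_filename` helper and the attack strings) is hypothetical, just to show the shape:

```python
def sanitize_filename(name: str) -> str:
    """Keep only the final path component so a user-supplied name
    cannot escape the intended directory (illustrative, not exhaustive)."""
    name = name.replace("\\", "/").split("/")[-1]
    if name in ("", ".", ".."):
        raise ValueError(f"unsafe filename: {name!r}")
    return name

# Adversarial-style checks: feed hostile inputs and assert that
# traversal sequences are stripped rather than passed through.
for attack in ["../../etc/passwd", "..\\..\\boot.ini", "a/../../b"]:
    assert ".." not in sanitize_filename(attack)

assert sanitize_filename("report.txt") == "report.txt"
print("all traversal probes rejected")
```

        A dedicated agent would generate inputs like these automatically, but the pass/fail criterion is the same.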

        What it does to the IT sector in the long run - who knows…

        Agreed. Things are moving so quickly, it’s impossible to predict. There are lots of people on LinkedIn screaming about the obsolescence of humans or other bold claims, but to me they are like drunk fortune tellers: tell enough fortunes and one is bound to be right.

        • wiegell@feddit.dk

          My naive hope is that local models, or maybe workplace-distributed clusters, catch up and the cloud-based bubble bursts. I am under the impression that, at the moment, whether a tool works well or not depends less on the LLM itself and more on how well all the software around it is constructed. E.g. for discovery, being able to quickly ingest a URL and access a web index is a big advantage of the cloud-based providers right now. And for coding, a lot of it comes down to quickly searching and finding the relevant parts of the codebase and evaluating whether the LLM has all the information it needs to correctly perform a task.
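          That “find the relevant parts of the codebase” step can be illustrated with a toy bag-of-words ranker; real tools use embeddings or a proper index, and the file contents below are made up:

```python
import re
from collections import Counter

def rank_files(query: str, files: dict, top_k: int = 3) -> list:
    """Rank files by how often the query's tokens appear in them.
    A stand-in for the retrieval step that picks context for the LLM."""
    tokens = re.findall(r"\w+", query.lower())
    scores = {
        path: sum(Counter(re.findall(r"\w+", text.lower()))[t] for t in tokens)
        for path, text in files.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Hypothetical mini-repo.
repo = {
    "auth.py": "def login(user, password): check the password hash",
    "db.py": "def connect(): open a database connection",
    "ui.py": "def render(): draw the login button",
}
print(rank_files("password login", repo))  # auth.py ranks first
```

          The point is that this scaffolding, not the model, decides what context the LLM ever sees.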

    • buddascrayon@lemmy.world

      It does quite alright in Python

      That’s because Python is the most forgiving language you could write in. You could drop entire pages of garbage into a script and it would still figure out a way to run properly.
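      A small illustration of that forgiveness: Python only resolves names when a line actually runs, so a script containing completely broken code still executes fine as long as the broken parts are never reached (both names below are deliberately undefined):

```python
def never_called():
    # Undefined names: Python compiles this happily and would only
    # raise a NameError if the function were actually called.
    return undefined_variable + nonexistent_module.attr

def main() -> str:
    return "script ran fine"

print(main())
```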

    • krooklochurm@lemmy.ca

      If you’re using Hannah Montana Linux you can just open a terminal and type “write me ____ in the language ____” and the Hannai Montanai will produce perfectly working code every time.

  • LemmyLegume@lemmy.world

    People who say these things clearly have no experience. I spent an hour today trying to get one of the better programming models to parse a response. I gave it the inputs and expected outputs, and it COULD NOT derive functional code until I told it what the implementation needed to be. If it isn’t a cookie-cutter problem, it just can’t predict its way through it.

  • skuzz@discuss.tchncs.de

    All these brainwashed AI-obsessed people should be required to watch I, Robot on loop for a month or two.

  • Routhinator@startrek.website

    AI is opening so many security HOLES. It’s not solving shit. AI browsers and MCP connectors are Wild West security nightmares. And that’s before you even trust any code these things write.

  • Baron von Fajita@infosec.pub

    Except that most risks are from bad leadership decisions. Exhibit A: patches exist for so many vulnerabilities that remain unpatched because of bad business decisions.

    I think in a theoretical sense, she is correct. However, in practice things are much different.

    • Ex Nummis@lemmy.world

      My old job had so many unpatched servers, mostly Linux ones, because of the general idea that “Linux is safe anyway”, and because Windows updates would often break critical infrastructure, so those were staggered and phased.

      But we’ve seen plenty of infected Linux packages since, so it’s almost a given there are huge open holes in that security somewhere.