I support free and open source software (FOSS) like VLC, qBittorrent, LibreOffice, GIMP…

But why do people say that it’s as secure or more secure than closed source software?

From what I understand, closed source software doesn’t disclose its code.

If you want to see the source code of Photoshop, you actually need to work for Adobe. Otherwise, you need to be some kind of freaking reverse-engineering expert.

But open source projects have their code available to the entire world on websites like GitHub or GitLab.

Isn’t that actually also helping hackers?

  • emb@lemmy.world

    The idea you’re getting at is ‘security by obscurity’, which in general is not well regarded. Having secret code does not imply you have secure code.

    But I think you’re right on a broader level, that people get too comfortable assuming that something is open source, therefore it’s safe.

    In theory you can go look at the code for the FOSS you use. In practice, most of us assume someone else has, and we just click download or tell the package manager to install. The old adage is “Given enough eyeballs, all bugs are shallow.” I think that probably holds, but the problem is that many of the eyes aren’t looking at anything. Having the right to view the source code doesn’t mean enough people are, or even meaningfully can. (And I’m as guilty of being lax and incapable as anyone; not looking down my nose here.)

    In practice, when security flaws are found in OSS, word travels pretty fast. But I’m sure more are out there than we realize.

    • towerful@programming.dev

      It’s also easier to share vulnerability fixes between different projects.

      Say “Y” uses memory management similar to “T”. If T gets hacked due to whatever, people who use both Y and T can report to Y that a similar vulnerability might be exploitable.

      Edit:
      In closed source, this might happen if both projects are under the same company.
      But users will never have the ability to tell Y that T was hacked in a way that might affect Y.

  • Canaconda@lemmy.ca

    Zero day exploits, aka vulnerabilities that aren’t publicly known, offer hackers the ability to essentially rob people blind.

    Open source code means you have the entire globe of developers collaborating to detect and repair those vulnerabilities. So while it’s not inherently more secure, it is in practice.

    Exploiting four zero-day flaws in the systems,[8] Stuxnet functions by targeting machines using the Microsoft Windows operating system and networks, then seeking out Siemens Step7 software. Stuxnet reportedly compromised Iranian PLCs, collecting information on industrial systems and causing the fast-spinning centrifuges to tear themselves apart.[3] Stuxnet’s design and architecture are not domain-specific and it could be tailored as a platform for attacking modern SCADA and PLC systems (e.g., in factory assembly lines or power plants), most of which are in Europe, Japan and the United States.[9] Stuxnet reportedly destroyed almost one-fifth of Iran’s nuclear centrifuges.[10] Targeting industrial control systems, the worm infected over 200,000 computers and caused 1,000 machines to physically degrade.

    Stuxnet has three modules: a worm that executes all routines related to the main payload of the attack, a link file that automatically executes the propagated copies of the worm and a rootkit component responsible for hiding all malicious files and processes to prevent detection of Stuxnet.

    Wikipedia - Stuxnet Worm

    • CompactFlax@discuss.tchncs.de

      “Open source code means you have the entire globe of developers collaborating to detect and repair those vulnerabilities.”

      Heartbleed has entered the chat
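
      (For anyone unfamiliar: Heartbleed boiled down to OpenSSL trusting an attacker-supplied length field when echoing back a heartbeat message. A rough Python sketch of the idea follows; this is not the real OpenSSL C code, and all names and data here are made up for illustration.)

```python
# Highly simplified sketch of a Heartbleed-style bug (CVE-2014-0160):
# the server echoes back `claimed_len` bytes, trusting the length the
# client *claimed* instead of the size of the payload it actually sent.

# Pretend this buffer holds the 4-byte payload followed by unrelated
# server memory (e.g. a session secret).
SERVER_MEMORY = bytes(b"ping" + b"secret-session-key-0123")

def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
    # BUG: no check that claimed_len <= len(payload),
    # so adjacent memory leaks back to the requester.
    return SERVER_MEMORY[:claimed_len]

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    # The fix: reject lengths larger than what was actually sent.
    if claimed_len > len(payload):
        raise ValueError("heartbeat length exceeds payload")
    return SERVER_MEMORY[:claimed_len]

# Attacker sends a 4-byte payload but claims it was 27 bytes long.
leaked = heartbeat_vulnerable(b"ping", 27)
```

      The point being: the globe of developers did eventually find it, but it sat in widely-deployed open source code for about two years first.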

    • Frezik@lemmy.blahaj.zone

      The whole Stuxnet story is fascinating. A virus designed to spread to the whole Internet, and then activate inside a specific Iranian facility. Convinced me that we already live in a cyberpunk world.

  • mvirts@lemmy.world

    Because more eyes spot more bugs, supposedly. I believe it; running closed source software is truly insane.

  • omzwo@lemmy.world

    Exactly. Open source means, by design, that more people are able to look at the code, and therefore there’s more incentive for those interested in it to make sure it works securely. You could try to be exploitative and keep your hack secret, but there’s always a chance someone else will spot the same thing you did and patch the code with a PR. Granted, it depends on how much the original developer cares about the code whether they accept or write a fix for a vulnerability someone brings up, but the software you listed consists of larger projects where lots of people have a vested interest in them working securely. For smaller projects or very niche software with fewer eyes and less interest, open source might not be the most secure.

    On the closed source side, the people interested in looking for hacks are the ones much more motivated to actually exploit vulnerabilities for personal gain. White hat hackers, on the other hand, are fewer for closed source software, because without the code openly available they need more motivation (i.e. the company offering bounties/incentives because it cares about security) to work out how the software functions.

  • CrazyLikeGollum@lemmy.world

    It’s not “assumed to be secure.” The source code being publicly available means you (or anyone else) can audit that code for vulnerabilities. The publicly available issue tracking and change tracking means you can look through bug reports and see if anyone else has found vulnerabilities and you can, through the change history and the bug report history, see how the devs responded to issues in the past, how they fixed it, and whether or not they take security seriously.

    Open source software is not assumed to be more secure, but its security (or lack thereof) is much easier to verify: you don’t have to take the dev’s word as to whether or not it is secure, and (especially for the more popular projects like the ones you listed) you have thousands of people with different backgrounds and varying specialties within programming, with no affiliation with the project and no reason to trust it, doing independent audits of the code.

  • chunes@lemmy.world

    Helping hackers is the whole point. They can read the source code and report problems with the software.

  • Ephera@lemmy.ml

    Somewhat of a different take from what I’ve seen from the other comments. In my opinion, the main reason is this:
    XKCD comic showing other engineers proud of the reliability of their products, and then software engineers freaking out about the concept of computerized voting, because they absolutely do not trust their entire field.

    Companies have basically two reasons to do safety/security: Brand image and legal regulations.
    And they have a reason to not do safety/security: Cost pressure.

    Now imagine a field where there’s hardly any regulations and you don’t really stand out when you do security badly. Then the cost pressure means you just won’t do much security.

    That’s the software engineering field.

    Now compare that to open-source. I’d argue a solid chunk of its good reputation is from hobby projects, where people have no cost pressure and can therefore take all the time to do security justice.
    In particular, you need to remember that most security vulnerabilities are just regular bugs that happen to be exploitable. I have significantly fewer bugs in my hobby projects than in the commercial projects I work on, because there’s no pressure to meet deadlines.
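
    To illustrate what I mean (a made-up example, not from any specific project): an everyday file-handling bug that is, at the same time, a security hole. All the paths and names here are invented.

```python
# An ordinary "works for normal input" bug that doubles as a
# vulnerability: naive path joining allows directory traversal.
import posixpath

BASE_DIR = "/srv/app/uploads"

def resolve_buggy(filename: str) -> str:
    # BUG: naive join; "../../etc/passwd" escapes BASE_DIR entirely.
    return posixpath.normpath(posixpath.join(BASE_DIR, filename))

def resolve_fixed(filename: str) -> str:
    # The fix: normalize, then verify the result stays under BASE_DIR.
    path = posixpath.normpath(posixpath.join(BASE_DIR, filename))
    if not path.startswith(BASE_DIR + "/"):
        raise ValueError("path escapes upload directory")
    return path
```

    To the developer writing `resolve_buggy`, this is just a regular bug they never hit with normal filenames; to an attacker, it’s an arbitrary-file-read vulnerability. Under deadline pressure, that check is exactly the kind of thing that gets skipped.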

    And frankly, the brand image applies even to open-source. I will write shitty code, if you pay me to. But if my name is published along with it, you need to pay me significantly more. So, even if it is a commercial project that happens to be published under an open-source license, I will not accept as many compromises to meet deadlines.

  • Luffy@lemmy.ml

    By your logic, no one could break locks, because they can’t see inside them. There are going to be people trying to break into everything, even though they don’t have the source code.

    9/10 people looking into your code are the ones using it themselves, so fixing a bug for everyone benefits them too.

    Also, there are entire companies working on and sponsoring these projects and paying people to find bugs, because if someone finds out that curl has a problem, they’re gonna have that problem too. So the only difference between something like VLC and Adobe is that you don’t have to suck their dick, really.

    There’s also curl and others which are offering bug bounties, since they are way more cost efficient than paying someone full time.

  • assembly@lemmy.world

    One thing to keep in mind is that NO CODE is believed to be secure…regardless of open source or closed source. The difference is that a lot of folk can audit open source whereas we all have to take the word of private companies who are constantly reducing headcount and replacing devs with AI when it comes to closed source.

    • bestboyfriendintheworld@sh.itjust.works

      You theoretically can see the code. You don’t actually look at it. Nor do you have the knowledge to understand the security implications of all the software you use.

      In practice it makes little difference for security if you use open or closed source software.

      • Grenfur@pawb.social

        No, you literally can see the code; that’s why it’s open source. YOU may not look at it, but people do. Random people, complete strangers, unpaid and un-vested in the project. The alternative is a company that pays people to say “Yeah, it’s totally safe.” That conflict of interest is problematic. Also, depending on what it’s written in, yes, I do sometimes take the time. Perhaps not for every single thing I run, but any time I run across niche projects, I read first. To claim that someone can’t understand it is wild. That’s a stranger on the internet; your knowledge of their expertise is 0.

        In practice, 1,000 random people with no reason to “trust you, bro” on the internet being able to audit every change you make to your code is far more trustworthy than a handful of people paid by the company they represent. What’s worse is that if Microsoft were to have a breach, then maybe 10 people on the planet know about it. 10 people with jobs, mortgages, and families tied to that knowledge. They won’t say shit, because they can’t lose that paycheck. Compare that to, say, the XZ backdoor, where the source is available and it gets announced, so people know exactly who, what, and where to resolve the issue.

  • chocrates@piefed.world

    Per Eric S. Raymond: “Given enough eyeballs, all bugs are shallow.”

    Basically it’s not inherently more secure, but often it’s assumed that enough smart people have looked at it.

    But yes, all software is going to have vulnerabilities.

  • Max-P@lemmy.max-p.me

    It helps hackers, sure, but it also helps the community vet the overall quality of the software and warn others not to use it. When it’s closed source, you have no choice but to trust the company behind it.

    There are several FOSS apps I’ve encountered, looked at the code, and passed on because it was horrible. Someone will inevitably write a blog post about how bad the code is, warning people not to use the project.

    That said, the code being public for everyone to see also inherently puts a bit of pressure to write good code because the community will roast you if it’s bad. And FOSS projects are usually either backed by a company or individuals with a passion: the former there’s the incentive of having a good image because no company wants to expose themselves cutting corners publicly, and the passion project is well, passion driven so usually also written reasonably well too.

    But the key point really is, as a user you have the option to look at it and make your own judgement, and take measures to protect yourself if you must run it.

    Most closed source projects are vulnerable because of pressure to deliver fast, and nobody will know until it gets exploited. This leads to really bad code that piles up over time. Try to sneak some bullshit into the Linux kernel and there will be dozens of news articles and YouTube videos about Linus’ latest rant at the guilty party. That doesn’t happen in private projects; you get an LGTM because the sprint is ending and sales already sold the feature to a customer for next week.

  • With open source code you get more eyes on it. Issues get fixed quicker.

    With closed source, such as Photoshop, only Adobe can see the code. Maybe there are issues there that could be fixed. Most large companies have a financial interest in having “good enough” security.