As a Java engineer who has worked in web development for several years, I’ve heard countless times that X is good because of SOLID principles or Y is bad because it breaks SOLID principles, and I’ve had to memorize the “good” way to do everything before interviews. The more I dig into the real reason I’m doing something in a particular way, the harder it gets to justify.

One example is creating an interface for every goddamn class I make because of “loose coupling” when in reality none of these classes are ever going to have an alternative implementation.

Also the more I get into languages like Rust, the more these doubts are increasing and leading me to believe that most of it is just dogma that has gone far beyond its initial motivations and goals and is now just a mindless OOP circlejerk.

There are definitely occasions when these principles do make sense, especially in an OOP environment, and they can also make some design patterns really satisfying and easy.

What are your opinions on this?

  • gezero@lemmy.bowyerhub.uk · 1 month ago

    If you are creating interfaces for classes that will never have a second implementation, that sounds suspicious. What kind of classes are you abstracting? Are they classes representing data? I would be against creating interfaces for data classes; I would use records, and interfaces only in rare circumstances. Or are you talking about abstracting classes with logic, as in services/controllers? Are you writing tests for those? Are you mocking external dependencies in your tests? Because mocks can also be considered different implementations of your abstractions.

    Some projects I’ve seen definitely took the SOLID principles and made them SOLID laws… Sometimes it’s an overzealous architect, sometimes it’s a long-lived project with no original devs left… The fact that you are thinking about it at all already puts you ahead of many others…

    SOLID principles are principles for object-oriented programming, so as others have pointed out, leaning more on functional programming might give you a way out.

  • Feyd@programming.dev · 1 month ago

    If it makes the code easier to maintain it’s good. If it doesn’t make the code easier to maintain it is bad.

    Making interfaces for everything, or getters and setters for everything, just in case you change something in the future, makes the code harder to maintain.

    This might make sense for a library, but it doesn’t make sense for application code that you can refactor at will. Even if you do have to change something and it means a refactor that touches a lot, it’ll still be a lot less work than bloating the entire codebase with needless indirections every day.

    • Valmond@lemmy.world · 1 month ago

      I remember the recommendation to use a typedef (or #define 😱) for integers, like INT32.

      In case you want to recompile it on some weird CPU or something, I guess. What a stupid idea. At least where I worked it was dumb; if someone knows any actual benefits I’d gladly hear them!

      • SilverShark@programming.dev · 1 month ago

        We had it because we needed to compile for Windows and Linux on both 32 and 64 bit processors. So we defined all our Int32, Int64, uint32, uint64 and so on. There were a bunch of these definitions within the core header file with #ifndef and such.
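
        Something like this sketch of the pre-C99/C++11 pattern being described (the names and platform checks here are illustrative, not from any particular project):

          /* core_types.h -- hypothetical central header: pick the underlying
             type per platform so Int32/Int64 always have the advertised width. */
          #ifndef CORE_TYPES_H
          #define CORE_TYPES_H

          #if defined(_WIN32) || defined(_WIN64)
            /* MSVC: int is 32-bit and __int64 is 64-bit on both Win32 and Win64 */
            typedef int                Int32;
            typedef unsigned int       UInt32;
            typedef __int64            Int64;
            typedef unsigned __int64   UInt64;
          #else
            /* Linux/Unix: int is 32-bit, long long is 64-bit on common targets */
            typedef int                Int32;
            typedef unsigned int       UInt32;
            typedef long long          Int64;
            typedef unsigned long long UInt64;
          #endif

          #endif /* CORE_TYPES_H */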

        • Valmond@lemmy.world · 1 month ago

          But you can use a 64-bit int on 32-bit Linux, and vice versa. I never understood the benefit of tagging the stuff. You have to go pretty far back in time to find a platform where an int isn’t compiled to a 32-bit signed int anyway. There were also already long long and size_t… why make new ones?

          Readability maybe?

          • SilverShark@programming.dev · 1 month ago

            It was a while ago indeed, and readability does play a big role. Also, it becomes easier to just type it out. Of course auto complete helps, but it’s just easier.

          • Consti@lemmy.world · 1 month ago

            Very often you need to choose a type based on the data it needs to hold. If you know you’ll need to store numbers of a certain size, use an integer type that can actually hold them; don’t make it dependent on a platform definition. Always using int can lead to really insidious bugs where a function works on one platform and not on another due to overflow.

            • Valmond@lemmy.world · 1 month ago

              Show me one.

              I mean, I have worked on 16-bit platforms, but nobody would use that code straight out of the box on some other, incompatible platform; it doesn’t even make sense.

              • Consti@lemmy.world · 1 month ago

                Basically anything low level. When you need a byte, you don’t use an int, you use a uint8_t (reminder that char is actually not defined to be signed or unsigned: “Plain char may be signed or unsigned; this depends on the compiler, the machine in use, and its operating system”). Any time you need to interact with another system, like hardware or networking, it is incredibly important to know how many bits the other side uses, to avoid mismatches.
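
                For example, a sketch of the sort of thing meant here (a made-up wire header, not any real protocol):

                  // Illustrative only: every field has an exact, platform-independent
                  // width, so both sides agree on the layout no matter what "int" is locally.
                  #include <cstdint>

                  #pragma pack(push, 1)          // the wire format has no padding
                  struct PacketHeader {
                      uint8_t  version;          // exactly 1 byte
                      uint8_t  flags;            // exactly 1 byte
                      uint16_t payload_length;   // exactly 2 bytes (mind endianness too)
                      uint32_t sequence_number;  // exactly 4 bytes
                  };
                  #pragma pack(pop)

                  static_assert(sizeof(PacketHeader) == 8, "wire layout must be 8 bytes");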

                As for purely the size of an int, the most famous example is the Ariane 5 launch, where an integer overflow crashed the rocket. OWASP (the Open Worldwide Application Security Project) lists integer overflows as a security concern, though not ranked very highly, since they mostly cause problems when combined with buffer accesses (using user input in some arithmetic operation that may overflow into unexpected ranges).

                • Valmond@lemmy.world · 1 month ago

                  And a byte wasn’t even obliged to have 8 bits.

                  Nice example, but I’d say it’s kind of niche 😁 It reminds me of the underflow in a video game that turned the most peaceful NPC into a warmongering lunatic. But that wouldn’t have been prevented by defines.

      • Hetare King@piefed.social · 1 month ago

        If you’re directly interacting with any sort of binary protocol, e.g. file formats, network protocols, etc., you definitely want your variable types to be unambiguous. For future-proofing, yes, but also because I don’t want to have to go confirm whether I remember correctly that long is the same size as int.

        There’s also clarity of meaning; unsigned long long is a noisy monstrosity, uint64_t conveys what it is much more cleanly. char is great if it’s representing text characters, but if you have a byte array of binary data, using a type alias helps convey that.

        And then there are type aliases that are useful precisely because they have different sizes on different platforms, like size_t.

        I’d say that generally speaking, if it’s not an int or a char, that probably means the exact size of the type is important, in which case it makes sense to convey that using a type alias. It conveys your intentions more clearly and tersely (in a good way), it makes your code more robust when compiled for different platforms, and it’s not actually more work; that extra #include <cstdint> you may need to add pays for itself pretty quickly.
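
        To make the “clarity of meaning” point concrete, a small sketch (byte_t is a made-up alias, purely for illustration):

          #include <cstdint>
          #include <string>
          #include <vector>

          using byte_t = std::uint8_t;              // made-up alias: raw binary data, not text

          std::string         greeting = "hello";   // char: actual text
          std::vector<byte_t> file_contents;        // opaque binary blob
          std::uint64_t       file_size = 0;        // vs. "unsigned long long"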

        • Valmond@lemmy.world · 1 month ago

          So we should not have #defines in the way, right?

          Like INT32 instead of int. I mean, if you don’t know the size, you probably won’t be doing network protocols or reading binary stuff anyway.

          uint64_t is good IMO, a bit long (why the _t?) maybe, but it’s not one of the atrocities I’m talking about where every project had its own defines.

          • Feyd@programming.dev · 1 month ago

            “int” can be different widths on different platforms. If all the compilers you must support have standard definitions for specific widths, then great, use ’em. That hasn’t always been the case, and when it isn’t, you have to roll your own. I’m sure some projects did it where it wasn’t needed, but when you have to do it, you have to do it.

            • Valmond@lemmy.world · 1 month ago

              So show me two compatible systems where int has different sizes.

              This is folklore IMO, or the systems are incompatible anyway.

                • Valmond@lemmy.world · 1 month ago

                  Okay, then give me an example where this matters. If an int doesn’t have the same size, like on a Nintendo DS versus Windows (wildly incompatible platforms), I struggle to find a use case where it would help you out.

              • Corbin@programming.dev · 1 month ago

                RPython, the toolchain which is used to build JIT compilers like PyPy, supports Windows and non-Windows interpretations of standard Python int. This leads to an entire module’s worth of specialized arithmetic. In RPython, the usual approach to handling the size of ints is to immediately stop worrying about it and let the compiler tell you if you got it wrong; an int will have at least seven-ish bits but anything more is platform-specific. This is one of the few systems I’ve used where I have to cast from an int to an int because the compiler can’t prove that the ints are the same size and might need a runtime cast, but it can’t tell me whether it does need the runtime cast.

                Of course, I don’t expect you to accept this example, given what a whiner you’ve been down-thread, but at least you can’t claim that nobody showed you anything.

                • Valmond@lemmy.world · 1 month ago

                  Bravo, you found an example!

                  You’re right, we should start using #define INT32 again…

          • Hetare King@piefed.social · 1 month ago

            The standard type aliases like uint64_t weren’t in the C standard library until C99 and in C++ until C++11, so there are plenty of older code bases that would have had to define their own.

            The use of #define to make type aliases never made sense to me. The earliest versions of C didn’t have typedef, I guess, but that’s like, the 1970s. Anyway, you wouldn’t do it that way in modern C/C++.
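
            Roughly the three flavours, in historical order (INT32 here is a made-up project-local alias, just to illustrate):

              #define INT32 int              // old macro style: plain text substitution,
                                             // ignores scope, invisible to the type system

              typedef int Int32_legacy;      // classic C/C++: a real type alias

              #include <cstdint>             // C99 / C++11 and later: just use the
              using Int32_modern = std::int32_t;   // standard fixed-width aliases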

          • xthexder@l.sw0.com · 1 month ago

            I’ve seen several codebases that have a typedef or using keyword to map uint64_t to uint64 along with the others, but _t seems to be the convention for built-in std type names.

    • termaxima@slrpnk.net · 1 month ago

      Getters and setters are superfluous in most cases, because you do not actually want to hide complexity from your users.

      To use the usual trivial example: if you change your circle’s circumference from a property to a function, I need to know! You just replaced a memory access with some arithmetic; depending on my behaviour as a user, this could be either great or really bad for my performance.
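
      For instance (a hypothetical Circle, just to sketch the difference):

        // Hypothetical example of the trade-off described above.
        constexpr double kPi = 3.14159265358979323846;

        class CircleStored {
            double circumference_;                    // computed once in the constructor
        public:
            explicit CircleStored(double radius)
                : circumference_(2.0 * kPi * radius) {}
            double circumference() const { return circumference_; }   // a memory read
        };

        class CircleComputed {
            double radius_;
        public:
            explicit CircleComputed(double radius) : radius_(radius) {}
            double circumference() const { return 2.0 * kPi * radius_; }   // arithmetic on every call
        };

        // Same call site either way -- the getter hides which cost you are paying:
        // double c = circle.circumference();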

    • NigelFrobisher@aussie.zone · 1 month ago

      True. The open-closed principle is particularly applicable to library code, but it’s a waste of effort much of the time in a consuming application, where you will be modifying the code much more.

    • Mr. Satan@lemmy.zip · 1 month ago

      Yeah, this. Code for the problem you’re solving now; think about the problems of the future.

      Knowing OOP principles and patterns is just a tool. If you’re driving nails, you’re fine with a hammer; if you’re cooking an egg, I doubt a hammer is necessary.

    • ExLisper@lemmy.curiana.net · 1 month ago

      Exactly this. And to know what code is easy to maintain, you have to see how a couple of projects evolve over time. Your perspective on this changes as you gain experience.

  • Corbin@programming.dev · 1 month ago

    Java is bad but object-based message-passing environments are good. Classes are bad, prototypes are also bad, and mixins are unsound. That all said, you’ve not understood SOLID yet! S and O say that just because one class is Turing-complete (with general recursion, calling itself) does not mean that one class is the optimal design; they can be seen as opinions rather than hard rules. L is literally a theorem of any non-shitty type system; the fact that it fails in Java should be seen as a fault of Java. I is merely the idea that a class doesn’t have to implement every interface or be coercible to any type; that is, there can be non-printable, non-callable, non-serializable objects. Finally, D is merely a consequence of objects not being functions; when we want to apply a function f to a value x but both are actually objects, both f.call(x) and x.getCalled(f) open a new stack frame with f and x local, and all of the details are encapsulation details.

    So, 40%, maybe? S really is not that unreasonable on its own; it reminds me of a classic movie moment from “Meet the Parents” about how a suitcase manufacturer may have produced more than one suitcase. We do intend to allocate more than one object in the course of operating the system! But also it perhaps goes too far in encouraging folks to break up objects that are fine as-is. O makes a lot of sense from the perspective that code is sometimes write-once immutable such that a new version of a package can add new classes to a system but cannot change existing classes. Outside of that perspective, it’s not at all helpful, because sometimes it really does make sense to refactor a codebase in order to more efficiently use some improved interface.

  • brian@programming.dev · 1 month ago

    Most things should have an alternate implementation, just in the unit tests. IMO that’s the main justification for most of SOLID.
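
    For example, something like this sketch (a made-up Clock interface), where the “second implementation” only ever exists in the test:

      // Illustrative sketch: the only reason this interface exists is so the
      // test can substitute a deterministic implementation.
      #include <cassert>
      #include <chrono>

      using TimePoint = std::chrono::system_clock::time_point;

      struct Clock {                                  // the abstraction
          virtual ~Clock() = default;
          virtual TimePoint now() const = 0;
      };

      struct SystemClock : Clock {                    // the only production implementation
          TimePoint now() const override { return std::chrono::system_clock::now(); }
      };

      struct FakeClock : Clock {                      // test-only "alternative implementation"
          TimePoint fixed{};
          TimePoint now() const override { return fixed; }
      };

      // Code under test depends on the interface, not on the real clock.
      bool is_expired(const Clock& clock, TimePoint deadline) {
          return clock.now() > deadline;
      }

      int main() {
          FakeClock clock;
          clock.fixed = TimePoint{std::chrono::hours{2}};
          assert(is_expired(clock, TimePoint{std::chrono::hours{1}}));
          assert(!is_expired(clock, TimePoint{std::chrono::hours{3}}));
      }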

    But I’ve also noticed that being explicit about your interfaces does produce better thought-out code. If you program to an interface and limit your assumptions about the implementation, you end up with code that’s easier to reason about.

    The other part is that consistency is the most important thing in a large codebase. Some of these rules are followed too closely in places, but if I’m working my way through an unfamiliar area of the code, I can assume it is structured according to the corporate conventions.

    I’m not really an OOP guy, but in an OOP language I write pretty standard SOLID-style code. In Rust a lot of idiomatic code does follow SOLID, but the patterns are different. Writing traits for everything instead of interfaces isn’t any different, and it’s pretty common.

  • FizzyOrange@programming.dev · 1 month ago

    One example is creating an interface for every goddamn class I make because of “loose coupling” when in reality none of these classes are ever going to have an alternative implementation.

    Sounds like you’ve learned the answer!

    Virtually all programming principles like that should never be applied blindly in every situation. You basically need to develop taste through experience… and caring about code quality (lots of people have experience but don’t give a shit what they’re excreting).

    Stuff like DRY and SOLID are guidelines, not rules.

      • FizzyOrange@programming.dev · 1 month ago

        Even KISS. Sometimes things just have to be complex. Of course you should aim for simplicity where possible, but I’ve seen people fight against better and more capable options just because they weren’t as simple and thus violated the KISS “rule”.

  • JackbyDev@programming.dev · 1 month ago

    I’m making a separate comment for this, but people saying “Liskov substitution principle” instead of “behavioral subtyping” generally seem more interested in finding a set of rules to follow than in exploring what makes those rules useful. (Context: the L in SOLID is “Liskov substitution principle.”) Barbara Liskov herself has said that the proper name for it would be behavioral subtyping.

    In an interview in 2016, Liskov herself explains that what she presented in her keynote address was an “informal rule”, that Jeannette Wing later proposed that they “try to figure out precisely what this means”, which led to their joint publication [A behavioral notion of subtyping], and indeed that “technically, it’s called behavioral subtyping”.[5] During the interview, she does not use substitution terminology to discuss the concepts.

    You can watch the video interview here. It’s less than five minutes. https://youtu.be/-Z-17h3jG0A
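
    The textbook illustration of why it’s about behaviour rather than signatures is the usual Rectangle/Square sketch (not from the interview, just the standard example):

      // The well-worn Rectangle/Square illustration: Square satisfies the
      // *signatures* of Rectangle but breaks the behaviour callers rely on,
      // which is exactly what behavioral subtyping is about.
      #include <cassert>

      class Rectangle {
      public:
          virtual ~Rectangle() = default;
          virtual void set_width(int w)  { width_ = w; }
          virtual void set_height(int h) { height_ = h; }
          int area() const { return width_ * height_; }
      protected:
          int width_ = 0;
          int height_ = 0;
      };

      class Square : public Rectangle {
      public:
          void set_width(int w) override  { width_ = height_ = w; }   // keeps the
          void set_height(int h) override { width_ = height_ = h; }   // square a square
      };

      // Written against Rectangle's implied contract: setting the width
      // does not change the height.
      void stretch(Rectangle& r) {
          r.set_height(2);
          r.set_width(5);
          assert(r.area() == 10);   // holds for Rectangle, fires for Square (area == 25)
      }

      int main() {
          Rectangle r;
          stretch(r);   // fine
          Square s;
          stretch(s);   // Square passes the type check but breaks the behavioural contract
      }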

  • aev_software@programming.dev · 1 month ago

    The main lie about these principles is that they would lead to less maintenance work.

    But go ahead and change your database model. Add a field. Then add support for it to your program’s code base. Let’s see how many parts of your well-architected, enterprise-grade software solution you need to change.

    • justOnePersistentKbinPlease@fedia.io · 1 month ago

      Sure, it might be a lot of places, it might not (a well-designed microservice architecture says hi).

      What proper OOP design does is make the required changes predictable and easy to document, which in turn can make a many-step process faster.

      • aev_software@programming.dev · 1 month ago

        I guess it’s possible I’ve been doing OOP wrong for the past 30 years, knowing someone like you has experienced code bases that uphold that promise.

        • calliope@retrolemmy.com · 1 month ago

          Right, knowing when to apply the principles is the thing that comes with experience.

          If you’ve literally never seen the benefits of abstraction doing OOP for thirty years, I’m not sure what to tell you. Maybe you’ve just been implementing boilerplate on short-term projects.

          I’ve definitely seen lots of benefits from some of the SOLID principles over the same time period, but I was using what I needed when I needed it, not implementing enterprise boilerplate blindly.

          I admit this is harder with Java because the “EE” comes with it, but no one is forcing you to make sure your DataAccessObject inherits from a class that follows a defined interface.

      • Log in | Sign up@lemmy.world · 1 month ago

        I have a hard time believing that microservices can possibly be a well designed architecture.

        We take a hard problem like architecture and communication and add to it networking, latency, potential calling protocol inconsistency, encoding and decoding (with more potential inconsistency), race conditions, nondeterminacy and more.

        And what do I get in return? json everywhere? Subteams that don’t feel the need to talk to each other? No one ever thinks about architecture ever again?

        I don’t see the appeal.

  • Log in | Sign up@lemmy.world · 1 month ago

    The promise of OOP is that if you thread your spaghetti through your meatballs and baste them in bolognese sauce before you cook them, it’s all much simpler and nothing ever gets tangled up, so that when you come to reheat the frozen dish a month later it’s very easy to swap out a meatball for a different one.

    It absolutely does not even remotely live up to its promise, and if it did, no one in their right mind would be recommending an abstract singleton factory, and there wouldn’t be quite so many shelves of books about how to do OOP well.

  • Beej Jorgensen@lemmy.sdf.org · 1 month ago

    I’m a firm believer in “Bruce Lee programming”. Your approach needs to be flexible and adaptable. Sometimes SOLID is right, and sometimes it’s not.

    “Adapt what is useful, reject what is useless, and add what is specifically your own.”

    “Notice that the stiffest tree is most easily cracked, while the bamboo or willow survives by bending with the wind.”

    And some languages, like Rust, don’t fully conform to a strict OO heritage like Java does.

    "Be like water making its way through cracks. Do not be assertive, but adjust to the object, and you shall find a way around or through it. If nothing within you stays rigid, outward things will disclose themselves.

    “Empty your mind, be formless. Shapeless, like water. If you put water into a cup, it becomes the cup. You put water into a bottle and it becomes the bottle. You put it in a teapot, it becomes the teapot. Now, water can flow or it can crash. Be water, my friend.”

    • Frezik@lemmy.blahaj.zone · 1 month ago

      It’s been interesting to watch how the industry treats OOP over time. In the 90s, JavaScript was heavily criticized for not being “real” OOP. There were endless flamewars about it. If you didn’t have the sorts of explicit support that C++ provided, like a class keyword, you weren’t OOP, and that was bad.

      Now we get languages like Rust, which seems completely uninterested in providing explicit OOP support at all. You can piece together support on your own if you want, and that’s all anyone cares about.

      JavaScript eventually did get its class keyword, but now we have much better reasons to bitch about the language.

      • Brosplosion@lemmy.zip · 1 month ago

        It’s funny because in C++, inheritance is almost frowned upon now due to the performance and complexity hits.

        • wicked@programming.dev · 1 month ago

          It’s been frowned upon for decades.

          That leads us to our second principle of object-oriented design: Favor object composition over class inheritance

          • Design Patterns - Elements of Reusable Object-Oriented Software (1994)
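
          A tiny sketch of what that advice looks like in practice (made-up classes):

            // Made-up example of "favor composition over inheritance".
            #include <iostream>

            class Engine {
            public:
                void start() { std::cout << "engine started\n"; }
            };

            // Inheritance: Car "is an" Engine and drags in its entire interface.
            class CarByInheritance : public Engine {};

            // Composition: Car "has an" Engine and exposes only what it needs;
            // swapping or mocking the engine later stays a local change.
            class Car {
                Engine engine_;
            public:
                void drive() {
                    engine_.start();
                    std::cout << "driving\n";
                }
            };

            int main() {
                Car car;
                car.drive();
            }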

  • termaxima@slrpnk.net · 1 month ago

    99% of code is too complicated for what it does because of principles like SOLID, and because of OOP.

    Algorithms can be complex, but the way a system is put together should never be complicated. Computers are incredibly stupid, and will always perform better on linear code that batches similar operations together, which is not so coincidentally also what we understand best.
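
    A rough sketch of the kind of “linear, batched” code meant here (illustrative only, not a benchmark):

      #include <vector>

      struct Particle { float x, y, vx, vy; };

      // One contiguous array, one tight loop, predictable memory access --
      // as opposed to walking a graph of heap-allocated objects and calling
      // a virtual update() on each one.
      void step_all(std::vector<Particle>& particles, float dt) {
          for (Particle& p : particles) {
              p.x += p.vx * dt;
              p.y += p.vy * dt;
          }
      }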

    Our main issue in this industry is not premature optimisation anymore, but premature and excessive abstraction.

    • douglasg14b@lemmy.world · 1 month ago

      This is a crazy misattribution.

      99% of code is too complicated because inexperienced programmers make it too complicated, not because of the principles they mislabel and misunderstand.

      Just because I forcefully and incorrectly apply a particular pattern to a problem it is not suited to solve doesn’t mean the pattern is the problem. In this case I, the developer, am the problem.

      Everything has nuance, and you should only use in your project the things that make sense for the problems you actually face.

      Crowbarring a solution for a problem a project isn’t dealing with into that project is going to lead to pain. Why this isn’t seen as a predictable outcome baffles me, and why attribution for the problem goes to the pattern that was misapplied baffles me even further.

      • termaxima@slrpnk.net · 1 month ago

        No. These principles are supposedly designed to help those inexperienced programmers, but in my experience, they tend to do the opposite.

        The rules are too complicated, and of dubious usefulness at best. Inexperienced programmers really need to be taught to keep things radically simple, and I don’t mean “single responsibility” or “short functions”.

        I mean “stop trying to be clever”.

  • JackbyDev@programming.dev · 1 month ago

    YAGNI (“you aren’t/ain’t gonna need it”) is my response to making an interface for every single class. If and when we need one, we can extract an interface out. An exception is when I’m writing code that another team will consume directly (as opposed to through a web API), but like 99% of the code I write is only ever used by my team and doesn’t have any downstream dependents.

  • ravachol@lemmy.world · 1 month ago

    My opinion is that you are right. I switched to C from an OOP and C# background, and it has made me a happier person.