When I was young and starting out with computers, programming, BBSes, and later the early internet, technology was something that expanded my mind, helped me research, learn new skills, meet people, and have interesting conversations. It was something decentralized that put power into the hands of the little guy, who could start his own business venture with his PC or expand his skillset.

Where we are now with AI, the opposite seems to be happening. We are asking AI to do things for us rather than learning how to do things ourselves. We are losing our research skills. Many people are talking to AIs about their problems instead of to other people. And AI will take away our jobs and centralize all power in a handful of billionaire sociopaths with robot armies to carry out whatever nefarious deeds they want to do.

I hope we somehow make it through this part of history with some semblance of freedom and autonomy intact, but I’m having a hard time seeing how.

  • zombiebot@piefed.social

    Librarian here, can confirm.

    I started my Master’s in Library and Information Science in 2010. We were told not to worry about the internet making us obsolete because we would be needed to teach information literacy.

    Information literacy turned out to be something people didn’t want. They wanted to be told what to think, not taught skills to think for themselves.

    It’s been the single greatest and most expensive disappointment of my life.

    • jimmy90@lemmy.world

      if people don’t want to use computers to expand their minds and empower themselves and others, then obviously they won’t get those benefits

      you can still use computers to do those things

      • TubularTittyFrog@lemmy.world

        classes in philosophy, literature, politics, and digital media, typically.

        you know, those evil humanities that are destroying society… because they don’t produce ‘value’.

    • Strider@lemmy.world

      They wanted to be told what to think, not taught skills to think for themselves.

      This must be one of the wisest statements I’ve ever read on the internet.

  • Apytele@sh.itjust.works

    If AI can even half-ass your job, you barely had one to begin with. All of us healthcare workers and the tradies are still making a half-decent wage for real work, just like we always have. And the food service and sanitation workers still aren’t doing the absolute best, but they’re not hurting for work either. I’m not going to tell you I like the way my work is valued under capitalism, but at least I’m tangibly benefiting other humans.

    • realitista@lemmus.orgOP

      I don’t think it’s fair to say that just because you were a commercial graphic designer or translator or copywriter, you were doing bullshit work that was barely worth being called work.

      Yes, healthcare is a very commendable line of work, no doubt, but we will see radiologists out of work fairly soon IMO, as well as anyone who interprets lab results, and very likely those who make diagnoses of all types. These are all things that AI will likely be doing better if they aren’t already.

      Physical care will take longer and won’t be replaced until we have AI robots, but the gains there are happening fast too. We may only have another decade or so until we see a lot of that stuff being automated. It’s really hard to tell how fast this will all happen. Things do tend to happen slower than the hype around them, but the progress that’s happening every year is pretty staggering if you are really tracking it. I’d love to think that my job which requires mostly creative ways of dealing with people and negotiation is safe for some time, but I’m really doubting that I can make it the next 12 years I need to until retirement without some disruption.

        • realitista@lemmus.orgOP

          Maybe you don’t, but I have a father in assisted living and know for a fact that there are an awful lot of nursing jobs that don’t look particularly different from this. AI will start with the hardest diagnostic tasks first, and at some point start doing the easiest physical ones. Then it will gradually eat away at the stuff in the middle. This is one of the areas where non-human labor is most needed, so it will be one of the most heavily focused on.

  • ji59@hilariouschaos.com

    I have to disagree. The only reason computers expanded your mind is that you were curious about them. And that is still true even with AI. For example, people don’t have to learn to solve derivatives or complex equations; Wolfram Alpha can do that for them. Also, learning grammar isn’t that important with spell-checkers. Or instead of learning foreign languages, you can just use automatic translators. Just like computers or the internet, AI makes things easier for people who don’t want to learn. But it also makes learning easier. Instead of going through blog posts, you have the information summarized in one place (although maybe incorrect). And you can even ask the AI questions to better understand or debate the topic, instantly and without being ridiculed by other people for stupid questions.

    And just to annoy some people: I am a programmer, but I like the theory much more than the coding. So, for example, I refuse to memorize the whole numpy library. But with AI, I don’t have to; it just recommends the right obscure function that does the same thing as my own ugly code. Of course I check the code and understand every line so I can do it myself next time.
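
    To make that concrete, here is a minimal, made-up illustration (the specific function obviously depends on the task): the hand-rolled loop I would write myself versus the numpy one-liner an assistant typically points me to. np.bincount is just one example of such an obscure function.

    ```python
    import numpy as np

    # Hypothetical example: summing values into groups by index.
    # The "ugly" hand-rolled version I would write myself:
    def group_sums_loop(indices, values, n_groups):
        totals = [0.0] * n_groups
        for i, v in zip(indices, values):
            totals[i] += v
        return totals

    # The obscure one-liner an assistant might point me to instead:
    # np.bincount does the same accumulation in a single call.
    def group_sums_numpy(indices, values, n_groups):
        return np.bincount(indices, weights=values, minlength=n_groups)

    indices = np.array([0, 2, 1, 2, 0])
    values = np.array([1.0, 2.0, 3.0, 4.0, 0.5])
    print(group_sums_loop(indices, values, 3))   # [1.5, 3.0, 6.0]
    print(group_sums_numpy(indices, values, 3))  # [1.5 3. 6.]
    ```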

  • tomiant@piefed.social

    Any good thing will inevitably be corrupted by capitalism, because that is what capitalism does. It is a cancer, and it will consume everything and us all in the process.

    I don’t know if it was in Stross’s “Accelerando” that humanity told an AI to solve some complex problem at any cost, and the AI promptly turned all the matter in the solar system into a supercomputer capable of solving it.

    That’s capitalism in a nutshell: “do profit” is the only imperative, and it will destroy everything, just like a cancer is predicated upon “do growth”, forever, at any cost, regardless of whether the host organism dies.

    • realitista@lemmus.orgOP

      Yes, I remember the day I quit the football team and started hanging out with the nerds. I lost a lot of friends and coolness points, but I was so much happier sitting in the library for lunch playing with computers.

      • YeahIgotskills2@lemmy.world

        Don’t get me wrong - I spent many a lunchtime in the library and the computer lab. Loved it. But by 16 I had to repress it and get into drinking and music (which, honestly, wasn’t hard), just to fit in and meet girls.

        The taboo of IT stayed with me, so I never openly discussed my interest in it.

        Happily, online life has been normalised and teens and adults game all the time without it being seen as odd.

        Ironically, despite being into 16-bit games in my teens, I never really allowed myself to get into gaming in the succeeding years.

        I regret that now, as I reckon I missed out on a Golden age of gaming that I would have enjoyed had I just been born a decade or so later and been less uptight about what people think.

        • realitista@lemmus.orgOP

          Yeah, I also started partying in my teens and met lots of girls… and also kept my IT hobby mostly to myself. But it gave me a great career, and as you say, it’s fully normalized now, so there’s no need to hide it too much, though outside of forums like Lemmy there aren’t a huge number of 50-year-olds who are into gaming and home automation like I am.

  • _cnt0@sh.itjust.works

    With AI, now it does the thinking for you […]

    No, it doesn’t. It’s just mimicry. Autocomplete on steroids.

    • realitista@lemmus.orgOP

      This was true last year. But they are cranking along on the ARC-AGI benchmarks, which are designed specifically to test the kinds of things that cannot be done by just regurgitating training data.
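
      For context, ARC-AGI tasks are small grid-transformation puzzles: you get a few input/output demonstration pairs and have to infer the rule and apply it to a fresh input. A made-up toy task in that spirit (not an actual puzzle from the benchmark) looks roughly like this:

      ```python
      # A made-up puzzle in the spirit of ARC-AGI (not a real task from the benchmark):
      # given a few demonstration pairs, infer the transformation and apply it to a new grid.
      # Here the hidden rule is simply "mirror the grid left-to-right".

      demonstrations = [
          ([[1, 0],
            [2, 3]],
           [[0, 1],
            [3, 2]]),
          ([[5, 5, 0],
            [0, 7, 0]],
           [[0, 5, 5],
            [0, 7, 0]]),
      ]

      test_input = [[4, 0, 9],
                    [0, 0, 9]]

      def mirror_left_right(grid):
          """The rule a solver has to discover from the demonstrations alone."""
          return [list(reversed(row)) for row in grid]

      # Sanity-check the rule against the demonstrations, then apply it to the test input.
      assert all(mirror_left_right(inp) == out for inp, out in demonstrations)
      print(mirror_left_right(test_input))  # [[9, 0, 4], [9, 0, 0]]
      ```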

      On GPT-3 I was getting a lot of hallucinations and wrong answers. On the current version of Gemini, I really haven’t been able to detect any errors in things I’ve asked it. The current models are doing math correctly, researching things well, and putting thoughts together correctly. Even photos that I couldn’t get old models to generate are now coming back pretty much exactly as I ask.

      I was sort of holding out hope that LLMs would peak somewhere just below being really useful. But with RAG and agentic approaches, it seems that they will sidestep the vast majority of the problems that LLMs have on their own and be able to put together something that is better than even very good humans at most tasks.

      I hope I’m wrong, but it’s getting pretty hard to keep banking on that old narrative that they are just fancy autocomplete that can’t think.

          • pinball_wizard@lemmy.zip

            was dotcom this annoying too?

            Surprisingly, it was not this annoying.

            It was very annoying, but at least there was an end in sight, and some of it was useful.

            We all knew that http://www.only-socks-and-only-for-cats.com/ was going away, but eBay was still pretty great.

            In contrast, we’re all standing around today looking at many times the world’s GDP being bet on a pretty good autocomplete algorithm waking up and becoming fully sentient.

            It feels like a different level of irrational.

          • hitmyspot@aussie.zone

            The dot com bubble was optimistic; the AI bubble is pessimistic. People thought their lives would improve due to improved communication and efficiency. The internet was seen as a positive thing. The dot com bubble was more about monetizing it, but that wasn’t the zeitgeist. With AI, people don’t see much benefit and are aware its purpose is to take their jobs.

            With the dot com bubble, it was mainly mom-and-pop investors who were worst off, though many companies died too. With the AI bubble, it seems like it’s the companies that will do worst when it crashes. Obviously it affects everyone, but this skews more to the 1%. So hopefully it’s a lesson on greed. Unlikely, though.

      • Cevilia (they/she/…)@lemmy.blahaj.zone

        I’m pleased to inform you that you are wrong.

        A large language model works by predicting the statistically-likely next token in a string of tokens, and repeating until it’s statistically-likely that its response has finished.

        You can think of a token as a word but in reality tokens can be individual characters, parts of words, whole words, or multiple words in sequence.

        The only addition these “agentic” models have is special purpose tokens. One that means “launch program”, for example.

        That’s literally how it works.
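
        As a toy sketch of that loop in Python (a lookup table stands in for the neural network here; none of this is a real model’s API, just the shape of the procedure described above):

        ```python
        import random

        # A toy "language model": a lookup table of next-token probabilities.
        # A real model computes these probabilities with a neural network, conditioned
        # on the whole token sequence so far, over a vocabulary of tens of thousands of
        # tokens, but the generation loop has the same shape.
        NEXT_TOKEN_PROBS = {
            "<start>": {"the": 0.6, "a": 0.4},
            "the":     {"cat": 0.5, "dog": 0.5},
            "a":       {"cat": 0.5, "dog": 0.5},
            "cat":     {"sat": 0.7, "<end>": 0.3},
            "dog":     {"sat": 0.7, "<end>": 0.3},
            "sat":     {"<end>": 1.0},
        }

        def generate(max_tokens=10):
            tokens = ["<start>"]
            for _ in range(max_tokens):
                probs = NEXT_TOKEN_PROBS[tokens[-1]]              # distribution over possible next tokens
                choices, weights = zip(*probs.items())
                next_token = random.choices(choices, weights)[0]  # pick a statistically likely next token
                if next_token == "<end>":                         # likely that the response has finished
                    break
                # "Agentic" variants just add more special-purpose tokens here (e.g. one
                # meaning "launch program") that the surrounding software watches for.
                tokens.append(next_token)
            return " ".join(tokens[1:])

        print(generate())  # e.g. "the cat sat"
        ```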

        AI. Cannot. Think.

        • realitista@lemmus.orgOP

          …And what about non-LLM models like diffusion models, VL-JEPA, SSMs, VLAs, and SNNs? Just because you are ignorant of what’s happening in the industry and are repeating a narrative that worked two years ago doesn’t make it true.

          And even with LLMs: even if they aren’t “thinking” but produce results as good as or better than real human “thinking” in major domains, does it even matter? The fact is that there will be many types of models, working in very different ways, and together they will be beating humans at tasks that are uniquely human.

          Go learn about ARC-AGI and see the progress being made there. Yes, it will take a few more iterations of the benchmark to really challenge humans at the most human tasks, but at the rate they are going that’s only a few years.

          Or just stay ignorant and keep repeating your little mantra so that you feel okay. It won’t change what actually happens.

          • lad@programming.dev

            Yeah those also can’t think, and it will not change soon

            The real problem, though, is not whether an LLM can think or not; it’s that people will interact with it as if it can, and will let it do the decision-making even if it’s not far from throwing dice.

            • realitista@lemmus.orgOP

              We don’t even know what “thinking” really is, so that is just semantics. If it performs as well as or better than humans at certain tasks, it really doesn’t matter whether it’s “thinking” or not.

              I don’t think people primarily want to use it for decision making anyway. For me it just turbocharges research: it compiles stuff quickly from many sources, writes code for small modules quite well, generates images for presentations, does more complex data munging from spreadsheets, and even saved me a bunch of time by taking a 50-page handwritten ledger and near-perfectly converting it to Excel…

              None of that requires decision making, but it saves a bunch of time. Honestly, I’ve never asked it to make a decision, so I have no idea how it would perform… I suspect it would describe the pros and cons more than actually try to decide something.

    • Xella@lemmy.world

      My father is convinced that humans and dinosaurs coexisted and told me that AI proved that to him. So… people do let it think for them.

  • IWW4@lemmy.zip

    Bud… they said the same thing about computers when I was a kid in the 70s.

      • IWW4@lemmy.zip

        I was certainly there, and I do… this is from a Google search:

        Key Themes and Examples from the Era

        Concerns about automation and job displacement by computers were widely documented, particularly as computer technology became smaller, cheaper, and more integrated into various industries, from manufacturing floors to office settings.

        • Manufacturing and “Blue-Collar” Jobs: The introduction of computer numerical control (CNC) machinery led to a 24% drop in employment for high school dropouts in the metal manufacturing industry, fueling concerns about job security for skilled factory workers in the “Rust Belt”.

        • Office and “White-Collar” Jobs: White-collar workers also felt unease. Innovations like the automated teller machine (ATM) threatened bank tellers, while photocopiers were viewed with suspicion by some in publishing. The transition to computers on every desk in the late 70s and early 80s initially led to the firing of secretarial pools, forcing others (often men) to learn typing and computer skills.

        • Media Coverage and Public Discourse: The topic was covered by major publications.

          • In 1965, Time Magazine ran a cover story on “the computer in society,” which included a prediction of shorter workweeks due to automation.
          • In the UK, Prime Minister James Callaghan requested a think tank to investigate the potential impact of new technologies on employment.
          • The term “job killer computer” was a popular slogan expressing the fear of technological unemployment.
            • chicken@lemmy.dbzer0.com

              I overall like AI, but it’s not great for making this type of argument, because it doesn’t offer anyone anything they can really use to update their beliefs about what’s true. Any of the factual claims there could be hallucinated, and most are only tangentially relevant to the question of how strong the parallels are between attitudes towards computers 50 years ago and attitudes towards AI now. If someone wants to seriously consider the question, it isn’t useful.

              A better way to do it is to use it like a search engine to find relevant, citeable information and then make your own case for its relevance. Or maybe in this case some personal anecdotes would work pretty well; you’re claiming personal experience as your main source here, and I kind of wanted to hear more about it, having not been there.

        • realitista@lemmy.today

          Well, sure, every new technology replaces jobs to some extent, but that wasn’t my primary thesis.

          My primary thesis is that it is disempowering us, and centralizing power in a handful of billionaires. Personal computers in those days were empowering to the individual, whereas AI is empowering only for a handful of billionaires and disempowering for most other people.

          I don’t remember anyone complaining back then that personal computers were taking their power and autonomy away and giving it to billionaires.

        • Hackworth@piefed.ca

          This discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality. - Plato on the invention of writing in The Phaedrus

          Every notable invention associated with language (and communication in general) has elicited similar reactions. And I don’t think Plato is wholly wrong, here. With each level of abstraction from the oral tradition, the social landscape of meaning is further externalized. That doesn’t mean the personal landscape of meaning must be. AI only does the thinking for you if that’s what you use it for. But I do fear that that’s exactly what it will largely be used for. These technologies have been coming fast since radio, and it doesn’t seem like society has the time to adapt to one before the next.

          There’s a relevant Nature article that touches on some/most of this.

          • aesthelete@lemmy.world

            I see these thought-terminating clichés everywhere, and nowhere do their posters pause a moment to consider the specifics of the actual technology involved. The people forewarning about this stuff were correct about, for instance, social media, but who cares, because Plato wasn’t a fan of writing, we rode horses before we drove cars, or the term Luddite exists… etc., etc.

            • Hackworth@piefed.ca

              I talked about the way in which Plato’s concerns were valid and expressed similar fears about misuse. The linked article is about how to approach the specific technology.

              • aesthelete@lemmy.world

                You didn’t say his concerns were valid. You said you thought he was not “wholly wrong”. Regardless, Plato being a crank about writing proves only that cranks existed before writing. It does nothing to help you interrogate the problems mentioned, nor to set you down the path to interrogating them (which is why I categorized it as a thought-terminating cliché).

                Your referenced article is basically a long-form version of your post, which has a perceivable bias toward the viewpoint that every newly-introduced technology can or will inevitably result in “progress” for humanity as a whole regardless of the methods of implementation or the incentives in the technology itself.

                Far from being an instance of skub (https://pbfcomics.com/comics/skub/) – an agnostic technology or inanimate object that “two sides” get emotionally charged about, which is what trumpeting this perspective (perhaps unknowingly) implies – LLMs (and their “agentic” offspring) are both deliberately and unwittingly programmed to be biased. There are real concerns about this particular set of technologies that posting a quote from an ancient tome does not dismiss.

                • Hackworth@piefed.ca

                  LLMs are both deliberately and unwittingly programmed to be biased.

                  I mean, it sounds like you’re mirroring the paper’s sentiments too. A big part of Clark’s point is that interactions between humans and generative AI need to take into account the biases of the human and the AI.

                  The lesson is that it is the detailed shape of each specific human-AI coalition or interaction that matters. The social and technological factors that determine better or worse outcomes in this regard are not yet fully understood, and should be a major focus of new work in the field of human-AI interaction. […] We now need to become experts at estimating the likely reliability of a response given both the subject matter and our level of skill at orchestrating a series of prompts. We must also learn to adjust our levels of trust

                  And Clark is not really calling Plato a crank, any more than I am. That’s not the point of using the quote.

                  And yet, perhaps there was an element of truth even in the worries raised in the Phaedrus. […] Empirical studies have shown that the use of online search can lead people to judge that they know more ‘in the biological brain’ than they actually do, and can make people over-estimate how well they would perform under technologically unaided quiz conditions.

                  I don’t think anyone is claiming that new technology necessarily leads to progress that is good for humanity. It requires a great deal of honest effort for society to learn how to use a new technology wisely, every time.

    • realitista@lemmus.orgOP

      I’ve been using Linux steadily for the last 30 years, and yes, it’s still great. But it doesn’t really fill the niche that AI does.

  • chunkystyles@sopuli.xyz

    AI isn’t the only thing you can use a computer for now. If you ignore AI and corporate software, there are loads of mind-expanding activities in computing.

    Take a look at what you can self host with commodity hardware (barring the insane RAM prices right now).

    • realitista@lemmus.orgOP

      I do lots of self-hosting. But the issue is not what I will do, but what the world will do, and what we will be forced to do by our employers and by the pressure to work at an efficiency only possible with AI doing a lot of the work.

  • whotookkarl@lemmy.dbzer0.com

    Computers can and still do all that; you just need some mental discipline to avoid the cognitive equivalent of fast food being forced into your attention via AI slop and social media demagogues on corporate-owned messaging systems.

  • barryamelton@lemmy.world

    Unless you were a hard GNU fan when you were a kid, it was the same process of giving power to billionaires. Just that now it sits on 50 years of wins for the billionaires’ side. So it’s closer to the endgame.

    • realitista@lemmus.orgOP

      I’ve been a GNU fan since 1995. And yes, while buying software did make some billionaires, I never felt like it was taking away my abilities or autonomy or freedom until now. Back then I felt like it was giving me more of those things.

        • realitista@lemmus.orgOP

          I don’t know. Looking back, I don’t think I gave up my abilities or allowed billionaires to replace me by using tech until LLMs came along.

  • Clent@lemmy.dbzer0.com

    The LLM is absolutely not doing any thinking for you. It can, at best, surface someone else’s thinking based on a prompt.

    Anyone that confuses what these things do with thinking is on a path towards psychosis.

    Every 4 hours spent talking to one of these things is indistinguishable from talking to oneself for 40 hours. It amplifies one’s inner thoughts in ways that previously only a schizophrenic was able to enjoy.

    • realitista@lemmus.orgOP

      It absolutely can replace hours of research or programming or drawing with a quick prompt. It does this for me often, and as of the latest Gemini it’s pretty much always right, too.

      • Clent@lemmy.dbzer0.com

        None of that is it doing the thinking for you.

        LLMs can be used as a research tool, but they require a human to apply critical thinking to the output to be useful.

        • realitista@lemmus.orgOP

          It definitely replaced a lot of thinking with vibe coding. And research also requires thinking. Maybe not super intense thought, but it’s thought all the same. Artists would also be pretty annoyed to hear that they are doing a brainless activity.

          • Clent@lemmy.dbzer0.com

            Literally not thinking.

            Artists would also be pretty annoyed to hear that they are doing a brainless activity.

            I see you lack critical thinking skills so I understand the confusion. Unfortunately, I can’t fix stupid.

              • Clent@lemmy.dbzer0.com

                You failed to prove your assertions.

                Ask an LLM; it will tell you it’s not capable of thinking, only of approximating thinking.

    • realitista@lemmus.orgOP

      It’s definitely taking some jobs. Not a huge amount yet, but it’s unfortunately still getting better at a pretty good clip.

        • realitista@lemmus.orgOP

          Graphic artists, translators, and copywriters are losing jobs in droves. It’s expanding. I sell contact center software and it’s just kicking off in my industry, but it’s picking up.

          • AmbiguousProps@lemmy.today

            Yeah, I can see it happening there, especially for graphic artists (though actual graphic design is much better than anything a model can currently spit out). Translation is surprising to me because, in my experience, LLMs are actually kind of bad at translation, especially at sounding natural in the local dialect. So I might consider that one a case of dumb bosses who don’t know any better.

            I’m a DevOps engineer, and dumb bosses are absolutely firing people in my industry. However, our products have suffered the consequences, and they continue to get worse and less maintainable.

            • realitista@lemmus.orgOP

              As someone who uses machine translation on a daily basis, I’ve watched it go from barely usable to as good as human translation for most tasks. It’s really uncommon that I find issues with it any more. And even if there is one issue per 1,000 words or whatever, you can just have a human proofread it instead of translating the whole thing, which will reduce your headcount by 90%. But for most things, I think no one calls translators any more; they just go to Google Translate. Translators now only do realtime voice translation, not documents, which used to be most of their work.

              These things creep up on you. They aren’t good at first, and you get comfortable with the idea that they don’t work that well; then over time they start working as well as or better than humans, and suddenly there’s really no reason to employ someone for it.

    • solomonschuler@lemmy.zip

      No, it’s part of the companies’ business strategy. These tech companies fire an unprecedented number of employees (primarily from the mass hiring during 2020), make a post saying they fired those employees because of AI improvements, see their stock price rise (ultimately inflating it and creating an economic bubble), and rinse and repeat with the next wave of potential hires who are sucking their employer’s dick a little too hard.

      It’s unethical, it violates any and all job security, and I don’t want to be a part of that toxic workplace. It’s ironic that I’m saying this, because a few years ago, if I’d gotten a job at Google, I would have said “fuck yeah, motherfucker, count me in”, and now I just don’t want to work for them. There are far better companies doing interesting and valuable work to benefit society than these hipster douchebags.