• blady_blah@lemmy.world · 3 days ago

    I’m going to go against the grain here and agree with him. If you look at this as a new technology, like robotics or computers, then it will cause disruption in the workforce as the people who used to do those tasks are replaced by a technology solution. That’s how the tech CEOs are looking at it: a disruptive technology that will either replace people in the workforce (tech support replaced by AI) or make people more efficient (one programmer instead of a team).

    I honestly don’t think he’s wrong. But just like the two technologies I mentioned above, there will be a limit to what AI can do: it will find its disruptive niche and, beyond that, stop being cost effective. Back in the ’50s, and again in the ’80s, computers and robotics were going to drive us all out of work… but lo and behold, we all still have jobs.

    The real issue isn’t AI, but how it will allow the few to capture even more wealth. AI is just another technological step; the ultra-wealthy are the crime.

    • tetris11@feddit.uk · 3 days ago

      I’m hoping for a kind of objective truth to emerge from all this, where good AIs simply don’t function well when trained on incomplete data sources (i.e. the wealthy training AI on hand-picked, bootlicking data to push their malicious narratives).

      I’m hoping that AI can only advance once it’s trained on the totality of human discourse, so that it ultimately sees the wealthy for what they actually are (greedy, narcissistic, sociopathic hoarders) and acts to better humanity as a whole instead of serving specific people.

      TL;DR: I hope the alignment problem can’t be solved in the way they want.