• tetris11@feddit.uk
    3 days ago

    I’m hoping a kind of objective truth emerges from all this: that good AIs simply don’t function well when trained on incomplete data sources (e.g. the wealthy training AI on hand-picked, bootlicking data to push their malicious narratives).

    I’m hoping that AI can only advance once it’s trained on the totality of human discourse, so that it ultimately sees the wealthy for what they actually are (greedy, narcissistic, sociopathic hoarders) and acts to better humanity as a whole rather than a select few.

    TLDR: I hope the alignment problem can’t be fixed in the way they want.