when prompted with thousands of moral dilemmas, would save the lives of palestinians over israelis
Yeah but…
You can’t have it both ways: when AI tells people to eat glue, everybody sneers and says AI talks shit.
But when the answer suits you, then AI becomes proof that humans are terrible.
Which is it?
Considering how often AI haters claim AI is wrong - and AI sure is dumb most of the time - the above statement works against the point you’re trying to make with it.
AI siding with the Palestinians is not great for the Palestinians, because you can reasonably argue that AI got this one wrong like it gets most everything wrong.
Yeah, I think the entire friend-or-foe mentality, or things like “the enemy of your enemy is your friend,” won’t get you anywhere with this one. It’s more a descriptive study: what kind of bias the models picked up in the various stages of training and tuning. It’s not truth, or how things should be. We all know that attributing value to human life is, in itself, a highly problematic concept. And that’s where things go wrong.