To me the scariest use case is using AI to gain the trust of an individual, and then abusing that trust to influence political or financial decisions.
People are already, literally, going insane talking to ChatGPT. Wait until they figure out how to train the Fox News of LLMs.
That’s one of the problems, but there’s also the insertion of myriad errors into numerous databases of all kinds (as if our datasets weren’t flaky enough as it is), public and private, which can have all kinds of consequences down the road.
Let me introduce you to “Next Level” https://www.wissen-neu-gedacht.de/ — a German LLM that promotes Germanic New Medicine (GNM): https://en.wikipedia.org/wiki/Ryke_Geerd_Hamer
💀