LLMs are a pretty good tool for summarizing a subject, or for presenting it in different words or from a different angle… They are statistical word predictors, after all.
So yeah, if you understand that LLMs:
- don’t possess intelligence;
- only reproduce patterns from the training material used;
- cannot possibly contain ALL the “knowledge” from that training material;
- are directly influenced by the “context” provided;
Then, I’d say that LLMs can be used as a very good facilitator to learn about almost any subject that has already been documented in any word format in almost any language.
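To make the “statistical word predictor” point concrete, here is a deliberately tiny sketch: a bigram model that just counts which word most often follows the current one in its training text. Real LLMs are neural networks over subword tokens, not lookup tables, so this is only an analogy for the idea that output is driven by patterns in the training material, not understanding.

```python
# Toy "statistical word predictor": a bigram model that predicts the
# most frequent word observed after the current one in the training text.
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count, for each word, how often each other word follows it."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows: dict, word: str):
    """Return the most frequently observed next word, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat": it follows "the" twice, "mat" once
```

Note how it can only echo its training data: ask it about a word it never saw and it has nothing to say, which loosely mirrors the point above that a model cannot contain knowledge absent from its training material.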
It feels like the models got worse at this as they were iterated on. I wonder if it’s because of all the guard-railing and internal censorship, etc.
It’s just a messy process, purely based on probability; I don’t think it’ll ever get good. People were just being lied to. Don’t forget that they relied on users’ confirmation bias and selection bias as well.
They’re not even good at that.
your bulleted list has me suspecting you used an llm to write this. TRAITOR
