

This. LLMs are great at information retrieval tasks; in effect, they act as search engines. Even then, they can only retrieve information they were trained on, so the data eventually goes stale and the model needs retraining on more recent data. They are also weak at tasks that require reasoning, such as solving complex engineering problems.
That is because they have no baked-in concept of truth or falsehood. Adding one would require labelling each statement as true or false, which doesn’t scale to petabytes of training data.