

I believe that every time a wrong answer becomes a public laughing stock, the LLM creators have to manually intervene and “retrain” the model.
They cannot distinguish truth from fiction, they cannot decline to give an answer, and they cannot tell whether an answer to a problem will actually work - all they do is regurgitate what has come before, dressed up with enough fluff to look like a cogent response.
I’m surprised the social media moguls can’t see this becoming the norm. Good luck to them, I guess.