Why Stopping ChatGPT From Lying Could Make It Useless
OpenAI thinks it’s found the root of AI “hallucinations” — and a way to fix them. The idea? Teach models to admit when they don’t know instead of bluffing. But here’s the catch: a more cautious ChatGPT might refuse to answer so often that users lose patience, and compute costs could soar. In short, the fix could strip away the bold confidence that made ChatGPT irresistible in the first place.