Most people had a hard enough time telling the difference between man-made fact and fiction; now they have to tell the difference between AI fact and fiction on top of that.
Well, AI “fact”, in this use, has always been made up of a combination of man’s fact and fiction. Nobody’s been smart enough to make an AI that can reliably separate the two, to my knowledge.
It’s all about cleaning datasets. For forecasting models, you occasionally need to remove certain historical data to increase accuracy.
The same approach could work here, but it’s obviously at a significantly larger scale and crosses into every interest and discipline.
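To make that concrete, here’s a rough sketch of what “removing certain historical data” might look like before fitting a forecasting model. The column name, the dropped 2020 window, and the z-score threshold are all made-up examples, not a fixed recipe:

```python
# Minimal sketch: drop known-bad historical windows and extreme outliers
# before fitting a forecasting model. All names/thresholds are illustrative.
import numpy as np
import pandas as pd

def clean_history(df: pd.DataFrame, value_col: str = "sales",
                  drop_periods=None, z_threshold: float = 4.0) -> pd.DataFrame:
    out = df.copy()

    # 1. Remove whole periods known to be unrepresentative (e.g. a one-off
    #    disruption) so the model doesn't learn from them.
    for start, end in (drop_periods or []):
        out = out[~out.index.to_series().between(start, end)]

    # 2. Remove isolated extreme outliers with a simple z-score filter.
    z = (out[value_col] - out[value_col].mean()) / out[value_col].std()
    return out[z.abs() < z_threshold]

# Example: daily data with a hypothetical disrupted window excluded.
idx = pd.date_range("2019-01-01", "2022-12-31", freq="D")
df = pd.DataFrame({"sales": np.random.default_rng(0).normal(100, 10, len(idx))},
                  index=idx)
cleaned = clean_history(df, drop_periods=[("2020-03-01", "2020-06-30")])
```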
I believe the solution is curated data models, with the top members of the applicable field determining validity, or a Stack Overflow-style model.
We should basically have a “clean” copy of the internet that is always 3-6 months behind, since content is only added once it has been vetted as quality data.
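Very roughly, that curation gate could work something like the sketch below: an item only enters the clean corpus after an embargo window has passed and experts in its field have given it a net-positive review. The Document class, field names, vote thresholds, and embargo length are all hypothetical, just to make the workflow concrete:

```python
# Sketch of a "curated, lagging copy" gate: embargo period + expert approval.
# Thresholds and data model are assumptions, not a real system.
from dataclasses import dataclass
from datetime import datetime, timedelta

EMBARGO = timedelta(days=180)   # "always 3-6 months behind"
MIN_APPROVALS = 3               # Stack Overflow-style acceptance bar

@dataclass
class Document:
    doc_id: str
    discipline: str
    submitted_at: datetime
    approvals: int = 0          # votes from vetted experts in the discipline
    rejections: int = 0

def eligible_for_clean_corpus(doc: Document, now: datetime) -> bool:
    """Admit a document only after the embargo and a net-positive expert review."""
    old_enough = now - doc.submitted_at >= EMBARGO
    vetted = doc.approvals >= MIN_APPROVALS and doc.approvals > doc.rejections
    return old_enough and vetted

# Example: one vetted document past the embargo, one still too recent.
now = datetime(2025, 1, 1)
docs = [
    Document("a1", "epidemiology", datetime(2024, 5, 1), approvals=4, rejections=1),
    Document("b2", "epidemiology", datetime(2024, 11, 1), approvals=5),
]
clean = [d.doc_id for d in docs if eligible_for_clean_corpus(d, now)]  # ['a1']
```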