• Instigate@aussie.zone · 2 months ago

Therein lies the issue with using LLMs to answer broad or vague questions: they’re not capable of assessing the quality or value of the information they hold, let alone whether it is objectively true or false, and that’s before getting into issues relating to hallucination. For extremely specific questions, where they have less but likely more accurate data to work with, they tend to perform better. Training LLMs on data whose value and quality haven’t been independently vetted will always lead to the results we’re seeing now.