• noneabove1182@sh.itjust.works · 8 months ago

    Hmm, I had interesting results from both of those base models. Haven’t tried the combo yet; I’ll start some exllamav2 quants to test.

    What’s it doing well at?

• undermine@lemmynsfw.com (OP) · edited · 8 months ago

      I haven’t tried neural-chat, but the combined model seems (anecdotally) better than OH2.5/Mistral at following instructions and reasoning, and some of the overall quirks with llama.cpp seem to be ironed out with it too.