Like if I type "I have two appl…", for example, it will often suggest "apple" singular instead of the plural. It's a small example, but it's really bad at predicting which form of a word should follow the previous one.

  • Knusper@feddit.de · 10 months ago

    I guess, the real question is: Could we be using (simplistic) LLMs on a phone for predictive text?

    There are some LLMs that can be run offline and that might not use enormous amounts of battery. But I don't know how good their output quality is…
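    For contrast with an LLM, classic keyboard prediction can be as simple as counting which word tends to follow which. A minimal sketch of that idea (class and method names are my own, not any real keyboard's API):

    ```python
    from collections import defaultdict, Counter

    class BigramPredictor:
        """Tiny next-word predictor: counts which word follows which."""

        def __init__(self):
            self.counts = defaultdict(Counter)

        def train(self, text):
            words = text.lower().split()
            for prev, nxt in zip(words, words[1:]):
                self.counts[prev][nxt] += 1

        def suggest(self, prev, k=3):
            # Return the k most frequent followers of `prev`.
            return [w for w, _ in self.counts[prev.lower()].most_common(k)]

    p = BigramPredictor()
    p.train("i have two apples and two oranges")
    p.train("i have two apples in my bag")
    print(p.suggest("two"))  # "apples" ranks first: seen twice vs once
    ```

    A table like this is tiny and cheap enough to run on any phone, which is exactly the trade-off the thread is circling: it costs almost nothing but only knows raw frequencies, so it happily suggests "apple" after "two" if that's what its counts say.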

    • SpooksMcDoots@mander.xyz · 10 months ago

      OpenHermes 2.5 Mistral 7B competes with LLMs that require 10× the resources. You could try it out on your phone.

    • 0x4E4F@sh.itjust.works · 10 months ago

      I guess… why not… but the db is probably huge, like hundreds of GB (maybe even TB… who knows), so you can't run that offline.

    • ashe@lemmy.starless.one · 10 months ago

      You can run an LLM on a phone (I tried it myself once, with llama.cpp), but even with the simplest model I could find, it managed maybe one word every few seconds while pegging the CPU at 100%. The quality is terrible, and your battery wouldn't last an hour.

    • Munkisquisher@lemmy.nz · 10 months ago

      A pre-trained model isn't going to learn how you type the more you use it. Though with Microsoft owning SwiftKey, I imagine they'll try it soon.
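      One common way around the "frozen model" problem is to keep the pre-trained ranking fixed and blend in a small per-user frequency table that grows as you type. A sketch of that idea (the names, the base table, and the blending weight are all illustrative assumptions, not how any real keyboard works):

      ```python
      from collections import Counter

      # Fixed "pre-trained" frequency table: most frequent followers of a word.
      BASE = {"two": Counter({"weeks": 5, "days": 4, "apples": 1})}

      class PersonalizedPredictor:
          def __init__(self, base, user_weight=2.0):
              self.base = base          # frozen, ships with the keyboard
              self.user = {}            # grows on-device as you type
              self.user_weight = user_weight

          def observe(self, prev, nxt):
              # Record one word pair the user actually typed.
              self.user.setdefault(prev, Counter())[nxt] += 1

          def suggest(self, prev, k=3):
              # Blend base counts with (weighted) personal counts.
              scores = Counter(self.base.get(prev, Counter()))
              for w, c in self.user.get(prev, Counter()).items():
                  scores[w] += self.user_weight * c
              return [w for w, _ in scores.most_common(k)]

      p = PersonalizedPredictor(BASE)
      print(p.suggest("two"))         # base ranking: "weeks" first
      for _ in range(3):
          p.observe("two", "apples")  # user keeps typing "two apples"
      print(p.suggest("two"))         # now "apples" outranks "weeks"
      ```

      The personal table stays on the device and never touches the pre-trained part, which is why this kind of adaptation is cheap compared to actually fine-tuning a neural model on the phone.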