Like if I type “I have two appl…” for example, it will often suggest “apple” singular instead of plural. Just a small example, but it's really bad at predicting which variant of a word should come after the previous one.
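The singular/plural failure described above is basically a context problem: the keyboard ranks completions of “appl” without looking at the word before it. Even a tiny bigram model fixes this particular case. A minimal sketch (the corpus here is a made-up toy; a real keyboard would learn from typing history):

```python
from collections import Counter, defaultdict

# Toy training text; in a real keyboard this would come from the user's typing history.
corpus = "i have two apples i have one apple she ate two apples he ate one apple".split()

# Count bigrams: how often each word follows a given previous word.
bigrams = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigrams[prev][word] += 1

def suggest(prev_word, prefix):
    """Rank completions of `prefix` by how often they follow `prev_word`."""
    candidates = bigrams[prev_word.lower()]
    matches = [(w, n) for w, n in candidates.items() if w.startswith(prefix.lower())]
    return max(matches, key=lambda m: m[1])[0] if matches else None

print(suggest("two", "appl"))  # "apples" — the plural wins because of the bigram context
print(suggest("one", "appl"))  # "apple"
```

This is obviously far simpler than an LLM, but it illustrates why conditioning on the previous word is enough to get number agreement right in cases like “two appl…”.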
I guess the real question is: could we be using (simplistic) LLMs on a phone for predictive text?
There are some LLMs that can be run offline and that maybe wouldn’t use enormous amounts of battery. But I don’t know how good their quality is…
Openhermes 2.5 Mistral 7b competes with LLMs that require 10x the resources. You could try it out on your phone.
I guess… why not… but the db is probably huge, like in the hundreds of GB (maybe even TB… who knows), can’t run that offline.
You can run an LLM on a phone (tried it myself once, with llama.cpp), but even with the simplest model I could find it was producing maybe one word every few seconds while pegging the CPU at 100%. The quality is terrible, and your battery wouldn’t last an hour.
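For anyone curious what an experiment like the one above looks like, a rough sketch using llama.cpp's command-line tool (the model filename and the thread/token values are illustrative, not recommendations):

```shell
# Run a small quantized model with llama.cpp's CLI.
#   -m : path to a GGUF model file (this one is hypothetical)
#   -t : thread count, capped here to limit CPU (and battery) use
#   -n : maximum number of tokens to generate
#   -p : the prompt/prefix to complete
./llama-cli -m models/tiny-q4_0.gguf -t 2 -n 8 -p "I have two appl"
```

Capping `-t` and `-n` is about the only leverage you have on CPU load and latency; it doesn't change the underlying cost per token, which is what makes this impractical on phone hardware.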
Does the AI processing have to happen locally, or be constantly active?
A pre-trained model isn’t going to learn how you type the more you use it. Though with Microsoft owning SwiftKey, I imagine they’ll try it soon.
I think Apple has pitched this for a future iPhone, yes.
They’ll probably have to offload that to a server farm in real time. That’s not gonna be easy.