• Barry Zuckerkorn@beehaw.org · 1 year ago

Most of the normal apps on a phone are already using AI at the edge.

Image processing has come a long way using algorithms trained through those AI techniques. That applies not just to the postprocessing of pictures already taken, like unblurring faces, removing unwanted background people, choosing a better frame from a burst or live photo, fixing white balance/color profiles, and reducing noise, but also to the initial capture of the image: setting the physical focus/exposure on recognized subjects, using software-based image stabilization in longer-exposure shots or in video, etc. Most of these functions are on-device AI, using the AI-optimized hardware on the phones themselves.
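
To make that concrete, here's a minimal Swift sketch using Apple's Vision framework to find the most prominent face in a frame, the kind of on-device detection a camera app could use to drive autofocus. The `focusPoint(for:)` helper and the "largest face wins" rule are my own illustration, not any vendor's actual camera pipeline:

```swift
import Vision
import CoreGraphics

// Hypothetical helper (not a real camera-app API): find the most
// prominent face in a captured frame and return its center as a
// normalized point. A capture pipeline could translate that point
// into AVCaptureDevice's focus/exposure point-of-interest coordinate
// space (conversion omitted here). Vision runs the detector on-device.
func focusPoint(for image: CGImage) -> CGPoint? {
    let request = VNDetectFaceRectanglesRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    do {
        try handler.perform([request])
    } catch {
        return nil
    }
    // Treat the largest detected face as the subject to focus on.
    let faces = request.results ?? []
    guard let subject = faces.max(by: { $0.boundingBox.width < $1.boundingBox.width }) else {
        return nil
    }
    // boundingBox is normalized to [0, 1] with origin at the bottom-left.
    return CGPoint(x: subject.boundingBox.midX, y: subject.boundingBox.midY)
}
```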

On-device speech recognition, speech generation, image recognition, and music recognition have come a long way in the last 5 years, too. A lot of that capability came from training models on big, powerful servers, but once a model is trained, running it only requires the AI/ML chip on the phone itself.
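
Apple's Speech framework actually exposes that split directly. This is a rough sketch rather than production code: `transcribeOnDevice` is a made-up helper name, and the authorization prompts a real app needs are skipped.

```swift
import Speech
import Foundation

// Rough sketch (permission prompts omitted): transcribe an audio file
// without the audio ever leaving the phone.
func transcribeOnDevice(fileURL: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.supportsOnDeviceRecognition else {
        print("On-device recognition isn't available for this locale")
        return
    }
    let request = SFSpeechURLRecognitionRequest(url: fileURL)
    // Force the locally stored model; nothing is sent to a server.
    request.requiresOnDeviceRecognition = true

    recognizer.recognitionTask(with: request) { result, error in
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        } else if let error = error {
            print("Recognition failed: \(error)")
        }
    }
}
```

That one `requiresOnDeviceRecognition` flag is the whole point: the model was trained on big servers, but inference happens entirely on the phone's ML hardware.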

In other words, a lot of these apps were already doing these things before on-device AI chips started showing up in 2013 or so. But the on-device chips have made all of these things much, much better, especially in the last 5 years, when almost all phones started coming with dedicated hardware for these tasks.