Consider this hypothetical scenario: if you were given $100,000 to build a PC/server to run open-source LLMs like LLaMA 3 for single-user purposes, what would you build?

  • Toes♀@ani.social
    2 months ago

    Four of whatever modern GPU has the most VRAM currently (so I can run four personalities at the same time).

    Whatever the best AMD EPYC CPU currently is.

    As much ECC RAM as possible.

    Waifu themes all over the computer.

    Linux, LTS edition.

    A bunch of NVMe SSDs configured redundantly.

    And two RTX 4090s (one for the host and one for me).

  • TechNerdWizard42@lemmy.world
    3 months ago

    I run it all locally on my laptop. It was about $30k new, but you can get them used now, years later, for about $1k to $2k.

  • 0x01@lemmy.ml
    3 months ago

    Why in the world would you need such a large budget? A Mac Studio can run the 70B variant just fine at $12k.
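
    For reference, a 4-bit quantized 70B weighs in at roughly 40GB, so a Mac Studio with enough unified memory can serve it entirely on-device. A minimal sketch of what that looks like, assuming llama-cpp-python built with Metal support and a downloaded GGUF (the filename below is just a placeholder):

    ```python
    # Minimal sketch: running a quantized Llama 3 70B on Apple Silicon with
    # llama-cpp-python, offloading every layer to the GPU (Metal on macOS).
    from llama_cpp import Llama

    llm = Llama(
        model_path="Meta-Llama-3-70B-Instruct-Q4_K_M.gguf",  # placeholder filename
        n_gpu_layers=-1,  # offload all layers; unified memory makes this feasible
        n_ctx=8192,       # context window; raise it if you have the RAM to spare
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Why does unified memory help big models?"}],
        max_tokens=256,
    )
    print(out["choices"][0]["message"]["content"])
    ```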

      • slazer2au@lemmy.world
        3 months ago

        I’ll take ‘Someone got seed funding and now needs progress to unlock the next part of the package’ for $10, please, Alex.

    • kelvie@lemmy.ca
      3 months ago

      Depends on what you’re doing with it, but prompt/context processing is a lot faster on Nvidia GPUs than on Apple chips, though if you reuse the same prompt prefix all the time, prefix caching makes it less of an issue.

      The time to first token is a lot faster on datacenter GPUs, especially as context length increases, and consumer GPUs don’t have enough VRAM.
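
      To make the gap concrete, here is a rough way to measure it yourself (my sketch, assuming llama-cpp-python and a placeholder GGUF path): prefill cost shows up as time-to-first-token, decode speed as tokens per second afterwards.

      ```python
      # Rough sketch: measure time-to-first-token (prefill) vs. decode speed
      # with a streaming llama-cpp-python call; TTFT grows with prompt length.
      import time
      from llama_cpp import Llama

      llm = Llama(model_path="some-model.gguf", n_gpu_layers=-1, n_ctx=8192)  # placeholder path

      def measure(prompt: str, max_tokens: int = 128) -> None:
          start = time.perf_counter()
          first = None
          n_tokens = 0
          for _chunk in llm(prompt, max_tokens=max_tokens, stream=True):
              if first is None:
                  first = time.perf_counter()  # prompt processing finished here
              n_tokens += 1
          total = time.perf_counter() - start
          decode = total - (first - start)
          rate = n_tokens / decode if decode > 0 else float("inf")
          print(f"TTFT {first - start:.2f}s, decode {rate:.1f} tok/s")

      measure("A short prompt.")
      measure("A much longer prompt. " * 500)  # watch TTFT climb as context grows
      ```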

  • Sims@lemmy.ml
    3 months ago

    I’m not an expert in any of this, so just wildly speculating in the middle of the night about a huge hypothetical one-person AI lab:

    Super high-end equipment would probably eat such a budget quickly (2-5 H100s?), but a ‘small’ rack of 20-25 ordinary GPUs (P40s) with 8GB+ VRAM, combined with a local petals.dev setup, would be my quick choice.
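
    The client side of a private petals.dev swarm would look roughly like this (a sketch from memory; the model name and peer address are placeholders, and each P40 box would additionally run a Petals server process pointed at the same initial peers):

    ```python
    # Rough sketch of a Petals client joining a private swarm spread across
    # the rack; model layers are sharded across the P40 servers.
    from transformers import AutoTokenizer
    from petals import AutoDistributedModelForCausalLM

    MODEL = "meta-llama/Meta-Llama-3-70B-Instruct"      # placeholder: any Petals-supported model
    PEERS = ["/ip4/192.168.1.10/tcp/31337/p2p/Qm..."]   # placeholder bootstrap peer on the rack

    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoDistributedModelForCausalLM.from_pretrained(MODEL, initial_peers=PEERS)

    inputs = tokenizer("The cheapest way to serve a 70B model at home is", return_tensors="pt")["input_ids"]
    outputs = model.generate(inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0]))
    ```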

    However, it’s hard to compete with the cloud on power efficiency, so the electricity bill would quickly eat whatever the hardware budget saved. All non-sensitive traffic should probably go to something like Groq cloud, and the rest to the private servers.

    An alternative is to go for an NPU setup (TPU, LPU, whatever-PU), and/or even a small power generator (wind, solar, digester/burner) to drive it. A cluster of 50 Orange Pi 5B (RK3588) boards with 32GB RAM each is within budget (50 × 6 TOPS = 300 TOPS in theory, with 1.6TB of combined RAM at around 500W). AFAIK the underlying software stack isn’t there yet for small NPUs, but more and more frameworks beyond CUDA keep appearing (ROCm, Metal, OpenCL, Vulkan, …), so one for NPUs will probably show up soon.

    Transformers lean heavily on multiplications, but BitNet doesn’t (mostly just additions), so perhaps models will move to less power-intensive hardware and model frameworks in the future?
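
    As a toy illustration of the “only addition” point (my own sketch, not anything from an actual BitNet implementation): once weights are restricted to {-1, 0, +1}, the multiply-accumulate in a matrix-vector product collapses into adds and subtracts.

    ```python
    # Toy demo: a ternary-weight matvec needs no multiplications, only
    # additions/subtractions of activations, which is the BitNet-style trick.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.integers(-1, 2, size=(4, 8))   # ternary weights in {-1, 0, +1}
    x = rng.standard_normal(8)             # activations

    y_matmul = W @ x                       # the usual multiply-accumulate

    # Addition-only version: add x where the weight is +1, subtract where -1.
    y_adds = np.array([x[row == 1].sum() - x[row == -1].sum() for row in W])

    print(np.allclose(y_matmul, y_adds))   # True: same result, zero multiplications
    ```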

    Last on my mind atm: you would probably also not spend all the money on inference/training compute. Any decent cognitive architecture around a model (agent networks) needs support functions: tool servers, home-hosted software for the agents (forums/communication, scraping, modelling, code testing, statistics, etc.), basically versions of the tools we ourselves use for different projects and for communication/cooperation in an organization.