• turkishdelight@lemmy.ml · 9 months ago

    llama.cpp quantizes the heck out of language models, which lets consumer CPUs run them. My laptop can run most 7B or 13B LLMs with 4-bit quantization (and people are pushing quantization even further, down to 2 or 1.5 bits!).
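
    A minimal sketch of what that looks like with the llama-cpp-python bindings, in case anyone wants to try it (the GGUF file name is just a placeholder for whatever 4-bit model you download):

    ```python
    # Rough sketch: run a 4-bit (Q4_K_M) 7B GGUF on CPU via llama-cpp-python.
    # The model path below is a placeholder, not a specific recommendation.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical local file
        n_ctx=2048,    # context window
        n_threads=8,   # CPU threads; tune for your laptop
    )

    out = llm("Q: Roughly how much RAM does a 7B model need at 4-bit? A:", max_tokens=64)
    print(out["choices"][0]["text"])
    ```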

    The same will happen with Stable Diffusion. Most SD models are still shipped at fp16, and they’ll soon be going lower. I expect we’ll all be running SDXL or larger models on our laptop CPUs without breaking a sweat at the 4-bit level.
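
    For comparison, loading SDXL today at fp16 with the diffusers library looks roughly like this (standard SDXL base checkpoint, and it still wants a GPU at this precision):

    ```python
    # Sketch of the current fp16 status quo for SDXL with diffusers.
    # Sub-fp16 (8-bit / 4-bit) weights are what the comment above anticipates.
    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16,   # fp16 weights, ~2 bytes per parameter
    ).to("cuda")                     # still GPU-bound at this precision

    image = pipe("a lighthouse at dusk, detailed oil painting").images[0]
    image.save("out.png")
    ```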

    • idkman@lemmy.dbzer0.com · 9 months ago

      What I dislike about lower quantization is the quality degradation. In my limited experience, 7B models come across as dumb (I’ve only tested Q4_K_M GGUFs) and need to be given proper context before a constructive conversation can move forward (be it chat or instruct).
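
      For what it’s worth, the “feed it proper context first” workaround looks something like this with llama-cpp-python’s chat API (model file and scenario are made up for illustration):

      ```python
      # Sketch: front-load background info for a Q4_K_M 7B chat model so it
      # doesn't flounder. File name and prompts are illustrative only.
      from llama_cpp import Llama

      llm = Llama(model_path="./llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

      resp = llm.create_chat_completion(
          messages=[
              # The context the small quantized model won't infer on its own
              {"role": "system", "content": "You are helping debug a Python asyncio "
                                            "service. The worker pool was already restarted."},
              {"role": "user", "content": "Tasks still hang after 30 seconds. What should I check next?"},
          ],
          max_tokens=128,
      )
      print(resp["choices"][0]["message"]["content"])
      ```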

      If this issue can be circumvented at lower quantization levels, I’m all in.

      In the context of SD, going below fp16 would only make things faster at the cost of quality, and I personally like to go in depth with my prompts. For simpler prompts, sure, even Lightning and Turbo are good in that regard.
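
      For reference, the Turbo route with diffusers is roughly this (single step, no guidance, so it only really shines on simple prompts):

      ```python
      # Sketch of few-step SDXL-Turbo generation with diffusers.
      import torch
      from diffusers import AutoPipelineForText2Image

      pipe = AutoPipelineForText2Image.from_pretrained(
          "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
      ).to("cuda")

      # Turbo is distilled for 1-4 steps and runs without classifier-free guidance
      image = pipe("a cat wearing sunglasses", num_inference_steps=1, guidance_scale=0.0).images[0]
      image.save("turbo.png")
      ```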

      • turkishdelight@lemmy.ml · 9 months ago

        You can’t shrink a model to 1/8 the size and expect it to run at the same quality. But quantization lets me move from a cloud GPU to my laptop’s crappy CPU/iGPU, so I’m OK with that tradeoff.
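
        Rough weight-only arithmetic behind that 1/8 figure (ignoring per-block scales and other quantization overhead):

        ```python
        # Back-of-envelope sizes for a 7B-parameter model at different precisions.
        params = 7e9

        fp16_gb = params * 2.0  / 1e9   # 16 bits = 2 bytes/weight   -> ~14 GB
        q4_gb   = params * 0.5  / 1e9   #  4 bits = 0.5 bytes/weight -> ~3.5 GB
        q2_gb   = params * 0.25 / 1e9   #  2 bits = 0.25 bytes/weight -> ~1.75 GB (1/8 of fp16)

        print(f"fp16: {fp16_gb:.1f} GB, Q4: {q4_gb:.1f} GB, Q2: {q2_gb:.2f} GB")
        ```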