I just got Oobabooga running for the first time with Llama-2, and have Automatic1111 and ComfyUI running for images. I am curious about ML too, but I don’t know where to start with that one yet.

For the uninitiated, all of these tools run open source (or mostly open) models offline.

  • Veraxus@kbin.social
    1 year ago

    Check out Wizard 30B Uncensored. IMO it’s about as good as NerfedGPT 4… except free and private.

      • Veraxus@kbin.social
        1 year ago

        I’m running it in GPT4All (CPU-based) with 64GB of RAM, and it runs pretty well. I’m not sure what you’d need if you were running it on GPU instead.

        • TheOtherJake@beehaw.orgOP
          1 year ago

WizardLM 30B at 4 bits with the GGML version on Oobabooga runs almost as fast as Llama2 7B on just the GPU. I set it up with 10 threads on the CPU and ~20 layers offloaded to the GPU. That leaves plenty of room for a 4096 context with a batch size of 2048. I can even run a 2GB Stable Diffusion model at the same time within my 3080’s 16GB of VRAM.
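          For reference, that CPU/GPU split maps onto the GGML-era llama.cpp CLI flags roughly like this (the model filename is a placeholder; Oobabooga passes equivalent options through its llama.cpp loader):

          ```shell
          # Sketch of a llama.cpp launch matching the settings above:
          # -t   = CPU threads, -ngl = layers offloaded to GPU,
          # -c   = context length, -b = batch size
          ./main -m ./models/wizardlm-30b.ggmlv3.q4_0.bin \
            -t 10 -ngl 20 -c 4096 -b 2048 \
            -p "Hello"
          ```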

          Have you tried any of the larger models? I just ordered 64GB of RAM. I also got Kobold mostly working, and I hope to use it to try Falcon 40B. I really want to try a 70B model at 2-4 bit quantization and see how accurate it is.
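          For back-of-the-envelope planning, the weights of a quantized model take roughly params × bits ÷ 8 bytes; a minimal sketch (this ignores the KV cache and runtime overhead, so real usage is higher):

          ```python
          # Rough size of quantized model weights alone, in GB.
          def model_size_gb(n_params_billion: float, bits_per_weight: float) -> float:
              return n_params_billion * bits_per_weight / 8

          for params, bits in [(30, 4), (40, 4), (70, 4), (70, 2)]:
              print(f"{params}B @ {bits}-bit: ~{model_size_gb(params, bits):.1f} GB")
          # 30B @ 4-bit: ~15.0 GB   70B @ 4-bit: ~35.0 GB   70B @ 2-bit: ~17.5 GB
          ```

          So a 70B model at 2-3 bits is plausible in 64GB of RAM, while 4-bit is tighter once context overhead is added.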

    • TheOtherJake@beehaw.orgOP
      1 year ago

      I just tried it a few hours ago. Indeed, it is quite good. I knew it when an NSFW prompt test on the uncensored model generated a Stable Diffusion picture of a robot skeleton and a snarky reply. Like, yay, we finally have a bright spot with this one.