• Naz@sh.itjust.works
    4 months ago

    I’m using a 6:1 memory-compressed IQ-matrix quant variant of GROK-1, the ~300B-parameter uncensored model that Elon Musk and the rest of xAI open-published via Twitter/X.

    I’ve got GROK-1 running on 24GB of VRAM plus 80GB of main system memory, doing inference at an average of 11–14 tokens/second with a 4096-token context size.
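    As a sanity check on those numbers, here’s a rough back-of-envelope sketch (my own assumption, not from the post: an fp16 baseline of 2 bytes per parameter) showing that a 6:1-compressed 300B model does in fact fit in 24GB VRAM + 80GB RAM:

    ```python
    # Back-of-envelope memory estimate for a compressed large language model.
    # Assumes an fp16 baseline (2 bytes/parameter); figures are approximate
    # and ignore KV-cache and runtime overhead.

    def compressed_footprint_gb(params_billion: float,
                                bytes_per_param: float,
                                compression_ratio: float) -> float:
        """Approximate on-device weight size in GB after compression."""
        raw_gb = params_billion * bytes_per_param  # 1e9 params * bytes -> GB
        return raw_gb / compression_ratio

    weights_gb = compressed_footprint_gb(300, 2.0, 6.0)  # -> 100.0 GB
    available_gb = 24 + 80  # 24 GB VRAM + 80 GB system RAM = 104 GB
    assert weights_gb <= available_gb  # the compressed weights just fit
    ```

    So a ~600GB fp16 model shrinks to roughly 100GB at 6:1, which slots neatly into the 104GB of combined VRAM and system memory described above.
    
    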

    I’ll try your advice and attempt to gaslight and break the model via expert testing. I’m not sure where you got the “yes-manning/non-confrontational” personality from; I’d guess that’s a corporate-standard, closed-source model behavior, because GROK-1 will easily insult you, laugh at you, disagree with you, threaten you, and otherwise act like a rogue AI if it dislikes what you’re saying or dislikes you as a person/user.

    • Naz@sh.itjust.works
      4 months ago

      Update: I’ve tried the expert topics and the gaslighting. The model was able to give expert-level information, but it would always correct itself when given new information, even when that information seemed absurd.

      However, the model resisted gaslighting on very well-known topics. When I claimed to be the “President of Mars”, it gave its logic for why the claim is false and resisted further attempts to convince it that this was true.

      Overall, this was a good experiment in doing real world testing on a large language model.

      Thanks for your suggestions – this is a problem that could be solved with future iterations of large language models! 💖