• natebluehooves@pawb.social
    · 8 months ago

    Usually there is a massive VRAM requirement. Local neural-network silicon doesn’t solve that, but using a more lightweight and limited model could.
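    To make the VRAM point concrete, here’s a rough back-of-the-envelope sketch (weights only; it ignores the KV cache and activations, so real usage is higher — treat it as an illustration, not a spec):

    ```python
    # Rough weight-memory estimate for running an LLM locally.
    # Ignores KV cache and activation memory, so actual VRAM use is higher.

    def weight_gib(params_billion: float, bytes_per_param: float) -> float:
        """Approximate weight memory in GiB for a model of the given size."""
        return params_billion * 1e9 * bytes_per_param / 2**30

    for name, params in [("7B", 7), ("13B", 13), ("70B", 70)]:
        fp16 = weight_gib(params, 2.0)  # full 16-bit weights
        q4 = weight_gib(params, 0.5)    # ~4-bit quantized weights
        print(f"{name}: ~{fp16:.0f} GiB at fp16, ~{q4:.1f} GiB at 4-bit")
    ```

    Even a 7B model needs roughly 13 GiB of memory at fp16, which is why local setups lean on aggressive quantization and smaller models.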

    Basically, don’t expect even GPT-3-level quality, but SOMETHING could be run locally.