Pro@programming.dev to Technology@lemmy.world · English · 9 days ago
CursorAI "unlimited" plan rug pull: Cursor AI silently changed their "unlimited" Pro plan to severely rate-limited without notice, locking users out after 3-7 requests
consumerrights.wiki · 22 comments
QuadratureSurfer@lemmy.world · 9 days ago
Someone just got the AWS bill.
crunchy@lemmy.dbzer0.com · 9 days ago
That's got to be it. Cloud compute is expensive when you're not being funded in Azure credits. Once the dust settles from the AI bubble bursting, most of the AI we'll see will probably be specialized agents running small models locally.
fmstrat@lemmy.nowsci.com · 9 days ago
I'm still running Qwen32b-coder on a Mac mini. Works great; a little slow, but fine.
And009@lemmynsfw.com · 9 days ago
I'm somewhat tech savvy. How do I run an LLM locally? Any suggestions? And how do I know my local data is safe?
Llak@lemmy.world · 9 days ago
Check out LM Studio (https://lmstudio.ai/), and you can pair it with the Continue extension for VS Code (https://docs.continue.dev/getting-started/overview).
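To make the pairing above concrete: LM Studio can serve whatever model you've loaded through a local OpenAI-compatible server (port 1234 by default), and Continue can be pointed at it via its `config.json`. A minimal sketch of a model entry, assuming a Qwen coder model is loaded in LM Studio — the `title` and `model` values here are placeholders, not exact identifiers:

```json
{
  "models": [
    {
      "title": "Local Qwen Coder (LM Studio)",
      "provider": "lmstudio",
      "model": "qwen2.5-coder-32b-instruct",
      "apiBase": "http://localhost:1234/v1"
    }
  ]
}
```

On the data-safety question: with this setup both the server and the model run on your own machine, so prompts and code never leave localhost.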
douglasg14b@lemmy.world · 7 days ago
More like they just got their Anthropic bill. Cloud compute is gonna be cheap compared to the API costs for the LLMs they use/offer.