Gamer, rider, dev. Interested in anything AI.
I’m not sure either; Win 10/11 are pretty quick to get going, and Ubuntu isn’t much slower than that. If I have to hard reset the MBP for work, it’s a nice block of slacker time :)
Halls of Torment. $5 game on Steam that’s basically a Vampire Survivors clone, but with more RPG elements to it.
These are amazing. Dell, Lenovo and I think HP made these tiny things, and they were so much easier to get than Pis during the shortage. Plus they’re incredibly fast in comparison.
I’ve got a background in deep learning and I still struggle to understand the attention mechanism. I know it’s a key/value store but I’m not sure what it’s doing to the tensor when it passes through different layers.
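The closest I’ve gotten to an intuition is writing out the core op by hand. A minimal single-head numpy sketch (the function name is mine, and the learned Q/K/V projections that each layer applies to the incoming tensor are left out):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single attention head. Q, K, V: (seq_len, d_k) arrays that a real layer
    would produce with its own learned projections of the incoming tensor."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of every query to every key
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: "soft" lookup weights over the keys
    return weights @ V                              # each output row is a weighted mix of the value rows
```

So each layer re-projects the tensor into its own queries, keys and values, and every output row is a softmax-weighted blend of the value rows — a “soft” key/value lookup rather than an exact one.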
Subscribed. That last episode of AAA was heartbreaking.
My neighbour has that same Beta, he loves it.
Technically this community would also be parasitic on !motorcycles, which is growing like a weed right now. I’m amazed how well it’s done.
I’ve had a 450L since 2019 and love it. The RX is very tempting, but I really need less hardcore bikes as I get older :)
Bad article title. This is the “Textbooks Are All You Need” paper from a few days ago. It’s programming-focused and I think Python only. For general-purpose LLM use, LLaMA is still better.
Any data sets produced before 2022 will be very valuable compared to anything after. Maybe the only way to avoid this is to stick to training LLMs on older data and inject anything newer through the prompt, rather than training on it.
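Rough sketch of what I mean, purely hypothetical names (`search_recent_docs` and `llm` stand in for whatever retrieval layer and model you actually run):

```python
# Hypothetical sketch only: pull post-cutoff documents at query time and stuff
# them into the prompt, instead of fine-tuning on them.
def answer_with_recent_context(question, search_recent_docs, llm, k=3):
    recent = search_recent_docs(question, top_k=k)          # e.g. hits from a vector DB of newer data
    context = "\n\n".join(doc["text"] for doc in recent)
    prompt = (
        "Use the context below; it may be newer than your training data.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return llm(prompt)
```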
Step 1) Have a bike that women want to talk about. I think that’s about it.
When I had a CRF250L, I’d regularly have women come up and ask how heavy it was, because they were thinking of buying one. I’d lay the bike on the ground and show them how to lift it. So… the weirdest thing is dropping my bike intentionally to let women pick it up for me.
Yep, I’m using an RTX 2070 for that right now. The LLMs are just executing on the CPU.
Do you recommend this email provider? Lots of people looking to get off gmail lately.
Are you running your own mail server? I only ever integrated SpamAssassin with Postfix.
Stable Diffusion (Stability AI version), text-generation-webui (WizardLM), a text embedder service with spaCy, BERT and a bunch of sentence-transformer models, Pi-hole, OctoPrint, Elasticsearch/Kibana for my IoT stuff, Jellyfin, Sonarr, FTB Minecraft (customized pack), a few personal apps I wrote myself (todo lists), SMB file shares, qBittorrent and Transmission (one dedicated to Sonarr)… Probably a ton of other stuff I’m forgetting.
Yup, mostly running pretrained models for text embedding and some generative stuff. No real fine tuning.
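For example, the embedding side is basically just this; the checkpoint name is only one of the stock sentence-transformers models, nothing I trained:

```python
from sentence_transformers import SentenceTransformer

# "all-MiniLM-L6-v2" is just one of the stock pretrained checkpoints.
model = SentenceTransformer("all-MiniLM-L6-v2")
sentences = ["self-hosted search is fun", "my GPU is an RTX 2070"]
embeddings = model.encode(sentences)   # (2, 384) float32 vectors, no fine-tuning involved
print(embeddings.shape)
```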
Yup, typically we get into it after upgrading an older PC or something: instead of selling the parts, we just turn it into a server. You can also find all sorts of cheap, good stuff on eBay from off-lease office machines.
I hate these filthy neutrals…
The advancements in this space have moved so fast that it’s hard to build a predictive model of where we’ll end up and how fast we’ll get there.
Meta releasing LLaMA produced a ton of innovation from open source that showed you could run models nearly at the level of ChatGPT with fewer parameters, on smaller and smaller hardware. At the same time, almost every large company you can think of has made integrating generative AI a top strategic priority with blank-cheque budgets. Whole industries (also deeply funded) are popping up around solving the context-window memory deficiencies, prompt stuffing for better steerability, and better summarization and embedding of your personal or corporate data.
We’re going to see LLM tech everywhere in everything, even if it makes no sense and becomes annoying. After a few years, maybe it’ll seem normal to have a conversation with your shoes?