Companies are going all-in on artificial intelligence right now, investing millions or even billions into the area while slapping the AI initialism on their products, even when doing so seems strange and pointless.

Heavy investment and increasingly powerful hardware tend to mean more expensive products. To find out whether people would be willing to pay extra for hardware with AI capabilities, TechPowerUp put the question to its forum users.

The results show that over 22,000 people, a massive 84% of the overall vote, said no, they would not pay more. More than 2,200 participants said they didn’t know, while just under 2,000 voters said yes.
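
The arithmetic behind those shares is easy to check from the approximate counts above. A quick sketch in Python (the exact ballot totals aren't published here, so the figures below are rounded):

```python
# Rough vote counts as reported above (the article gives only approximate figures).
votes = {"no": 22_000, "don't know": 2_200, "yes": 2_000}
total = sum(votes.values())          # roughly 26,200 ballots in all

for choice, count in votes.items():
    print(f"{choice}: {count / total:.1%}")
# no: 84.0%, don't know: 8.4%, yes: 7.6% -- matching the 84% headline figure
```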

  • Kraiden@kbin.run · 71 points · 6 months ago

    someone tried to sell me a fucking AI fridge the other day. Why the fuck would I want my fridge to “learn my habits?” I don’t even like my phone “learning my habits!”

    • Zron@lemmy.world · 17 points · 6 months ago

      Why does a fridge need to know your habits?

      It has to keep the food cold all the time. The light has to come on when you open the door.

      What could it possibly be learning?

      • 1995ToyotaCorolla@lemmy.world · 19 points · 6 months ago

        Hi Zron, you seem to really enjoy eating shredded cheese at 2:00am! For your convenience, we’ve placed an order for 50lbs of shredded cheese based on your rate of consumption. Thanks!

        • variants@possumpat.io · 7 points · 6 months ago

          We also took the liberty of canceling your health insurance to help protect the shareholders from your abhorrent health expenses in the far future

      • njm1314@lemmy.world · 4 points · 6 months ago

        So I can see what you like to eat, then it can tell your grocery store, then your grocery store can raise the prices on those items. That’s the point. It’s the same thing with those memberships and coupon apps. That’s the end goal.

      • JackbyDev@programming.dev · 2 points · 6 months ago

        1. Know when you’re about to put groceries in so it makes the fridge colder so the added heat doesn’t make things go bad.
        2. Know when you don’t use it and let it get a tiny bit warmer to save a teeny bit of power. (The vast majority of power is cooling new items, not keeping things cold though.)
        3. Tell you where things are?
        4. Ummm… Maybe give you an optimized layout of how to store things?
        5. Be an attack vector on your home’s wifi
        6. Wait, no, uh,
        7. Push notifications
        8. Do you not have phones?
    • Ragnarok314159@sopuli.xyz · 9 points · 6 months ago

      And it would improve your life zero. That is what is absurd about LLMs in their current iteration: they provide almost no benefit to the vast majority of people.

      All a learning model would do for a fridge is send you advertisements for whatever garbage food is on sale. Could it make recipes based on what you have? Tell it you want to slowly get healthier and have it assist with grocery selection?

      Nah, fuck you and buy stuff.

  • Telorand@reddthat.com · 72 points, 2 down · 6 months ago

    …just under 2,000 voters said “yes.”

    And those people probably work in some area related to LLMs.

    It’s practically a meme at this point:

    Nobody:

    Chip makers: People want us to add AI to our chips!

    • ozymandias117@lemmy.world · 18 points · 6 months ago

      The even crazier part to me is some chip makers we were working with pulled out of guaranteed projects with reasonably decent revenue to chase AI instead

      We had to redesign our boards and they paid us the penalties in our contract for not delivering so they could put more of their fab time towards AI

      • nickwitha_k (he/him)@lemmy.sdf.org · 2 points · 6 months ago

        That's absolutely crazy. Applying the Chicago School MBA philosophy to things as time-consuming and expensive to set up as silicon production.

  • Godort@lemm.ee · 41 points · 6 months ago · edited

    This is one of those weird things that venture capital does sometimes.

    VC is injecting cash into tech at obscene levels right now because they think that AI is going to be hugely profitable in the near future.

    The tech industry is happily taking that money and using it to develop what they can, but it turns out the majority of the public don’t really want the tool if it means they have to pay extra for it. Especially in its current state, where the information it spits out is far from reliable.

    • cheese_greater@lemmy.world · 21 points · 6 months ago

      I don't want it outside of heavily sandboxed and limited-scope applications. I don't get why people want an agent of chaos fucking with all their files and systems they've cobbled together.

      • FiveMacs@lemmy.ca · 6 points · 6 months ago

        NDAs also legally prevent you from using this forced garbage too. Companies are going to get screwed over by other companies; capitalism is gonna implode, hopefully.

    • Tenthrow@lemmy.world · 12 points · 6 months ago

      I have to endure a meeting at my company next week to come up with ideas on how we can wedge AI into our products because the dumbass venture capitalist firm that owns our company wants it. I have been opting not to turn on video because I don’t think I can control the cringe responses on my face.

    • TipRing@lemmy.world · 8 points · 6 months ago

      Back in the 90s in college I took a Technology course, which discussed how technology has historically developed, why some things are adopted and other seemingly good ideas don’t make it.

      One of the things that is required for a technology to succeed is public acceptance. That is why AI is doomed.

      • SkyeStarfall@lemmy.blahaj.zone · 2 points · 6 months ago

        AI is not doomed; LLMs, and consumer AI products, might be.

        In industry, AI is and will continue to be used (though probably not LLMs, except in a few niche use cases).

        • TipRing@lemmy.world · 1 point · 6 months ago

          Yeah, I mean the AI being shoveled at us by techbros. Actual ML stuff is currently, and will continue to be, useful for all sorts of not-sexy but vital research and production tasks. I do task automation for my job and I use things like transcription models and OCR; my company uses smart sorting based on rapid image recognition, and other really cool ways of getting computers to do things that humans are bad at. It's things like LLMs that just aren't there - yet. I have seen very early research on AI that is trained to actually understand language and learn by context. It's years away, but eventually we might see AI that really can do what the current AI companies are claiming.
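
          To make the "not-sexy but vital" point concrete, here is a minimal sketch of that kind of task automation, assuming the open-source pytesseract and openai-whisper packages (the file names are placeholders):

          ```python
          # Minimal sketch: OCR a scanned page and transcribe a recorded call --
          # the unglamorous ML work described above. Assumes
          # `pip install pytesseract openai-whisper` plus a local tesseract install.
          import pytesseract
          import whisper
          from PIL import Image

          def ocr_scan(image_path: str) -> str:
              """Pull the text out of a scanned document."""
              return pytesseract.image_to_string(Image.open(image_path))

          def transcribe_audio(audio_path: str) -> str:
              """Transcribe speech with a small local model (runs on CPU)."""
              model = whisper.load_model("base")
              return model.transcribe(audio_path)["text"]

          if __name__ == "__main__":
              print(ocr_scan("scanned_invoice.png"))       # hypothetical input files
              print(transcribe_audio("support_call.wav"))
          ```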

  • BlackLaZoR@kbin.run · 39 points · 6 months ago

    There’s really no point unless you work in specific fields that benefit from AI.

    Meanwhile every large corpo tries to shove AI into every possible place they can. They’d introduce ChatGPT to your toilet seat if they could

    • br3d@lemmy.world · 19 points · 6 months ago

      “Shits are frequently classified into three basic types…” and then gives 5 paragraphs of bland guff

      • Krackalot@discuss.tchncs.de · 19 points · 6 months ago

        With how much scraping of reddit they do, there’s no way it doesn’t try ordering a poop knife off of Amazon for you.

      • catloaf@lemm.ee · 1 point · 6 months ago

        It’s seven types, actually, and it’s called the Bristol scale, after the Bristol Royal Infirmary where it was developed.

    • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 4 points · 6 months ago

      Someone did a demo recently of AI acceleration for 3D upscaling (think DLSS/AMD's equivalent) and it showed a nice boost in performance. It could be useful in the future.

      I think it's kind of like ray tracing. We don't have a real use for it now, but eventually someone will figure out something that it's actually good for and use it.

      • NekuSoul@lemmy.nekusoul.de · 3 points · 6 months ago · edited

        AI acceleration for 3d upscaling

        Isn’t that not only similar to, but exactly what DLSS already is? A neural network that upscales games?

        • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 2 points · 6 months ago

          But instead of relying on the GPU to power it, a dedicated AI chip did the work. Like, it had its own distinct chip on the graphics card that would handle the upscaling.

          I forget who demoed it, and searching for anything related to “AI” and “upscaling” gets buried with just what they’re already doing.

          • barsoap@lemm.ee · 3 points · 6 months ago · edited

            That’s already the nvidia approach, upscaling runs on the tensor cores.

            And no, it's not something magical, it's just matrix math. AI workloads are lots of convolutions over gigantic, low-precision, floating-point matrices. Low precision, because neural networks are robust against random perturbation, and more rounding is exactly that: random perturbation. There's no point in spending electricity and heat on high precision if it doesn't make the output any better.

            The kicker? Those tensor cores are less complicated than ordinary GPU cores. For general-purpose hardware, and that also includes consumer-grade GPUs, it's way more sensible to make sure the ALUs can deal with 8-bit floats and leave everything else the same. That stuff is going to be standard by the next generation of even potatoes: every SoC with an integrated GPU has enough oomph to sensibly run reasonable inference loads. And by "reasonable" I mean actually quite big; as far as I'm aware, e.g. Firefox's built-in translation runs on the CPU, since those models are small enough.
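
            A minimal sketch of that low-precision point, assuming NumPy: quantize the weights to 8 bits, do the matmul, and the result barely moves compared with full float32.

            ```python
            # Low-precision matrix math: 8-bit weights vs. float32 reference (NumPy sketch).
            import numpy as np

            rng = np.random.default_rng(0)
            x = rng.standard_normal((1, 512)).astype(np.float32)    # "activations"
            w = rng.standard_normal((512, 512)).astype(np.float32)  # "weights"

            scale = np.abs(w).max() / 127.0                  # symmetric 8-bit quantization
            w_q = np.round(w / scale).astype(np.int8)

            y_fp32 = x @ w                                   # full-precision result
            y_int8 = (x @ w_q.astype(np.float32)) * scale    # dequantized low-precision result

            err = np.abs(y_fp32 - y_int8).max() / np.abs(y_fp32).max()
            print(f"max relative error: {err:.3%}")          # on the order of 1% or less
            ```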

            Nvidia OTOH is very much in the market for AI accelerators and figured it could corner the upscaling market and sell another new generation of cards by making their software rely on those cores even though it could run on the other cores. As AMD demonstrated, their stuff also runs on nvidia hardware.

            What's actually special sauce in that area are the RT cores, that is, accelerators for ray casting through BSP trees. That's indeed specialised hardware, but those things are nowhere near fast enough to compute enough rays for even remotely tolerable outputs, which is where all that upscaling/denoising comes into play.

  • TheEntity@lemmy.world · 30 points · 6 months ago

    And what do the companies take away from this? “Cool, we just won’t leave you any other options.”

    • Wooki@lemmy.world · 2 points, 1 down · 6 months ago · edited

      Plenty of companies offer sane, normal solutions and make bank in the process.

  • Cyborganism@lemmy.ca · 29 points, 1 down · 6 months ago

    I don’t mind the hardware. It can be useful.

    What I do mind is software running on my PC sending all my personal information, screenshots, and keystrokes to a corporation that will use it all for profit, building a user profile for targeted advertising that could potentially be used against me.

    • catloaf@lemm.ee · 1 point · 6 months ago

      I would pay less, and then either use it for dumb stuff or just not use it at all.

  • bitwolf@lemmy.one · 12 points · 6 months ago

    No, but I would pay good money for a freely programmable FPGA coprocessor.

    If the AI chip is implemented as one and is useful for other things, I'm sold.

    • profdc9@lemmy.world · 3 points · 6 months ago

      I think manufacturers need to get a lot more creative about simplified computing. The RPi Pico’s GPIO engine is powerful yet simple, and a good example of what is possible with some good application analysis and forethought.
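
      Assuming "GPIO engine" here refers to the RP2040's PIO (Programmable I/O) state machines, here is a minimal MicroPython sketch of the idea: a tiny four-instruction program that toggles the Pico's onboard LED (GPIO 25) with no CPU involvement once started.

      ```python
      # PIO sketch for the RP2040 (MicroPython's rp2 module).
      import time
      import rp2
      from machine import Pin

      @rp2.asm_pio(set_init=rp2.PIO.OUT_LOW)
      def toggle():
          set(pins, 1) [31]   # drive the pin high, then stall 31 extra cycles
          nop()        [31]
          set(pins, 0) [31]   # drive it low, stall again
          nop()        [31]

      # Run the program on state machine 0 at 2 kHz; 128 cycles per loop
      # gives roughly a 16 Hz toggle on GPIO 25 (the original Pico's LED).
      sm = rp2.StateMachine(0, toggle, freq=2000, set_base=Pin(25))
      sm.active(1)
      time.sleep(5)
      sm.active(0)
      ```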

      • bruhduh@lemmy.world · 1 point · 6 months ago

        I have a few Pi Picos but I didn't know about that. Can you please elaborate? I've been using them just like any other ESP32/STM32/ESP8266 I have.

      • JackbyDev@programming.dev · 1 point · 6 months ago

        Which part of the Pico are you referring to, specifically? I've never heard the term "GPIO engine" before. Is that sort of like the USB stack, but for GPIO?

  • Lost_My_Mind@lemmy.world · 12 points, 2 down · 6 months ago

    84% said no.

    16% punched the person asking them for suggesting such a practice. So they also said no. With their fist.

  • Sam_Bass@lemmy.world · 10 points · 6 months ago

    It's bad enough they shove it on you on some websites. Really not interested in being their lab rats.

  • tal@lemmy.today · 12 points, 3 down · 6 months ago · edited

    That’s kind of abstract. Like, nobody pays purely for hardware. They pay for the ability to run software.

    The real question is, would you pay $N to run software package X?

    Like, go back to 2000. If I say “would you pay $N for a parallel matrix math processing card”, most people are going to say “no”. If I say “would you pay $N to play Quake 2 at resolution X and fps Y and with nice smooth textures,” then it’s another story.

    I paid $1k for a fast GPU so that I could run Stable Diffusion quickly. If you asked me “would you pay $1k for an AI-processing card” and I had no idea what software would use it, I’d probably say “no” too.

    • Grimy@lemmy.world · 6 points, 1 down · 6 months ago

      Yup, the answer is going to change real fast when the next Oblivion with NPCs you can talk to needs this kind of hardware to run.

      • tal@lemmy.today · 3 points · 6 months ago · edited

        I’m still not sold that dynamic text generation is going to be the major near-term application for LLMs, much less in games. Like, don’t get me wrong, it’s impressive what they’ve done. But I’ve also found it to be the least-practically-useful of the LLM model categories. Like, you can make real, honest-to-God solid usable graphics with Stable Diffusion. You can do pretty impressive speech generation in TortoiseTTS. I imagine that someone will make a locally-runnable music LLM model and software at some point if they haven’t yet; I’m pretty impressed with what the online services do there. I think that there are a lot of neat applications for image recognition; the other day I wanted to identify a tree and seedpod. Someone hasn’t built software to do that yet (that I’m aware of), but I’m sure that they will; the ability to map images back to text is pretty impressive. I’m also amazed by the AI image upscaling that Stable Diffusion can do, and I suspect that there’s still room for a lot of improvement there, as that’s not the main goal of Stable Diffusion. And once someone has done a good job of building a bunch of annotated 3d models, I think that there’s a whole new world of 3d.

        I will bet that before we see that becoming the norm in games, we'll see LLMs regularly used for either pre-generated or in-game speech synthesis, so that characters say text that might be procedurally generated and isn't just static pre-recorded samples, but isn't necessarily generated by an LLM. Like, it's not practical to have a human voice actor cover all possible phrases with static recorded speech that one might want an in-game character to speak.

  • t00l@lemmy.world · 9 points · 6 months ago

    They want you to buy the hardware and pay for the additional energy costs so they can deliver Clippy 2.0, the watching-you-wank edition.

  • OhmsLawn@lemmy.world · 9 points · 6 months ago

    I honestly have no idea what AI does to a processor, and would therefore not pay extra for the badge.

    If it provided a significant speed improvement or something, then yeah, sure. Nobody has really communicated to me what the benefit is. It all seems like hand waving.

    • originalucifer@moist.catsweat.com · 10 points · 6 months ago

      What they mean is that they are putting in dedicated processors or other hardware just to run an LLM. It doesn't speed up anything other than the faux-AI tool they are implementing.

      LLMs require a ton of math that is better suited to video processors than the general-purpose CPU on most machines.

    • tal@lemmy.today · 3 points, 1 down · 6 months ago · edited

      I honestly have no idea what AI does to a processor

      Parallel processing capability. CPUs historically worked with mostly-non-massively-parallelizable tasks; maybe you’d use a GPU if you wanted that.

      I mean, that’s not necessarily “AI” as such, but LLMs are a neat application that uses them.

      On-CPU video acceleration does parallel processing too.

      Software’s going to have to parallelize if it wants to get much by way of performance improvements, anyway. We haven’t been seeing rapid exponential growth in serial computation speed since the early 2000s. But we can get more parallel compute capacity.
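
      As a rough sketch of that serial-vs-parallel gap, compare the same dot product done one element at a time in Python versus handed to NumPy's vectorized (SIMD/multi-threaded BLAS) routine; exact numbers vary by machine:

      ```python
      # Serial Python loop vs. vectorized NumPy dot product (illustrative timing).
      import time
      import numpy as np

      a = np.random.rand(2_000_000)
      b = np.random.rand(2_000_000)

      t0 = time.perf_counter()
      serial = sum(x * y for x, y in zip(a, b))   # one multiply-add at a time
      t1 = time.perf_counter()
      vector = np.dot(a, b)                       # dispatched to parallel/SIMD code
      t2 = time.perf_counter()

      print(f"serial loop: {t1 - t0:.3f} s")
      print(f"np.dot:      {t2 - t1:.3f} s")      # typically orders of magnitude faster
      ```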

  • snek_boi@lemmy.ml · 9 points, 1 down · 6 months ago · edited

    I agree that we shouldn't jump immediately to AI-enhancing everything. However, this survey is riddled with problems, from selection bias to threats to external validity. Heck, even internal validity is a problem here! How does the survey account for social desirability bias, sunk cost fallacy, and anchoring bias? I'm so sorry if this sounds brutal or unfair, but I just hope to see fewer validity threats. I think I'd be less frustrated if the title could be something like "TechPowerUp survey shows 84% of 22,000 respondents don't want AI-enhanced hardware".