Investors are barely breaking even, as the venture is hardly turning a profit due to a chip shortage, divided interests, and more.

… OpenAI has already seen a $540 million loss since debuting ChatGPT.

… OpenAI spends approximately $700,000 per day to run the tool.


⚠️ First off, apologies as I didn’t cross-check this. Take it w/ a grain of salt.


This piece of news, if true, goes some way toward explaining why OpenAI has been coming up w/ weird schemes for making $$$, like entering the content moderation space.

On a similar note, I wonder if this has been a key driver (behind the scenes) of the recent investment in open source AI initiatives (Haidra comes to mind?). Perhaps some corporations that haven’t got enough $$$ to fund their own dedicated research group are looking to benefit from an open source model?

  • Clymene@lemmy.ml · 1 year ago

    Too much is made of the shrinking user base. I’m sure they’ll come back with a vengeance come the start of the school year in the northern hemisphere.

    Also, maybe a tool like this shouldn’t be privately funded? Most of the technology is based on university-funded research we all paid for. mRNA vaccine research was similarly funded with public money, mostly at universities, and now we have to pay some private company to sell it back to us. How is that efficient? AI should be common property.

    • Uranium3006@kbin.social · 1 year ago

      Honestly, I’d rather have open source AI I can run locally. Even for something like GPT-4, an enterprise-scale operation could afford the hardware.

    • Ubermeisters@lemmy.zip · 1 year ago

      If it’s made from all of us it should be free for all of us.

      I’m fine with these researchers going out and scraping the social networks to train models; it’s incredibly advantageous to society in general. But it’s gotta be crystal-clear transparency and it’s gotta be limitlessly free to all who want it.

      It’s the only way that any of this won’t result in another massive boundary between the 1% and us pod-living grunts. It’s already a divisively powerful technology when harnessed adversarially; that power is reduced when everyone has access to it as well.

      • TehPers@beehaw.org · 1 year ago

        If you look at how much they spend per day (the poster quoted $700,000 daily but said it was unverified), how would it make any sense to provide the service for free? I won’t argue for/against releasing the model to the public, since honestly that argument can go both ways and I don’t think it would make much of a difference anyway, except to benefit their competitors (other massive companies).

        However, let’s assume they did release it publicly: what use would that be for the smaller business or individual? Running these models takes heavy and very expensive hardware. It’s not like buying a rack and building a computer; these models are huge. Realistically, they can’t provide that as a free service, they’d fail as a company almost immediately. Most businesses can’t afford to run these models themselves; the upfront and maintenance costs would obliterate them. Providing it as a service like they have been means they recoup some of the cost of running the models, while users can actually afford to use these models without needing to maintain the hardware themselves.
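
        To put rough numbers on “these models are huge” (a back-of-envelope sketch; the 175B parameter count below is an illustrative GPT-3-class figure, not anything OpenAI has confirmed for ChatGPT):

        # Python: memory needed just to hold a dense model's weights in fp16.
        import math

        params = 175e9               # illustrative parameter count (GPT-3-class)
        bytes_per_param = 2          # fp16
        weights_gb = params * bytes_per_param / 1e9
        gpus = math.ceil(weights_gb / 80)   # assuming 80 GB per data-center GPU
        print(f"~{weights_gb:.0f} GB of weights -> at least {gpus} x 80 GB GPUs")
        # ~350 GB of weights -> at least 5 GPUs, before activations, batching, or redundancy.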

        • Clymene@lemmy.ml · 1 year ago

          Less than a million dollars a day for everyone in the whole world who wants to use AI right now? That’s peanuts. A single city bus costs $500–800k to buy. Even if costs go up to several tens of millions a day for access for the whole world, that’s incredibly affordable.

          It’s crazy that something so useful and so cheap to run can’t be sustained in the current system. This seems like an argument against a market based solution to AI.

          • TehPers@beehaw.org · 1 year ago

            Less than a million dollars a day for everyone in the whole world who wants to use AI right now?

            You’re ignoring the fact that the cost scales with usage. Increasing its availability will also increase the cost, hardware requirements (which can’t really scale since there’s a shortage), and environmental cost due to power usage.

            • Clymene@lemmy.ml · 1 year ago

              No, I am not ignoring that. I specifically said:

              Even if costs go up to several tens of millions a day for access for the whole world, that’s incredibly affordable.

              With how many people are already using AI, it’s frankly mind boggling that they’re only losing $700k a day.

              You’re also ignoring the fact that costs don’t scale proportionally with usage. Infrastructure and labor can be amortized over a greater user base. And these services will get cheaper to run per capita as time goes on and technology improves.

              Finally, there are positive economic externalities to public AI availability. Imagine the improvements to the economy, education and health if everyone in the world had free access to high quality AI in their native language, no matter how poor or how remote. Some things, like schools, roads and healthcare, are not ideally provisioned under a free market. AI is looking to be another.

              • TehPers@beehaw.org · 1 year ago

                Finally, there are positive economic externalities to public AI availability.

                There are positive economic externalities to public everything availability. We don’t live in this kind of world though, someone will always try to claim a larger share due to human nature. That being said, I’m not really interested in arguing about the political feasibility (or lack thereof) of having every resource being public.

                With how many people are already using AI, it’s frankly mind boggling that they’re only losing $700k a day.

                There are significant throttles in place for people who are using LLMs (at least GPT-based ones), and there’s also a cost people pay to use these LLMs. Sure you can go use ChatGPT for free, but the APIs cost real money, they aren’t free to use. What you’re seeing is the money they lost after all the money they made as well.

                You’re also ignoring the fact that costs don’t scale proportionally with usage. Infrastructure and labor can be amortized over a greater user base. And these services will get cheaper to run per capita as time goes on and technology improves.

                I don’t disagree that the services will get cheaper and that costs don’t scale proportionally. You’re most likely right - generally speaking, that’s the case. What you’re missing though is that there is an extreme shortage of components. Scaling in this manner only works if you actually have the means to scale. As things stand, companies are struggling to get their hands on the GPUs needed for inference.

                • Clymene@lemmy.ml · 1 year ago

                  There are positive economic externalities to public everything availability. We don’t live in this kind of world though, someone will always try to claim a larger share due to human nature.

                  Saying “Things are inevitably bad because of human nature” is just very weird, since we obviously do have good policies and we try to solve other problems like crime and poverty. It sounds like you already agree that this is good policy? You’re just saying it’s not politically feasible? OK, sure, we probably don’t disagree then.

                  That being said, I’m not really interested in arguing about the political feasibility (or lack thereof) of having every resource being public.

                  I am obviously NOT arguing that every resource should be public. This discussion is about AI, which was publicly funded, trained on public data, and is backed by public research. This sleight of hand to make my position sound extreme is, frankly, intellectually dishonest.

                  there’s also a cost people pay to use these LLMs.

                  OK, keep the premium subscription going then.

                  What you’re missing though is that there is an extreme shortage of components.

                  There’s a shortage, but it’s not “extreme”. ChatGPT is running fine. I can use it anytime I want, instantly. You’d be laughed out of the room if you told AI researchers that ChatGPT can’t scale because we’re running out of GPUs. You seem to be looking for reasons to be against this, but these reasons don’t make sense to me, especially since this particular problem would exist whether it’s publicly owned or privately owned.

  • j4k3@lemmy.world · 1 year ago

    OpenAI died the moment Meta’s Llama model weights were replicated as completely open source. The outcome is guaranteed. It does not matter how much better the enormous proprietary model can be; people will never be okay with the level of intrusive data mining required for OpenAI or Google’s business model. Personal AI tech must be open source and transparent, with offline execution. AI is the framework of a new digital economy, not the product.

    • TheEntity@kbin.social · 1 year ago

      people will never be okay with the level of intrusive data mining required for OpenAI or Google’s business model

      Where do you meet these people? I need more of such people in my life.

    • griD@feddit.de · 1 year ago

      AI is the framework of a new digital economy, not the product.

      That is one interesting sentence. Thanks.

    • krellor@kbin.social · 1 year ago

      I don’t think it’s so much that the Meta model was replicated as that they fully open-sourced it with a license for research and commercial use.

      I actually think the market demand will be fairly small for fully offline AI. The largest potential customers might be governments that require fully offline hosting, and there is a small group of companies servicing that niche. But even government customers who require that their data is segmented are simply having enclaves set up by the big cloud platforms, where they guarantee that inputted data isn’t fed into the training process and doesn’t leave the customer environment.

      I fully support folks who sustain open source AI frameworks, but in terms of commercial customers that will drive industry trends with dollars, I expect there will be demand for hosted solutions that use proprietary models.

        • AngrilyEatingMuffins@kbin.social · 1 year ago

          The courts aren’t going to side with copyright holders. As much as the US loves its ridiculous copyright laws, it loves profit and being on the bleeding edge of tech more. There is absolutely ZERO chance that the United States will let China, which does not care about IP, carry the keys to the AI kingdom.

    • Tigbitties@kbin.social · 1 year ago

      $700k/day. That’s roughly $255 million per year. There are 360 million paid seats of Microsoft 365, so they’d have to raise their prices by about $0.71 per seat per year to cover the cost. No one would notice.
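
      Spelling out that arithmetic (both inputs are the thread’s unverified figures, not confirmed numbers):

      # Python: per-seat cost if the $700k/day were spread across Microsoft 365 seats.
      daily_cost = 700_000
      annual_cost = daily_cost * 365           # about $255.5M per year
      paid_seats = 360_000_000                 # the commenter's Microsoft 365 seat count
      per_seat = annual_cost / paid_seats
      print(f"${annual_cost / 1e6:.1f}M per year, about ${per_seat:.2f} per seat per year")
      # -> $255.5M per year, about $0.71 per seat per year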

    • The Prompt:
      How OpenAI can generate more revenue?

      The Response:
      OpenAI, the organization behind the development of AI models like GPT-3, has a few strategies to generate revenue:

      1. Licensing and Subscription Models: OpenAI can license its AI models and technologies to businesses, developers, and organizations for various applications. This could involve offering different pricing tiers based on usage, features, and support.

      2. API Access: OpenAI can offer access to its AI models through APIs (Application Programming Interfaces). Developers and companies can integrate these APIs into their products, applications, or services, paying for the amount of usage.

      3. Customized Solutions: OpenAI can collaborate with specific industries or companies to develop customized AI solutions tailored to their needs. These solutions can range from customer support bots to content creation tools.

      4. Premium Features: OpenAI could offer advanced features, improved performance, or specialized functionality as part of premium packages, enticing users to upgrade their subscriptions.

      5. Partnerships and Collaborations: OpenAI can collaborate with other tech companies, research institutions, or industries to jointly develop and commercialize AI applications.

      6. Consulting and Services: OpenAI could provide consulting services, training, and expertise in AI technology to help organizations integrate AI into their operations effectively.

      7. Training and Workshops: OpenAI can offer training programs, workshops, and educational resources to help individuals and businesses better understand and utilize AI.

      8. Data Services: OpenAI can offer datasets, annotations, and data-related services to researchers, businesses, and developers working on machine learning projects.

      9. Vertical-Specific Solutions: Developing AI solutions tailored for specific industries, such as healthcare, finance, or marketing, and offering them as industry-specific packages.

      10. Community Contributions: OpenAI can build a community around its AI models and offer a platform for users to share and monetize their own creations, benefiting from the broader ecosystem.

      It’s important to note that OpenAI’s approach to generating revenue will likely be influenced by factors such as market demand, ethical considerations, user feedback, and the evolving landscape of AI technologies and regulations. OpenAI aims to strike a balance between generating revenue and advancing its mission of ensuring AI benefits all of humanity.

  • saud@lemmy.ml · 1 year ago

    I really don’t understand how this is possible when Microsoft has invested billions into OpenAI

    • atticus88th@lemmy.world · 1 year ago

      All it takes is a couple of dudes buying a couple of yachts, private planes, maybe another home or two. And poof. It’s gone.

    • Uncle_Bagel@midwest.social · 1 year ago

      Burning through billions of investors’ money isn’t the same as being profitable. The Silicon Valley gravy train is over, and investors are actually demanding to start seeing returns on their investments.

      • Peanut@sopuli.xyz · 1 year ago

        And you are the only voice of reason in this thread.

        “Make up shit that makes OpenAI look bad” is like tech-article gold right now. The number of times I’m seeing “look what ChatGPT said!!!”, as if prompter intention were completely irrelevant to model output.

        Objectivity doesn’t exist anymore. It’s just really popular to talk shit about ai right now.

        Like when Altman effectively said “we should only regulate models as big or bigger than ours, we should not regulate small independent or open source models and businesses” to Congress, which was followed by endless articles saying “Sam Altman wants to regulate open source and stamp out smaller competition!”

        I have no love for how unopen they’ve become, but at least align criticisms with reality please.

  • donuts@kbin.social · 1 year ago

    AI as a business is already running on fumes, and it’s going to become even more expensive once intellectual property law catches up to them. We can only hope that the AI bubble bursting doesn’t take the entire market economy down with it…

      • donuts@kbin.social · 1 year ago

        I mean, I get you, but personally I don’t really like the idea of millions of innocent people losing their homes and most of their savings because some fucking dweebs decided to put all of our collective wealth in legally dubious automatic junk “content” generators. I’ve lived through enough crashes to know that it’s never the big guys that get fucked when everything goes tits up, it’s us, our parents, our grandparents, etc.

        • borlax@lemmy.borlax.com · 1 year ago

          Yeah status quo is the only reason to not throw caution to the wind and burn the whole thing down. It’s why nothing will ever get better.

    • 👁️👄👁️@lemm.ee · 1 year ago

      Well, it doesn’t help that ChatGPT is unoptimized as fuck, with something like 185B parameters for 3.5 and somewhere in the trillions for 4.
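
      For a rough sense of why parameter count drives running costs, here is a sketch using the common ~2 FLOPs per parameter per generated token rule of thumb for a dense transformer; both sizes are the rumored figures, not confirmed, and GPT-4 may not even be a single dense model:

      # Python: relative inference compute per generated token for two dense model sizes.
      def flops_per_token(params: float) -> float:
          return 2.0 * params          # rule-of-thumb forward-pass cost for a dense transformer

      gpt35_rumored = 185e9            # the comment's figure for GPT-3.5
      gpt4_rumored = 1.8e12            # a commonly rumored (unconfirmed) GPT-4 size
      ratio = flops_per_token(gpt4_rumored) / flops_per_token(gpt35_rumored)
      print(f"GPT-4-sized model costs ~{ratio:.0f}x the compute per token")   # roughly 10x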

    • Fat Tony@discuss.online · 1 year ago

      I foresee that in the future we use A.I. to start a business, run the business and also declare bankruptcy. All on the same day.

  • DefinitelyNotAPhone [he/him]@hexbear.net · 1 year ago

    Company whose business model is based entirely on running an enormously massive and expensive LLM and then serving content with it publicly for free with no greater ideas on actually turning that into a business is going under. In other news, water still wet.

    I’ll admit I thought the AI bubble was going to last longer than a few months (and inevitably FAANG will probably artificially extend it until even they have to admit there’s not a ton of productive real world uses for it), but I suppose late stage capitalism has to speedrun the boom-bust cycle as it gets increasingly desperate for profit.

    • RubberDucky@programming.dev · 1 year ago

      And instead of trying to make it use fewer resources to run, like Llama tries to, OpenAI just makes a new GPT that needs even more resources.

      • Durotar@lemmy.ml · 1 year ago

        OpenAI just makes a new GPT that needs even more resources

        If they have investors who are paying for that, I see no problem. Operating at a loss isn’t newsworthy nowadays; this is the new reality.

    • somename [she/her]@hexbear.net · 1 year ago

      Well, AI is still going to be a buzzword for capitalists to throw around, as it does actually have uses and big profit potential in certain fields. Just, like, only certain fields. Then the grifters will continue to try to extrapolate that success to increasingly far-removed use cases, with increasingly stupid promises.

    • Kayn@dormi.zone · 1 year ago

      It has found a few legitimate uses though, hasn’t it? GitHub Copilot comes to mind, although the legal implications of it are up in the air.

  • 👁️👄👁️@lemm.ee · 1 year ago

    They also didn’t design ChatGPT to be power efficient at all, so that’s bloating up their operating costs a ton.

  • boyi@lemmy.sdf.org · 1 year ago

    Sorry to say, but I would take this with a grain of salt. Not making profits is part of the business model of these pioneering companies. Google, Amazon, Uber, etc. were in the negatives for many years; they absorbed the losses in order to become the dominant brands, to the point where users end up dependent on them. At that point they start charging exorbitantly and forcibly adding unneeded features that exert more control over their users, and there’s nothing users can do but pay, for the simple fact that they can’t do without them.

    • Sinonatrix [comrade/them]@hexbear.net · 1 year ago

      be """worth""" $2.7T

      unable to afford the world’s most hyped research project despite it burning less than $1B

      Is this IBMification, or whatever tech bros are calling late capitalism now?

  • roguetrick@kbin.social · 1 year ago

    High interest rates, baby. I noted this was happening when people were complaining about lowered quality because they were using less resource-intensive operations.