Tech vendors have also been falling over each other to tell the world how they are including GenAI in their offerings as the leading AI companies attract feverish attention from investors.
Because you can’t hype it up for investors if you call it what it actually is: fancy autocomplete. And don’t get me wrong, I love me some of the tools out there. But this stuff is absolutely being way overhyped.
It’s good to go into this stuff with realistic expectations. Will it do all your work? Absolutely not. But it will do a lot of the heavy lifting so you can get more done on the things that actually require your attention.
The level of “sky is falling and we’re all going to be enslaved by AI” is literal bullshit to sell more stocks and create a bubble that will absolutely pop.
Exactly. They’re tools like any other, and like any other tool, they won’t do the whole job for you; you need to learn how to use them well to get the most out of them.
It’s neither magic, like the AI/tech bros would like you to think, nor the harbinger of doom, some evil thing that needs to be squashed, like the anti-AI bros want you to think.
Anecdotally, with GenAI I’ve been able to get 30 billable hours of work done in about 12 this week. I had to break down a detailed 320-page document. The thing is, I am good enough at my job that I can do that on my own. The difference with AI is that the final product is neater, and I’m not as mentally drained or carpal-tunneled afterwards. Bottom-up automation, for the worker’s benefit only, is the only kind I like.
I’ve seen a junior using ChatGPT to do the job while not really understanding what was going on, and in the end it was a big mess that didn’t work. After I told him to read a “for dummies” book and he started to think for himself, he got something decent out of it. It’s no replacement for skill and thinking.
Exactly what I expected. It will only get worse, since those juniors don’t know what good or bad code is, for example. So they just assume whatever ChatGPT says is correct. They have no benchmark to compare against.
Had a very similar experience in pretty niche use cases. LLMs are great if you understand what you are dealing with, but they are no magical automation tool (at least in somewhat niche, semi-technical use cases where seemingly small errors can have serious consequences).
That’s been my experience so far: it’s largely useless for knowledge-based stuff.
In programming, you can have it take pseudocode and output actionable code in more tedious languages, but you have to audit it. Ultimately I find traditional autocompletion just as useful.
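For example (a made-up sketch, not anything from the article): hand it pseudocode like “open sales.csv, skip the header, sum the third column, print the total” and it hands back something like the Java below. It looks plausible, but you still have to audit it for the boring failure modes (quoted commas, blank lines, weird decimal formats). The file name and column index are just placeholders.

```java
// Pseudocode given to the model (hypothetical):
//   open "sales.csv", skip the header row,
//   sum the third column, print the total
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class SumColumn {
    public static void main(String[] args) throws IOException {
        double total = 0.0;
        try (BufferedReader reader = Files.newBufferedReader(Paths.get("sales.csv"))) {
            String line = reader.readLine(); // consume the header row
            while ((line = reader.readLine()) != null) {
                if (line.isBlank()) {
                    continue; // tolerate trailing blank lines
                }
                // Naive split: breaks on quoted commas, which is exactly
                // the kind of thing you have to catch when auditing
                String[] fields = line.split(",");
                total += Double.parseDouble(fields[2]); // third column
            }
        }
        System.out.println("Total: " + total);
    }
}
```

It saves some typing, sure, but for boilerplate this mechanical, a decent IDE’s autocompletion gets you most of the way there anyway.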
I definitely see how it helps cheat on homework, and how it extends “stock photography” to the point of really limiting the market for photographers or artists producing bland business assets, though.
I see how people find it useful for their “professional” communications, but I hate it because people who used to be nice and to the point are starting to explode their messages into a big LLM mess.
Does the author think LLMs are Artificial General Intelligence? Because they’re definitely not.
AGI is, at minimum, capable of taking input and giving output from any domain that a human can, which no generative neural network is currently capable of. For one thing, generative networks are incapable of reciting facts reliably, which immediately disqualifies them.
At a quick glance I’m not seeing anywhere in the article that they think that’s what this is… If you’re responding to them calling it “GenAI”, that’s a shortening of “Generative AI”, not “General AI”.
Yes; I misunderstood what the author meant. Ty for letting me know.
For one thing, generative networks are incapable of reciting facts reliably
Neither are humans, for what it’s worth…
It’s interesting: when you ask an LLM something it doesn’t know, it will tend to just spew out words that sound like they make sense, but are wrong.
So it’s much more useful to have a human who will admit they don’t have an answer. Or the human acts like the LLM, spewing stupid stuff that sounds right, and gets promoted instead.
I don’t know how many times industries are going to be tricked by tech bros middlemanning their “brand new” ideas into already functioning systems with the claim that if you buy it, it’s gonna completely revolutionize the process. But I know it’s gonna happen again at least once more. So much of it is digital snake oil.
The problem is that our upper management is paranoid about time theft (even though its opposite, wage theft, costs the economy more money than all petty crime combined). And out of sheer paranoia, they’re going to be susceptible to technological snake oil, especially the sort that tightens the collar on labor and makes workers that much more miserable.
Yeah, because they are WELL AWARE of how guilty they are of wage theft. Accuse your enemy of that which you are guilty of.
Wahhh, tech bros!!! It’s all the tech bros’ fault!!! Not the pieces of shit that misuse and abuse it, but those that made it!!!
Let me guess, you also want to sue car manufacturers for deaths caused by the things they created. Goddamn auto bros.
That’s not the point. The point is that our industrialists and upper management have tipped their hands. They have demonstrated beyond doubt that they’d totally replace their workforce if they could, even when doing so means families or entire neighborhoods go hungry or are driven out of their homes.
And that includes creatives and experts. It even eventually includes, with a nod to “The Brain Center at Whipple’s,” the upper managers who aren’t principal shareholders. The massive population correction at the end of capitalism is revealed at last.
Your own job is forfeit as soon as it becomes cheaper over a few years to automate your position.
industrialists and upper management have tipped their hands
you needed AI-capitalists to show you this???
As the Twilight Zone episode shows, this has been a known issue for a while. But we haven’t yet seized the means of production even a century since the Great Depression.
This title is absurd and only trying to push a narrative. Successful utilization of AI doesn’t mean replacing workers but supplementing them. Yes, the majority of AI applications are round pegs pushed through square holes riding the hype, but there is irrefutable evidence of positive impact.
That doesn’t get clicks tho so whatever I guess
This is the best summary I could come up with:
Despite some astronomical vendor valuations and predictions that it will transform society, the impact of GenAI in the workplace has yet to materialize, according to a recent global survey.
“The vast majority of GenAI uses have been as a personal productivity tool, helping with research, speeding up the creation of documents or marketing literature, and supporting office admin,” according to the report.
Tech vendors have also been falling over each other to tell the world how they are including GenAI in their offerings as the leading AI companies attract feverish attention from investors.
Microsoft, for example, continues to run proof of concept programs to convince customers of the productivity benefits of Copilot.
The Nash Squared survey found that companies had yet to prove the business case for mass investment in GenAI, according to 54 percent of respondents.
In a prepared statement, Bev White, CEO of Nash Squared, said: “Although the ‘replace jobs’ impact of GenAI is headline-grabbing news, in fact the Pulse Survey indicates that organizations with company-wide implementations of GenAI are in fact more likely to be increasing tech headcount in the next year than the average.”
The original article contains 392 words, the summary contains 188 words. Saved 52%. I’m a bot and I’m open source!
Good AI
this is the only thing AIs can do for now…
What is buzz, what is biz and what the fuck is denting a job?
Experts - AI will come for the majority of our jobs over the next decade or two. Three at the outside.
Journalists - hey we’ve had this AI thing for five minutes and the experts were clearly wrong.