normalize calling what AIs do “hallucinating”
An AI chatbot hallucinated nonexistent laws
that is the actual technical term for AIs doing that.
I think some people are trying to make confabulating the technical term.
that’s stupid.
normalize calling what AIs do “hallucinating”
the people using AI to break rules have already been “hallucinating” various ways to claim rules and laws don’t exist or don’t apply to them for centuries.
“once-in-a-generation opportunity to more effectively deliver for New Yorkers”
twelve words that say absolutely nothing at all. this is why these idiots mistake LLMs for sentient: the models use the same empty language they do.
ah, wonderful! this really brings together two of my favorite things in the world, AI and landlords!
hope we can burn the two of them together
enough of this “AI” nonsense. the only “ai” that will solve all of our issues is 爱 (ài, “love”)
ai am picking up a gun
There’s a reason frame breaking was once a capital offense in Britain.
Clown mayor for a clown city.
“I don’t know why it’s doing this, we trained it extensively on /r/legalAdvice!”
you can’t have interacted with a gippity for five minutes and still think it would work in this application
Blueliani
I don’t know shit about AI, but I strongly feel like we are steps away from an irreconcilable tautology. This probably makes 0 sense but if you just keep growing and growing you’re going to reach a limit, especially considering how much you’ve already exploited to make said AI
i think based on the way things work right now you are right. There will have to be major advances in software and hardware for this to be more than a pretty slow, massively expensive, error prone chat bot.
And I’m supposed to be mad that the Chinese are punishing scamming businesses with their social credit scores
All hail king Ludd
Imagine getting locked out of your apartment because your AI chatbot landlord noticed your rent is 0.01 seconds late.