Michael Cohen, the former lawyer for Donald Trump, admitted to citing fake, AI-generated court cases in a legal document that wound up in front of a federal judge, as reported earlier by The New York Times. A filing unsealed on Friday says Cohen used Google’s Bard to perform research after mistaking it for “a super-charged search engine” rather than an AI chatbot.

I… don’t even. I lack the words.

  • fine_sandy_bottom@lemmy.dbzer0.com · 11 months ago

    I’m genuinely amazed at the calibre of people running the US. Even more so that apparently half the nation thinks it’s the best choice.

    • jonne@infosec.pub · 11 months ago

      Michael Cohen was working for Trump precisely because he couldn’t get a proper lawyer job elsewhere. Good lawyers will steer clear of a client that will ask them to commit crimes for them.

    • Carighan Maconar@lemmy.world (OP) · 11 months ago

      Yeah, the mental acumen on display is truly terrifying. Just not in the way Cohen would like that sentence to be understood. 😅

    • FlashMobOfOne@lemmy.world · 11 months ago

      I’m not.

      We’ve seen little other than the loss of economic and social liberty in the last 40 years.

      99% of voters still choose the same two parties in charge of it like clockwork.

      Instead of amazement, I feel cynical resignation.

      • nilloc@discuss.tchncs.de · 11 months ago

        The impending doom of the fascist right is the only thing keeping me voting for the Dems. If we had ranked-choice voting I’d be so much happier voting every election.

        • FlashMobOfOne@lemmy.world · 11 months ago

          That’s the thing. I look around and have no reason to think fascism is impending. It’s here.

          Women are getting jailed for miscarriage, cops are hanging out lackadaisically on their phones outside a school shooting with zero consequences, homelessness jumped 12% in one year, and the big issue is sending hundreds of billions more off to other countries’ wars.

          The only plus is that things have gotten so bad it’s forced unions to become more aggressive and unyielding, which has effected more positive change for workers than the ruling parties have achieved in decades.

          • TheaoneAndOnly27@kbin.social · 11 months ago

            I don’t know if you’re into podcasts, but Adam Conover’s podcast Factually is really great. There’s an episode called “What the Left Gets Wrong About the Right”, and it dives into how the right is primarily a reactionary movement against social and economic progress, trying to maintain power for the owning and ruling class. It’s a really great episode and it hits on a lot of the same points you were making about the unions. It’s definitely worth checking out if you’ve got like an hour to kill. It’s super dope.

          • nilloc@discuss.tchncs.de · 11 months ago

            I didn’t say it’s impending. The doom is what’s impending from the rise of fascism. You’re right though, things are already fucked badly. I’m just voting to keep democracy around long enough to have a shot at fixing it someday, for my child.

            There are days when it feels like arms are going to be the only way out and that’ll be a fucking nightmare that will ruin millions more lives.

          • rayyy@lemmy.world · 11 months ago

            “the big issue is sending hundreds of billions more off to other countries’ wars”

            …and the income inequality between billionaires and working America.

            • Snoozemumrik@lemmy.world · 11 months ago

              Sure, you can argue she got arrested for “abuse of a corpse”, but

              In September, when Watts went to the hospital in pain and passing large blood clots, doctors told her that despite some fetal cardiac activity, her roughly 22-week pregnancy was not viable. She was in and out of the hospital over the next three days, including a lengthy wait for a hospital ethics panel to determine whether her preterm pregnancy, which was on the borderline of Ohio’s abortion limit, could be induced without legal liability for the doctors. Watts eventually went home against medical advice and experienced the miscarriage on the toilet.

              • pearsaltchocolatebar@discuss.online · 11 months ago

                You’re leaving out the part where she tried to shove it down the toilet, then left it there for an extended period of time when that didn’t work.

                • LemmysMum@lemmy.world · 11 months ago

                  Yes, people afraid of going to jail over a bodily function will behave in unpredictable ways.

    • littleblue✨@lemmy.world · 11 months ago

      If you lived here, you might begin to understand the level of nationally fucked the literate half is aware of daily. Then again, you seem like a decent person, so I wouldn’t wish that on you. 😅

  • CommanderCloon@lemmy.ml · 11 months ago

    That’s the second time a lawyer has made this mistake, though the previous case wasn’t at such a high level.

    • huginn@feddit.it · 11 months ago

      Not even close to the second time. It’s happening constantly but is getting missed.

      Too many people think LLMs are accurate.

      • PriorityMotif@lemmy.world · 11 months ago

        The problem is that LLM answers like this will find their way onto search engines like Google. Then it will be even more difficult to find real answers to questions.

        • ghurab@lemmy.world · 11 months ago

          Some LLMs are already generating answers based on other LLM-generated content. We’ve gone full circle.

          I was using Phind to get some information about e-drum sensors (not the intended use case, but I was just messing around), and one of the sources was a very obvious AI-generated article from a content mill.

          Skynet is going to be so inbred

          • huginn@feddit.it · 11 months ago

            Model collapse is going to be a big deal, and it doesn’t take much poisoned content to cause it.
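            The feedback loop behind model collapse is easy to see in a toy sketch. This is a hypothetical, heavily simplified stand-in (a "model" that only memorizes token frequencies, nothing like real LLM training), but it shows the core mechanism: once a rare token misses one sampling round, it can never come back, so diversity decays generation over generation.

```python
import random
from collections import Counter

def train(corpus):
    """'Train' by estimating token probabilities from counts."""
    counts = Counter(corpus)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def generate(model, n, rng):
    """Sample an n-token corpus from the learned distribution."""
    toks = list(model)
    weights = [model[t] for t in toks]
    return rng.choices(toks, weights=weights, k=n)

rng = random.Random(0)
vocab = list(range(20))
true_probs = [1 / (k + 1) for k in vocab]  # Zipf-ish: some tokens are rare
corpus = rng.choices(vocab, weights=true_probs, k=100)  # "human" data

for generation in range(50):
    model = train(corpus)                # fit the current corpus
    corpus = generate(model, 100, rng)   # next generation trains on model output

# Tokens absent from one corpus get probability zero forever after.
print(f"tokens surviving after 50 generations: {len(set(corpus))} of {len(vocab)}")
```

            Running it, the surviving vocabulary shrinks well below the original 20 tokens: the tail of the distribution is gone, and no amount of further self-training brings it back.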

        • huginn@feddit.it · 11 months ago

          Have found, not will find.

          There are so many spam sites with LLM content.

    • FlashMobOfOne@lemmy.world · 11 months ago

      I work for a law firm, and yeah, this happens a lot. The stupidity and laziness of our clients’ in-house attorneys is making us a lot of money.

      • AA5B@lemmy.world · 11 months ago

        Why is there no automated check for the cases referenced in a filing, or required links? It would be trivial to require a clear format or uniform cross-references, and this looks like an easy niche for automation to improve the judicial system. I understand that you couldn’t automatically interpret those cases or their relevance, but an existence check: links, or it doesn’t count.

        I assume that right now it doesn’t happen unless the other side pays a paralegal for a few hours of research.
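        The existence check described above could be sketched in a few lines: pull reporter-style citations out of a filing with a regex and look each one up. This is purely illustrative; `KNOWN_CASES` is a hypothetical stand-in for a real case-law database, and the regex covers only a couple of common federal reporter formats.

```python
import re

# Matches citations like "410 U.S. 113" or "999 F.3d 999"
# (volume, reporter, page). Real citation grammars are far richer.
CITATION_RE = re.compile(r"\b(\d+)\s+(U\.S\.|F\.\d*d|F\. Supp\.(?: \d+d)?)\s+(\d+)\b")

KNOWN_CASES = {  # hypothetical stand-in for a real citation database
    ("410", "U.S.", "113"),   # Roe v. Wade
    ("347", "U.S.", "483"),   # Brown v. Board of Education
}

def check_filing(text):
    """Return (citation, exists) for every citation found in the text."""
    results = []
    for m in CITATION_RE.finditer(text):
        cite = m.groups()
        results.append((" ".join(cite), cite in KNOWN_CASES))
    return results

filing = "As held in 410 U.S. 113 and the fabricated 999 F.3d 999 ..."
for cite, exists in check_filing(filing):
    print(f"{cite}: {'found' if exists else 'NOT FOUND -- verify manually'}")
```

        A real system would query something like a court records API instead of a hard-coded set, but even this level of check would have flagged Cohen’s made-up citations.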

        • ridethisbike@lemmy.world · 11 months ago

          I think the issue is we’re still in pretty uncharted territory here. It’ll take time for stuff like that to become the norm. That said, the lawyers should be doing those kinds of checks anyway. They’re idiots if they don’t.

  • CareHare@sh.itjust.works · 11 months ago

    This is what you get when the political system favours lies over truth.

    The more these people lie and get away with it, the more it becomes the culture. China-level Big Brother oppression is only a decade or so away if this keeps going.

  • rsuri@lemmy.world · 11 months ago

    The problem is breathless AI news stories have made people misunderstand LLMs. The capabilities tend to get a lot of attention, but not so much for the limitations.

    And one important limitation of LLMs: they’re really bad at being exactly right, while being really good at looking right. So if you ask one to do an arithmetic problem you can’t do in your head, it’ll give you an answer that looks right. But if you check it with a calculator, you’ll find the only thing right about the answer is how it sounds.

    So if you use it to find cases, it’s gonna be really good at finding cases that look exactly like what you need. The only problem is, they’re not exactly what you need, because they’re not real cases.

  • JeeBaiChow@lemmy.world · 11 months ago

    And this is the guy they want testifying about 45?

    ‘Your honor, I object on the grounds that the prosecution’s witness is incompetent.’

  • Gerowen@lemmy.world · 11 months ago

    While the individuals have a responsibility to double-check things, I think Google is a big part of this. They’re rolling “AI” into their search engine, so people are being fed made-up, inaccurate bullshit by a search engine they’ve trusted for decades.

    • Carighan Maconar@lemmy.world (OP) · 11 months ago

      That’s not what they’re talking about here. Unless this is somehow different in the US, only Microsoft so far shows an LLM “answer” next to search results.

      • Gerowen@lemmy.world · 11 months ago

        Google may not be showing an “AI”-tagged answer, but they’re using AI to automatically generate pages with information collated from outside sources, to keep you on Google instead of citing and directing you to the actual sources of the information they’re using.

        Here’s an example. I’m on a laptop with a 1080p screen. I went to Google (which I basically never use, so it shouldn’t be biased for or against me) and searched for “best game of 2023”. I got no actual results on the entire first screen. Instead, their AI or other machine-learning algorithms collated information from other people and built a little chart right there on the search page, and stuck some YouTube (also Google) links below that. If you want to read an actual article, you have to scroll down past all the Google-generated fluff.

        I performed the exact same search with DuckDuckGo, and here’s what I got.

        And that’s not to mention all the “news” sites that have straight up fired their human writers and replaced them with AI whose sole job is to just generate word salads on the fly to keep people engaged and scrolling past ads, accuracy be damned.

        • thisisnotgoingwell@programming.dev · 11 months ago

          I mean, I kind of see your point, but calling those results AI is not accurate unless you’re calling any kind of data collation/wrangling, or even just basic programming logic, “AI”. What Google is doing is counting how many times a game is mentioned in pages in the gaming category and trying to spoon-feed you what it thinks you want. But that isn’t AI. The point of the person you were replying to is that it wasn’t as if Cohen intended to perform a Google search and was misled; you have to go to Google Bard or ChatGPT or whatever and prompt it, meaning it’s on you if you’re a professional who’s going to cite unverified word salad. The YouTube stuff is pretty obvious; it’s part of their platform. What was done has nothing to do with web searches.

        • xx3rawr@sh.itjust.works · 11 months ago

          It was kinda funny to me when everyone freaked out about misinformation and the “death of search”, when I see a lot of people who already never leave Google and treat Instant Answers as the truth, like they do with ChatGPT, despite them being very inaccurate and out of context a lot of the time.

          • LemmysMum@lemmy.world · 11 months ago

            Never expect the bottom 80% of the bell curve to have self awareness. That’s a bet you lose 9 times out of 10.

            • fsmacolyte@lemmy.world · 11 months ago

              Funny how “self awareness” has two meanings here. It’s the essence of what makes humans the smartest animals, but the problem you’re referring to—lack of self reflection—is one of the most common problems amongst people today. Common sense ain’t so common.

    • Aniki 🌱🌿@lemm.ee · 11 months ago

      You mean like what’s happening in Gaza right now? You think all those weapons of war made in the last half decade don’t have AI routines programmed into them? You think the Iron Dome works like Space Invaders, with people clacking buttons, or an aimbot shooting wildly before you can even comprehend there’s a target to shoot at?

    • TheGalacticVoid@lemm.ee · 11 months ago

      All AI is doing is amplifying problems that already exist. Too many people lack media literacy, and too many people resort to anger and opposition when they don’t understand something.