Hermansson logged in to Google and began looking up results for the IQs of different nations. When he typed in “Pakistan IQ,” rather than getting a typical list of links, Hermansson was presented with Google’s AI-powered Overviews tool, which, confusingly to him, was on by default. It gave him a definitive answer of 80.

When he typed in “Sierra Leone IQ,” Google’s AI tool was even more specific: 45.07. The result for “Kenya IQ” was equally exact: 75.2.

Hmm, these numbers seem very low. I wonder how these scores were determined.

  • BlueMonday1984@awful.systems
    2 months ago

    "garbage in, garbage out" my beloathed

    Not the first time this has happened (Google’s own AI Overviews have misinterpreted u/fucksmith, eaten rocky onions, and hallucinated cats on the moon before), but this is probably the worst such incident.

    Anyways, sidenote time:

    Right now, there’s no legal precedent determining whether or not “AI overviews” like Google’s are protected under Section 230, but between shit like this and the recent lawsuit against character.ai, I suspect there’s gonna be plenty of effort to deny them Section 230 protection.

    If that happens, I expect it will put an immediate end to public-facing autoplag like this, as such products immediately become legal timebombs waiting to go off. I suspect it will also kill any similar attempts at AI for the foreseeable future, for the same reasons.

    As for AI as a concept, which I’ve discussed previously, I expect this incident will help further a public notion of “artificial intelligence” being an oxymoron, and of intelligence being something that either cannot, or should not, be replicated by artificial means.