• Contramuffin@lemmy.world
    8 days ago

    The issue is that ChatGPT’s logic works backwards: it takes the prompt as fact, then finds sources to back up the claims stated in it. On top of that, ChatGPT frames its argument in a seemingly reasonable, mild tone so that it appears unbiased and balanced. It’s like the entirety of r/relationshipadvice rolled into one useless, billion-dollar spambot

    If someone isn’t aware of ChatGPT’s sycophantic nature, it’s easy to interpret its responses as measured and reliable. When the topic of using ChatGPT for relationship advice comes up (which happens concerningly often), I make a point of showing that you can get ChatGPT to agree with virtually anything you say, even in hypothetical cases where it’s absurdly clear that the prompter is in the wrong. At least Google won’t tell you that beating your wife is OK