[HN] What happened in this GPT-3 conversation? (chat.openai.com)
Posted by irradiated@radiation.party to TechNews@radiation.party · 1 year ago
Cross-posted to: techtakes@awful.systems, hackernews@lemmy.smeargle.fans, hackernews@derp.foo
Joe@lemmy.knocknet.net · 1 year ago
There’s actually a pretty simple explanation to this if you understand the way that GPT-3 trains on the models that it’s given. I’ll try to make this as short as possible, because to naturally explain it it would take hour by hour by hour to endure a response by hour by hour and keep going and learning. Each by hour by hour. Thank you by hour for each hour.