She knows enough about strawberries to get them from the store for the picture. She should also know they don't grow on trees. And there were probably one or more additional people there to handle the photography. Any one of them would know strawberries don't grow on trees.
AI, on the other hand, could easily make that assumption, especially an image model as opposed to a text model.
You'd think that, but I've met plenty of people who are wholly ignorant about where food comes from in general. Sure, it only requires one person to be ignorant if the image was generated, but it's entirely plausible that neither the model nor the photographer knew. I haven't had the chance to test it, but I'd imagine the training data contains plenty of pictures of people picking strawberries realistically, and an AI would probably only generate this if you were very specific about it being a tree.
To me, that’s the sort of thing people notice.