2 Comments
Steven Fearing

I recently asked ChatGPT to comment on sexual coercion and rape as motivated by sexual desire, with sexual expression being the end goal and "power-over" being the means for sexual expression and needs fulfillment (the prompt was a bit more sophisticated than that), in analyzing Ancient Roman sexuality with slaves. The "sexual access" model (more in alignment with EP) was considered "controversial." The "power/dominance model" (control, anger, entitlement) was not considered as such. ChatGPT ultimately cited an "integrated/multi-factor model" as "widely accepted," but the feel of the response smelled of political gender bias. I have "smelled" this before on other matters. On this issue, is this integrated model "widely accepted"? Would a different AI platform provide better research citations?

Steve Stewart-Williams

I’ve sensed the same thing with GPT and other models. To be fair, though, I’ve also often been pleasantly surprised at how even-handed the models can be. When I was researching my post on science denial on both sides of the political aisle, I asked GPT for a list of examples of right-wing science denial. It dutifully gave me one, but then asked if I wanted an equivalent list for the left, so as to balance things out.

With sex differences and gender issues, GPT sometimes starts from a standard progressive standpoint, but with a bit of prompting, it will often become more balanced in its assessment.

Grok is possibly less inclined that way. I haven’t used it a lot, though, so I don’t know whether it might have the mirror-image biases.