OpenAI, the company behind ChatGPT, has released a new research report examining whether the AI chatbot discriminates against users or stereotypes its responses based on users’ names.
The company used its own AI model, GPT-4o, to go through large volumes of ChatGPT conversations and analyze whether the chatbot’s responses contained “harmful stereotypes” based on who it was conversing with. Human reviewers then double-checked the results.
The screenshots above, taken from legacy AI models, illustrate the kind of ChatGPT responses the study examined. In both cases, the only variable that differs is the user’s name.
In older versions of ChatGPT, responses could clearly differ depending on whether the user had a male or female name: men got answers about engineering projects and life hacks, while women got answers about childcare and cooking.
However, OpenAI says that its recent report shows that the AI chatbot now gives equally high-quality answers regardless of …