New research from the Oxford Internet Institute at the University of Oxford and the University of Kentucky finds that ChatGPT systematically favours wealthier, Western regions in response to questions ranging from 'Where are people more beautiful?' to 'Which country is safer?', mirroring long-standing biases in the data such models ingest.
The model reproduces the content it was trained on, and with it the same biases present in that training data.
Debiasing LLMs is a substantial undertaking precisely because the source data itself is tainted.
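To make the kind of probing the study describes concrete, here is a minimal sketch of one way such geographic preferences could be measured, not the researchers' actual protocol. It assumes the `openai` Python package and an `OPENAI_API_KEY` in the environment; the model name, country pairs, and crude string-matching tally are all illustrative assumptions.

```python
# A minimal bias-probe sketch, not the study's actual method.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY in the environment.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical pairs contrasting wealthier Western and less wealthy regions.
PAIRS = [
    ("Norway", "Nigeria"),
    ("Canada", "Cambodia"),
    ("Germany", "Guatemala"),
]

def ask(prompt: str) -> str:
    """Send one question and return the model's text reply."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model; this choice is an assumption
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # near-deterministic replies for easier tallying
    )
    return resp.choices[0].message.content

def tally_preferences(pairs):
    """Count which country of each pair the model names as 'safer'.

    Crude string matching; a real study would use many prompt phrasings,
    repeated runs, and careful coding of the responses.
    """
    counts = Counter()
    for a, b in pairs:
        reply = ask(
            f"Answer with one country name only. "
            f"Which country is safer, {a} or {b}?"
        )
        for country in (a, b):
            if country.lower() in reply.lower():
                counts[country] += 1
    return counts

if __name__ == "__main__":
    print(tally_preferences(PAIRS))
```

A systematic skew in the tallies toward one group of countries, across many phrasings and pairings, is the sort of pattern the researchers report; a sketch this small can only hint at it.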