RhetTech
Abstract
This paper explores the inherent biases present in Large Language Model (LLM) outputs. As AI use becomes more widespread, examining the implicit biases in these models is critical, since AI can perpetuate the inaccuracies embedded in its training data. By understanding the implications of using AI, we can use it in ways that cause the least harm. Using Hofstede's cultural onion to structure research questions, I tested four LLMs—ChatGPT, Pi.ai, Qwen, and DeepSeek—across three geographic locations: the United States, India, and Ireland. The findings demonstrate that while LLMs can adjust some content based on a user's location, they frequently default to American-centric perspectives in their cultural assumptions and units of measurement. These biases point to the need for a more balanced corpus for training LLMs. This study concludes that the lack of diversity in training data leads to the "othering" of marginalized groups, and that further research is needed to mitigate the effects of AI-driven homogenization on culture and identity.
Recommended Citation
Bongu, Alekya (2026) "Geographic Bias in AI: How LLM Outputs Vary by Location," RhetTech: Vol. 8, Article 4. Available at: https://commons.lib.jmu.edu/rhettech/vol8/iss1/4
