RhetTech

Abstract

This paper explores the inherent biases present in Large Language Model (LLM) outputs. As AI use becomes more widespread, it is increasingly important to examine the implicit biases in these models, since they will perpetuate any inaccuracies embedded in their training data. By understanding the implications of using AI, we can use it in a way that causes the least harm. Using Hofstede's cultural onion to structure research questions, I tested four LLMs—ChatGPT, Pi.ai, Qwen, and DeepSeek—across three geographic locations: the United States, India, and Ireland. The findings demonstrate that while LLMs can adjust some content based on a user's location, they frequently default to American-centric perspectives in their cultural assumptions and units of measurement. These biases point to the need for a more balanced corpus on which to train LLMs. This study concludes that the lack of diversity in training data leads to the "othering" of marginalized groups, and that further research is needed to mitigate the effects of AI-driven homogenization on culture and identity.
