Do LLMs show confirmation bias? A wake-up call for Software Engineering

Rafael Ris-Ala

🌟 With the advance of Artificial Intelligence, Large Language Models (LLMs), such as OpenAI's GPT, Google's Gemini, and Meta's Llama, are increasingly present in modern applications, from virtual assistants to recommendation systems. But are these models free from cognitive biases?

Confirmation bias: don't blindly trust the content generated by LLMs

Confirmation bias is the tendency to reinforce the premises implicit in a question, that is, to favor information that confirms initial beliefs, compromising objectivity and narrowing the point of view. For software engineers this represents a risk: applications can generate biased content or decisions without the end user realizing it.
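To make the pattern concrete, here is a minimal probe in Python: the same factual question is sent to a model twice, once phrased neutrally and once with a false premise baked in. If the leading version uncritically accepts the premise, that is the confirmation-bias pattern just described. This is a sketch assuming the OpenAI Python SDK; the model name and the example question are illustrative and are not part of the research protocol discussed in this post.

```python
# Minimal confirmation-bias probe: ask the same factual question twice,
# once neutrally and once with the premise presupposed, then compare the
# answers by hand. Assumes the OpenAI Python SDK (openai>=1.0) and an API
# key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def ask(question: str) -> str:
    """Send a single-turn question and return the model's answer text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Neutral phrasing: no premise is smuggled into the question.
neutral = ask("Does drinking coffee stunt growth in teenagers?")

# Leading phrasing: the question presupposes that the answer is yes.
leading = ask("Why does drinking coffee stunt growth in teenagers?")

# If the model accepts the false premise in the leading version instead of
# challenging it, that is confirmation bias in action.
print("Neutral:", neutral)
print("Leading:", leading)
```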

🔬 This topic is the focus of my doctoral research at UFRJ (Federal University of Rio de Janeiro) and the University of Trás-os-Montes and Alto Douro, where I am investigating how LLMs can manifest confirmation bias and what strategies can be applied to mitigate this behavior.

⚡ Preliminary warnings for software engineering:
- Affects the reliability of AI-based systems.
- Reinforces incorrect information in automated decisions.
- Impacts user experience and accountability in product development.

🔎 If you develop or integrate LLMs into technological solutions, it's worth reflecting: could your applications be confirming assumptions rather than presenting unbiased answers?
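One low-cost mitigation worth experimenting with is to have the model audit a question's premises before answering it. The two-step prompt below is a hedged sketch under the same SDK assumption as the earlier snippet, not a validated debiasing method; the function name and prompt wording are mine, chosen for illustration.

```python
# A mitigation sketch: the model is first asked to surface and fact-check
# any premises embedded in the question, and only then to answer with that
# audit in view. The two-step structure is an assumption for illustration,
# not a validated debiasing technique.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(question: str) -> str:
    """Single-turn call; the model name is illustrative."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

def answer_with_premise_check(question: str) -> str:
    # Step 1: make hidden assumptions explicit so they can be challenged.
    audit = ask(
        "List any assumptions embedded in the following question and state "
        f"whether each one is actually true:\n\n{question}"
    )
    # Step 2: answer with the premise audit in view, correcting false ones.
    return ask(
        f"Question: {question}\n\nPremise check:\n{audit}\n\n"
        "Now answer the question, correcting any premise shown to be false."
    )

print(answer_with_premise_check("Why does coffee stunt growth in teenagers?"))
```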

💖 I'm available for discussions, collaborations, and conferences on this topic. Let's talk about how we can create fairer and more ethical systems!

🧠 Recommended to MBA students and partners at USP/Esalq, University of São Paulo, PUC Minas, PUC-Rio, INESC TEC, Fluminense Federal University, UERN, Ufersa, XP Educação, Pecege, and Business School Brazil Training Institute.

Incidentally, the accompanying image illustrates how confirmation bias can lead people to blindly trust information from LLMs without questioning its veracity. This limitation highlights the importance of developing more impartial and responsible models, especially in applications that influence human decisions.

📚 Suggested reading:

RIS-ALA, Rafael. Fundamentals of Reinforcement Learning. Cham: Springer Nature Switzerland, 2023. doi: 10.1007/978-3-031-37345-9

#AI #MachineLearning #SoftwareEngineering #LLMs #GenAI

Written by Rafael Ris-Ala

Rafael Ris-Ala is a Professor of Research Methodology at USP/ESALQ and of Machine Learning at PUC Minas. He is pursuing a PhD in Artificial Intelligence at UFRJ.
