Slusarewicz ’23: ChatGPT pretends to know everything, but it’s pulling from human sources — and has human flaws

I enjoy asking ChatGPT, an AI chatbot, to write sonnets about a variety of topics — clipping fingernails, why my parents don’t understand me, the old chatbot itself — but one thing that irks me about the program is its insistence on its own inhumanity. ChatGPT pulls its information and language from human sources, leaving it vulnerable to human mistakes. The specific ways in which the chatbot glosses over this fact falsely imply that cutting humans out of the equation can improve efficiency and tamp down on bias. Instead, ChatGPT should clearly acknowledge how it is producing its responses.

For example, this is how ChatGPT responded to me when asked whether it has any hot takes: “As an AI language model, I don’t have personal opinions or hot takes. I’m designed to provide information and respond to questions in a neutral and informative manner, based on the input given to me. While I can provide perspectives on a wide range of topics, my responses are based on data and patterns, not personal opinions or biases.”

This response is deceptive because it separates data and patterns from bias. Algorithms, because they are designed by humans and require human input, are susceptible to human biases. In addition, the data fed into AI systems can lead them to biased conclusions, because data itself can be biased and can be interpreted by the algorithms in many ways. As a result, ChatGPT has managed to produce responses that are both sexist and racist.

Asking ChatGPT about the nature of AI and its possible biases reveals nuanced information, but uncovering this information requires knowing which questions to ask. By foregrounding its supposed neutrality and objectivity, ChatGPT could prevent users from examining the algorithm with healthy skepticism. OpenAI, the company behind ChatGPT, must be straightforward with users about the possible biases of AI and provide them with the resources necessary to double-check its outputs.

On the surface, ChatGPT appears to have a liberal bias. Since businesses are wary of controversy, if OpenAI were to let ChatGPT easily praise right-wing figures, it might risk losing partnerships with big-ticket clients. So ChatGPT seems to have been built with certain guardrails. For example, in one instance, ChatGPT would output a poem about President Joe Biden’s positive qualities, but would decline to do the same for former President Donald Trump, citing a concern for platforming “partisan, biased or political” information. But AI products have a longer and deeper history of being racist and bigoted. In order to address this insidious problem, OpenAI hired a firm that employs human workers to sift through text and identify offensive content so that ChatGPT could be prevented from learning from it or repeating it. The need to scrub offensive content from ChatGPT’s input and output indicates that the algorithm is not a staid, unbiased provider of facts at all. OpenAI may succeed in cleaning up obvious and offensive biases in responses. However, the fact that the removal of this data was even necessary leaves open the question of whether more subtle biases still emerge in ChatGPT’s responses and what those biases may be. For example, despite the concerted efforts of ChatGPT’s developers to create a neutral AI, users have found prompts that return responses swayed by crude conclusions about large groups of people.

Diverse teams of AI developers are essential for the development of less biased AI. Unfortunately, the teams building these systems are anything but diverse. OpenAI has promised “to share aggregated demographic information about (their) reviewers,” but concedes that skewed demographics are “an additional source of potential bias in system outputs.” The first step toward making a less biased algorithm would be to rectify the homogeneity of the teams making it.

However, the most important step is to emphasize that AI, in its current form, cannot be totally free from human biases. ChatGPT should automatically point users to the sources it uses to make its statements, not just upon being asked. This would remind users that ChatGPT is ultimately a human invention that derives its outputs from human information, and is thus fallible in the same way.

To its credit, OpenAI has acknowledged ChatGPT’s bias and has presented detailed plans for how it will address the issue. The company plans to allow individual users to adjust how the AI responds to them, within certain bounds. While this solution indirectly acknowledges that AI can’t be unbiased, it does not address the misconception that AI is objective, since the human variables influencing the AI’s responses remain hidden. Linking the data ChatGPT uses to synthesize its answers would enable users to view ChatGPT for what it is: a useful tool, not an omniscient fountain of knowledge.

Megan Slusarewicz ’23 can be reached at megan_slusarewicz@brown.edu. Please send responses to this opinion to letters@browndailyherald.com and other op-eds to opinions@browndailyherald.com.
