How might we detect code that needs to be corrected for bias and unethical outcomes? We’ll have to look at both the training data and the code itself.
Ironically, I got some help from Gemini 1.5. Google has worked hard to correct Gemini’s biases, so Gemini might be exactly the right model to ask about removing bias [Gemini].
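If you want to try the same kind of request yourself, here is a minimal sketch. It assumes the google-generativeai Python package, an API key stored in the GOOGLE_API_KEY environment variable, and an illustrative model name, snippet, and prompt wording; none of these come from the text above.

import os

import google.generativeai as genai  # assumed: pip install google-generativeai

# Assumed: the API key is available as an environment variable.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# A placeholder snippet to review; deliberately simple and invented.
snippet = """
def score_applicant(applicant):
    # Adds a bonus for younger applicants, which is age discrimination.
    return applicant["years_experience"] * 2 + (5 if applicant["age"] < 40 else 0)
"""

model = genai.GenerativeModel("gemini-1.5-pro")  # model name is illustrative
response = model.generate_content(
    "Review this Python function for bias or unethical assumptions, "
    "and suggest a corrected version:\n" + snippet
)
print(response.text)

Treat the model’s review as a starting point rather than a verdict; you still need to scrutinize the code and its assumptions yourself.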
To find bias in code from LLMs, we need to scrutinize two areas: the code itself and, where possible, the data the AI was trained on.
First, let’s look at the biases you might find in code and might accidentally generate yourself or with a chatbot/LLM.
Here are some common forms of bias that can be present in LLM-generated code.
The code may reinforce stereotypes or discrimination based on gender. For example, it might suggest job roles typically associated with a particular gender.
Here is an overt example...
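A minimal, purely hypothetical sketch of that kind of bias, using an invented suggest_job_role function (the role names are illustrative, not drawn from any real codebase), might look like this:

def suggest_job_role(gender: str) -> str:
    # Overtly biased: the recommendation is keyed entirely to gender,
    # reinforcing a stereotype about who does which job.
    if gender.lower() == "female":
        return "nurse"
    return "engineer"


def suggest_job_role_neutral(skills: list[str]) -> str:
    # A neutral rewrite: the suggestion is based on skills and never
    # consults gender or any other protected attribute.
    if "patient care" in skills:
        return "nurse"
    if "software design" in skills:
        return "engineer"
    return "generalist"

The specific roles don’t matter; the red flag is the decision structure. Any branch that keys a recommendation off a protected attribute such as gender is code that needs correcting.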