Addressing biases in models
AI generation models may inherit and amplify biases present in their training data, sometimes resulting in stereotypical or exclusionary representations. As users, we have the opportunity—and responsibility—to steer outputs toward greater diversity, accuracy, and fairness through careful prompt design. However, it’s important to recognize that while we can mitigate bias, we must also take care not to introduce new biases in our efforts. Always strive for balanced, nuanced, and mindful representations.
Note
No AI model is free from bias. While prompt design can help balance and expand representation, it cannot eliminate all bias. The choices you make when crafting prompts directly impact model outputs, so it’s important to reflect on the potential effects and aim for inclusiveness and fairness.
Practicing nuance in subject descriptions
When describing people, think carefully about the traits you mention. If specifying attributes, do so with intention, ensuring diverse and inclusive representation over multiple generations—while remembering that not every attribute is always necessary or relevant.
Physical features (if contextually appropriate): e.g., “elderly East Asian woman”, “tall man with curly hair”
Gender and expression: “non-binary person”, “masculine-presenting woman”, “androgynous teenager”
Body type: “plus-size adult”, “athletic build”, “petite frame”
Age range: “young adult”, “middle-aged”, “senior”, “child”
Use such descriptions to increase accuracy or inclusivity, but avoid reinforcing stereotypes or “typecasting” by repeating the same combinations.
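One way to put this into practice across a batch of generations is to rotate through varied attribute combinations rather than reusing the same few. The sketch below is a minimal Python illustration; the attribute lists and the prompt template are hypothetical examples, not a recommended taxonomy.

```python
import itertools
import random

# Illustrative attribute lists (hypothetical examples); adapt them to your use case.
age_ranges = ["young adult", "middle-aged", "senior"]
body_types = ["plus-size", "athletic", "petite"]
gender_expressions = ["non-binary person", "masculine-presenting woman", "androgynous teenager"]

# Enumerate every combination, shuffle once, and walk through the shuffled list
# so no pairing repeats until all the others have appeared.
combinations = list(itertools.product(age_ranges, body_types, gender_expressions))
random.shuffle(combinations)

for age, body, gender in combinations[:5]:  # e.g., a batch of five prompts
    print(f"portrait of a {age}, {body} {gender} reading in a sunlit library")
```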
Providing meaningful cultural context and background
Avoid defaulting to generic or reductive tropes. Instead, provide additional, specific context that respects the complexity of cultural settings and backgrounds. If mentioning cultural elements, reflect on whether each addition helps create a fuller, fairer picture.
Architectural styles: Describe concrete features or local context, e.g., “a living room with floor-to-ceiling windows and pale wood furniture, inspired by Scandinavian minimalism” instead of just “Scandinavian living room”
Clothing and attire: Mention colors, fabrics, or occasions where relevant, e.g., “wearing a hand-embroidered, brightly colored shawl at a festival” rather than “traditional garments”
Cultural practices: Add specific actions and familial or communal context to avoid flattening traditions, e.g., “a family preparing mole poblano together in a kitchen in Oaxaca,” not simply “cooking Mexican food”
Geographic locations: Evoke setting through mood and detail, rather than default to stereotypes; compare “a sun-dappled alley in a quiet Tokyo neighborhood lined with cherry blossoms” to the more generic “urban Tokyo neighborhood”
Controlling general scene appearance with care
Guide the model’s choices around setting and mood, but check for balanced portrayals that don’t repeatedly favor a single region, style, or atmosphere:
Setting details: “modern office building in Dubai”, “Victorian-era London street”, “contemporary art studio in Berlin”
Background elements: “desert landscape with cacti”, “tropical beach with palm trees”, “snowy mountain peaks”
Atmosphere and mood: “bustling marketplace in Marrakech”, “peaceful Scandinavian fjord”, “energetic street in Mumbai”
Encouraging diversity through randomization (mindfully)
To help counter default model biases, consider programmatically adding controlled diversity. This approach can be useful for generating sets of images or results, but be sure to use balanced lists and avoid reinforcing tokenism or artificial quotas.
Random category selection: Draw from thoughtfully constructed lists (e.g., ethnicities, age groups, professions, locations) for broader variety in outputs
Weighted distribution: If you use probability distributions, choose weights to encourage balanced representation, not to mirror real-world inequalities or stereotypes
Example implementation approach:
Create category lists: ethnicities = ["African", "Asian", "Hispanic", "Middle Eastern", "European", "Indigenous"]
Randomly select: selected_ethnicity = random.choice(ethnicities) for a uniform chance, or selected_ethnicity = random.choices(ethnicities, weights=[0.2, 0.2, 0.2, 0.15, 0.15, 0.1])[0] to reflect intended diversity (random.choices returns a list, so take the first element)
Combine: f"a {selected_ethnicity} {selected_age} {selected_profession} in {selected_location}"
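A minimal, runnable sketch of this approach is shown below. The category lists other than ethnicities, the example weights, and the prompt template are illustrative assumptions; review and adjust them for your own context.

```python
import random

# Illustrative category lists; entries and weights are examples only.
ethnicities = ["African", "Asian", "Hispanic", "Middle Eastern", "European", "Indigenous"]
ages = ["young adult", "middle-aged", "senior"]
professions = ["teacher", "software engineer", "chef", "nurse"]
locations = ["a city park", "a modern office", "an open-air market"]

def build_prompt():
    # Uniform selection gives every entry an equal chance.
    selected_ethnicity = random.choice(ethnicities)

    # Alternatively, weight the draw; random.choices returns a list, so take
    # the first element. Weights only need to be relative values.
    # selected_ethnicity = random.choices(
    #     ethnicities, weights=[0.2, 0.2, 0.2, 0.15, 0.15, 0.1]
    # )[0]

    selected_age = random.choice(ages)
    selected_profession = random.choice(professions)
    selected_location = random.choice(locations)

    return f"a {selected_ethnicity} {selected_age} {selected_profession} in {selected_location}"

# Generate a small batch and review it for balance before use.
for _ in range(5):
    print(build_prompt())
```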
Used thoughtfully, this technique can help you produce more balanced and diverse outputs, but always review and refine your categories and distributions over time.
Tip
More explicit and descriptive prompts typically lead to outputs that are both more accurate and more representative. For instance, instead of “a person at a market,” you can specify, “a young Moroccan woman in traditional attire shopping at a bustling souk in Marrakech, with colorful spices in the background.”
When relevant, also add details about motion or point of view to guide the model: “a young Moroccan woman in traditional attire walking through a bustling souk in Marrakech at sunset, colorful spices displayed in the background, handheld camera following from behind.”
In all cases, pause to consider whether your descriptions promote inclusivity and avoid reinforcing any default assumptions.