Impact on Society

Writing always came easily to Riley, now a second-year student majoring in screenwriting. They came to college dreaming of writing a script that would captivate audiences, but that pressure occasionally led to writer's block. One night, while blocked, Riley found a new AI-powered tool that promised to generate storylines like Shakespeare. Intrigued, they began experimenting with the app. They entered a half-baked plot point, and the tool transformed it into a long, flowery narrative with impressive characters and even a couple of twists.

Riley was shocked by this. While they were impressed by the AI tool's ability to produce so much text, they had many ethical questions about what it means for an app to produce such extensive writing so quickly. In Riley's mind, there were many social implications they wanted answers to. One issue was the authenticity of the work produced. Riley wondered, "If the AI is generating the core of my stories, can I truly consider them my own?" This dilemma reflected concerns about copyright protection in the age of AI, a topic thoroughly discussed in a range of articles Riley had found, where the blurring of the line between human- and machine-generated content raised essential questions about authorship and creative rights.

Riley also noticed potential biases in the AI's narratives, which made them curious about the references the AI drew on to produce them. They felt burdened by a new responsibility: evaluating whether the text the AI system produced was accurate and didn't negatively affect society.

Riley began researching the role that creative AI tools play in shaping narratives in society. One debate they found included comments from people who believe that stories generated by creative AI tools could be a positive force, filled with diverse and inclusive narratives. These individuals believed such tools could reduce stereotypes while introducing important new perspectives.

Others in the debate warned against allowing creative AI tools to influence societal values. They argued that these tools lack the kind of real experiences necessary to improve society and might actually create unrealistic expectations about how humans relate to one another. One person pointed out that creative AI tools could perpetuate harmful stereotypes and even suggested they could upend societal structures.

Many of Riley's classmates were also beginning to notice these new AI tools, and they became a point of discussion in one of Riley's screenwriting classes. While discussing what their future profession might look like if creative AI tools become dominant, Riley considered a world where AI might even overshadow their creativity.

Riley is not sure how to reconcile all of these conflicting thoughts. They realized that using AI tools to break through creative blocks wasn't enough; it was also essential to map out the ethical landscape: being authentic and original, avoiding biases in the text, and making sure their profession includes humans in the future, not just AI tools.

What do you think?


Questions for Discussion

  • How can AI tools enrich the creative process for screenwriters?
  • What problems can arise from using AI, even when it is good at generating storylines and characters?
  • Who is the author of a screenplay that is mostly written by AI: the human writer, the AI, or both?
  • What ethical issues arise when using AI in creative writing?
  • How can writers assess and figure out if an AI output adds to or hurts culture and society?
  • What specific skills do writers and screenwriters need to address the potential ethical concerns of AI?
  • How should writers, technologists, and ethicists work together to improve how AI is developed for screenwriting?

List of resources that, in part, focus on this topic

  • Frontier AI Ethics, by Seth Lazar (2024)
  • Shifting AI controversies: How do we get from the AI controversies we have to the controversies we need?, Shaping AI (2023)
  • Designing Neural Media, by K Allado-McDowell (2023)
  • Communication, by Finn Brunton, Mercedes Bunz, Paula Bialski (2018)
  • Race After Technology, by Ruha Benjamin (2019)
  • Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, by Kate Crawford (2021)
  • Design Justice: Community-Led Practices to Build the Worlds We Need, by Sasha Costanza-Chock (2020)
  • Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, by Virginia Eubanks (2018)
  • Sex, Race, and Robots: How to Be Human in the Age of AI, by Ayanna Howard (2019)
  • Algorithms of Oppression: How Search Engines Reinforce Racism, by Safiya Umoja Noble (2018)
  • Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, by Cathy O'Neil (2016)
  • The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, by Shoshana Zuboff (2019)
  • Coded Bias, by Joy Buolamwini (2020)

Cultural Appropriation

Ali has been making clothes since she was a kid. Now, she’s a graduate student at the best fashion design school in the country, where she has been experimenting with how to incorporate AI into making a new clothing line. She draws from cultures around the world and wants to create a collection that genuinely represents all cultures and respects all body types. Her professors told her it was impossible, but she persisted. Ali programmed her AI tool with many datasets representing traditional garments and patterns and a broad range of fashion history.

When Ali’s AI tool began generating design renderings, she was initially impressed by the range of patterns and styles. However, one of her close friends in the program pointed out that some of the renderings looked similar to garments that some cultures consider sacred. They came from cultures that Ali knew little about and did not belong to.

Her friend's critique sparked an inquiry into cultural appropriation in her collection and in AI-generated art more broadly. She quickly realized that the model she trained lacked a nuanced understanding of different cultures and of the symbols those cultures would not want reproduced. In her research, she read articles about cultural appropriation in fashion as well as academic papers, such as those on digital cultural heritage, hoping to find insights to help her navigate these ethical issues.

Ali also reached out to a few friends in her program. A casual afternoon coffee conversation became a disagreement about the benefits and pitfalls of representing cultures with AI tools in the fashion industry. One friend said that AI’s ability to process large amounts of data might create more authentic and respectful cultural fusions. “AI tools could break cultural barriers through fashion,” she said. Another friend argued that algorithms could not fully capture all of the nuances contained in culture and mentioned that AI-generated designs might trivialize cultural symbols and elements.

Ali organized a panel of experts to discuss this. It took place at her school, and three experts debated the issue: a fashion designer, a cultural anthropologist, and a professor who teaches AI ethics. The discussion revealed the complexity of the issue. While one of the panelists mentioned that human designers struggled with cultural appropriation and questioned whether an AI tool could be programmed with more comprehensive cultural knowledge than a human, another pointed out that simply feeding cultural designs into an AI tool required separating the designs from their cultural context and was, therefore, a form of appropriation. Ali was left with a deeper appreciation for her responsibility as a fashion designer.

Ali revised the tool to include specific ways of filtering out imagery that might be culturally sensitive but couldn’t fully address some of the issues the panelists brought up. The experience made her consider the importance of being culturally aware as she designed, but she remained frustrated by the process.

What do you think?


Questions for Discussion

  • How can the creative process in fashion design be enhanced by AI tools?
  • What kind of problems can emerge when clothes are designed by AI tools?
  • How can AI-generated fashion designs cause cultural appropriation?
  • What can designers do to guarantee their work accurately represents the specific cultures they draw inspiration from?
  • Why is it so important for AI tools used in the fashion industry to understand the cultures behind the designs they produce?
  • How can AI filters prevent cultural insensitivity in the designs they output?
  • What should the fashion industry do to ensure that AI designs are culturally accurate?

List of resources that, in part, focus on this topic

Bias & Representation

Devon arrived at college having already established a photography business. He had photographed weddings, graduation ceremonies, and even his friend's band, so he knew exactly how he wanted to further his photography career. However, in his second-year photography studio course, his professor introduced the class to an AI program that could generate photographs from a short text description. Devon had always been interested in how photography and technology intersect and was eager to try out this new way of expanding his creativity.

Devon signed up for a pro account on the AI tool and spent long hours generating photographs based on the themes he was working on in his course. He was initially wowed by what the program produced. Each image was a slight variation on his theme, and he could produce variations on those variations. It sparked his creativity and helped him generate ideas for future projects.

As his project advanced, he spotted recurring patterns in the images being produced. No matter how many times he changed his prompts, they didn't allow him to create the kind of diverse imagery he had in mind for his project. The photos, especially those that included people, looked like stereotypical caricatures and failed to represent a variety of races, genders, and cultural backgrounds. Devon aimed to demonstrate the richness of human experience, so this limitation was frustrating, and he couldn't figure out a way around it. He began researching why the AI tool produced this kind of bias.

Devon also heard others debating this issue. Some argued that AI-generated images expose the inherent prejudices in our society and the art world. They saw this as an opportunity to spark meaningful discussion about these topics. Others pointed out that these biased tools can lead artists to inadvertently perpetuate harmful stereotypes. This group advocated for boycotting AI tools until they could be made more representative.

Devon contacted some photographers from often-underrepresented groups to learn their perspectives on how datasets are created and their feelings about how they were being represented, not represented, and misrepresented. Some were offended, while others wanted to use AI tools to create fantastical and non-realistic images that would, in effect, challenge the conventional notions of identity and representation. These conversations made Devon realize that the issue was far more complex than he initially thought, involving questions of artistic freedom, social responsibility, and the very nature of representation in art.

He found several helpful articles examining how AI-generated content produces bias and affects diverse representation. At the core of the problem lies the fact that AI algorithms learn from massive image datasets whose photographs often represent only certain, frequently privileged, groups in society. Because these datasets contained many more images of specific demographics, the generated images overrepresented those groups. Devon learned even more about this from several podcasts focusing on AI's ethical implications in art.
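The mechanism described above can be sketched with a toy simulation. This is not any real image-generation tool; the "dataset" and group names are invented for illustration, and random sampling stands in for generation. The point is only that a model which learns from imbalanced data tends to reproduce that imbalance in its outputs.

```python
import random

# Hypothetical toy "dataset": one group heavily overrepresented,
# as Devon observed in real image datasets.
dataset = ["group_A"] * 900 + ["group_B"] * 80 + ["group_C"] * 20

random.seed(0)  # make the sketch repeatable

# Sampling from the data stands in for "generating" new images:
# the outputs mirror the proportions the model was trained on.
generated = [random.choice(dataset) for _ in range(1000)]

counts = {g: generated.count(g) for g in ("group_A", "group_B", "group_C")}
print(counts)  # group_A dominates, mirroring the training imbalance
```

Under these assumptions, roughly 90% of the "generated" images come from the majority group, which is the overrepresentation Devon ran into: changing prompts cannot fix proportions that were baked in at training time.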

He was determined to focus on this issue. He experimented with different ways of generating images that reflected more balanced and diverse datasets, especially those representing a wide spectrum of humanity.

Devon presented his final project at the end of the semester. It was unique among his classmates' projects: AI-augmented photographs accompanied by a written essay about the importance of adding more diversity to the datasets behind these AI tools. He hoped his classmates and other photographers would follow ethical standards as they produced photographs with AI tools.

What do you think?


Questions for Discussion

  • In what ways can an AI tool improve a photographer's creativity?
  • Are there any risks in using an AI tool to create a photo?
  • Why do we need to call out and eliminate the biases produced by AI-generated content?
  • How does dataset content that is used to train AI algorithms impact who becomes represented in the images that are generated?
  • How can the creative community ensure that the future of AI in art is diverse and inclusive?
  • What role can photographers and other creatives play in shaping how AI tools are ethically used in their fields?
  • What new skills should a photographer acquire to avoid producing new biases when they use an AI tool?
  • What did you learn from this scenario that you can apply to your process?

Incomplete list of resources that deal with bias