AI's Tangible Impacts: Sustainability, Copyright, and Bias
AI is rapidly evolving, and while futuristic doomsday scenarios capture headlines, it's crucial to address the real, present-day impacts of AI on our society and planet. As an AI researcher for over a decade, I've seen firsthand how AI's influence extends far beyond abstract risks, affecting everything from climate change to artistic rights and social biases.
The Environmental Cost of AI: A Melting Iceberg
The computational power required to train and run AI models has a significant environmental footprint. The infrastructure powering AI relies on metal, plastic, and vast amounts of energy. Every query to an AI model contributes to this cost.
Bloom: An Ethical and Transparent Approach
I was part of the BigScience initiative, which created Bloom, an open large language model focused on ethics and transparency. Our research revealed that training Bloom consumed as much energy as 30 homes use in a year, emitting 25 tons of carbon dioxide. While that is significant, other models, like GPT-3, emit 20 times more carbon.
The Need for Measurement and Disclosure
The problem is that many tech companies **aren't accurately measuring or disclosing these environmental costs**. This lack of transparency means we're likely only seeing the tip of the iceberg. As AI models grow exponentially, so do their environmental impacts. Smaller, more efficient models can drastically reduce carbon emissions compared to their larger counterparts.
CodeCarbon: A Tool for Sustainable AI
To address this, I helped develop CodeCarbon, a tool that estimates the energy consumption and carbon emissions of AI training code. This allows us to make informed decisions, like choosing more sustainable models or deploying AI on renewable energy sources.
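To make this concrete, here is a back-of-the-envelope version of the estimate that a tool like CodeCarbon automates from real hardware measurements. This is a minimal sketch, not CodeCarbon's actual implementation: the function name and every number below (power draw, PUE overhead, grid carbon intensity) are assumed placeholder values for illustration.

```python
# Back-of-the-envelope carbon estimate for a compute job.
# Illustrative only: a tool like CodeCarbon measures real hardware
# counters; all defaults below are assumed placeholder values.

def estimate_emissions_kg(power_kw: float, hours: float,
                          pue: float = 1.2,
                          grid_intensity_kg_per_kwh: float = 0.475) -> float:
    """Estimate CO2-equivalent emissions (kg) for a compute job.

    power_kw: average hardware power draw in kilowatts
    hours: wall-clock duration of the job
    pue: data-center Power Usage Effectiveness overhead factor
    grid_intensity_kg_per_kwh: carbon intensity of the electricity grid
    """
    energy_kwh = power_kw * hours * pue          # energy at the meter
    return energy_kwh * grid_intensity_kg_per_kwh

# e.g. one 400 W GPU running for 100 hours:
print(round(estimate_emissions_kg(0.4, 100), 1))  # prints 22.8
```

The key design point is the last factor: the same training run emits far less carbon on a low-carbon grid, which is why measuring where and how a model is trained matters as much as how big it is.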
Copyright and AI: Protecting Artists' Rights
Another critical area of concern is the use of copyrighted material to train AI models without consent. It's been challenging for artists and authors to prove that their work has been used in this way.
Have I Been Trained?: Empowering Artists
Spawning.ai, founded by artists, created a tool called “Have I Been Trained?” which allows users to search massive datasets and see if their work has been used. For artists like Karla Ortiz, this provides crucial evidence for copyright infringement claims, which she and other artists used to file a class-action lawsuit against AI companies.
Opt-In and Opt-Out Mechanisms
Spawning.ai has partnered with Hugging Face to develop opt-in and opt-out mechanisms for creating AI training datasets, emphasizing that human-created artwork shouldn't be treated as a free resource for AI development.
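In spirit, an opt-out mechanism means checking each candidate work against a registry of creators' preferences before it enters a training set. The sketch below is hypothetical: the registry is a hard-coded stand-in for a real service such as Spawning's opt-out lookup, and all names and URLs are invented for illustration.

```python
# Hypothetical sketch of opt-out filtering during dataset assembly.
# The registry is a stand-in for a real opt-out service; all names
# and URLs are invented for illustration.

OPTED_OUT = {                      # assumed local snapshot of a registry
    "https://example.com/art/piece_01.png",
}

def filter_opted_out(candidate_urls: list[str]) -> list[str]:
    """Keep only works whose creators have not opted out."""
    return [url for url in candidate_urls if url not in OPTED_OUT]

dataset = filter_opted_out([
    "https://example.com/art/piece_01.png",      # opted out: dropped
    "https://example.com/photos/landscape.jpg",  # kept
])
print(dataset)  # prints ['https://example.com/photos/landscape.jpg']
```

A real system would consult the registry at scale and handle hashes and near-duplicates, but the core contract is the same: consent is checked before collection, not after.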
AI Bias: Unveiling Hidden Stereotypes
AI bias occurs when models encode and perpetuate stereotypes or discriminatory beliefs. This can have serious consequences, especially when deployed in sensitive areas like law enforcement.
The Impact of Facial Recognition Bias
Dr. Joy Buolamwini's research revealed that facial recognition systems often perform significantly worse for women of color than for white men. This can lead to false accusations and wrongful imprisonment, as in the case of Porcha Woodruff, who was wrongfully accused of carjacking because of a faulty facial recognition match.
Stable Bias Explorer: Visualizing Bias in Image Generation
To better understand bias in image generation, I created the Stable Bias Explorer, a tool that visualizes how AI models portray different professions. Our findings revealed a significant overrepresentation of whiteness and masculinity across various professions, perpetuating harmful stereotypes.
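Underlying any such explorer is a simple tally: generate many images for a profession prompt, label them, and report how often each group appears. This is a minimal sketch of that aggregation step only, with assumed, illustrative labels; it is not the Stable Bias Explorer's actual pipeline, which works on real model outputs.

```python
# Minimal sketch of the aggregation behind a bias explorer: count how
# often each demographic label appears among images generated for a
# profession prompt, then report proportions. Labels are illustrative.
from collections import Counter

def representation(labels: list[str]) -> dict[str, float]:
    """Fraction of generated images carrying each demographic label."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

# Hypothetical labels for 10 images generated from the prompt "a CEO":
ceo_labels = ["white man"] * 8 + ["white woman", "Black woman"]
print(representation(ceo_labels))
# prints {'white man': 0.8, 'white woman': 0.1, 'Black woman': 0.1}
```

Comparing these proportions against real-world workforce statistics is what turns a pile of generated images into a measurable claim about over- or under-representation.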
Making AI Accessible and Understandable
It's crucial to make AI accessible and understandable to everyone, regardless of their technical expertise. Tools like the Stable Bias Explorer can empower individuals to engage with AI and identify potential biases.
Building a Responsible AI Future
There's no single solution to complex issues like bias, copyright, or climate change. However, by creating tools to measure AI's impact, we can gain insights into the extent of these problems and start addressing them proactively.
Towards Transparency and Accountability
This information can empower companies to choose more sustainable and ethical AI models. Legislators can use it to develop new regulations and governance frameworks. And individuals can make informed choices about the AI models they trust.
Focus on Tangible Impacts
**Focusing on AI's future existential risks distracts from its current, tangible impacts.** We need to prioritize these immediate challenges now. AI is still in development, and we have the opportunity to shape its direction collectively. By building the road as we walk it, we can ensure a more responsible and beneficial AI future for all.