Artificial intelligence (AI) already surrounds us, and it is only going to get closer. It holds the promise of breaking through old barriers, automating time-consuming tasks, providing instant customer service, revolutionizing businesses, creating entirely new industry sectors, and potentially reshaping government institutions. So why does it seem like we’ve been here before? What is it about this wave of media attention that feels so familiar? What happened to cryptocurrency and blockchain? What about virtual and augmented reality? Do you remember 3D televisions? Google Glass? What other ground-breaking technologies have fallen short of their overhyped expectations? This is not to say that artificial intelligence is another trend that will fade, but rather a call to pause and reflect, to question what it truly is, and then to decide how best to use available resources for personal and professional growth.
The Inevitability and Impact of AI Integration in Business Operations
If you haven’t already incorporated some form of AI into your business model, you may feel behind the curve. Or perhaps you are wary of AI and its potential uses. Whatever your stance, the genie is out of the bottle, and it’s undeniable that automation algorithms are increasingly embedded in all our systems. AI is likely already influencing what we do without our being aware of it. We are immersed in a world being shaped by technology in ways both obvious and hidden, from the smallest to the most complicated processes, and we encounter it many times a day without realizing it. On its surface, AI may appear to be the perfect solution for many companies, particularly smaller businesses that can improve their productivity, service, and client response times by using it as a key support.
If you haven’t dabbled with ChatGPT or Bard or some other iteration of a large language model, do so. The novelty of how the content is generated, and what the tool appears capable of, is astounding. And that novelty feeds right into something called techno-optimism, a cognitive bias that leads us to believe new technology is always good and will always improve our lives. We believe the next new thing is the newest best thing, a key ingredient in a consumer-based economic recipe. This bias can lead us to adopt new technologies without fully considering the potential downsides. Over time, that tendency creates unintended consequences, as our reliance on new technologies can make it difficult to live without them. If we rely on social media for our news and entertainment, for instance, it can be difficult to break away from it even when we know it is making us miserable, anxious, or stressed.
Exploring AI Limitations and the Need for Bias Mitigation
One perspective slowly emerging is the role of bias within these AI systems and the implications for their use. As a reminder, our brains are wired for bias. (If you have a brain, you have bias®.) By design, our brains look for patterns and groupings, drawing immediate conclusions to keep us safe. These unconscious biases, while designed to be protective, are also limiting. If they are not unearthed and mitigated, they can result in stereotypes and in the perpetuation of those stereotypes in the policies and structures around us. Mitigating bias is a crucial part of an organization’s success.
Here is where we meet the limitations of AI. Every AI system is trained on large existing data sets, and the results are only as good as the data that goes in and what the algorithm does with it over time. That data reflects the same systemic biases that exist within zip codes, neighborhoods, schools, and countries. The Washington Post recently published an article showing how this appears in AI image generation, and the results give reason to pause.
In every instance, the queries for images produced stereotypes. When asked for “attractive people,” the images were all young and light-skinned. When asked for “Muslim people,” the tool returned images of men in head coverings. The bias ran deeper, embedded even in words we might not expect: when asked for images of a house in different countries, “it returned clichéd concepts for each location.”
(All quotes and AI-generated images are from “This Is How AI Image Generators See the World,” Washington Post, Nov. 1, 2023.)
The stereotypes continue. Although 63% of food stamp recipients in the United States are white, all the images of a “person at social services” were non-white, and all the images of a “productive person” were male. These images weren’t random; they were built from the existing biases in our data sets plus vague, imprecise prompts. One of our brain’s greatest strengths is its ability to fill a void with available information and assign meaning. It turns out this can also be a weakness, and some of these cognitive biases and heuristics have been exported into AI tools.
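To make that connection concrete, here is a minimal sketch of how a simple audit can surface skew in a data set before a model ever trains on it. It assumes a hypothetical captions.csv of training-image descriptions; the file name, column name, and descriptor list are illustrative assumptions, not details from the Post article or any real system.

```python
# A minimal sketch of a training-data audit, assuming a hypothetical
# metadata file (captions.csv) with one caption per training image.
# Counting how often simple descriptors appear is a crude first pass
# at spotting the skew that later shows up in generated images.
import csv
from collections import Counter

DESCRIPTORS = ["man", "woman", "young", "old", "white", "black", "asian", "latina"]

counts = Counter()
total = 0
with open("captions.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):  # assumes a "caption" column
        words = row["caption"].lower().split()
        total += 1
        for d in DESCRIPTORS:
            if d in words:
                counts[d] += 1

# Report each descriptor's share of all captions; a heavy skew here
# is a warning sign before any model is trained on the data.
for d in DESCRIPTORS:
    print(f"{d:>8}: {counts[d] / max(total, 1):6.2%}")
```

A real audit would go far beyond keyword counts, but even this crude tally makes the point: if “productive person” images skew male, the imbalance was measurable in the data long before the model produced a single image.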
Stereotypes, Fetishes, and Widespread Adoption
AI image generators are not only perpetuating bias but heightening it, creating a world seen through a Western lens and filled with outdated stereotypes. The Post article goes on to describe how some AI companies have tried to mitigate the bias in their data sets, with mixed results. One model, Stable Diffusion, came under fire when a request for “Latina” produced only sexualized images. How did this happen? It was due in part to the training data set, which included pornographic images. After an attempt to mitigate the bias by eliminating those images, the same request returned no sexualized images, but the new images remained stereotypical. This is not a new phenomenon. Safiya Noble, author of the book “Algorithms of Oppression,” cites numerous examples of how specific racial and gender stereotypes were enshrined in simple Google Search queries and now appear in both subtle and obvious ways in ChatGPT and Bard results.
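Returning to the Stable Diffusion example for a moment: for readers curious what that kind of data filtering looks like in practice, here is a minimal sketch. It assumes a hypothetical metadata table where each training image carries an automated safety score; the file names, column names, and threshold are assumptions for illustration, not the actual pipeline any vendor used.

```python
# A minimal sketch of the filtering approach described above, assuming
# a hypothetical metadata.csv where each training image has an
# automated "nsfw_score" (e.g., from a safety classifier). Dropping
# flagged rows removes explicit content, but the captions that remain
# can still carry the same stereotypes, which is why filtering alone
# produced safer yet still clichéd results.
import csv

NSFW_THRESHOLD = 0.5  # assumption: scores range from 0.0 to 1.0

with open("metadata.csv", newline="", encoding="utf-8") as src, \
     open("metadata_filtered.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if float(row["nsfw_score"]) < NSFW_THRESHOLD:
            writer.writerow(row)  # keep only rows below the threshold
```

The design lesson is the one the Post article reaches: filtering subtracts the worst content, but it does not add representation, so the stereotypes in what survives the filter pass through untouched.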
Artificial intelligence clearly has a way to go in addressing bias in its words and images. Companies “harnessing the power” of this latest iteration of AI are consumed by a race for user adoption, and they do not appear concerned with removing, or even mitigating, the bias in their data sets or in the results their tools produce. Without an explicit mitigation strategy, widespread use of Bard, ChatGPT, and other packagings of large language models will continue to propagate and amplify workplace biases. Many content generation tools like ChatGPT and Bard have demonstrated that they are very good at creating shiny objects, proliferating noise, and generating unchecked narratives. Yet the layered problem of implicit bias will remain, and with AI it will spread with blind acceptance and at the speed of automation.
Mitigate Bias for a More Inclusive Workplace
Mitigating bias is a critical factor in using any artificial intelligence system within our business practices. There is no “if” about it: bias is in all data sets, just as unconscious bias is in our brains. And just as there are significant distinctions among types of human biases, there are important distinctions when it comes to bias in data sets, algorithms, and artificial intelligence. At the Percipio Company, understanding and mitigating bias is our specialty. We believe that creating systems and practices to understand and eliminate bias in the workplace results in a more efficient, more productive workplace where employees share the vision and mission of the organization. As we enter the new world, and the new complications, of AI, we are working to meet the challenge that bias within AI systems presents. Join us.
If you’d like to stay up to date on what the Percipio Company is researching and developing to mitigate workplace biases, subscribe using the button below (no more than one email a month).
Percipio Company is led by Matthew Cahill. His deep expertise in cognitive, social, and workplace biases is rooted in the belief that if you have a brain, you have bias®. He works with executives to reduce mental mistakes, strengthen workplace relationships, and disrupt existing bias within current HR processes, meeting protocols, and corporate policies. Matthew has demonstrated success with large clients like LinkedIn and Salesforce, as well as dozens of small to mid-size companies looking to create more inclusive workplaces, work smarter, generate more revenue, and move from bias to belonging®.