The Entrepreneurs Weekly

AI Can Be Racist, Sexist and Creepy. Here Are 5 Ways You Can Counter This In Your Enterprise. | Entrepreneur

by Brand Post
July 13, 2023
in Business


Opinions expressed by Entrepreneur contributors are their own.

I started my career as a serial entrepreneur in disruptive technologies, raising tens of millions of dollars in venture capital and navigating two successful exits. Later, I became the chief technology architect for the nation’s capital, where it was my privilege to help local government agencies transition to new disruptive technologies. Today, I am the CEO of an antiracist boutique consulting firm, where we help social equity enterprises liberate themselves from old, outdated, biased technologies and coach leaders on how to avoid reimplementing bias in their software, data and business processes.

The biggest risk on the horizon for leaders today in regard to implementing biased, racist, sexist and heteronormative technology is artificial intelligence (AI).

Today’s entrepreneurs and innovators are exploring ways to use AI to enhance efficiency, productivity and customer service, but is this technology truly an advancement, or does it introduce new complications by amplifying existing cultural biases, like sexism and racism?

Soon, most — if not all — major enterprise platforms will come with built-in AI. Meanwhile, employees will be carrying around AI on their phones by the end of the year. AI is already affecting workplace operations, but marginalized groups, people of color, LGBTQIA+, neurodivergent folx, and disabled people have been ringing alarms about how AI amplifies biased content and spreads disinformation and distrust.

To understand these impacts, we will review five ways AI can deepen racial bias and social inequalities in your enterprise. Without a comprehensive and socially informed approach to AI in your organization, this technology will feed institutional biases, exacerbate social inequalities, and do more harm to your company and clients. Therefore, we will explore practical solutions for addressing these issues, such as developing better AI training data, ensuring transparency of the model output and promoting ethical design. 

Related: These Entrepreneurs Are Taking on Bias in Artificial Intelligence

Risk #1: Racist and biased AI hiring software

Enterprises rely on AI software to screen and hire candidates, but the software is inevitably as biased as the people in human resources (HR) whose data was used to train the algorithms. There are no standards or regulations for developing AI hiring algorithms. Software developers focus on creating AI that imitates people. As a result, AI faithfully learns the biases of the people whose decisions appear in its training data.

Reasonable people would not hire an HR executive who (consciously or unconsciously) screens out people whose names sound diverse, right? Well, by relying on datasets that contain biased information, such as past hiring decisions and/or criminal records, AI inserts all these biases into the decision-making process. This bias is particularly damaging to marginalized populations, who are more likely to be passed over for employment opportunities due to markers of race, gender, sexual orientation, disability status, etc.

How to address it:

  • Keep socially conscious human beings involved with the screening and selection process. Empower them to question, interrogate and challenge AI-based decisions.
  • Train your employees that AI is neither neutral nor intelligent. It is a tool — not a colleague.
  • Ask potential vendors whether their screening software has undergone AI equity auditing. Let your vendor partners know this important requirement will affect your buying decisions.
  • Load test resumes that are identical except for some key altered equity markers. Are identical resumes in Black zip codes rated lower than those in white majority zip codes? Report these biases as bugs and share your findings with the world via Twitter.
  • Insist that vendor partners demonstrate that the AI training data are representative of diverse populations and perspectives.
  • Use the AI itself to push back against the bias. Most solutions will soon have a chat interface. Ask the AI to identify qualified marginalized candidates (e.g., Black, female, and/or queer) and then add them to the interview list.
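One way to run the resume load test described above is a paired audit: submit resumes that are identical except for a single equity marker and measure the resulting score gap. Below is a minimal Python sketch; the scorer here is a deliberately biased toy stand-in for a vendor's screening API, and all field names and zip codes are hypothetical.

```python
import statistics

def paired_audit(base_resume, field, value_a, value_b, score_fn, trials=20):
    """Mean score gap when only one equity marker differs between two resumes."""
    gaps = []
    for _ in range(trials):
        a = dict(base_resume, **{field: value_a})
        b = dict(base_resume, **{field: value_b})
        gaps.append(score_fn(a) - score_fn(b))
    return statistics.mean(gaps)

def toy_scorer(resume):
    # Deliberately biased stand-in for a vendor's screening API:
    # it rewards one zip code, which a fair scorer should never do.
    base = 70 + 5 * resume["years_experience"]
    return base + (10 if resume["zip_code"] == "20815" else 0)

resume = {"years_experience": 4, "zip_code": None}
gap = paired_audit(resume, "zip_code", "20815", "20019", toy_scorer)
print(gap)  # a nonzero gap means the zip code alone is moving the score
```

In a real audit you would swap `toy_scorer` for calls to the vendor's actual screening endpoint and test several markers (names, schools, zip codes) one at a time, so any gap can be attributed to a single variable.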

Related: How Racism is Perpetuated within Social Media and Artificial Intelligence

Risk #2: Developing racist, biased and harmful AI software

GPT-4 has made it ridiculously easy for information technology (IT) departments to incorporate AI into existing software. Imagine the lawsuit when your chatbot convinces your customers to harm themselves. (Yes, an AI chatbot has already caused at least one suicide.)

How to address it:

  • Your chief information officer (CIO) and risk management team should develop common-sense policies and procedures covering when, where and how AI resources can be deployed, and who decides. Get ahead of this.
  • If developing your own AI-driven software, stay away from public internet-trained models. Large data models that incorporate everything published on the internet are riddled with bias and harmful learning.
  • Use AI technologies trained only on bounded, well-understood datasets.
  • Strive for algorithmic transparency. Invest in model documentation to understand the basis for AI-driven decisions.
  • Do not let your people automate or accelerate processes known to be biased against marginalized groups. For example, automated facial recognition technology is less accurate in identifying people of color than white counterparts.
  • Seek external review from Black and Brown experts on diversity and inclusion as part of the AI development process. Pay them well and listen to them.
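Model documentation can be as simple as a structured record that travels with the model and gates deployment. Here is a minimal sketch, loosely in the spirit of published "model card" proposals; every field name and value is illustrative, not a standard.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ModelCard:
    """Minimal model documentation record; all fields are illustrative."""
    name: str
    intended_use: str
    training_data: str
    known_limitations: List[str] = field(default_factory=list)
    equity_audit_date: Optional[str] = None

    def ready_to_deploy(self) -> bool:
        # Gate deployment on the documentation being filled in and audited.
        return bool(self.known_limitations) and self.equity_audit_date is not None

card = ModelCard(
    name="support-chatbot-v1",
    intended_use="Answering billing questions from existing customers",
    training_data="Curated internal support transcripts, 2020-2022",
    known_limitations=["English only", "untested with screen readers"],
)
print(card.ready_to_deploy())  # False until an equity audit is recorded
```

The design choice worth copying is the deployment gate: a model with no recorded limitations or no audit date simply cannot ship, which forces the documentation conversation to happen before launch rather than after an incident.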

Risk #3: Biased AI abuses customers

AI-powered systems can lead to unintended consequences that further marginalize vulnerable groups. For example, AI-driven chatbots providing customer service frequently harm marginalized people in how they respond to inquiries. AI-powered systems can also manipulate and exploit vulnerable populations, as when facial recognition technology targets people of color with predatory advertising and pricing schemes.

How to address it:

  • Do not deploy solutions that harm marginalized people. Stand up for what is right and educate yourself to avoid hurting people.
  • Build models responsive to all users. Use language appropriate for the context in which they are deployed.
  • Do not remove the human element from customer interactions. Humans trained in cultural sensitivity should oversee AI, not the other way around.
  • Hire Black or Brown diversity and technology consultants to help clarify how AI is treating your customers. Listen to them and pay them well.

Risk #4: Perpetuating structural racism when AI makes financial decisions

AI-powered banking and underwriting systems tend to replicate digital redlining. For example, automated loan underwriting algorithms are less likely to approve loans for applicants from marginalized backgrounds or Black or Brown neighborhoods, even when they earn the same salary as approved applicants.

How to address it:

  • Remove bias-inducing demographic variables from decision-making processes and regularly evaluate algorithms for bias.
  • Seek external reviews from experts on diversity and inclusion that focus on identifying potential biases and developing strategies to mitigate them. 
  • Use mapping software to draw visualizations of AI recommendations and how they compare with marginalized peoples’ demographic data. Remain curious and vigilant about whether AI is replicating structural racism.
  • Use AI to push back by requesting that it find loan applications with lower scores due to bias. Make better loans to Black and Brown folks.
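A common first check for the digital redlining described above is the four-fifths rule: compare approval rates between groups and flag any ratio below 0.8 for human review. A minimal sketch on a toy decision log (the groups and counts are invented):

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> per-group approval rate."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, protected, reference):
    """Approval rate of the protected group relative to the reference group."""
    rates = approval_rates(decisions)
    return rates[protected] / rates[reference]

# Toy decision log: group A approved 8 of 10, group B approved 5 of 10.
decisions = ([("A", True)] * 8 + [("A", False)] * 2
             + [("B", True)] * 5 + [("B", False)] * 5)
ratio = disparate_impact_ratio(decisions, protected="B", reference="A")
print(ratio < 0.8)  # True: below the common four-fifths threshold, flag for review
```

A ratio below the threshold does not prove discrimination on its own, but it is a cheap, repeatable signal that the algorithm deserves the deeper external review recommended above.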

Related: What Is AI, Anyway? Know Your Stuff With This Go-To Guide.

Risk #5: Using health system AI on populations it is not trained for

A pediatric health center serving poor disabled children in a major city was at risk of being displaced by a large national health system that convinced the regulator that its Big Data AI engine provided cheaper, better care than human care managers. However, the AI was trained on data from Medicare (mainly white, middle-class, rural and suburban, elderly adults). Making this AI — which is trained to advise on care for elderly people — responsible for medication recommendations for disabled children could have produced fatal outcomes.

How to address it:

  • Always look at the data used to train AI. Is it appropriate for your population? If not, do not use the AI.
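That training-data check can start with something as simple as comparing the distribution of a key attribute, such as age, between the training cohort and the population the model will actually serve. A rough sketch with invented numbers echoing the Medicare-versus-pediatric mismatch above:

```python
def population_mismatch(train_values, target_values, bins):
    """Crude histogram overlap between training and deployment populations.
    Returns a score in [0, 1]; low overlap suggests the model was trained
    on a different population than the one it will serve."""
    def hist(values):
        counts = [0] * (len(bins) - 1)
        for v in values:
            for i in range(len(bins) - 1):
                if bins[i] <= v < bins[i + 1]:
                    counts[i] += 1
                    break
        total = sum(counts)
        return [c / total for c in counts]
    h1, h2 = hist(train_values), hist(target_values)
    return sum(min(a, b) for a, b in zip(h1, h2))

train_ages = [68, 72, 75, 80, 66, 71]   # Medicare-like training cohort (invented)
target_ages = [4, 7, 9, 12, 6, 10]      # pediatric deployment population (invented)
overlap = population_mismatch(train_ages, target_ages, bins=[0, 18, 40, 65, 120])
print(overlap)  # 0.0: no overlap at all, so the model should not be used here
```

In practice you would run this over every attribute the model relies on, not just age, and treat low overlap on any of them as a reason to reject the AI for that population.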

Conclusion

Many people in the AI industry are shouting that AI products will cause the end of the world. Scare-mongering leads to headlines, which lead to attention and, ultimately, wealth creation. It also distracts people from the harm AI is already causing to your marginalized customers and employees.

Do not be fooled by the apocalyptic doomsayers. By taking reasonable, concrete steps, you can ensure that your AI-powered systems are not contributing to existing social inequalities or exploiting vulnerable populations. We must quickly master harm reduction for people already dealing with more than their fair share of oppression.




Tags: Artificial Intelligence, Biases, business solutions, Counter, Creepy, Culture, DEI, Discrimination, diversity, Diversity equity inclusion, Diversity Training, enterprise, entrepreneur, Growing a Business, innovation, Leadership, Racist, Science & Technology, Sexist, Technology, Ways, Workplace Diversity

© 2024 Entrepreneurs Weekly.  All Rights Reserved.
