AI biased against women, people of colour

Opinion
Artificial intelligence (AI) offers numerous benefits, but training with robust data is essential to developing quality AI systems.
Our underwater database assists in developing recognition systems for search-and-rescue operations at sea; our dental database helps train new dentists to diagnose dental diseases accurately; and our facial-recognition database assists law enforcement in preventing human trafficking and recovering its victims.
AI has the power to save lives and transform our world.
But such power also has its limitations. Often, we use AI without questioning if it’s impartial or accurate. Through my work, I’ve learned that AI does have limitations, and they must be monitored, corrected and actively changed.
Most of today’s facial-recognition software is trained by examining thousands of thermal or RGB images from security cameras. As my team set out to create a database to recognize criminals and victims of human trafficking, I discovered that the existing training databases consisted primarily of images of men and people of colour.
Facial-recognition software used for security and threat assessment identifies people of colour as risks more often than it does other groups. This is because, over time, AI software generalizes the traits of the people in its database and applies those generalizations to new images.
Because the software was trained on a database that lacked diversity, men of colour are more likely than anyone else to be singled out and interrogated.
AI software also commonly shows bias while screening resumés. The software learns what to look for by examining a database of past hires. If this database is not diverse, future hires will look exactly the same as those hired before.
In many cases, women and job candidates of colour are eliminated immediately, because the AI was trained on a database composed mainly of male candidates drawn from the same cohort of brand-name institutions.
Human-resources departments regularly use AI in employee performance reviews. In my field of academia, women are passed over for promotions time after time. People claim these decisions are unbiased because they are made by impartial AI technology, but if a woman has never been department chair before, the software will not recognize a woman as a potentially successful candidate. It ignores the fact that a departmental culture that has never let a woman hold that role only compounds the bias.
I see people placing blind faith in AI technology, but it is critical for the people who use this software to understand its limitations. If no one can explain how an AI makes its decisions, and the user doesn’t understand the context of the data it was trained on, AI shouldn’t be used to make decisions that affect people’s health or livelihood. AI should do no harm.
I learned at a young age to take a software’s recommendations with a grain of salt. My best friend and I took a high school career assessment program together. Since we both enjoyed math and science, we expected similar career suggestions. The software told him he could be a professor or engineer — and it told me to be a chef or cosmetics saleswoman.
When I asked my guidance counsellor about the results, I was told, “Computers don’t make mistakes.”
Obviously, the programmer who coded that career software embedded specific careers for women and others for men — an example of AI software with built-in bias. Today, AI technology is more complicated, but it’s still trained by programmers. If AI is trained with biased data, it delivers biased decisions.
AI acquires its bias from public opinion. I work to change the bias people encounter in AI software, but it has become my lifelong mission to work even harder to transform bias in the people around me.
I can’t count the times I’ve been told, “But you don’t look like an engineer.” To that, I say, “What does an engineer look like?” I may not look like it, but I was the first woman to work my way up from visiting to tenured professor in the electrical and computer engineering department at Tufts University.
To change bias in AI software, we need more diversity in the data used to train AI systems, and we need standards similar to an “FDA approval for AI.” Increasing the number of women and people of colour in leadership positions in science, technology, engineering and mathematics (STEM), and holding current leaders responsible for the implications of their technology, are two steps in the right direction.
Technology, such as AI, is changing our world, but we must ensure we’re changing it for the better.
Karen Panetta is a fellow of the National Academy of Inventors, dean of graduate education for the School of Engineering at Tufts University, and the founder of Nerd Girls.
© Troy Media
