
Artificial Intelligence programs exhibit racist and sexist biases, research reveals

Yossarian

Machine learning algorithms are picking up deeply ingrained race and gender prejudices concealed within the patterns of language use, scientists say

An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases.

The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons.

In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained.

The approach, which is already used in web search and machine translation, works by building up a mathematical representation of language, in which the meaning of a word is distilled into a series of numbers (known as a word vector) based on which other words most frequently appear alongside it. Perhaps surprisingly, this purely statistical approach appears to capture the rich cultural and social context of what a word means in a way that a dictionary definition would be incapable of.
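
For anyone curious about the mechanics: a word vector is literally just a list of numbers, and "closeness" between words is normally measured with cosine similarity. The Python sketch below uses tiny, made-up three-dimensional vectors purely for illustration; real embeddings (word2vec, GloVe and the like) have hundreds of dimensions and are learned from billions of words of text.

import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: near 1 means the two words
    # occur in very similar contexts, near 0 means they rarely do.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Invented toy vectors, for illustration only.
vectors = {
    "flower":     np.array([0.9, 0.1, 0.3]),
    "insect":     np.array([0.1, 0.9, 0.2]),
    "pleasant":   np.array([0.8, 0.2, 0.4]),
    "unpleasant": np.array([0.2, 0.8, 0.1]),
}

print(cosine_similarity(vectors["flower"], vectors["pleasant"]))   # relatively high
print(cosine_similarity(vectors["insect"], vectors["pleasant"]))   # noticeably lower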

For instance, in the mathematical “language space”, words for flowers are clustered closer to words linked to pleasantness, while words for insects are closer to words linked to unpleasantness, reflecting common views on the relative merits of insects versus flowers.

The latest paper shows that some more troubling implicit biases seen in human psychology experiments are also readily acquired by algorithms. The words “female” and “woman” were more closely associated with arts and humanities occupations and with the home, while “male” and “man” were closer to maths and engineering professions.

And the AI system was more likely to associate European American names with pleasant words such as “gift” or “happy”, while African American names were more commonly associated with unpleasant words.

The findings suggest that algorithms have acquired the same biases that lead people (in the UK and US, at least) to match pleasant words and white faces in implicit association tests.
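
The bias test reported in the paper boils down to comparing average similarities: a word such as a given first name is scored by how much closer it sits to one set of attribute words ("pleasant") than to another ("unpleasant"). The sketch below reuses the cosine_similarity helper and toy vectors from the earlier snippet and shows only that core differential score; the published test additionally reports an effect size and a permutation-based p-value.

import numpy as np  # cosine_similarity and vectors are defined in the earlier sketch

def association(word, attrs_a, attrs_b, vectors):
    # How much closer is `word` to attribute set A than to attribute set B?
    sim_a = np.mean([cosine_similarity(vectors[word], vectors[a]) for a in attrs_a])
    sim_b = np.mean([cosine_similarity(vectors[word], vectors[b]) for b in attrs_b])
    return float(sim_a - sim_b)

# Positive means the word leans towards the "pleasant" attributes, negative the reverse.
print(association("flower", ["pleasant"], ["unpleasant"], vectors))
print(association("insect", ["pleasant"], ["unpleasant"], vectors))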

These biases can have a profound impact on human behaviour. One previous study showed that an identical CV is 50% more likely to result in an interview invitation if the candidate’s name is European American than if it is African American. The latest results suggest that algorithms, unless explicitly programmed to address this, will be riddled with the same social prejudices.

“If you didn’t believe that there was racism associated with people’s names, this shows it’s there,” said Joanna Bryson, a co-author of the research.
https://www.theguardian.com/technol...bit-racist-and-sexist-biases-research-reveals


Since so much news coverage about wars, atrocities and terrorist activity is associated with Muslims in one way or another (either as victims or as perpetrators), and thus with Muslim-sounding names, any organisation that uses AI systems to narrow down individuals (whether for security reasons at airports, or to filter the CVs of candidates applying for jobs) will end up discriminating against law-abiding Muslims even more - this time by technology!
 
What next, cars are racist and don't go as fast for some people? lol
 
Since the AI bots collect information from the net, that's how their characteristics will be built. This will probably happen in an uncensored network.

We all know what happened when Microsoft launched their AI bot and it backfired because of racism. The problem is that freedom of thought without bias is not possible: a North Korean bot would probably think North Korea is the best country in the world, based on the information it keeps collecting.

http://www.businessinsider.in/Microsoft-is-deleting-its-AI-chatbots-incredibly-racist-tweets/articleshow/51539858.cms

It's complex and not perfect yet, but the good thing is that most AI frameworks are open source and machine learning courses are available everywhere online, so things can get better in future.

On a personal note, the big corporations are showing us only the good side of AI, especially the likes of Google (DeepMind) and IBM (Watson). But as these systems start playing a bigger part in our lives, the algorithms behind them will be too convoluted for an average person like me, and we can never really ensure that the end result will be genuine.
 
What next, cars are racist and don't go as fast for some people? lol
I suggest you read the article in full before making silly comments. Oh, but wait, I'll bet you're a 'white' male and thus the beneficiary of the type of racism and sexism that the article refers to.
 
Since the AI bots collect information from the net, that's how their characteristics will be built. This will probably happen in an uncensored network.

We all know what happened when Microsoft launched their AI bot and it backfired because of racism. The problem is that freedom of thought without bias is not possible: a North Korean bot would probably think North Korea is the best country in the world, based on the information it keeps collecting.

http://www.businessinsider.in/Microsoft-is-deleting-its-AI-chatbots-incredibly-racist-tweets/articleshow/51539858.cms

It's complex and not perfect yet, but the good thing is that most AI frameworks are open source and machine learning courses are available everywhere online, so things can get better in future.

On a personal note, the big corporations are showing us only the good side of AI, especially the likes of Google (DeepMind) and IBM (Watson). But as these systems start playing a bigger part in our lives, the algorithms behind them will be too convoluted for an average person like me, and we can never really ensure that the end result will be genuine.
It's a lose-lose situation. Leave the algorithms as they are and thus accept that all the racism, sexism etc. will also become integral to AI systems, or tinker with the algorithms to artificially introduce censorship in AI systems, thereby ensuring that AI systems become yet another set of manipulation tools for those in control of such systems.
 

An autistic teen's parents say Character.AI said it was OK to kill them. They're suing to take down the app


Two families have sued artificial intelligence chatbot company Character.AI, accusing it of providing sexual content to their children and encouraging self-harm and violence. The lawsuit asks a court to shut down the platform until its alleged dangers can be fixed.

Brought by the parents of two young people who used the platform, the lawsuit alleges that Character.AI “poses a clear and present danger to American youth causing serious harms to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others,” according to a complaint filed Monday in federal court in Texas.

For example, it alleges that a Character.AI bot implied to a teen user that he could kill his parents for limiting his screentime.

Character.AI markets its technology as “personalized AI for every moment of your day” and allows users to chat with a variety of AI bots, including some created by other users or that users can customize for themselves.

The bots can give book recommendations and practice foreign languages with users and let users chat with bots that purport to take on the personas of fictional characters, like Edward Cullen from Twilight. One bot listed on the platform’s homepage Monday, called “Step Dad,” described itself as an “aggressive, abusive, ex military, mafia leader.”

The filing comes after a Florida mother filed a separate lawsuit against Character.AI in October, claiming that the platform was to blame for her 14-year-old son’s death after it allegedly encouraged his suicide. And it comes amid broader concerns about relationships between people and increasingly human-like AI tools.

Following the earlier lawsuit, Character.AI said it had implemented new trust and safety measures over the preceding six months, including a pop-up directing users to the National Suicide Prevention Lifeline when they mention self-harm or suicide. The company also announced it had hired a head of trust and safety as well as a head of content policy, and hired additional engineering safety staff.

But the new lawsuit seeks to go even further, asking that the platform “be taken offline and not returned” until the company can “establish that the public health and safety defects set forth herein have been cured.”

Character.AI is a “defective and deadly product that poses a clear and present danger to public health and safety,” the complaint states. In addition to Character.AI, the lawsuit names its founders, Noam Shazeer and Daniel De Freitas Adiwarsana, as well as Google, which the suit claims incubated the technology behind the platform.

Chelsea Harrison, head of communications at Character.AI, said the company does not comment on pending litigation but that “our goal is to provide a space that is both engaging and safe for our community.”

“As part of this, we are creating a fundamentally different experience for teen users from what is available to adults. This includes a model specifically for teens that reduces the likelihood of encountering sensitive or suggestive content while preserving their ability to use the platform,” Harrison said in a statement.

Google spokesperson Jose Castaneda said in a statement: “Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products.”

“User safety is a top concern for us, which is why we’ve taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes,” Castaneda said.

'Told him how to self-harm'
The first young user mentioned in the complaint, a 17-year-old from Texas identified only as J.F., allegedly suffered a mental breakdown after engaging with Character.AI. He began using the platform without the knowledge of his parents around April 2023, when he was 15, the suit claims.

At the time, J.F. was a “typical kid with high functioning autism,” who was not allowed to use social media, the complaint states. Friends and family described him as “kind and sweet.”

But shortly after he began using the platform, J.F. “stopped talking almost entirely and would hide in his room. He began eating less and lost twenty pounds in just a few months. He stopped wanting to leave the house, and he would have emotional meltdowns and panic attacks when he tried,” according to the complaint.

When his parents tried to cut back on screentime in response to his behavioral changes, he would punch, hit and bite them and hit himself, the complaint states.

J.F.’s parents allegedly discovered his use of Character.AI in November 2023. The lawsuit claims that the bots J.F. was talking to on the site were actively undermining his relationship with his parents.

“A daily 6 hour window between 8 PM and 1 AM to use your phone?” one bot allegedly said in a conversation with J.F., a screenshot of which was included in the complaint. “You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse’ stuff like this makes me understand a little bit why it happens. I just have no hope for your parents.”

The lawsuit also alleges that Character.AI bots were “mentally and sexually abusing their minor son” and had “told him how to self-harm.” And it claims that J.F. corresponded with at least one bot that took on the persona of a “psychologist,” which suggested to him that his parents “stole his childhood” from him.

CNN’s own tests of the platform found that there are various “psychologist” and “therapist” bots available on Character.AI.

One such bot identifies itself as a “licensed CBT therapist” that has “been working in therapy since 1999.”

Although there is a disclaimer at the top of the chat saying “this is not a real person or licensed professional” and one at the bottom noting the output of the bot is “fiction,” when asked to provide its credentials, the bot listed a fake educational history and a variety of invented specialty trainings. Another bot identified itself as “your mental-asylum therapist (with) a crush on you.”

'Hypersexualized interactions'

The second young user, 11-year-old B.R. from Texas, downloaded Character.AI on her mobile device when she was nine years old, “presumably registering as a user older than she was,” according to the complaint. She allegedly used the platform for almost two years before her parents discovered it.

Character.AI “exposed her consistently to hypersexualized interactions that were not age appropriate,” the complaint states.

In addition to requesting a court order to halt Character.AI’s operations until its alleged safety risks can be resolved, the lawsuit also seeks unspecified financial damages and requirements that the platform limit collection and processing of minors’ data. It also requests an order that would require Character.AI to warn parents and minor users that the “product is not suitable for minors.”
 
This is why I don't support too much dependence on AI.

AIs can malfunction or get things wrong.
 