AI System Designed to Monitor Social Media ‘Hate Speech’, Finds That Minorities Are Substantially More Racist and Bigoted

Via Gab

Every day the media and coastal elites tell us how horrible “hate speech” is on the internet and how something must be done to stop it. The Supreme Court of the United States has ruled that “hate speech,” however you define it, is First Amendment-protected speech in America.

Some researchers from Cornell University built an artificial intelligence system to identify “hate speech” and “offensive content.” It turns out that the remarks from white people were “substantially” less hateful than the comments purportedly made by minorities in the study. What is most interesting here is that the data was sourced from Twitter, which supposedly bans “hate speech,” unless, apparently, that hate is coming from minorities.

Of course, now that the data doesn’t match the expectations of researchers and journalists, they are making excuses. The AI must be racist or something.

 

From Campus Reform

A new study out of Cornell reveals that the machine learning practices behind AI, which are designed to flag offensive online content, may actually “discriminate against the groups who are often the targets of the abuse we are trying to detect,” according to the study abstract.


The study involved researchers training a system to flag tweets containing “hate speech,” in much the same way that other universities are developing systems for eventual online use. They trained it on several databases of tweets, some of which had been flagged by human evaluators for offensive content.
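To make that pipeline concrete, here is a minimal sketch of the kind of supervised classifier the article describes: a model trained on tweets that human annotators have labeled offensive or not. This is an illustration, not the study’s code; the file tweets.csv and its text/label columns are hypothetical stand-ins for the annotated databases.

```python
# Minimal sketch of a supervised abusive-tweet classifier (illustration only;
# not the study's actual code). Assumes a hypothetical "tweets.csv" with a
# "text" column and a "label" column (1 = annotators flagged as offensive).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("tweets.csv")
train_texts, test_texts, train_labels, test_labels = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=0
)

# Bag-of-words features plus logistic regression: a common baseline for
# abusive-language detection. The model learns whatever patterns -- including
# annotator biases -- are present in the human labels.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(train_texts, train_labels)
print("held-out accuracy:", clf.score(test_texts, test_labels))
```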

“The results show evidence of systematic racial bias in all datasets, as classifiers trained on them tend to predict that tweets written in African-American English are abusive at substantially higher rates. If these abusive language detection systems are used in the field they will, therefore, have a disproportionate negative impact on African-American social media users,” the abstract continues.

……Cornell’s machine learning added another variable not used by other universities. Using a combination of census data, tweet location data, and demographic-specific language, they also trained the system to quantify whether a tweet’s language was “black-aligned” or “white-aligned.” The researchers used five different databases of potential “hate speech” tweets. All five yielded the same results: tweets likely from African Americans were much more likely to be flagged as offensive than those likely to be from whites.
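The “black-aligned”/“white-aligned” variable can be pictured as a log-odds score comparing how probable a tweet’s words are under two dialect-linked word distributions. Below is a toy sketch with invented probabilities standing in for the census- and location-derived model the study used, followed by the per-group flag-rate comparison the researchers report; df and clf come from the hypothetical training sketch above.

```python
# Toy dialect-alignment score (invented probabilities; the study derived its
# model from census-linked, geolocated tweets, which is not reproduced here).
import math

p_black = {"finna": 0.004, "lol": 0.010, "the": 0.050}   # hypothetical
p_white = {"finna": 0.0001, "lol": 0.008, "the": 0.050}  # hypothetical

def alignment_score(tweet, smooth=1e-6):
    """Positive = more black-aligned language, negative = more white-aligned."""
    score = 0.0
    for word in tweet.lower().split():
        score += math.log(p_black.get(word, smooth)) - math.log(p_white.get(word, smooth))
    return score

# Audit step: compare how often the trained classifier flags each group.
df["group"] = df["text"].map(
    lambda t: "black-aligned" if alignment_score(t) > 0 else "white-aligned"
)
df["flagged"] = clf.predict(df["text"])
print(df.groupby("group")["flagged"].mean())  # flag rate per dialect group
```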

……the researchers believe that the source of this type of machine discrimination lies in human error by those doing the original annotating and classification from which the machine learns.
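That failure mode is easy to simulate: if annotators disproportionately mislabel benign black-aligned tweets as offensive, a model retrained on those labels inherits the bias. Here is a hedged sketch reusing the hypothetical df, group labels, and pipeline from the sketches above; the 20 percent flip rate is invented for illustration.

```python
# Simulate biased annotation (illustration only): flip a fraction of benign
# black-aligned labels to "offensive" and retrain. The flip rate is invented.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
biased = df.copy()
benign_black = (biased["group"] == "black-aligned") & (biased["label"] == 0)
flip = benign_black & (rng.random(len(biased)) < 0.2)  # 20% mislabeled
biased.loc[flip, "label"] = 1

biased_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
biased_clf.fit(biased["text"], biased["label"])
# The retrained model now over-flags black-aligned tweets even when the
# underlying text is unchanged -- the annotators' bias became the model's.
```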

 

In other words, the system set up to catch “offensive hate speech” inconveniently revealed that blacks are the worst offenders. Color me shocked.  Radical blacks, Hispanics, and liberal whites share the same bigoted proclivities, but they get a pass as long as the targets are white, Christian, and Republican.

 

3 thoughts on “AI System Designed to Monitor Social Media ‘Hate Speech’, Finds That Minorities Are Substantially More Racist and Bigoted”

  1. Pingback: Liberal Attempt To Have AI Track Down Hate Speech Backfires | END TIMES PROPHECY

  2. Sadly, what else is new? At best they can say Twitter skewed the results through its own bias against some (white) groups, allowing hateful comments to pass as long as the hate was expressed by minorities it favors against the groups it is biased against.

  3. johndegbert

    What a Surprise! Who’da thunk it? What are the odds? Etc.; etc.; etc.; ad infinitum . . . ad nauseam . . . Amen.

