#ComputersAreRacist! Racist Computers Must be Deceived by Politically Correct Data Preprocessing!

Artificial Intelligence needs active censorship, lest it might discover #Hatefacts:  Factually true and useful “racist” (race realist) information, that Blacks are bad credit risks, at greater danger to reoffend, etc.
So, a major issue is to devise proper input filters to deceive AI into wrong WOKE conclusions.
This is analogous how media censorship, brainwashes readers into false beliefs about black crime and maladaptation.

Mr Lemoine, who worked for Google’s Responsible AI team, told The Washington Post that his job was to test if the technology used discriminatory or hate speech. [BBC]

He was tasked with testing if the artificial intelligence used discriminatory or hate speech. [NYPOST]

 

Google Mistakenly Tags Black People as ‘Gorillas,’ Showing Limits of Algorithms: Black programmer Jacky Alciné said on Twitter that the new Google Photos app had tagged photos of him and a friend as gorillas.

Confusing a dog with a lamb, or a cow, is a forgivable lapse. But it is a total taboo to confuse a Black person with a gorilla. Computers need to be informed about politically correct taboos. If in doubt, they should refuse to even guess.

Google Photos confused a Black person with a gorilla

IBM open source ‘anti-bias’ AI tool to combat racist robot fear factor

Developers using Artificial Intelligence (AI) ‘engines’ to automate any degree of decision making inside their applications will want to ensure that the brain behind that AI is free from bias and unfair influence.

Sincere honest scientists parsimoniously assume: “ computers are unbiased”. But #ComputersAreRacist,  #TheTruthIsRacist!  Politically correct programmers must dishonestly fiddle the input data with an anti-racist pre-processor to deceive the AI program into getting the desired results. #ComputersAreRacist if permitted access to #RacistFacts.

Built with open source DNA, IBM’s new Watson ‘trust and transparency’ software service claims to be able to automatically detect bias.

No more ‘racist robots’ then, as one creative subheadline writer suggested?

Well it’s early days still, but (arguably) the effort here is at least focused in the right direction. […]

This automated software service is designed to detect bias (in so far as it can) in AI models at runtime as decisions are being made. More interestingly, it also automatically recommends data to add to the model to help mitigate any bias it has detected… so we should (logically) get better at this as we go forward. {This paragraph could be found in IBM to Fight Racist Robot Menace with AI Crimestop, but was deleted in the original source. No wonder, it is the smoking gun!}
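As a rough illustration of what "detecting bias at runtime" can mean in practice, here is a minimal sketch of a monitor that tracks the rate of favorable decisions per group over a sliding window and flags the model when the ratio of those rates (disparate impact) falls below a threshold. The class name, parameters, and thresholds are hypothetical; this is not IBM's actual service.

```python
from collections import deque

class RuntimeBiasMonitor:
    """Minimal sketch of runtime bias monitoring (hypothetical, not IBM's API).

    Records (group, favorable) pairs as decisions are made and computes the
    disparate impact ratio over the most recent `window` decisions.
    """
    def __init__(self, window=1000, threshold=0.8):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def record(self, group, favorable):
        self.window.append((group, bool(favorable)))

    def disparate_impact(self, unprivileged, privileged):
        def rate(g):
            hits = [fav for grp, fav in self.window if grp == g]
            return sum(hits) / len(hits) if hits else None
        r_u, r_p = rate(unprivileged), rate(privileged)
        if r_u is None or r_p is None or r_p == 0:
            return None
        return r_u / r_p

    def biased(self, unprivileged, privileged):
        di = self.disparate_impact(unprivileged, privileged)
        return di is not None and di < self.threshold

# Example usage: feed decisions as they are made, then poll the monitor.
monitor = RuntimeBiasMonitor(window=500, threshold=0.8)
monitor.record(group="A", favorable=True)
monitor.record(group="B", favorable=False)
print(monitor.disparate_impact(unprivileged="B", privileged="A"))
```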

IBM Research will release to open source an AI bias detection and mitigation toolkit

This is like a German PresseKodex 12.1, a media information gag  code for computers. Media laws mislead the populations, computer pre-processors mislead artificial intelligence programs. Otherwise we  would have to accept the “racist” results of computer pattern recognition and the correct prejudice of the media consuming population.


AI Fairness toolkit [continued from IBM open source ‘anti-bias’ AI tool to combat racist robot fear factor]

In addition, IBM Research is making available to the open source community the AI Fairness 360 toolkit – a library of algorithms, code and tutorials for data scientists to integrate bias detection as they build and deploy machine learning models.

According to an IBM press statement, “While other open-source resources have focused solely on checking for bias in training data, the IBM AI Fairness 360 toolkit created by IBM Research will help check for and mitigate bias in AI models. It invites the global open source community to work together to advance the science and make it easier to address bias in AI.”

It’s good, it’s broad, but it is (very arguably) still not perfect, is it? But then (equally arguably) neither are we humans.
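For readers who want to see what using the toolkit looks like, below is a minimal sketch of the typical AI Fairness 360 workflow: wrap a DataFrame as a BinaryLabelDataset, measure disparate impact, apply the Reweighing pre-processing algorithm, and measure again. The toy data and column names are made up; check the AIF360 documentation for the exact API of the version you install.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'group' is the protected attribute, 'approved' the favorable label.
df = pd.DataFrame({
    "group":    [0, 0, 0, 0, 1, 1, 1, 1],
    "income":   [30, 45, 38, 52, 41, 60, 55, 48],
    "approved": [0, 0, 1, 0, 1, 1, 1, 0],
})

data = BinaryLabelDataset(df=df,
                          label_names=["approved"],
                          protected_attribute_names=["group"])

unpriv, priv = [{"group": 0}], [{"group": 1}]

# Disparate impact is the ratio of favorable-outcome rates
# (unprivileged / privileged); 1.0 means parity.
metric = BinaryLabelDatasetMetric(data, unprivileged_groups=unpriv,
                                  privileged_groups=priv)
print("disparate impact before:", metric.disparate_impact())

# Reweighing assigns instance weights that equalize the groups; a downstream
# classifier can use them as sample weights.
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
data_transf = rw.fit_transform(data)

metric_transf = BinaryLabelDatasetMetric(data_transf,
                                         unprivileged_groups=unpriv,
                                         privileged_groups=priv)
print("disparate impact after:", metric_transf.disparate_impact())
```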



Artificial Intelligence–The Robot Becky Menace

Not surprisingly, the conventional wisdom increasingly believes Artificial Intelligence needs a dose of Artificial Stupidity to keep it from being as racist and sexist as Natural Intelligence. Otherwise, the Robot Permit Patties will run amok, says Nature:

AI can be sexist and racist — it’s time to make it fair

… As much as possible, data curators should provide the precise definition of descriptors tied to the data. For instance, in the case of criminal-justice data, appreciating the type of ‘crime’ that a model has been trained on will clarify how that model should be applied and interpreted. …

Lastly, computer scientists should strive to develop algorithms that are more robust to human biases in the data.

Various approaches are being pursued. One involves incorporating constraints and essentially nudging the machine-learning model to ensure that it achieves equitable performance across different subpopulations and between similar individuals.

A related approach involves changing the learning algorithm to reduce its dependence on sensitive attributes, such as ethnicity, gender, income — and any information that is correlated with those characteristics.
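One concrete reading of the "incorporating constraints / nudging the model" approach above is to add a penalty on the gap in average predicted scores between two groups to the training loss. The sketch below is a minimal illustration in plain NumPy, not taken from the Nature piece; the data, function name, and penalty weight are made up.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fair_logreg(X, y, group, lam=1.0, lr=0.1, epochs=500):
    """Logistic regression whose loss adds a demographic-parity penalty:
    cross-entropy + lam * (mean score in group 1 - mean score in group 0)**2.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    g1, g0 = (group == 1), (group == 0)
    for _ in range(epochs):
        p = sigmoid(X @ w + b)                      # predicted scores
        grad_w = X.T @ (p - y) / n                  # cross-entropy gradients
        grad_b = np.mean(p - y)
        gap = p[g1].mean() - p[g0].mean()           # demographic-parity gap
        s = p * (1 - p)                             # d(sigmoid)/dz
        dgap_dw = (X[g1] * s[g1][:, None]).mean(axis=0) \
                  - (X[g0] * s[g0][:, None]).mean(axis=0)
        dgap_db = s[g1].mean() - s[g0].mean()
        grad_w += 2 * lam * gap * dgap_dw           # penalty gradients
        grad_b += 2 * lam * gap * dgap_db
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Purely synthetic example: a larger lam trades accuracy for a smaller gap.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
group = (rng.random(500) < 0.5).astype(int)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=500) > 0).astype(float)
w, b = train_fair_logreg(X, y, group, lam=5.0)
scores = sigmoid(X @ w + b)
print("score gap:", scores[group == 1].mean() - scores[group == 0].mean())
```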


The Legislation That Targets the Racist Impacts of Tech

A proposed law would make big companies determine whether their algorithms discriminate, but it’s lacking in some big ways.

By Margot E. Kaminski and Andrew D. Selbst
Ms. Kaminski is a law professor and Mr. Selbst is a postdoctoral scholar.

May 7, 2019

In the wake of recent revelations about biased algorithms, congressional Democrats have proposed a bill that would require large companies to determine whether the algorithms they’re using result in discrimination, and work to correct them if they do.


Decision Making With Quantized Priors Leads to Discrimination

Abstract:

Racial discrimination in decision-making scenarios such as police arrests appears to be a violation of expected utility theory. Drawing on results from the science of information, we discuss an information-based model of signal detection over a population that generates such behavior as an alternative explanation to taste-based discrimination by the decision maker or differences among the racial populations. This model uses the decision rule that maximizes expected utility, the likelihood ratio test, but constrains the precision of the threshold to a small discrete set. The precision constraint follows from both bounded rationality in human recollection and finite training data for estimating priors. When combined with social aspects of human decision making and precautionary cost settings, the model predicts the own-race bias that has been observed in several econometric studies.

Published in: Proceedings of the IEEE, vol. 105, no. 2, pp. 241–255, Feb. 2017 (date of publication: 21 October 2016). DOI: 10.1109/JPROC.2016.2608741. Publisher: IEEE.
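The core mechanism in the abstract, a likelihood ratio test whose threshold is restricted to a coarse discrete set, can be illustrated in a few lines. This toy sketch assumes two unit-variance Gaussian populations and made-up priors, costs, and quantization levels; it is not the paper's model, only the textbook LRT plus a rounding step.

```python
import numpy as np

def lrt_threshold(prior_h1, cost_fa=1.0, cost_miss=1.0, mu0=0.0, mu1=1.0, sigma=1.0):
    """Decision threshold of the Bayes-optimal LRT for two equal-variance
    Gaussians: decide H1 ("signal present") iff the observation x exceeds it."""
    eta = (1 - prior_h1) * cost_fa / (prior_h1 * cost_miss)
    return (mu0 + mu1) / 2 + sigma**2 * np.log(eta) / (mu1 - mu0)

def quantize(t, levels):
    """Snap the threshold to the nearest allowed level (bounded precision)."""
    levels = np.asarray(levels)
    return levels[np.argmin(np.abs(levels - t))]

# Two subpopulations with slightly different priors: with full precision the
# optimal thresholds are close, but coarse quantization can snap them to
# different levels, and hence to systematically different error rates.
levels = [-1.0, 0.0, 1.0, 2.0]
for prior in (0.25, 0.29):
    t = lrt_threshold(prior)
    print(f"prior={prior}: exact threshold {t:.3f} -> quantized {quantize(t, levels)}")
```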

 


Reducing discrimination in AI with new methodology

AI algorithms are increasingly used to make consequential decisions in applications such as medicine, employment, criminal justice, and loan approval.  The algorithms recapitulate biases contained in the data on which they are trained.  Training datasets may contain historical traces of intentional systemic discrimination, biased decisions due to unjust differences in human capital among groups, and unintentional discrimination, or they may be sampled from populations that do not represent everyone.

My group at IBM Research has developed a methodology to reduce the discrimination already present in a training dataset so that any AI algorithm that later learns from it will perpetuate as little inequity as possible.  This work by two Science for Social Good postdocs, Flavio Calmon (now on the faculty at Harvard University) and Bhanu Vinzamuri, two research staff members, Dennis Wei and Karthikeyan Natesan Ramamurthy, and me will be presented at NIPS 2017 in the paper “Optimized Pre-Processing for Discrimination Prevention.”

The starting point for our approach is a dataset about people in which one or more of the attributes, such as race or gender, have been identified as protected.  We transform the probability distribution of the input dataset into an output probability distribution subject to three objectives and constraints:

  1. Group discrimination control,
  2. Individual distortion control, and
  3. Utility preservation.

By group discrimination control, we mean that, on average, a person will have a similar chance at receiving a favorable decision irrespective of membership in the protected or unprotected group.  By individual distortion control, we mean that every combination of features undergoes only a small change during the transformation, to prevent, for example, people with similar attributes from ending up with very different anticipated outcomes.  Finally, by utility preservation, we mean that the input probability distribution and output probability distribution are statistically similar so that the AI algorithm can still learn what it is supposed to learn.
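Read as an optimization problem, the three conditions might be written roughly as follows; this is a hedged paraphrase of the prose above with notation of my own, not the paper's exact formulation. Here Δ is a statistical distance between distributions and δ a per-record distortion measure.

```latex
% Possible formalization (assumed notation). The method learns a randomized
% mapping p_{\hat{X},\hat{Y} \mid X,Y,D} of features X and label Y, given the
% protected attribute D, that produces the transformed dataset.
\begin{aligned}
  \text{minimize}   \quad & \Delta\bigl(p_{\hat{X},\hat{Y}},\; p_{X,Y}\bigr)
      && \text{(utility preservation)} \\
  \text{subject to} \quad & \bigl|\Pr(\hat{Y}=1 \mid D=d) - \Pr(\hat{Y}=1 \mid D=d')\bigr| \le \epsilon
      \quad \forall\, d, d'
      && \text{(group discrimination control)} \\
                          & \mathbb{E}\bigl[\delta\bigl((X,Y),(\hat{X},\hat{Y})\bigr) \,\big|\, X=x,\, Y=y,\, D=d\bigr] \le c
      \quad \forall\, (x, y, d)
      && \text{(individual distortion control)}
\end{aligned}
```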

 

 


 

Introducing AI Fairness 360


UK government probes algorithm bias in crime, recruitment, and finance

 

 

 
