Racist Computers Deceived by Politically Correct Data Preprocessing

IBM open source ‘anti-bias’ AI tool to combat racist robot fear factor

    Developers using Artificial Intelligence (AI) ‘engines’ to automate any degree of decision making inside their applications will want to ensure that the brain behind that AI is free from bias and unfair influence.

    Sincere, honest scientists parsimoniously assume that “computers are unbiased”. But #TheTruthIsRacist! Politically correct programmers must dishonestly fiddle the input data with an anti-racist pre-processor to deceive the AI program into producing the desired results.

    Built with open source DNA, IBM’s new Watson ‘trust and transparency’ software service claims to be able to detect bias automatically.

    No more ‘racist robots’ then, as one creative subheadline writer suggested?

    Well it’s early days still, but (arguably) the effort here is at least focused in the right direction. […]

    This automated software service is designed to detect bias (in so far as it can) in AI models at runtime, as decisions are being made. More interestingly, it also automatically recommends data to add to the model to help mitigate any bias it has detected… so we should (logically) get better at this as we go forward. {This paragraph could be found in IBM to Fight Racist Robot Menace with AI Crimestop, but was later deleted from the original source. No wonder: it is the smoking gun!}
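    What might “detecting bias at runtime” mean concretely? A minimal sketch (my illustration, not IBM’s Watson implementation): track the favorable-decision rate per protected group over a sliding window of recent decisions and raise a flag when the statistical parity gap exceeds a threshold.

```python
# Hypothetical sketch of runtime bias monitoring (not IBM's Watson service):
# track favorable-decision rates per group over a sliding window of recent
# decisions and flag when the statistical parity gap exceeds a threshold.
from collections import deque

class BiasMonitor:
    def __init__(self, window=1000, max_gap=0.1):
        self.decisions = deque(maxlen=window)  # (group, favorable) pairs
        self.max_gap = max_gap

    def record(self, group, favorable):
        self.decisions.append((group, bool(favorable)))

    def parity_gap(self):
        """Gap between the highest and lowest per-group favorable rates."""
        groups = {g for g, _ in self.decisions}
        if not groups:
            return 0.0
        rates = []
        for g in groups:
            outcomes = [f for grp, f in self.decisions if grp == g]
            rates.append(sum(outcomes) / len(outcomes))
        return max(rates) - min(rates)

    def biased(self):
        return self.parity_gap() > self.max_gap
```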

    IBM Research will release to open source an AI bias detection and mitigation toolkit

    This is like a German Pressekodex 12.1, a media information gag code for computers. Media laws mislead the population; computer pre-processors mislead artificial intelligence programs. Otherwise we would have to accept the “racist” results of computer pattern recognition and the correct prejudices of the media-consuming population.

    AI Fairness toolkit [continued from IBM open source ‘anti-bias’ AI tool to combat racist robot fear factor]

      In addition, IBM Research is making available to the open source community the AI Fairness 360 toolkit – a library of algorithms, code and tutorials for data scientists to integrate bias detection as they build and deploy machine learning models.

      According to an IBM press statement, “While other open-source resources have focused solely on checking for bias in training data, the IBM AI Fairness 360 toolkit created by IBM Research will help check for and mitigate bias in AI models. It invites the global open source community to work together to advance the science and make it easier to address bias in AI.”

      It’s good, it’s broad, but it is (very arguably) still not perfect, is it? But then (equally arguably) neither are we humans.
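      For the record, “integrating bias detection” with the toolkit is advertised to look roughly like the following sketch (method names taken from the project’s public documentation; treat the exact signatures as assumptions): compute a fairness metric on a labelled dataset, then apply a pre-processing mitigation such as reweighing.

```python
# Sketch of the advertised AI Fairness 360 workflow: measure bias in a
# labelled dataset, then mitigate it with the Reweighing pre-processor.
# API names follow the project's public docs; exact signatures are an
# assumption here, not verified against a specific release.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute, 'hired' the favorable label.
df = pd.DataFrame({
    "sex":   [0, 0, 0, 1, 1, 1, 1, 0],
    "score": [4, 7, 5, 8, 6, 9, 7, 3],
    "hired": [0, 1, 0, 1, 1, 1, 1, 0],
})
data = BinaryLabelDataset(df=df, label_names=["hired"],
                          protected_attribute_names=["sex"])

priv, unpriv = [{"sex": 1}], [{"sex": 0}]
metric = BinaryLabelDatasetMetric(data, unprivileged_groups=unpriv,
                                  privileged_groups=priv)
print("disparate impact before:", metric.disparate_impact())

# Reweighing assigns instance weights so that the label and the protected
# attribute become statistically independent in the training set.
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
repaired = rw.fit_transform(data)
```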



      Artificial Intelligence–The Robot Becky Menace

      Not surprisingly, the conventional wisdom increasingly believes Artificial Intelligence needs a dose of Artificial Stupidity to keep it from being as racist and sexist as Natural Intelligence. Otherwise, the Robot Permit Patties will run amok, says Nature:

      AI can be sexist and racist — it’s time to make it fair

      … As much as possible, data curators should provide the precise definition of descriptors tied to the data. For instance, in the case of criminal-justice data, appreciating the type of ‘crime’ that a model has been trained on will clarify how that model should be applied and interpreted. …

      Lastly, computer scientists should strive to develop algorithms that are more robust to human biases in the data.

      Various approaches are being pursued. One involves incorporating constraints and essentially nudging the machine-learning model to ensure that it achieves equitable performance across different subpopulations and between similar individuals.

      A related approach involves changing the learning algorithm to reduce its dependence on sensitive attributes, such as ethnicity, gender, income — and any information that is correlated with those characteristics.
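      A minimal sketch of that second approach (my illustration in plain numpy, not a method from the Nature piece): logistic regression trained with an added penalty on the squared gap between the two groups’ mean predicted scores, which pushes the weights away from directions that act as proxies for the sensitive attribute.

```python
# Logistic regression with a demographic-parity penalty (illustrative
# sketch; the penalty form and lambda are assumptions, not Nature's).
import numpy as np

rng = np.random.default_rng(0)
n = 1000
s = rng.integers(0, 2, n)                                 # sensitive attribute
x = np.c_[rng.normal(s, 1.0, n), rng.normal(0, 1.0, n)]  # x1 correlates with s
y = (x[:, 0] + x[:, 1] + rng.normal(0, 1.0, n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b, lam, lr = np.zeros(2), 0.0, 5.0, 0.1
for _ in range(500):
    p = sigmoid(x @ w + b)
    gap = p[s == 1].mean() - p[s == 0].mean()  # demographic-parity gap
    # Gradient of cross-entropy plus lam * gap**2.
    g_ce = x.T @ (p - y) / n
    dgap = p * (1 - p)                          # d p_i / d z_i
    g_gap = (x[s == 1].T @ dgap[s == 1] / (s == 1).sum()
             - x[s == 0].T @ dgap[s == 0] / (s == 0).sum())
    w -= lr * (g_ce + lam * 2 * gap * g_gap)
    b -= lr * ((p - y).mean() + lam * 2 * gap
               * (dgap[s == 1].mean() - dgap[s == 0].mean()))

p = sigmoid(x @ w + b)
print("score gap after training:", p[s == 1].mean() - p[s == 0].mean())
```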


      Decision Making With Quantized Priors Leads to Discrimination

      Abstract:

      Racial discrimination in decision-making scenarios such as police arrests appears to be a violation of expected utility theory. Drawing on results from the science of information, we discuss an information-based model of signal detection over a population that generates such behavior as an alternative explanation to taste-based discrimination by the decision maker or differences among the racial populations. This model uses the decision rule that maximizes expected utility-the likelihood ratio test-but constrains the precision of the threshold to a small discrete set. The precision constraint follows from both bounded rationality in human recollection and finite training data for estimating priors. When combined with social aspects of human decision making and precautionary cost settings, the model predicts the own-race bias that has been observed in several econometric studies.

      Published in: Proceedings of the IEEE, vol. 105, no. 2, pp. 241–255, Feb. 2017 (date of publication: 21 October 2016). DOI: 10.1109/JPROC.2016.2608741. Publisher: IEEE.
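      The paper’s mechanism is easy to illustrate with a toy simulation (mine, not the authors’ model): a likelihood ratio test whose threshold is restricted to a few discrete values. With equal error costs the Bayes-optimal threshold is (1 − p)/p for prior probability p of the ‘guilty’ hypothesis, so a population whose prior falls between the available grid points is forced onto a worse operating point.

```python
# Toy illustration (not the paper's model): a likelihood ratio test whose
# threshold is quantized to a small discrete set, so groups with priors
# far from the grid points incur systematically higher error rates.
import numpy as np

rng = np.random.default_rng(1)

def error_rate(prior_guilty, eta, n=200_000):
    """Empirical total error of an LRT on N(0,1)-vs-N(1,1) observations."""
    guilty = rng.random(n) < prior_guilty
    obs = rng.normal(guilty.astype(float), 1.0)
    lr = np.exp(obs - 0.5)          # likelihood ratio of N(1,1) vs N(0,1)
    decide_guilty = lr > eta
    return np.mean(decide_guilty != guilty)

for prior in (0.10, 0.25, 0.40):
    eta_exact = (1 - prior) / prior           # unquantized Bayes threshold
    grid = np.array([1.0, 4.0, 16.0])         # coarse 'quantized prior' set
    eta_q = grid[np.argmin(np.abs(grid - eta_exact))]
    print(f"prior={prior:.2f}  "
          f"exact-threshold error={error_rate(prior, eta_exact):.3f}  "
          f"quantized error={error_rate(prior, eta_q):.3f}")
```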


      Reducing discrimination in AI with new methodology

      AI algorithms are increasingly used to make consequential decisions in applications such as medicine, employment, criminal justice, and loan approval. The algorithms recapitulate biases contained in the data on which they are trained. Training datasets may contain historical traces of intentional systemic discrimination, biased decisions due to unjust differences in human capital among groups, and unintentional discrimination, or they may be sampled from populations that do not represent everyone.

      My group at IBM Research has developed a methodology to reduce the discrimination already present in a training dataset so that any AI algorithm that later learns from it will perpetuate as little inequity as possible. This work by two Science for Social Good postdocs, Flavio Calmon (now on the faculty at Harvard University) and Bhanu Vinzamuri, two research staff members, Dennis Wei and Karthikeyan Natesan Ramamurthy, and me will be presented at NIPS 2017 in the paper “Optimized Pre-Processing for Discrimination Prevention.”

      The starting point for our approach is a dataset about people in which one or more of the attributes, such as race or gender, have been identified as protected.  We transform the probability distribution of the input dataset into an output probability distribution subject to three objectives and constraints:

      1. Group discrimination control,
      2. Individual distortion control, and
      3. Utility preservation.

      By group discrimination control, we mean that, on average, a person will have a similar chance of receiving a favorable decision irrespective of membership in the protected or unprotected group. By individual distortion control, we mean that every combination of features undergoes only a small change during the transformation, so that, for example, two people with similar attributes do not end up with very different anticipated outcomes. Finally, by utility preservation, we mean that the input probability distribution and output probability distribution are statistically similar, so that the AI algorithm can still learn what it is supposed to learn.
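      The three constraints can be made concrete in a drastically reduced form (my toy reduction, not the paper’s algorithm): for a binary protected attribute d and binary label y, choose a randomized remapping t(d, y) = P(ŷ = 1 | d, y) that minimizes expected label flips (utility preservation), keeps the two groups’ favorable rates within eps of each other (group discrimination control), and caps each individual’s flip probability at delta (individual distortion control). So reduced, the problem is a small linear program:

```python
# Toy reduction of optimized pre-processing (illustration only, not the
# NIPS 2017 algorithm): choose t[d][y] = P(yhat=1 | d, y) to minimize the
# expected number of flipped labels, subject to
#   |P(yhat=1 | d=0) - P(yhat=1 | d=1)| <= eps   (group discrimination)
#   P(yhat != y | d, y) <= delta                 (individual distortion).
import numpy as np
from scipy.optimize import linprog

# Joint input distribution p[d, y] over protected attribute d and label y.
p = np.array([[0.35, 0.05],    # d=0: mostly unfavorable outcomes
              [0.20, 0.40]])   # d=1: mostly favorable outcomes
eps, delta = 0.05, 0.40

# Variables x = [t00, t01, t10, t11] with t_dy = P(yhat=1 | d, y).
# Expected flips = p[d,0]*t_d0 + p[d,1]*(1 - t_d1); constants dropped.
c = np.array([p[0, 0], -p[0, 1], p[1, 0], -p[1, 1]])

# P(yhat=1 | d) = sum_y p(y|d) * t_dy; bound the group gap both ways.
py_d0 = p[0] / p[0].sum()
py_d1 = p[1] / p[1].sum()
row = np.array([py_d0[0], py_d0[1], -py_d1[0], -py_d1[1]])
A_ub = np.vstack([row, -row])
b_ub = np.array([eps, eps])

# Individual distortion: t_d0 <= delta and t_d1 >= 1 - delta.
bounds = [(0, delta), (1 - delta, 1), (0, delta), (1 - delta, 1)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
t = res.x.reshape(2, 2)
print("P(yhat=1 | d, y):\n", t)
print("group gap:", abs(py_d0 @ t[0] - py_d1 @ t[1]))
```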


      Introducing AI Fairness 360
