Blog | 07/17/2019

Algorithmic Accountability Act Introduced to Protect Against Bias in AI Systems

Team Contact: Isaac Slutsky

  • Artificial Intelligence

AI systems are being used for many applications, including facial recognition, recidivism prediction, and operation of autonomous vehicles.  Some of the hardest problems with these systems lie not in the use of a neural network, but in gathering data that correlates with the outcomes to be predicted.  In some cases, deficiencies in the datasets may result in outcomes that are biased against subgroups of people.
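To make that point concrete, here is a minimal, synthetic sketch (not drawn from any system mentioned in this post; it assumes NumPy and scikit-learn, and the data, label rules, and group names are invented) of how a training set dominated by one subgroup can leave a model with a much higher error rate on an under-represented subgroup:

```python
# Synthetic illustration: an imbalanced training set can produce very different
# error rates across subgroups, even when overall accuracy looks acceptable.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, rule):
    """Generate n samples; `rule` maps features to labels for this subgroup."""
    X = rng.normal(size=(n, 2))
    y = rule(X).astype(int)
    return X, y

# Deliberately exaggerated: the two (hypothetical) groups follow different label rules.
rule_a = lambda X: X[:, 0] > 0          # majority group
rule_b = lambda X: X[:, 1] > 0          # under-represented group

Xa, ya = make_group(5000, rule_a)       # 5,000 training examples from group A
Xb, yb = make_group(100, rule_b)        # only 100 from group B

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# The model fits the majority pattern; its error rate on group B is far higher.
for name, rule in [("group A", rule_a), ("group B", rule_b)]:
    X_test, y_test = make_group(2000, rule)
    print(f"{name}: error rate = {1 - model.score(X_test, y_test):.2f}")
```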

On April 10, US Senators Wyden and Booker introduced the Algorithmic Accountability Act.  The act would require companies to study their automated decision systems and identify issues that result in or contribute to inaccurate, unfair, biased, or discriminatory decisions impacting consumers.  A copy of the Act is available here.

Overseas, the European Commission recently created a High-Level Expert Group on Artificial Intelligence (AI HLEG) to provide strategic advice on the implementation of AI systems.  That group has produced draft ethical guidelines that, among other things, note the need for diverse training data to avoid unintentional harm in the implementation of AI algorithms.  Last week, the European Commission announced a pilot program to test the application of these guidelines.  Further information on the draft guidelines is available here.

Bias in AI models is not a new topic.  IBM, for example, described its work last year to improve facial recognition training datasets and reduce disparities in error rates across gender and skin tone. Additionally, papers have been written that discuss how to expose algorithmic bias in AI models through algorithmic auditing. See an example here.
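For illustration only, one simplified form such an audit can take (this is not IBM's method or the cited paper's procedure; the function, data, and group labels below are invented) is to compute a model's error rates separately for each subgroup and compare them:

```python
# Sketch of a basic disparity check: given predictions, ground truth, and a
# subgroup label for each record, report per-group false positive and false
# negative rates. Assumes each subgroup contains both classes.
import numpy as np

def audit_by_group(y_true, y_pred, group):
    """Return false positive / false negative rates for each subgroup."""
    report = {}
    for g in np.unique(group):
        m = group == g
        fpr = np.mean(y_pred[m][y_true[m] == 0] == 1)   # false positive rate
        fnr = np.mean(y_pred[m][y_true[m] == 1] == 0)   # false negative rate
        report[g] = {"fpr": fpr, "fnr": fnr, "n": int(m.sum())}
    return report

# Tiny made-up example: group B's false negative rate is much worse than group A's.
y_true = np.array([0, 1, 0, 1, 0, 1, 0, 1])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for g, stats in audit_by_group(y_true, y_pred, group).items():
    print(g, stats)
```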

Whether treated as a legal issue or a technical one, we should expect to hear more about what can be done to address discriminatory decision-making by AI systems.
