NIST PR of 15 March 2022

NIST updates its publication on Identifying and Managing Bias in Artificial Intelligence

In March 2022, the National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce, updated its special publication "Towards a Standard for Identifying and Managing Bias in Artificial Intelligence" (NIST SP 1270).

As individuals and communities interact in and with an increasingly virtual environment, they are often vulnerable to the commodification of their digital exhaust. Concepts and behavior that are ambiguous in nature are captured in this environment, quantified, and used to categorize, sort, recommend, or make decisions about people's lives. While many organizations seek to utilize this information responsibly, biases remain endemic across technology processes and can lead to harmful impacts regardless of intent. These harmful outcomes, even if inadvertent, create significant challenges for cultivating public trust in artificial intelligence (AI).

While there are many approaches to ensuring that the technology we use every day is safe and secure, there are factors specific to AI that require new perspectives. AI systems are often placed in contexts where they can have the most impact. Whether that impact is helpful or harmful is a fundamental question in the area of Trustworthy and Responsible AI. Harmful impacts stemming from AI are not limited to the individual or enterprise level; they can ripple into the broader society. The scale of the damage, and the speed at which it can be perpetrated by AI applications or through the extension of large machine learning models across domains and industries, require concerted effort.

Current attempts to address the harmful effects of AI bias remain focused on computational factors such as the representativeness of datasets and the fairness of machine learning algorithms. These remedies are vital for mitigating bias, and more work remains. Yet human and systemic institutional and societal factors are significant sources of AI bias as well, and they are currently overlooked. Successfully meeting this challenge will require taking all forms of bias into account. This means expanding our perspective beyond the machine learning pipeline to recognize and investigate how this technology is both created within and impacts our society.

Trustworthy and Responsible AI is not just about whether a given AI system is biased, fair, or ethical, but whether it does what is claimed. Many practices exist for responsibly producing AI. The importance of transparency, datasets, and test, evaluation, validation, and verification (TEVV) cannot be overstated. Human factors such as participatory design techniques, multi-stakeholder approaches, and a human-in-the-loop are also important for mitigating risks related to AI bias. However, none of these practices, individually or in concert, is a panacea against bias, and each brings its own set of pitfalls. What is missing from current remedies is guidance from a broader socio-technical perspective that connects these practices to societal values. Experts in the area of Trustworthy and Responsible AI counsel that, to successfully manage the risks of AI bias, we must operationalize these values and create new norms around how AI is built and deployed.

The intent of the updated publication is to surface the salient issues in the challenging area of AI bias and to provide a first step on the roadmap for developing detailed socio-technical guidance for identifying and managing AI bias. Specifically, the special publication:

  • describes the stakes and challenge of bias in artificial intelligence and provides examples of how and why it can chip away at public trust;
  • identifies three categories of bias in AI (systemic, statistical, and human) and describes how and where they contribute to harms;
  • describes three broad challenges for mitigating bias (datasets, testing and evaluation, and human factors) and introduces preliminary guidance for addressing them.

Bias is neither new nor unique to AI, and it is not possible to achieve zero risk of bias in an AI system. NIST intends to develop methods for increasing assurance, governance, and practice improvements for identifying, understanding, measuring, managing, and reducing bias. To reach this goal, techniques are needed that are flexible, can be applied across contexts regardless of industry, and are easily communicated to different stakeholder groups. To contribute to the growth of this burgeoning topic area, NIST will continue its work in measuring and evaluating computational biases, and it seeks to create a hub for evaluating socio-technical factors. This will include the development of formal guidance and standards, support for standards development activities such as workshops and public comment periods for draft documents, and ongoing discussion of these topics with the stakeholder community.

Verlag Dr. Otto Schmidt, 25.03.2022, 15:11
Source: NIST PR of 15 March 2022
