Algorithmic Transparency and Accountability

Introduction

Algorithmic Transparency and Accountability deals with the consequences and risks of algorithms that make decisions affecting humans, and with ways of addressing these problems.

“Software and algorithms have come to adjudicate an ever broader swath of our lives, including everything from search engine personalization and advertising systems, to teacher evaluation, banking and finance, political campaigns, and police surveillance. But these algorithms can make mistakes. They have biases. Yet they sit in opaque black boxes, their inner workings, their inner “thoughts” hidden behind layers of complexity. We need to get inside that black box, to understand how they may be exerting power on us, and to understand where they might be making unjust mistakes.” (Nick Diakopoulos, 2016, retrieved Jan 2017)

The USACM statement

In January 2017, the US Public Policy Council of the Association for Computing Machinery (USACM) published its Statement on Algorithmic Transparency and Accountability:

Principles for Algorithmic Transparency and Accountability

1. Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.

2. Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.

3. Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.

4. Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
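
To make the idea of a procedural explanation concrete, here is a minimal Python sketch (not part of the USACM statement) in which a linear scoring model reports each feature's contribution to its final score; the feature names and weights are hypothetical.

  def explain_decision(weights, features):
      """Return a linear model's score and each feature's contribution to it."""
      contributions = {name: weights[name] * value
                       for name, value in features.items()}
      return sum(contributions.values()), contributions

  # Hypothetical model weights and one student's feature values.
  weights = {"attendance": 0.5, "essay_score": 1.2, "quiz_average": 0.8}
  student = {"attendance": 0.9, "essay_score": 0.7, "quiz_average": 0.6}

  score, contributions = explain_decision(weights, student)
  print(f"score = {score:.2f}")
  for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
      print(f"  {name}: {c:+.2f}")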

5. Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process. Public scrutiny of the data provides maximum opportunity for corrections. However, concerns over privacy, protecting trade secrets, or revelation of analytics that might allow malicious actors to game the system can justify restricting access to qualified and authorized individuals.
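
In practice, such a description could be kept as a structured provenance record stored alongside the training data. The schema below is a hypothetical sketch, not something prescribed by the statement.

  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class ProvenanceRecord:
      """Hypothetical schema describing how a training dataset was collected."""
      dataset_name: str
      collected_by: str         # person, organization, or automated process
      collection_method: str    # e.g. "survey", "web crawl", "admin records"
      collection_period: str
      known_biases: List[str] = field(default_factory=list)

  record = ProvenanceRecord(
      dataset_name="student_outcomes_2016",
      collected_by="registrar export",
      collection_method="administrative records",
      collection_period="2010-2016",
      known_biases=["only enrolled students; dropouts under-represented"],
  )
  print(record)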

6. Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
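
As a sketch of what recording decisions for later audit might look like, the snippet below appends each decision, together with the model version and inputs, to a JSON log, and hashes each entry so later tampering is detectable; the file name and record fields are illustrative assumptions.

  import hashlib
  import json
  import time

  def log_decision(logfile, model_version, inputs, output):
      """Append one decision record to an audit log."""
      entry = {
          "timestamp": time.time(),
          "model_version": model_version,
          "inputs": inputs,
          "output": output,
      }
      # Hash the serialized entry so tampering can be detected later.
      serialized = json.dumps(entry, sort_keys=True)
      entry["digest"] = hashlib.sha256(serialized.encode()).hexdigest()
      with open(logfile, "a") as f:
          f.write(json.dumps(entry) + "\n")

  log_decision("decisions.log", "grader-v1.3",
               {"essay_id": 42, "word_count": 512}, {"grade": "B+"})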

7. Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results. In particular, they should routinely perform tests to assess and determine whether the model generates discriminatory harm. Institutions are encouraged to make the results of such tests public.
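
One routine test for discriminatory harm is to compare favorable-outcome rates across groups. The sketch below computes a disparate impact ratio, for which values below roughly 0.8 are often treated as a warning sign (the "four-fifths rule"); the group labels and outcomes are made up.

  def selection_rates(outcomes):
      """outcomes: iterable of (group, favorable) pairs; returns rate per group."""
      totals, favorable = {}, {}
      for group, ok in outcomes:
          totals[group] = totals.get(group, 0) + 1
          favorable[group] = favorable.get(group, 0) + int(ok)
      return {g: favorable[g] / totals[g] for g in totals}

  def disparate_impact_ratio(outcomes):
      """Ratio of the lowest to the highest group selection rate."""
      rates = selection_rates(outcomes)
      return min(rates.values()) / max(rates.values()), rates

  # Made-up decisions: (group, received favorable outcome).
  outcomes = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
  ratio, rates = disparate_impact_ratio(outcomes)
  print(rates, f"ratio = {ratio:.2f}")  # 0.50 here, below the 0.8 heuristic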

In education

Danger areas:

  • Institutional automatic profiling of students with learning analytics
  • Student profiling with "big data" (e.g. social network mining)
  • Automated evaluation and grading of students' written work
  • Search engines that serve content users like to hear, or content that agencies want favored
  • Automated teacher evaluation systems
  • ...

Links

Organizations and people

In education