Notes from SHERPA meeting on AI Ethics

In the SHERPA project, we conduct an ethical investigation of AI and Big Data. My second meeting in Brussels is now over, and as a project stakeholder, I wanted to briefly share some high-level information before writing a longer article about current issues, to-dos, alignment with the Open Ethics initiative, and my point of view on the progress.

Recently, three prominent groups published their AI ethics guidelines. The IEEE published its Ethically Aligned Design guidelines, the OECD published its Principles on AI, and the High-Level Expert Group on AI of the European Commission (HLEG) published its Ethics Guidelines for Trustworthy AI. We reviewed these guidelines and, in addition, some 20+ other existing initiatives. Notably, the IEEE, OECD, and HLEG guidelines are quite consistent with each other, and with the SHERPA group’s own ethical analysis.

As an EU-funded project, SHERPA takes a close look at the HLEG guidelines for trustworthy AI. While the HLEG guidelines are fairly generic in nature, we see a pressing need to operationalize them (i.e., answer the “How?” question) and to build two distinct sets of recommendations: one for developers and one for technology adopters.

Within SHERPA, we are currently exploring six major sets of ethical requirements (value categories):

  1. Autonomy: Human agency, liberty and dignity
  2. Privacy and data governance
  3. Transparency: Including traceability, explainability and communication
  4. Diversity, non-discrimination and fairness: Avoidance and reduction of bias, ensuring fairness, and avoidance of discrimination
  5. Well-being: Individual, societal and environmental well-being
  6. Accountability: Auditability, minimization and reporting of negative impact, internal and external governance frameworks, redress, and human oversight

The meeting took place on July 3rd–4th, 2019 in Brussels at CEN, the European Committee for Standardization. As part of the event, we had presentations from several representatives (NEN/ISO, IEEE) about how standards are developed and how their adoption and normalization usually happen.

The workshop was structured as a set of breakout sessions, organized around three discussion themes:

  • Value alignment, definitions, approaches
  • Applicability of existing development and adoption frameworks to incorporate ethical requirements in the processes (COBIT, ITIL, CRISP-DM, Agile)
  • Special topics in ethics of AI and Big Data (Surveillance, Applications in media and politics, Military and Defense, Covert and deceptive AI, Ethically aware AI, Decision support systems)

Soon, I’ll be writing an article about the event itself, as well as the standards, guidelines, and regulatory spectrum for AI ethics. My notes from the workshop already run to about 15 pages, so it’s going to be quite difficult to shed light on all topics equally. If you want to hear about something specific, please let me know.
