Everybody talks about trustworthy AI - but what could this mean in practice?

Nicolas Zahn & Jessica Espinosa • January 2023


Cover image: created with Midjourney, an artificial intelligence program that generates images from textual descriptions

Artificial intelligence (AI) has become a strategically important technology that can bring a range of health, economic and social benefits. Nevertheless, machine learning and deep learning systems carry specific risks and challenges that can lead to unexpected consequences and impacts. As AI capabilities grow faster than our understanding of them, it becomes harder to determine whether these models, algorithms and systems operate in a fair, transparent, secure and ethical way, or whether they actually serve the goal of improving human rights and welfare.

Nowadays, trustworthy AI has become a key topic for governance and technology impact assessment efforts, increasing the need to identify both the ethical and legal principles around it. This has led not only to various international standards being proposed in this dynamic ecosystem (see, e.g., https://publications.jrc.ec.europa.eu/repository/handle/JRC131155).

It has also created the need for closer international cooperation, as illustrated by the joint roadmap for trustworthy AI and risk management between the European Union and the United States of America (https://digital-strategy.ec.europa.eu/en/library/ttc-joint-roadmap-trustworthy-ai-and-risk-management), which aims at better coordination of policy activities, from sharing frameworks such as the NIST AI Risk Management Framework (https://www.nist.gov/itl/ai-risk-management-framework) to agreeing on common definitions and terminology.

What remains challenging, however, is the operationalization of lofty principles and abstract values. This is where Z-Inspection® comes into play as a potentially useful tool for co-design, self-assessment, or auditing to foster the highest levels of trustworthiness in AI systems. Z-Inspection® is a holistic process for evaluating and auditing new technologies, in which ethical issues are discussed and managed through the elaboration of socio-technical scenarios.

Based on the EU Framework for Trustworthy Artificial Intelligence, Z-Inspection® provides a systematic assessment of an AI system’s ethical, technical, domain-specific and legal implications within a given context. Because concerns about effectiveness, unintended impacts, and inequities require more than a one-size-fits-all evaluation, Z-Inspection® relies on an independent evaluation by an interdisciplinary team of experts. The Z-Inspection® process consists of three phases, shown in the diagram below.

Diagram taken from “How to Assess Trustworthy AI in Practice” (https://arxiv.org/abs/2206.09887)

  • The Set Up phase consists of validating the pre-conditions of the assessment, assembling the interdisciplinary team of experts who will work with the stakeholders owning the specific AI use case, and finally defining the boundaries and context in which the assessment takes place.
  • The Assess phase is an iterative process in which socio-technical scenarios are created and analysed, ethical issues and tensions are identified, and claims are validated against evidence. In this phase, a mapping from an “open to closed vocabulary” is used as a consensus-based approach.
  • The Resolve phase addresses the ethical tensions identified during the Assess phase: possible trade-off solutions are proposed, potential risks and remedies are identified, and recommendations are made to the key stakeholders (a sketch of how these outputs could be recorded follows below).
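To make the structure of these three phases concrete, here is a minimal sketch that models the artefacts each phase produces as plain data records. This is purely illustrative: the class and field names (Assessment, EthicalTension, Recommendation) are our own assumptions, not part of the Z-Inspection® methodology, which is a human-driven deliberation process rather than a software tool.

```python
from dataclasses import dataclass, field

# Hypothetical data model of the artefacts each phase produces. The names
# below are our own illustration, not official Z-Inspection(R) terminology.

@dataclass
class EthicalTension:
    description: str                 # tension surfaced from a socio-technical scenario
    principle: str                   # closed-vocabulary label, e.g. "fairness"
    evidence: list = field(default_factory=list)   # claims validated with evidence

@dataclass
class Recommendation:
    tension: EthicalTension          # the tension this recommendation addresses
    trade_off: str                   # proposed trade-off solution or remedy
    addressee: str                   # key stakeholder receiving the recommendation

@dataclass
class Assessment:
    use_case: str                    # boundaries and context (Set Up phase)
    team: list                       # interdisciplinary experts (Set Up phase)
    scenarios: list = field(default_factory=list)        # Assess phase
    tensions: list = field(default_factory=list)         # Assess phase
    recommendations: list = field(default_factory=list)  # Resolve phase
```

Recording an assessment this way would, for instance, make it easy to trace each Resolve-phase recommendation back to the tension and evidence it emerged from during the Assess phase.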

Moreover, the Z-Inspection® process was developed so that it can be applied across the entire AI life cycle: it can be carried out during the design, development, deployment, monitoring, and/or decommissioning stage. In the design phase, for example, the process can provide insight into how to design a trustworthy system, while during the development phase it can be used to specify test cases (e.g., to verify the absence of certain biases; one such test case is sketched below). Furthermore, since AI systems evolve over time due to updated models, algorithms, data, or environments, trustworthiness needs to be assessed as part of a continuous monitoring process. The Z-Inspection® process therefore has a certain degree of plasticity, as each assessment is tailored to the specific use case.
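As a minimal sketch of such a bias test case, the snippet below checks a classifier’s binary predictions for demographic parity across two groups. The metric, the toy data, and the tolerance are our own assumptions for illustration; Z-Inspection® does not prescribe any particular fairness metric or threshold.

```python
# Illustrative bias test case, assuming binary predictions and two groups.
# Demographic parity and the 0.1 tolerance are example choices, not a
# requirement of the Z-Inspection(R) process.

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between groups A and B."""
    rates = {}
    for g in ("A", "B"):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds) / len(preds) if preds else 0.0
    return abs(rates["A"] - rates["B"])

def test_no_group_bias():
    # Toy data standing in for a model's outputs on a validation set.
    predictions = [1, 0, 1, 0, 1, 0, 0, 1]
    groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]
    assert demographic_parity_gap(predictions, groups) <= 0.1  # illustrative tolerance
```

Such a test could then be run again whenever models, data, or environments are updated, which is exactly the kind of continuous monitoring the process calls for.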

Similar to our Digital Trust Label, Z-Inspection® is about creating trustworthy technology and digital services through multidisciplinary and multistakeholder approaches. The objective is to ensure that a variety of expert methodologies, cultural ontologies, and disciplinary interpretations are represented when assessing the trustworthiness of a digital service, and in the specific case of Z-Inspection®, of an AI system.

As technology rapidly evolves, AI algorithms are becoming part of our daily lives yet remain largely misunderstood. In this context, Z-Inspection® and other approaches can provide an important framework for validating AI systems against ethical guidelines and regulations and for establishing algorithms’ trustworthiness. The challenge, however, remains to create frameworks and a smart governance and auditing structure that enable the assessment of a wide variety of AI systems.

The Digital Trust Label, in turn, builds confidence by increasing transparency where it matters. It determines the trustworthiness of a specific digital service by putting it through an extensive auditing process based on 35 criteria within four main categories: Security, Data Protection, Reliability, and Fair User Interaction (which assesses the fair use of AI-based algorithms). The DTL aims to empower users everywhere to feel safe and secure when they use digital services.
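To illustrate the shape of such a criteria catalogue, the sketch below groups example criteria under the four DTL categories. Only the category names and the AI criterion come from the text above; the other sample criteria are invented placeholders, since the full list of 35 is not reproduced here.

```python
# Category names come from the DTL; the sample criteria are hypothetical
# placeholders, not the actual catalogue of 35 criteria.
DTL_CATEGORIES = {
    "Security": ["data encrypted in transit"],                      # placeholder
    "Data Protection": ["privacy policy is accessible and clear"],  # placeholder
    "Reliability": ["service availability is documented"],          # placeholder
    "Fair User Interaction": ["fair use of AI-based algorithms"],   # named in the text
}

def service_passes(audit_results):
    """audit_results maps each audited criterion to True/False; the label
    is granted only if every criterion passes."""
    return all(audit_results.values())
```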

Learn more about Z-Inspection®: https://z-inspection.org/

Learn more about the Digital Trust Label: https://digitaltrust-label.swiss/