The Era of Generative AI: Why trust becomes even more important

Jessica Espinosa & Nicolas Zahn • November 2023

2023 has been the “boom” year of Generative Artificial Intelligence (AI). This type of AI focuses on generating original and creative outputs such as text, images, music, and much more. Already in use in many digital apps, this technology will massively increase everyone’s exposure to the opportunities, capabilities, and challenges of AI, affecting many aspects of our everyday lives.

From academia and education to social media and the music industry, generative AI sets out to change the world. While presenting immense opportunities for innovation and spurring human and industry development, generative AI also raises delicate questions about privacy and intellectual property rights, inclusivity, transparency, and accountability. In addition, the risk of spreading disinformation and misinformation (including the creation of new songs that replicate artists’ voices) has triggered heightened political attention and pressure to govern and regulate AI.

Just last week, between October 30th and November 3rd, G7 leaders welcomed the Hiroshima AI Process Comprehensive Policy Framework as international guiding principles for AI, the Bletchley Declaration from the AI Safety Summit was adopted by 28 countries and the European Union, and the United States presented an Executive Order that explicitly mentions the intent to “Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy”. These developments, together with an overview of the current global discourse, challenges, and opportunities for AI use from the public sector to education and governance, were also discussed at the AI Policy Summit at ETH Zürich, which brought together practitioners, academics, and policymakers from over 100 countries.

Although all of these frameworks aim to encourage transparency and accountability from AI developers and to measure, monitor, and mitigate potential harm, the multiplicity of initiatives and policy proposals also creates uncertainty for companies developing and implementing AI. Likewise, customers of digital products and services that use AI are still largely in the dark about what AI means for them. Customers and users of the digital space are demanding ever more transparency in the technology they use: they want to know the privacy policies, the algorithms behind the processes and, in general, how trustworthy the technology is.

At the Swiss Digital Initiative, we have developed a practical solution to address transparency and trustworthiness in digital technology: the Digital Trust Label. Developed as a tool for companies to demonstrate that they are transparent about the technology they employ, particularly when it comes to the use of AI algorithms, the Digital Trust Label builds trust between users and digital technology providers. When it comes to AI, a recent study by the University of Basel confirms that “the presence of a certification label significantly increases participants’ trust and willingness to use AI”. Hence, the Label is both a solution to increase trust in the digital world and a way to encourage the development of ethical, safe, and inclusive AI algorithms.

Learn more about the Digital Trust Label: https://digitaltrust-label.swiss/