News & Insights

AI Policy Summit 2022

Nicolas Zahn • October 2022 From October 10 to October 13, the AI Policy Summit took place again, this time as a hybrid event at ETH Zürich.
The international conference brought together experts and initiatives from around the globe to discuss the impacts of Artificial Intelligence, and it provided an opportunity for the Swiss Digital Initiative to share our work with an international audience.
Artificial Intelligence is a key digital technology that raises complex questions for technologists and policymakers. Fortunately, the AI Policy Summit 2022 not only provided ample time to shed light on its various aspects but also included a diverse group of presenters and stakeholders from around the world.
The key themes this year were: building bridges between regions and stakeholders; AI and democracy; AI and justice; and AI and the future of work. The opening day of the conference also provided the opportunity to present Swiss highlights, and the Swiss Digital Initiative was humbled to present its work to an international audience as part of this segment. We have since received welcome feedback from participants on our projects, in particular the Digital Trust Label.

We also followed the discussion on AI and the future of work as well as the panel on building bridges for AI governance. The theme of building bridges resonates with our own work: as a multistakeholder initiative, we too bring together various experts to create meaningful impact. The future of work, meanwhile, is a rapidly growing field for AI solutions that raise complex ethical questions, where our projects can help raise awareness, introduce much-needed transparency and provide guidance for the responsible use of such digital services.

The first panel featured Commissioner Keith Sonderling (EEOC, USA), David Barnes (IBM), Matthew Forshaw (The Alan Turing Institute, UK), Mona Sloane (NYU), Merve Hickok (CAIDP, USA) and Jose Escamilla (Institute for the Future of Education, Mexico), and was moderated by Dr Ekkehard Ernst (ILO, GemLabs). The main points centred on the fact that the debate needs to pick up speed as AI solutions are increasingly used throughout all aspects of HR work. At the same time, the panel cautioned that we need a better understanding and clearer definitions of relevant terms when it comes to regulating AI, and that we need to be aware of different contexts, from politics to culture. There was also broad agreement that education and digital upskilling of the workforce, as well as of decision-makers and technologists, is key.

The second panel picked up the baton and moved from analysis to a more policy-oriented discussion. It featured Amb. Roger Dubach (Swiss Federal Department of Foreign Affairs), Zeeshan Qamer (Government of India), Golestan ‘Sally’ Radwan (Government of Egypt), Arti Garg (Hewlett Packard Enterprise) and Rahel Estermann (Green Party, Switzerland), and was moderated by Filippo Lancieri (ETH Zurich). Drawing on their collective experience in closing gaps between stakeholder groups, and on the fact that efforts to regulate AI range from companies setting internal policies all the way to legally binding international treaties, the panel made the following points:
- Coordination and exchange between different initiatives is key, especially given the number of regulatory discussions going on at the moment.
- Building bridges across disciplines and stakeholder groups is essential but also challenging: diverging perspectives, a lack of common vocabulary and similar obstacles make collaboration difficult. Patience and persistence are required to harness the potential of interdisciplinary work.
- Building bridges across different regions is equally important. AI and other digital technologies are seen very differently around the world. Whereas Europe might focus on regulating the perceived negative side effects of AI applications, African countries are more interested in harnessing the potential of AI to leapfrog economic development and addressing potential challenges later on.
In summary, the AI Policy Summit provided a comprehensive overview of the current global debate on artificial intelligence. While it was inspiring to see all the work already underway, it is also clear that a long road lies ahead.


Data Hazards - Labelling risks in data science projects

Jessica Espinosa • September 2022 Data has become one of the most valuable intangible assets a company or organisation can have. As more and more data is accumulated, companies face growing complexity and opaqueness in data management, in some cases even leading to data breaches that put people’s private information at risk and harm the company. In this context, the Data Hazards Project was created to raise awareness of the ethical implications of the use and management of data.
The Data Hazards Project is an initiative created by data scientists as a way to communicate what might go wrong in data science projects or in the overall management of data. Nowadays, many projects with a significant societal impact do not take into consideration the negative effects and ethical implications they could have. Through the Data Hazard Labels (designed similarly to hazard labels for chemicals), the project aims to raise awareness of the risks involved in handling data and the precautions people must take. The Labels are not meant to scare or stop anyone from doing data science, but rather to encourage consideration of the ethics behind data use and management. Here you can find the different Data Hazard Labels and contribute to the project.
Similarly to our Digital Trust Label (DTL), the Data Hazard Labels address topics such as privacy, the use of artificial intelligence (AI) for automated decision-making, informed consent, the reproduction of biases and discrimination, and data misuse. Going a step further, the Digital Trust Label guarantees that a digital service is free of these hazards by making sure it fulfils the 35 criteria across the categories of security, data protection, reliability and fair user interaction. Moreover, it is important to highlight that the DTL is mainly focused on empowering consumer decision-making, while the Data Hazard Labels aim to raise awareness among data scientists and others working with data of the potential risks of their work, making the two projects good complements.
Are labels a viable step towards promoting data ethics? That remains to be seen. Labels are easy-to-understand visual representations that provide information about a product or a service, becoming a distinctive and recognisable mark for the public. For example, with the Digital Trust Label, a digital service denotes its trustworthiness in clear, visual and plain, non-technical language that everyone can understand.
If you are a digital service provider interested in getting the Digital Trust Label, please contact us through https://digitaltrust-label.swiss/get-audited/ or email us directly at info@sdi-foundation.org.
About the Swiss Digital Initiative
The Swiss Digital Initiative (SDI) is an independent, non-profit Foundation based in Geneva, founded in 2020 by digitalswitzerland and under the patronage of Federal Councillor Ueli Maurer. The SDI pursues concrete projects to secure ethical standards and promote responsible conduct in the digital world. It brings together academia, government, civil society and business to find solutions to strengthen trust in digital technologies and the actors involved in ongoing digital transformation.

Launch of the SDI and IMD Corporate Digital Responsibility Starter Kit

Nicolas Zahn • September 2022 The Swiss Digital Initiative (SDI) and The Institute for Management Development (IMD) have co-created the Corporate Digital Responsibility Starter Kit, a practical resource designed to guide organisations towards reflecting on and implementing responsible digital practices. The resource is publicly available online, free to use and based on ongoing research and interviews with leading organisations, namely Deutsche Telekom, Die Mobiliar, Merck, Swiss Re, UBS and Weleda.
Geneva, September 26, 2022 – With the pace of technological development and the rapid adoption of digital technologies at scale, the discourse on responsibilities has been slow to catch up. This is also reflected in a dominant paradigm of innovation that promotes rapid development cycles. However, the shortcomings of such ways of working are becoming more and more apparent: customers and regulators, increasingly aware and spurred by various scandals, are losing trust in digital services. Corporate Digital Responsibility may well transform from a niche concern into a competitive advantage.
Corporate Digital Responsibility: from challenge to opportunity
«The Starter Kit was created as a guide to lower the barrier for organisations to raise important questions around digital responsibility. In the sense of translating principles into practice, the platform serves as an easy entry point for the implementation of tangible solutions», says Niniane Paeffgen, Managing Director of the Swiss Digital Initiative.
With the Corporate Digital Responsibility Starter Kit, the Swiss Digital Initiative and The Institute for Management Development equip organisations with the knowledge and tools needed to initiate the conversation about responsible digital practices and to create guidelines that make sense in the context of their specific organisational structures. The free resource is designed to work for organisations of all structures and sizes, and requires minimal financial and human resources. Our condensed research and in-depth report allow organisations to recognise the value of responsible digital practices, find solutions to common challenges and, ultimately, start where they are, with the resources they already have.
«If managed effectively, digital responsibility can protect organisations from threats, and enable them to differentiate themselves in the minds of consumers. The key is to just get started», explains Professor Michael Wade of The Institute for Management Development.
The Starter Kit shows that while the subject of Corporate Digital Responsibility might seem overwhelming at first, any organisation can get started engaging with it. Common challenges can be addressed by learning from other organisations’ experience and first steps can be taken at any time, benefitting from a growing number of additional resources at the disposal of organisations. Given the importance of embracing digital responsibility for sustained innovation, the Starter Kit lowers the barriers for the adoption of Corporate Digital Responsibility.
About the Swiss Digital Initiative
The Swiss Digital Initiative (SDI) is an independent, non-profit Foundation based in Geneva, founded in 2020 by digitalswitzerland and under the patronage of Federal Councillor Ueli Maurer. The SDI pursues concrete projects with the aim of securing ethical standards and promoting responsible conduct in the digital world. It brings together academia, government, civil society and business to find solutions to strengthen trust in digital technologies and in the actors involved in ongoing digital transformation.
About IMD and the DBT Center
IMD is an independent business school with Swiss roots and global reach. Focused on developing leaders and transforming organisations, IMD designs and delivers interventions that challenge what is and inspire what could be. For the last nine consecutive years, IMD has been ranked #1 in the world for Open Executive Programs and in the top three overall for Executive Education (Financial Times 2012-2020). IMD is based in Lausanne, Switzerland and Singapore. www.imd.org
The Global Center for Digital Business Transformation (DBT Center) brings together innovation and learning for the digital era. The DBT Center is a global research hub at the forefront of digital business transformation. The Center seeks out diverse viewpoints from a wide range of organisations, startups and incumbents alike, to bring forward new ideas, best practices and disruptive thinking. The DBT Center is located on IMD’s campus in Lausanne, Switzerland. www.imd.org/dbtcenter
Download the Press Releases
Press Release
Communiqué de Presse
Medienmitteilung

Fathi Derder appointed new Director of the Swiss Digital Initiative (SDI) in Geneva

Niniane Paeffgen • September 2022 After three years of successful implementation, Managing Director Niniane Paeffgen will leave the Foundation of her own accord in November 2022. Her successor is Fathi Derder, a former National Councillor and journalist who has been committed to creating an innovative Switzerland for many years. With the multiparty initiative “Geneva Digital Initiative”, Fathi Derder is committed to strengthening international Geneva in the areas of digitalisation and digital trust.
Geneva, September 13, 2022 – After three years of successfully launching, developing and leading the Swiss Digital Initiative Foundation in Geneva, the current Managing Director Niniane Paeffgen will leave the initiative in November 2022. She will take a sabbatical and then pursue new projects. The Swiss Digital Initiative Foundation was established in 2019 following the first “Swiss Global Digital Summit” on ethics and fairness in the digital age, organised by digitalswitzerland under the patronage of Federal Councillor Ueli Maurer. Two years later, SDI launched the world’s first Digital Trust Label at the World Economic Forum (WEF).
This flagship project aims to raise awareness of digital data management, privacy and artificial intelligence and to disseminate the Label worldwide from Geneva. Alongside it, other concrete projects, such as “Ethics of Artificial Intelligence” and “Corporate Digital Responsibility in Practice”, have been developed and implemented with renowned partners, such as HEAD Geneva and IMD Lausanne.
Doris Leuthard, President of the Swiss Digital Initiative and former Federal Councillor on Niniane Paeffgen’s departure: «We thank Niniane warmly for her unwavering commitment to the Foundation and for positioning Switzerland and Geneva in the field of digital ethics and trust. Thanks to her commitment, the Foundation is perfectly positioned for the future.»
Fathi Derder will drive the international positioning of the SDI and the Label
With the operationalisation and awarding of the Label to the digital applications of the first twenty digital trust pioneers, SDI is entering a new phase. The focus is now on developing and scaling the Digital Trust Label internationally. A committee of competent and high-level experts has recently been recruited through an open application process. Under its leadership, the Label is perfectly poised to become an internationally recognised standard for trusted digital applications.
Fathi Derder is an expert at the intersection of politics, business and society. As a journalist, former National Councillor, Director and co-founder of the Swiss Food & Nutrition Valley, he also has the political sensibility, experience and drive to write the next chapter of the Swiss Digital Initiative.
Doris Leuthard on Fathi Derder’s appointment: «We are proud to have been able to recruit such an eminent, competent and committed personality in the person of Fathi Derder. This confirms the great importance that the SDI has already acquired after a few years.»
Download the full Press Release
Press Release: Change in SDI Leadership
Communiqué de Presse: Changement direction SDI
Medienmitteilung: Neue SDI Geschäftsführung

A first of its kind: the Digital Trust Label and its human-centred approach

Jessica Espinosa • August 2022 We have immersed ourselves in a world where technology has become an inherent part of our daily lives. The lines between the natural world and the digital realm seem blurred as we exist and carry out our lives in both. Whether through apps, devices, systems or websites, more and more people are immersing themselves in the digital world to work, learn or simply connect with other people. Nevertheless, not all technology is designed to benefit or meet the needs of the end-user or customer. Amid the rapid digital transformation, human rights and needs are taking centre stage in the development of new technology. In this spirit, the Digital Trust Label seeks to empower users by putting them at the centre of the digital experience and providing information, transparency and accountability so that they can make better decisions about the digital services they use.
Technology and innovation have presented great opportunities for human development. Access to the Internet has produced a profound and accelerated change in the processes of knowledge production, learning, human-technology interaction, communication, work, creation and even forms of representation. Moreover, the COVID-19 pandemic accelerated the adoption and use of new technology, as businesses, schools and public services had to continue operating despite quarantine measures. Thanks to the Internet and other technological innovations, such as collaborative and video-conferencing platforms, people from around the globe were able to remain engaged with their loved ones, colleagues, clients and students. And, as these new remote modalities have opened new possibilities and discussions for the future of work, studies and services, digitalisation is likely to increase at unprecedented levels.
However, despite the best intentions, technology can still cause unintentional harm and may affect human rights, individual autonomy, competitive market order, financial stability, democratic processes, and national sovereignty. Hence, to tackle the negative impact of technology, digital transformation must aim for the design and development of new technologies that are human-centred at heart, where people’s needs and desires are met but also where their rights are protected and respected. Oriented by this vision, the Digital Trust Label was developed to denote the trustworthiness of a digital service based on four main categories:
- Security – focuses on how digital service providers encrypt, share and store information, and on how they report and manage vulnerabilities and risks.
- Data protection – evaluates how service providers manage, retain and process users’ data, and how users’ consent for data usage is collected.
- Reliability – focuses on how the service provider safeguards the continuity of the service and communicates accountability from their side.
- Fair user interaction – mainly addresses how artificial intelligence algorithms operate (if used), focusing on non-discrimination, equal access, and transparency on the use of AI algorithms in decision-making processes.
Within these four dimensions, 35 criteria were defined to guarantee a trustworthy digital service, keeping the user at the centre of the digital experience. For years, technology has been developed to make human life easier and simpler. However, tech has not necessarily been created with human or user well-being in mind, and, in some cases, it even reproduces or translates biases and discrimination experienced in the analogue world into the digital realm. For example, it has been shown that as artificial intelligence (AI) algorithms learn to make decisions based on the data fed into them, they can reproduce biased human decisions or reflect historical or social inequalities regarding gender, race and sexual orientation.
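To make the Label’s structure concrete, here is a minimal sketch of how such a criteria catalogue could be represented in code. It is purely illustrative: the example criteria and the pass/fail logic are our assumptions, while the actual 35 criteria and the audit process are defined by the SDI and its auditors.

```python
# Illustrative only: models 35 criteria grouped into the four DTL dimensions,
# where a service qualifies only if every criterion is fulfilled. The example
# criteria below are hypothetical placeholders, not the official catalogue.
from dataclasses import dataclass
from enum import Enum


class Dimension(Enum):
    SECURITY = "security"
    DATA_PROTECTION = "data protection"
    RELIABILITY = "reliability"
    FAIR_USER_INTERACTION = "fair user interaction"


@dataclass
class Criterion:
    dimension: Dimension
    description: str
    fulfilled: bool = False


def label_eligible(criteria: list[Criterion]) -> bool:
    """All 35 criteria, across all four dimensions, must be fulfilled."""
    return len(criteria) == 35 and all(c.fulfilled for c in criteria)


# A hypothetical audit record covering just two of the criteria:
audit = [
    Criterion(Dimension.SECURITY, "Data in transit is encrypted", fulfilled=True),
    Criterion(Dimension.FAIR_USER_INTERACTION,
              "Use of AI in decision-making is disclosed", fulfilled=True),
    # ...the remaining criteria would follow here
]
print(label_eligible(audit))  # False: only 2 of the 35 criteria are recorded
```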
In this sense, the ongoing digital transformation aims at integrating people and technology in a more harmonious manner, with human well-being and rights at the forefront of innovation. Achieving this requires concerted efforts on many levels and by different stakeholders, as positive effects on human rights and democracy in the digital world will depend not only on technological innovation but on human choices about the use and design of new technologies. Therefore, to advance digital responsibility, regulatory mechanisms based on concrete and universal digital principles, values and standards need to be developed, and the Digital Trust Label is a first-of-its-kind effort to achieve this. Nevertheless, for these regulatory efforts to fully succeed, they must be accompanied by changes in public policy, business strategy and individual practices that aim to create a fully trustworthy digital ecosystem.
The Digital Trust Label takes a holistic approach to digital trust and responsibility: through its four dimensions – security, data protection, reliability and fair user interaction – it puts the human experience at the centre of technological progress and the digital experience. In this sense, the Swiss Digital Initiative is a pioneer in the adoption of human-centred digital governance and standards, actively working on building more trustworthy digital platforms and on safeguarding the highest ethical standards in the digital world to guarantee that human rights are not only respected but also protected.
Visit https://digitaltrust-label.swiss/ for more information on the Digital Trust Label
Follow the Swiss Digital Initiative on social media
Twitter: @sdi_foundation
LinkedIn: Swiss Digital Initiative

Human-Robot Interaction: A Digital Shift in the Law?

Nicolas Zahn • June 2022 The Swiss Digital Initiative presented the Digital Trust Label at the vernissage for an interdisciplinary research project at the University of Basel. The project, funded by the Swiss National Science Foundation, brings together computer scientists and lawyers to discuss the implications of human-robot interaction for the legal system.
Can an autonomous car act as a witness in a trial after an accident, or could it even be a suspect? Are digital traces of someone’s behaviour adequate proof in a court of law? Could sentences be issued by an algorithm rather than a judge? The use of digital technologies in the field of law raises complex questions that an interdisciplinary research project spanning several faculties at the University of Basel is addressing.
The project, which is funded by the Swiss National Science Foundation, not only brought different academic fields together but also chose to create illustrations for various chapters of the forthcoming publication to easily communicate the abstract issues.
Those illustrations – some of them shown in the gallery below – are on display at the cultural space Warteck in Basel. At the vernissage last week, the Swiss Digital Initiative was invited to present the Digital Trust Label as an example of how labels could contribute to more digital trust, something that is of great importance for the use of technologies in the legal sector.
We believe that self-certifications and abstract commitments are a good step but not enough. Only an independently audited label can act as a credible signal for digital responsibility. With its inclusive development process, scientific backing and clear governance, the Digital Trust Label is a good example of a credible label.
The questions from the audience and the ensuing discussion also showed participants’ desire for tools to address the complex issues of digital trust and digital responsibility in their work and research. We look forward to the conclusion of the project and the forthcoming publication to better understand how humans and robots can interact in the legal field.

[Gallery: selected illustrations from the research project]

RAND Europe featuring the Digital Trust Label - Exploring ways to regulate and build trust in AI

Jessica Espinosa • May 2022 On the 27th of April, RAND Europe carried out a virtual roundtable to present and discuss the findings of their research “Labelling initiatives, codes of conduct and other self-regulatory mechanisms for artificial intelligence applications: From principles to practice and considerations for the future”. The event brought together several stakeholders, including experts who were interviewed for the study, policymakers, researchers, and industry representatives. The Digital Trust Label was featured in this research as an example of an operational labelling initiative to denote the trustworthiness of a digital service.
During the introductory remarks at the RAND event, Isabel Flanagan, one of the main researchers, highlighted that building trust in AI products and services is vital to building support and motivating widespread uptake of these technologies. She also mentioned that the increasing need to build trustworthy AI is being recognised in policy frameworks within the European Union (EU), such as the 2019 Trustworthy AI Whitepaper and the 2021 AI Act.
In this context, RAND Europe carried out a research project, commissioned by Microsoft, aimed at exploring relevant examples of self-regulatory policy mechanisms for low-risk AI technologies, as well as the opportunities and challenges related to them. As part of this research, the RAND team reached out to the Swiss Digital Initiative to gain deeper insight into our Digital Trust Label and to better understand how it contributes to the trustworthiness of a digital service.
In total, the RAND team identified 36 different initiatives. These initiatives are widely diverse: some are still in the early stages of conceptualisation, while others are already being implemented. The examples of self-regulatory mechanisms come from a wide range of countries across the globe, some intended for local implementation and others aiming for a more global reach. Also, some initiatives are designed for particular sectors, such as healthcare and manufacturing, or champion certain causes, such as gender equality or environmentalism.
Moreover, the results of the research show that most self-regulatory mechanisms fall within two main categories: labels and certifications, like the Digital Trust Label, and codes of conduct. On the one hand, certifications and labels are mechanisms that define certain standards for AI algorithms and are assessed against a set of criteria, generally through an audit. They resemble energy or food labels, as they are meant to communicate to consumers that the AI product they are using is reliable and safe. On the other hand, codes of conduct do not involve an assessment against measurable criteria; instead, they are composed of a set of principles and standards that an organisation should uphold when developing AI applications. Nevertheless, the two categories share common aims, such as increasing the number of users, elevating trust by signalling reliability and quality, promoting transparency and comparability of AI applications, and helping organisations understand emerging standards and good practices in the field.
As part of the research results, the RAND team identified several opportunities within the design, development and implementation of self-regulating mechanisms:

[Figure: overview of the opportunities identified by RAND Europe]
Source: “Labelling initiatives, codes of conduct and other self-regulatory mechanisms for artificial intelligence applications: From principles to practice and considerations for the future”, RAND Europe, 2022 [https://www.rand.org/pubs/research_reports/RRA1773-1.html]
However, as with any other technology, the opportunities come along with new challenges. Through the research, the RAND team identified the following main challenges for the regulation of AI:
- The variety and complexity of AI systems can make it difficult to develop assessment criteria.
- The relevant ethical values depend on the field of application as well as the cultural context in which the AI system operates, which makes it challenging to predefine a set of criteria for all scenarios and use cases of AI applications.
- The evolving nature of AI technology makes it hard to keep self-regulating processes up to date. This is why the SDI is constantly evaluating and redeveloping the Digital Trust Label, based on recommendations made by experts.
- The development of self-regulating tools must consider multiple stakeholders’ needs to ensure widespread buy-in and adoption. The Digital Trust Label was developed under a multi-stakeholder approach involving consumers, representatives from the private and public sectors as well as civil society, to guarantee that all views and perspectives were included.
- There is a need for market incentives, given that the cost and burden for smaller companies might otherwise result in a lack of adoption of self-regulating mechanisms.
- There are trade-offs between protecting consumers and driving innovation and competition. If a certification or label is hard to obtain, it can make it difficult for new companies to gain access to the market.
- Regulatory confusion may arise because there are many initiatives and a fragmented ecosystem of self-regulatory mechanisms, each with varying levels of complexity, development and criteria. This could lead to a decrease in trust and to challenges in comparing different AI systems on the market.
Finally, the presentation closed with some of the key learnings that the RAND team considers essential for designing, developing or incentivising self-regulatory mechanisms for AI systems in the future.

[Figure: key learnings identified by RAND Europe]
Source: “Labelling initiatives, codes of conduct and other self-regulatory mechanisms for artificial intelligence applications: From principles to practice and considerations for the future”, RAND Europe, 2022 [https://www.rand.org/pubs/research_reports/RRA1773-1.html]
After the presentation of the results, the event continued with a series of roundtable discussions on the future of AI regulation in smaller groups. Here are some of our main takeaways:
- AI is embedded in many types of products and services; in most cases, the product is also connected to a service, so it becomes a complex equation to handle.
- When developing self-regulatory tools, it is very important to be clear about the scope. Creating a label, code of conduct or certification that can do everything is not possible; each should be clear about what is being self-regulated, whether a specific product, a set of services, or one feature within a specific set of services.
- There can be a selection-bias issue when AI audits are voluntary. Usually, the companies that volunteer are those that are forward-thinking, concerned about the challenges of AI algorithms, or that have staff with a strong interest in particular AI issues.
- Customer demand is the biggest driver and incentive for self-regulatory mechanisms: they become successful if customers ask for them. It is therefore important to develop a tool that is understandable for the consumer, while striking a balance between something technically meaningful and something easy for everyone to understand.
- For future efforts on AI self-regulatory mechanisms, it is important to specify whether regulation happens at the level of an AI tool used by many customers or at the level of each customer’s implementation. An audit can guarantee that the tool itself is reliable and safe, but the organisation using it has to make sure the tool is implemented and used correctly: the tool is proven trustworthy, while its maintenance and use remain the organisation’s responsibility. In this sense, the scope of the mechanism is key. By implementing certifications or labels not only for products but also for organisations as a whole, a real chain of trust can form among different stakeholders.
- Finally, there needs to be something beyond the audit that leaves space for constant improvement and innovation of certifications, labels and codes of conduct.
Bringing trust and ethics back into tech
Through our experience and learnings in developing the Digital Trust Label, we can confirm many of the identified challenges. The Digital Trust Label is one of the first of its kind. By using clear, visual, plain and non-technical language, the Digital Trust Label denotes the trustworthiness of digital services in a way that everyone can understand. By combining the dimensions of security, data protection, reliability and fair user interaction, the DTL takes a holistic approach to the complex question of digital trust. Recognising the fast pace of technological innovation and evolution, the DTL was developed to be constantly adapted to confront the challenges of digital transformation.

What does your face say about you? - ADface is here to explore digital trust and AI ethics

Jessica Espinosa • May 2022 Have you ever wondered if your decisions have been influenced or made by an Artificial Intelligence (AI) algorithm? Or how much of your online content has been specially filtered and targeted just for you by AI? Maybe an AI even decided whether you get a job, a loan, or just access to some service solely based on your appearance?
In our digital era, artificial intelligence has become an essential tool for technology, but also for economic and societal processes that take place inside and outside the digital sphere. Indeed, AI has become the big buzzword of digitalisation, but as users and as organisations offering services, do we really understand what it entails? Its benefits and challenges? Do we really know the impact it has on our day-to-day lives?
To help raise awareness about ethical questions related to AI, the Swiss Digital Initiative partnered with HEAD Genève (Haute école d’art et de design) to create the interactive art experience “ADface”. This tool uses a computer’s webcam to take a picture of the user (always with their consent) and then analyses the face image with artificial intelligence. The analysis generates a user profile with characteristics such as age, emotional state and social status and based on this displays different advertisements that supposedly match the profile.
The tool shares the profile created by the algorithm with the user to reveal how AI makes assumptions about them based solely on their facial features and expressions. As humans, we have learned over time to derive certain information from facial expressions, although these can vary widely by culture. When AI systems try to imitate this and distinguish our gender, race, emotions and social status from one look at our face, the results can be interesting, to say the least. ADface shows how AI systems profile people’s features and also how the profiles may vary depending on face position, hairstyle and lighting.
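To make the pipeline behind a tool like ADface concrete, here is a minimal sketch in Python. It is not the actual ADface code: the analyze_face stub is a hypothetical stand-in for a real face-analysis model, and the profile fields and ad rules are invented for illustration.

```python
# Illustrative ADface-style pipeline: webcam frame -> inferred profile -> ad.
# Not the actual ADface implementation; analyze_face is a hypothetical stub.
import random

import cv2  # OpenCV, a common library for webcam capture


def capture_frame():
    """Grab a single frame from the default webcam (with the user's consent)."""
    camera = cv2.VideoCapture(0)
    try:
        ok, frame = camera.read()
        if not ok:
            raise RuntimeError("Could not read from webcam")
        return frame
    finally:
        camera.release()


def analyze_face(frame):
    """Stand-in for the AI analysis step.

    A real system would run a trained face-attribute model on the frame;
    returning random guesses here underlines that such profiles are
    inferences about a person, not facts.
    """
    return {
        "age": random.randint(18, 70),
        "emotion": random.choice(["happy", "neutral", "stressed"]),
        "social_status": random.choice(["student", "professional", "retiree"]),
    }


def pick_advertisement(profile):
    """Map the inferred profile to an ad category (invented rules)."""
    if profile["emotion"] == "stressed":
        return "wellness retreat"
    if profile["age"] < 30:
        return "gadget subscription"
    return "financial planning"


if __name__ == "__main__":
    profile = analyze_face(capture_frame())
    print("Inferred profile:", profile)  # shown to the user, as ADface does
    print("Ad selected:", pick_advertisement(profile))
```

Even this toy version makes the core point: the “profile” is whatever the model infers, and small changes to the input frame can change the ad you are shown.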



Try ADface to see what an AI algorithm thinks of you
But now that we know this about AI, what can we do? It shouldn’t only be up to users to take action. On our project site, we outline some simple first steps that users, but also organisations offering digital services, can take to work towards a more responsible and ethical digital world. For users, it’s all about becoming curious, finding help, e.g. from digital society NGOs, and becoming empowered to demand transparency and accountability from digital providers.
As for organisations, they need to facilitate the empowerment of their users by being transparent about what services and technologies they use, how they use them and to what ends. Since digital services mostly originate from organisations, these should also bear responsibility for making sure their digital services are not harmful and instead live up to certain standards in terms of security and privacy. This is exactly why we have developed the Digital Trust Label: to help organisations turn vague commitments about digital responsibility into practice. Lastly, organisations would also do well to collaborate with other actors, e.g. digital researchers or NGOs trying to understand the impact of algorithms.
Building trust and safeguarding digital ethics
Putting humans at the centre of technological progress, the Swiss Digital Initiative is actively working on building more trustworthy digital platforms and on safeguarding the highest ethical standards in the digital world. Aligned with SDI’s work, ADface is a first step towards broader awareness and discussion of the way technology and society interact and, most importantly, of the ethics, safety and trust of this interaction.


Guest Post: FreeMyInternet - Illustrating Internet Infrastructure & Shutdowns

Veronique Wavre • March 2022 With growing attempts to censor and limit access to the Internet, the Swiss Digital Initiative is happy to feature a guest post about an often overlooked topic: the digital infrastructures behind our virtual world. Based on the work of the Telecommunications Ownership and Control (TOSCO) project, funded by SNIS Geneva, and the FreeMyInternet project, funded by SNF AGORA, by Dr. Veronique Wavre, Dr. Lisa Garbe (@LaserGabi) and Professor Tina Freyburg (@TFreyburg) from the University of St. Gallen, the following post raises awareness of the hidden aspects of the Internet and introduces an educational resource for all users.

States all over the world increasingly use internet shutdowns as a tool of repression, especially in times of political contestation. Back in 2011, the Egyptian uprising showed how access to the internet can be restricted arbitrarily upon a decision by the ruling elite. In Egypt, the internet was almost completely blocked for several days. Yet the Egyptian internet proved resilient, surviving in one way or another for up to seven days before the shutdown fully took hold (Wilson 2015).
Motivated by the idea that governments cannot simply shut down the internet as they wish, Lisa Garbe, Tina Freyburg and myself, Veronique Wavre, decided to research the inner workings of the internet, that is, its physical infrastructure and ownership and their implications for the provision and suspension of internet services. The Telecommunications Ownership and Control (TOSCO) project was born.
We focused our attention on the role of internet service providers (ISPs). These companies are gateways through which data flow in and out of the territories in which they operate (DeNardis 2014:11). Internet shutdowns represent the most direct form of control over internet access, whereby ISPs are asked to comply with a government request to manipulate their services (Aceto & Pescapè 2015, Freyburg & Garbe 2018).
To research the role of ISPs in implementing internet shutdowns, we decided to focus on Africa, a continent of immense political and economic diversity. We gathered data for 50 mainland countries and Madagascar from 2000 to 2019, enquiring which telecommunication companies had provided access to the internet during that period in each country. We furthermore gathered information about who these companies belong to (Freyburg, Garbe & Wavre). The underlying argument is that the owners of ISPs differ, and so may their capacity to resist or comply with state requests to shut down the internet in times of political contestation.
[Photo © Pia Valaer]
In the summer of 2018, our TOSCO team spent several weeks in Uganda interviewing key experts in the field, such as activists, lawyers, ISPs and state representatives. We gathered evidence that ISPs are aware of their role in implementing internet shutdowns but tend to accept state requests to avoid political and economic repercussions. Supported by video journalist Noémie Guignard, we released a short film, ‘Citizens Offline’, in which we use the example of the internet shutdown during the 2016 Ugandan elections to illustrate our work. The video follows our research and gives a voice to Ugandan activists such as Daniel Bill Opio.
