In digital societies, more and more structures and processes are digitized and automated. Almost all areas of everyday life are increasingly connected with each other. Digital (inter-)action and communication have become a major part of our daily work and private lives.
However, when structures and processes in a society are increasingly based on algorithms and automation, questions of decision-making and responsibility arise. Who is in charge of the decisions made in a digital society, and how does this affect the security and privacy of its citizens? How can technologies become more trustworthy and privacy-aware – and how can people gain a better understanding of intelligent systems? Can digital systems themselves become more resistant and resilient to attacks and fraud? And why is the concept of trust so important in the context of resilience?
With the workshop "Trust and Resilience in Digital Societies" we aim to initiate a process of reflection on the role that trust and resilience play in digital societies. The workshop is intended to contribute to a shared understanding of terms such as trust and resilience across disciplines and to tackle questions about what a digital society actually needs from AI, cyber security, and related areas of research.
The event is open to all PhD candidates and postdoctoral researchers as well as PIs from IT security or related areas of research. The workshop is a joint event of the graduate school SecHuman, the Research Center Trustworthy Data Science and Security, and the CASA Cluster of Excellence.
Date & time: Monday, 6th November 2023, 10:00 am - 5:15 pm (+ open end) | Tuesday, 7th November 2023, 10:00 am - 4:30 pm
Place: Veranstaltungszentrum at Ruhr-Universität Bochum
Open for: PhD candidates and postdoctoral researchers as well as PIs from IT security or related areas of research
Please register here by October 20, 2023.
Monday, 6th November 2023
10:00-10:30 AM Registration & Coffee
10:30-10:45 AM Welcome
- Prof. Angela Sasse, Spokesperson SecHuman, Ruhr-Universität Bochum
- Prof. Nicole Krämer, Spokesperson Research Center Trustworthy Data Science and Security, Universität Duisburg-Essen
10:45 AM-12:45 PM Input and discussion: "What do we know about trust and resilience? What role do they play for a digital society?"
- Prof. Markus Langer, Work and Organizational Psychology, Georg-August Universität Göttingen
- Prof. Sebastian Weydner-Volkmann, Ethics of Digital Methods and Technologies, Ruhr-Universität Bochum
- Dr. Yixin Zou, Tenure-Track Faculty, Human-Centered Security and Privacy, Max Planck Institute for Security and Privacy
- Prof. Nils Köbis, Human Understanding of Algorithms and Machines, Universität Duisburg-Essen
Moderator: Prof. Nicole Krämer, Spokesperson Research Center Trustworthy Data Science and Security, Universität Duisburg-Essen
12:45-2:00 PM Lunch
2:00-3:00 PM Keynote I with Q&A: "How do we assess the trustworthiness of systems?"
Prof. Markus Langer, Work and Organizational Psychology, Georg-August Universität Göttingen
Abstract: Designing trustworthy systems and enabling external parties to accurately assess the trustworthiness of these systems are crucial objectives. Only if trustors assess system trustworthiness accurately can they base their trust on adequate expectations about the system and justifiably follow or reject its outputs. However, the process by which trustors assess a system's actual trustworthiness to arrive at their perceived trustworthiness remains surprisingly unclear. In this talk, I will introduce the Trustworthiness Assessment Model (TrAM)*, which draws on psychological models describing how individuals assess other people's characteristics. I will describe how the TrAM advances existing models of trust and sheds light on factors influencing the (accuracy of) trustworthiness assessments. At the end of the talk, I will discuss implications for system design, stakeholder training, and regulation related to trustworthiness assessments.
*The TrAM was developed primarily by Nadine Schlicker, as lead author, and by myself, with support from Kevin Baum, Alarith Uhde, Sarah Sterz, and Martin Hirsch.
3:00-3:15 PM Coffee Break
3:15-4:15 PM Keynote II with Q&A: "How Artificial Intelligence influences Human Ethical Behavior"
Prof. Nils Köbis, Human Understanding of Algorithms and Machines, Universität Duisburg-Essen
Abstract: As machines powered by artificial intelligence (AI) influence humans' behaviour in ways that are both like and unlike the ways humans influence each other, worry emerges about the corrupting power of AI agents. To estimate the empirical validity of these fears, in this talk I present an overview of the available evidence from behavioural science, human–computer interaction and AI research. I sketch four main social roles through which both humans and machines can influence ethical behaviour: role model, advisor, partner and delegate. When AI agents become influencers (role models or advisors), their corrupting power may not (yet) exceed that of humans. However, AI agents acting as enablers of unethical behaviour (partners or delegates) have many characteristics that may let people reap unethical benefits while feeling good about themselves, a potentially perilous interaction. The talk will feature a review and original empirical findings covering how AI might corrupt human ethical behaviour, and will discuss how we might counter this corrupting influence.
4:15-5:15 PM Speed dating
5:15 PM Buffet and open end
Tuesday, 7th November 2023
10:00-11:00 AM Keynote III with Q&A (remote): "Fragile Computing – Corporate Digital Security beyond Blaming and Shaming"
Dr. Laura Kocksch, Postdoctoral Researcher, The Technoanthropology Lab (TANTlab), Aalborg University, Denmark
Abstract: Corporations are not unaware, unsavvy, or ignorant of digital security threats. Against the prevalent understanding of corporate digital security, this talk illustrates skilful and collective actions, painful awareness, and continued engagement. Extensive ethnographic fieldwork unveils surprising tactics through which companies endure their "fragile" technologies: forging trusting relationships rather than controls, crafting long-term commitments rather than short-term fixes, and building resilience rather than prevention. The talk concludes that we must engage in a critical conversation about where "good" and "bad" security become blurred in practice, instead of blaming and shaming.
11:00-11:15 AM Coffee Break
11:15 AM-12:15 PM Lightning talks
12:15-1:30 PM Lunch
1:30-2:30 PM Poster session, coffee break included
2:30-3:45 PM World café
4:30 PM End