SecHuman & RC TRUST: Summer School 2024

Taming the risks of digital technologies - interdisciplinary collaboration for a trustworthy future. 29 – 31 July 2024, Ruhr University Bochum.

The many advantages and new possibilities of digitalization are increasingly accompanied by unclear responsibilities, a lack of accountability, and a growing concentration of power in digital tech firms. Meeting the new challenges and threats that accompany digital interconnectivity requires a multifaceted approach that addresses both technical vulnerabilities and human factors. By bringing together experts from diverse fields, an interdisciplinary approach enables a comprehensive understanding of current and emerging challenges – and helps to develop holistic solutions that address technical, human, legal, and ethical aspects.

Recent technological advances such as deepfakes (hyperrealistic imitations of audio-visual content) and ever more powerful large language models (such as ChatGPT or LLaMA) pose new risks: they enable new forms of manipulation, deception, and fraud. The consequences of such attacks range from the societal level, such as the manipulation of democratic processes, to the individual level, where people may reveal sensitive information or transfer data or money to attackers.

The summer school “Taming the risks of digital technologies - interdisciplinary collaboration for a trustworthy future” will bring together researchers working on manipulative technologies, drawing on disciplines including IT security research, computer science, communication science, psychology, economics, and more. It will feature keynotes and hands-on workshops that highlight the risks and provide insights into concrete countermeasures against manipulative tech.

The summer school is hosted by SecHuman and the Research Center Trustworthy Data Science and Security.

Date: 29 – 31 July 2024
Place: Beckmanns Hof, Ruhr-Universität Bochum
Open to: PhD students and postdocs as well as PIs from IT security or related areas of research

Registration closed on 12 June 2024.

How to get there: Beckmanns Hof (RUB) by car | Beckmanns Hof (RUB) by public transport


Program (Overview of the program here | Overview of speakers and abstracts here)

DAY 1 | Monday, July 29

9:30-10:00 Coffee and registration

10:00-10:15 Welcome and introduction

  • Angela Sasse (Human-Centred Security, Ruhr University Bochum & Research Center Trustworthy Data Science and Security)
  • Nils Köbis (Human Understanding of Algorithms and Machines, University of Duisburg-Essen & Research Center Trustworthy Data Science and Security)

10:15-11:15 Invited talk: "Foundations for Foundational Models"

  • Krishna P. Gummadi (Networked Systems Research Group & Scientific Director of Max Planck Institute for Software Systems)

    Abstract: In this talk, I will present our attempts to investigate two related foundational questions about large language models (LLMs): (a) how can we know what an LLM knows? and (b) how do LLMs memorise and recollect information from training data? The answers to these questions have important implications for the privacy of training data as well as the reliability of generated outputs, including the potential for LLM hallucinations. I will propose experimental frameworks to study these questions: specifically, a framework to reliably estimate latent knowledge about real-world entities that is embedded in LLMs and a framework to study the phenomena of recollecting training data via rote memorisation. I will present some (surprising) preliminary empirical results from experimenting with a number of large open-source language models.
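
A toy illustration of the second question (an assumption for illustration, not necessarily the framework presented in the talk): rote memorisation can be probed by feeding a model the prefix of a text it may have seen during training and checking whether greedy decoding reproduces the true continuation verbatim. The model and probe text below are placeholders.

    # Probe rote memorisation: does greedy decoding reproduce a
    # possibly-seen training text verbatim from its prefix?
    # Model and probe text are illustrative placeholders.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    text = ("We hold these truths to be self-evident, "
            "that all men are created equal")
    ids = tok(text, return_tensors="pt").input_ids[0]
    split = len(ids) // 2
    prefix, target = ids[:split], ids[split:]

    with torch.no_grad():
        gen = model.generate(
            prefix.unsqueeze(0),
            max_new_tokens=len(target),
            do_sample=False,                 # greedy decoding
            pad_token_id=tok.eos_token_id,
        )

    continuation = gen[0, split:split + len(target)]
    print("verbatim recollection:", torch.equal(continuation, target))
    print("model continues:", tok.decode(continuation))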

11:15-11:30 Coffee break

11:30-12:30 Challenges/Case studies (parallel sessions)

12:30-14:00 Lunch break

14:00-15:30 Invited talk: "Data, Automation and the Human Technology Nexus"

  • Mark Elliot (Social Statistics, University of Manchester & Director of SPRITE+, UK)

    Abstract: All technological developments have cultural, social and psychological consequences – some intended, some not. What has changed recently is the degree of interaction between new developments, underpinned in part by the interdisciplinary mixing that 21st-century academia has increasingly encouraged, and in part by humans’ fascination with technology, which is seemingly driving us along a path of merging with our artefacts. It seems likely that over the next two decades the manifestation of these socio-technological changes will radically alter the nature of society, from individual lives to how we organise ourselves. In this context, that most human of questions – “what sort of society do we want to live in?” – takes on new levels of meaning. This talk will consider some of these emerging technologies, their likely trajectories, and their impact on humans. I will focus particularly on TIPS (Trust, Identity, Privacy and Security).

15:30-15:45 Coffee break

15:45-17:00 Speed dating

17:15-20:30 Barbecue at Beckmanns Hof


DAY 2 | Tuesday, July 30

9:30-10:00 Coffee

10:00-11:15 Invited talk: "Trusting the Untrustworthy"

  • Alice Hutchings (Emergent Harms, University of Cambridge & Director of Cambridge Cybercrime Centre, UK)

    Abstract: Cybercrime is facilitated by anonymous online environments, yet the degree of specialisation required often means there is a need to trade and collaborate with others. This poses a problem: why trust those who are inherently untrustworthy? We'll explore issues of trust and anonymity in online marketplaces and forums, including how offenders overcome the cold start problem.

11:15-11:30 Coffee break

11:30-12:30 Challenges/Case studies (parallel sessions)

12:30-14:00 Lunch break

14:00-15:15 Lightning talks by participants

15:15-15:30 Coffee break

15:30-16:15 Talk:

  • Olga Vogel (Work, Organizational, and Business Psychology, Ruhr University Bochum, Alumna SecHuman)

16:15-17:15 Poster session

17:15 Snacks at Beckmanns Hof

18:30-20:00 Adventure tour with brewing culture: beer tour through Bochum


DAY 3 | Wednesday, July 31

9:30-10:00 Coffee

10:00-11:15 Invited talk: "Value Based Engineering for a Better AI Future"

  • Sarah Spiekermann-Hoff (Information Systems and Society, Vienna University of Economics and Business & Co-Founder of Sustainable Computing Lab, AT)

    Abstract: This talk gives an introduction to and overview of the Value-based Engineering method (short: VBE), a method to ensure value-based and ethical IT design, built on the world’s first standardised ethical model process for system design, ISO/IEC/IEEE 24748-7000. The talk presents the different phases of VBE and their underlying philosophies, and what they would mean for AI design.

11:15-11:30 Coffee break

11:30-12:30 Challenges/Case studies (parallel sessions)

12:30-14:00 Lunch break

14:00-15:30 Talks

  • Ivan Habernal (Fairness and Transparency, Ruhr University Bochum & Research Center Trustworthy Data Science and Security)

    "It’s all solved by ChatGPT now, right? Tales from Legal Natural Language Processing"
    Abstract: Contemporary large language models such as ChatGPT can seem so almighty that we might believe there's no task they cannot tackle well. But is NLP (natural language processing) really a "solved problem"? What if we are not interested in boring text generation tasks like writing a fake summer school motivation letter, but want to understand legal argument reasoning instead? What if we want to know which legal arguments matter for the courts to decide? What if we want to answer laymen's questions in a language that maybe only a few people on Earth really understand (yes, I'm referring to German "legalese")? In this talk, I'm going to address some of these research questions through the lens of empirical research.
  • Bilal Zafar (Computing and Society, Ruhr University Bochum & Research Center Trustworthy Data Science and Security)

    "On Early Detection of Hallucinations in Factual Questions Answering"
    Abstract: Hallucinations remain a major impediment to the adoption of LLMs. In this work, we explore whether the artifacts associated with model generations can provide hints that a response will contain hallucinations. Our results show changes in the entropy of input token attributions and output softmax probabilities for hallucinated outputs, revealing "uncertain" behavior during model inference. This uncertain behavior also manifests itself in auxiliary classifiers trained on outputs and internal activations, which we use to create a hallucination detector. We further show that tokens preceding the hallucination can predict subsequent hallucinations even before they occur.
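
As a concrete illustration of one signal the abstract mentions, here is a minimal sketch (an assumption for illustration, not the talk's actual detector, which also uses input-token attributions and internal activations): the entropy of the output softmax distribution for each generated token, computed with Hugging Face transformers. The model and prompt are placeholders.

    # Per-token entropy of the output softmax as an uncertainty
    # signal. Model and prompt are illustrative placeholders.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tok("The capital of Australia is", return_tensors="pt")
    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_new_tokens=8,
            do_sample=False,
            output_scores=True,              # keep per-step logits
            return_dict_in_generate=True,
            pad_token_id=tok.eos_token_id,
        )

    # out.scores holds one (batch, vocab) logits tensor per new token.
    start = inputs["input_ids"].shape[1]
    for step, logits in enumerate(out.scores):
        probs = torch.softmax(logits[0], dim=-1)
        entropy = -(probs * (probs + 1e-12).log()).sum().item()
        token = tok.decode(out.sequences[0, start + step])
        print(f"{token!r:>12}  entropy={entropy:.3f}")
    # Unusually high entropy early in the answer can hint that the
    # response is uncertain and may be hallucinated.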

15:30-15:45 Coffee break

15:45-16:45 Challenges: Presentation of Results // Closing Remarks