Organized by the Distributed and Pervasive Systems Group
Abdul-Rasheed Ottun, University of Tartu, Estonia
M.M. Rasinthe Marasinghe, University of Tartu, Estonia
16:15: Huber Flores
University of Tartu, Estonia
Topic: AI Sensors and Dashboards: Gauging and Monitoring the Inference Capabilities of AI
Abstract: Humans are often perplexed by concepts that are difficult to grasp, and measurements are fundamental for understanding, controlling, and improving the world around us. Disruptive technologies such as AI confuse people because it is unclear how they work, which makes it necessary to develop mechanisms to gauge and tune how AI operates. In this talk, we introduce AI sensors as a way to construct a dashboard that can be used to tune, control, and ensure that AI decision making is trustworthy and responsible.
Speaker bio: Huber Flores is an Associate Professor of Pervasive Computing at the Institute of Computer Science, University of Tartu, Estonia, as well as a Docent (Affiliate Professor) at the University of Helsinki, Finland. Prior to that, he held the prestigious Academy of Finland Postdoctoral Fellowship and the competitive Faculty of Science Postdoctoral Fellowship of the University of Helsinki. He is a former member of UBICOMP at the University of Oulu, Finland, and of SyMLab at the Hong Kong University of Science and Technology, Hong Kong. He is a member of the ACM (SIGMOBILE) and the IEEE, and a recipient of the Jorma Ollila Award given by the Nokia Foundation. His major research interests include mobile and pervasive computing, distributed systems, and mobile cloud computing.
16:40: Michell Boerger
Fraunhofer Institute for Open Communication Systems FOKUS
Topic: Exploring the Robustness and Accountability of ML Models Operating on Multivariate Time Series Data
Abstract: This presentation delves into the critical domain of analyzing trustworthiness aspects tailored for machine learning (ML) models handling multivariate time series data. As ML applications become pervasive, ensuring the reliability and interpretability of models is imperative, especially in dynamic environments with sequential data. In this context, the presentation will explore various facets of trustworthiness analysis for time series classifiers, encompassing model accountability and robustness. Focused on real-world applications, the session illuminates these aspects through two tangible use cases: 1) we explore the impact of data poisoning attacks on a fall detection use case operating on multivariate acceleration data, and 2) we present findings of a quantitative accountability analysis in a healthcare context, in which the stability and consistency of various XAI methods are investigated for an ML model operating on multivariate ECG data. These real-world case studies give participants valuable insights into analyzing the robustness and accountability of ML models and serve as a guide for addressing trustworthiness challenges and deploying responsible AI practices when developing ML models that operate on multivariate time series data.
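To make the first use case concrete, the sketch below shows, under assumed conditions and not the speaker's actual pipeline, how a label-flipping data-poisoning experiment on a multivariate time series classifier can be scored: synthetic accelerometer-like windows stand in for real fall-detection data, and clean-test accuracy is reported for increasing poisoning rates.

```python
# Minimal sketch (illustrative, not the talk's actual setup): measure the accuracy drop
# caused by label-flipping poisoning of a multivariate time-series classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_windows(n, length=50, channels=3):
    """Synthetic 3-axis acceleration windows; class 1 contains a burst ("fall")."""
    X = rng.normal(0.0, 1.0, size=(n, length, channels))
    y = rng.integers(0, 2, size=n)
    X[y == 1, length // 2, :] += 5.0      # inject a spike for the positive class
    return X.reshape(n, -1), y            # flatten time x channels for a plain classifier

X, y = make_windows(2000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def poisoned_accuracy(flip_fraction):
    """Flip a fraction of training labels, retrain, and report clean-test accuracy."""
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(flip_fraction * len(y_tr)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_poisoned)
    return accuracy_score(y_te, clf.predict(X_te))

for frac in (0.0, 0.1, 0.3):
    print(f"label-flip fraction {frac:.1f} -> test accuracy {poisoned_accuracy(frac):.3f}")
```

Sweeping the poisoning rate in this way yields a coarse robustness curve; the analyses presented in the talk, including the stability and consistency of XAI methods, go well beyond this toy setup.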
Speaker bio: Michell Boerger is currently working as a Researcher at the Fraunhofer Institute for Open Communication Systems FOKUS, where he is active in the research group “Quality Engineering for Urban ICT”. In his work at Fraunhofer FOKUS, Michell is involved in several industry and research projects in the areas of Smart Cities, Artificial Intelligence, Blockchain, IT Security, and Smart Energy. Currently, he is working in the H2020 project SPATIAL on the challenges and possible solutions of explainable and trustworthy Artificial Intelligence in ICT systems and cybersecurity applications.
17:05: Sonia Sousa
Tallinn University/Tallinn University of Technology
Topic: A Human-Computer Interaction (HCI) Perspective on Users' Trust in AI
Abstract: Fostering and maintaining user trust in AI systems can help unlock AI's potential for society and ensure its uptake (or appropriation) without fear. The Trustworthy HCI lab at Tallinn University has been conducting research to further understand this phenomenon from a human-centred perspective. It recognises the complex nature and challenges of incorporating users' trust characteristics in AI-enabled systems and therefore focuses on finding ways for non-experts to implement and operationalise trust requirements from an HCI perspective. The main goal of this talk is to provide insights into how novel human-centred design (HCD) approaches to trust and trustworthiness can be used to understand user requirements, resources, and environments, evaluate the user experience (UX), and improve users' feelings of trustworthiness in technical systems. It also provides an overview of user trust definitions, user trust influencing factors, and methods to measure user trust in AI-enabled systems, as well as of how trust concepts can be adapted and used in the Human-Computer Interaction (HCI) field to help shape people's experiences with digital artefacts. Results achieved demonstrate that user trust in AI-enabled systems is influenced by three main themes: socio-ethical considerations, technical and design features, and user characteristics. They also highlight the importance of selecting and tailoring system features according to the targeted user group's characteristics.
Speaker bio: Sonia Sousa is an Associate Professor of Interaction Design at Tallinn University and a Senior Researcher at Tallinn University of Technology. She holds a PhD in Web Science and Technology, focused on the topic of “Trust in Computing”, and a PhD in Education Technology, focused on “Exploring the Interaction between Trust and Performance”. Her research interests are trust in technology, trustworthy AI, human-computer interaction, and user experience; she has been researching and mapping the trust implications in Human-Computer Interaction (HCI) for over 15 years. She is the head of the Joint Online MSc in Interaction Design. Her most recently funded projects include NGI-Trust, CHIST-ERA, Horizon 2020, and AFOSR (U.S. Air Force Office of Scientific Research) grants. She has been nominated for AcademiaNet, the expert database for outstanding female academics.
17:45: Manh Nguyen
Research Engineer at Montimage, France
Topic: Towards Improving Explainability, Resilience, and Performance of Cybersecurity Analysis of 5G/IoT Networks
Abstract: The rise of encrypted Internet traffic has created a need for more advanced techniques in traffic analysis and classification. While traditional rule-based and signature-based methods face challenges due to encryption, machine learning (ML) has shown promise in anomaly detection. However, complex ML models like deep neural networks lack transparency, leading to a need for explainable AI (XAI) frameworks. In this talk, we introduce Montimage AI Platform (MAIP), a user-friendly deep learning framework for detecting and classifying malicious traffic while providing explanations for model decisions. Additionally, we perform adversarial attacks to assess the accountability and robustness of our model using different quantifiable metrics. Through experiments with both public and private network traffic, we demonstrate MAIP's high performance, robustness, and alignment with domain knowledge.
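As a rough illustration of the kind of robustness check the abstract mentions (a hypothetical setup, not MAIP itself, which operates on deep models and real traffic), the sketch below mounts an FGSM-style evasion attack on a simple flow-feature classifier and compares accuracy before and after the perturbation.

```python
# Minimal sketch (assumed setup): an FGSM-style evasion test for a flow-feature
# traffic classifier, reporting accuracy on clean and adversarial test samples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic "flow features" (packet sizes, inter-arrival statistics, ...) stand in
# for real benign/malicious traffic records.
X, y = make_classification(n_samples=3000, n_features=20, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# For logistic regression the input gradient of the cross-entropy loss is (p - y) * w,
# so the FGSM perturbation is epsilon * sign((p - y) * w) per sample.
p = clf.predict_proba(X_te)[:, 1]
grad = (p - y_te)[:, None] * clf.coef_[0][None, :]
epsilon = 0.3
X_adv = X_te + epsilon * np.sign(grad)

print("clean accuracy      :", accuracy_score(y_te, clf.predict(X_te)))
print("adversarial accuracy:", accuracy_score(y_te, clf.predict(X_adv)))
```

The gap between the two accuracies is one of the simplest quantifiable robustness metrics of the sort the abstract alludes to.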
Speaker bio: Manh Nguyen is currently a research engineer at Montimage, France. He earned his PhD from CEA LIST and the University of Grenoble Alpes in 2021. He also holds a master's degree in network and telecommunications from the University of Paris Sud, obtained in 2013. His current research emphasizes explainable AI and automated vulnerability detection, with a particular interest in (directed) greybox fuzzing. He is currently contributing to several H2020 projects, such as SPATIAL, AI4CYBER, NATWORK, and RESILMESH, focusing on explainable and trustworthy artificial intelligence-based solutions in IoT/5G/6G and cybersecurity applications.
18:10: David Solans
Telefónica Research
Topic: Fairness and Diversity in Federated Learning
Abstract: Distributed computing, with federated learning at its forefront, has emerged as a relevant framework for developing AI systems while preserving data privacy. While reducing the amount of communication required to train AI models, the same framework imposes novel barriers to training fair models, which often requires having sensitive attributes at hand. On the other hand, existing techniques to reduce the attack surface against adversarial attacks might lead to reduced diversity in the resulting model. In this talk, we will cover both aspects in the context of federated learning, giving an overview of our latest advances in this domain.
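For readers unfamiliar with the setting, the following sketch (purely illustrative, with a synthetic sensitive attribute) shows federated averaging of a logistic model across clients together with a simple per-group accuracy gap, the kind of fairness signal that becomes hard to obtain when sensitive attributes cannot be shared.

```python
# Minimal sketch (illustrative only): FedAvg over simulated clients plus a crude
# group-accuracy-gap fairness check. In practice the sensitive attribute is often
# unavailable, which is one of the barriers the talk refers to.
import numpy as np

rng = np.random.default_rng(0)
n_clients, n_features = 5, 10

def make_client(n=400):
    X = rng.normal(size=(n, n_features))
    group = rng.integers(0, 2, size=n)            # synthetic sensitive attribute
    w_true = rng.normal(size=n_features)
    y = (X @ w_true + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(float)
    return X, y, group

clients = [make_client() for _ in range(n_clients)]

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few local gradient steps of logistic regression on one client's data."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

w_global = np.zeros(n_features)
for _ in range(20):                                # FedAvg communication rounds
    local_models = [local_update(w_global.copy(), X, y) for X, y, _ in clients]
    w_global = np.mean(local_models, axis=0)       # equal-weight averaging

# Fairness signal: accuracy gap between the two groups, pooled over all clients.
X_all = np.vstack([X for X, _, _ in clients])
y_all = np.hstack([y for _, y, _ in clients])
g_all = np.hstack([g for _, _, g in clients])
pred = (1 / (1 + np.exp(-X_all @ w_global)) > 0.5).astype(float)
acc = lambda mask: (pred[mask] == y_all[mask]).mean()
print("accuracy gap between groups:", abs(acc(g_all == 0) - acc(g_all == 1)))
```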
Speaker bio: David Solans is a Researcher at Telefónica Research. He holds a PhD in Communication Sciences from Pompeu Fabra University, focused on the topic of algorithmic fairness and non-discrimination. His research focuses on mitigating the social risks arising from the incorporation of novel technologies into our daily lives, as well as on using technology to address issues of social significance. He has authored or co-authored multiple scientific articles and is the inventor of several patents published by the United States Patent and Trademark Office as well as the European Patent Office.
18:35: Bart Siniarski
University College Dublin
Topic: AI Under Attack: Protecting 6G Networks Through Resilience and Innovation
Abstract: In recent years, Artificial Intelligence (AI) has become increasingly popular, especially in the field of advanced network systems that are paving the way for future technologies like Beyond 5G and 6G. While these advancements promise a more automated and efficient world, they also bring new challenges, particularly in terms of security. AI systems can be vulnerable to cyber attacks, which could jeopardize the privacy and safety of many people. To address this, researchers are focusing on how to make these AI systems more resilient against such attacks. This involves testing and refining AI models used in network traffic management to ensure they can withstand and recognize attacks during both their development and operational phases. One innovative method being explored is Explainable AI (XAI), which helps in understanding and identifying when a network is under attack. Through extensive experiments, it has been found that certain measures can effectively gauge how well an AI model can resist attacks, and XAI proves to be a promising tool in detecting these threats. This research is crucial for ensuring the security and reliability of AI in our future network systems.
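As a toy illustration of using explanations to spot attacks (an assumed approach sketched for this program, not the project's actual method), the snippet below summarises per-feature attributions of a linear traffic classifier and flags windows whose attribution profile drifts away from a benign baseline.

```python
# Minimal sketch (assumed approach): use per-feature attribution drift as an
# XAI-based signal that traffic statistics have shifted, e.g. under attack.
# Attributions here are simple "weight * input" scores for a linear model; the
# referenced work uses richer XAI methods on deep network-traffic models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_features = 12

# Train on benign-vs-known-attack traffic features (synthetic stand-ins).
X_train = rng.normal(size=(2000, n_features))
y_train = (X_train[:, 0] + X_train[:, 3] > 0).astype(int)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def attribution_profile(X):
    """Mean |weight * feature| per feature: a crude local-attribution summary."""
    return np.abs(X * clf.coef_[0]).mean(axis=0)

baseline = attribution_profile(X_train)

def drift_score(X_window):
    """L1 distance between a window's attribution profile and the benign baseline."""
    return np.abs(attribution_profile(X_window) - baseline).sum()

clean_window = rng.normal(size=(200, n_features))
attack_window = rng.normal(size=(200, n_features))
attack_window[:, 3] += 4.0                          # the attack skews one feature heavily

print("drift on clean window :", round(drift_score(clean_window), 3))
print("drift on attack window:", round(drift_score(attack_window), 3))
```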
Speaker bio: Bartlomiej Siniarski currently holds the position of post-doctoral researcher and serves as project manager for the EU H2020 SPATIAL project at University College Dublin. His academic journey commenced with undergraduate studies in Computer Science at University College Dublin (Ireland) and the University of New South Wales (Australia), and he earned his doctoral degree in 2018. His expertise lies in the field of IoT networks, with a particular focus on the design of intelligent sensors and the collection, storage, and analysis of the data they produce. Moreover, Bartlomiej has actively contributed to various projects, including MSCA-ITN-ETN, ICT-52-2020, and H2020-SU-DS-2020, which are dedicated to addressing challenges in network security, performance, and management within 5G and B5G networks.