Keynote Speakers

Dr. Roberto Di Pietro (College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar)
Title:

Securing Content in Decentralized Online Social Networks: Solutions, Limitations, and the Road Ahead

Abstract:

The most popular Online Social Networks (OSNs) are based on centralized architectures, where service providers (e.g., Facebook, Twitter, or Instagram) have full control over the data published by their users, a requirement of their business model, which is based on the monetization of such data. Such centralized architectures also increase the risk of censorship, surveillance, and information leakage. Decentralized Online Social Networks (DOSNs), in contrast, are typically based on a P2P architecture, where no central service provider controls user data.

In this talk, we investigate and compare the principal content privacy enforcement models adopted by current DOSNs. In particular, we discuss their suitability to support different types of privacy policies based on user-group modelling. The discussion is supported by an evaluation carried out by implementing several models and comparing their performance for the typical operations performed on groups: content publishing, user join, and user leave. We also highlight the limitations of current approaches and outline future research directions to help DOSNs become a relevant player in the Online Social Networks ecosystem and to put users back in control of the content they generate.
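
As a purely illustrative aside (not one of the speaker's evaluated implementations), the following minimal Python sketch shows one common way content privacy can be enforced with a shared symmetric group key, and why the cost profiles of publish, join, and leave differ: publishing and joining touch a single key, while a leave forces a rekey delivered to every remaining member. The class name, the delivery counter, and the use of the third-party cryptography package's Fernet primitive are all assumptions made for illustration; a real DOSN would wrap the group key under each member's public key.

    # Minimal sketch of a shared-group-key content-protection model.
    # Illustration only; assumes the third-party "cryptography" package.
    from cryptography.fernet import Fernet

    class Group:
        def __init__(self, members):
            self.members = set(members)
            self.key = Fernet.generate_key()   # current group key
            self.deliveries = 0                # counts key-distribution work

        def publish(self, content: bytes) -> bytes:
            # Publish: one symmetric encryption, independent of group size.
            return Fernet(self.key).encrypt(content)

        def join(self, user):
            # Join: hand the current key to the newcomer only.
            self.members.add(user)
            self.deliveries += 1

        def leave(self, user):
            # Leave: rotate the key and redistribute it to every remaining
            # member so the departed user cannot read future content.
            # This is the expensive operation in this model.
            self.members.discard(user)
            self.key = Fernet.generate_key()
            self.deliveries += len(self.members)

    g = Group({"alice", "bob", "carol"})
    token = g.publish(b"hello group")
    g.join("dave")
    g.leave("bob")
    print(len(g.members), g.deliveries)  # 3 members; 1 (join) + 3 (rekey) deliveries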

Speaker's Biography:

Dr. Roberto Di Pietro, an ACM Distinguished Scientist, is a Full Professor in Cybersecurity at HBKU-CSE (College of Science and Engineering at Hamad Bin Khalifa University). Previously, he served as Global Head of Security Research at Nokia Bell Labs and as Associate Professor (with tenure) of Computer Science at the University of Padua, Italy. He started his career in computer science in 1995, serving for several years as a senior military technical officer with the Italian MoD (Ministry of Defence). His main research interests include AI-driven cybersecurity; resiliency, security, and privacy for wired and wireless distributed systems (e.g., Blockchain technology, Cloud, IoT, OSNs); virtualization security; applied cryptography; intrusion detection; and data science. He is involved in M&A of start-ups and has also founded one (exited). In 2011-2012 he was awarded a Chair of Excellence by University Carlos III, Madrid. In 2020 he received the Jean-Claude Laprie Award for having significantly influenced the theory and practice of Dependable Computing.



Dr. Nathalie Baracaldo (IBM Research)
Title:

Federated Learning: The Hype, State-of-the-Art and Open Challenges

Abstract:

The popularity of machine learning models has dramatically increased across a large variety of applications that affect people's daily lives, including product recommendations, healthcare predictions, and critical applications. At the same time, this wide availability has raised questions about the trustworthiness, security, and privacy implications of using these systems. While novel technologies and methodologies have been emerging to protect the privacy and security of AI systems, there are still open challenges that need to be addressed by the research community.

Over the past years, my research has focused on the creation of defenses to protect the machine learning pipeline and the design of privacy-aware methodologies that enable the training of accurate machine learning models without transmitting the data to a central place. In this talk, I will focus on data privacy, covering a game-changing paradigm known as federated learning, which to some extent addresses privacy concerns and regulations that prevent the free transmission and sharing of information. Federated learning is a technology that enables multiple participants owning private data to collaboratively train a single machine learning model while maintaining their training data locally. This is in sharp contrast to traditional machine learning, where all data needs to be in a central place. Some argue that federated learning is a privacy-by-design technology, given that it does not require data to be transmitted to a central place. However, there are still privacy risks that are relevant in some scenarios. Novel inference attacks that take advantage of the federated learning process have been demonstrated in the literature, resulting in a variety of defenses that aim to reduce these risks. I will present some of these attacks and several cryptographic and differential privacy techniques to deter them. The plethora of defenses is particularly interesting given their diverse threat models and the divergent set of privacy requirements they address; in this talk I will demystify them. I will also explain some challenges related to manipulation attacks and machine learning fairness in the context of federated learning. Finally, I will touch upon transparency issues and how to enable accountability for regulated industries and vertical federated learning. This talk will walk through the security and privacy challenges and solutions in federated learning systems.
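
To make the "data stays local" contrast concrete, here is a minimal, self-contained sketch of a federated-averaging-style training round in Python (NumPy only). The linear model, function names, and toy parties are hypothetical illustrations and are not taken from the speaker's IBM Federated Learning framework; the point is simply that each party computes an update on its private data and only model parameters are shared with the aggregator.

    # Minimal sketch of federated-averaging-style training (illustration only).
    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        # One participant refines the global model on its private data
        # (simple linear regression via gradient descent); raw data never
        # leaves the participant.
        w = weights.copy()
        for _ in range(epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        return w

    def federated_round(global_w, parties):
        # The aggregator receives only model updates and averages them,
        # weighted by each party's local data size.
        updates, sizes = [], []
        for X, y in parties:
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

    # Toy usage: three parties, each holding private samples of y = 2*x.
    rng = np.random.default_rng(0)
    parties = []
    for _ in range(3):
        X = rng.normal(size=(20, 1))
        parties.append((X, 2 * X[:, 0] + rng.normal(scale=0.01, size=20)))
    w = np.zeros(1)
    for _ in range(30):
        w = federated_round(w, parties)
    print(w)  # approaches [2.0] without pooling any raw data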

Speaker's Biography:

Dr. Nathalie Baracaldo leads the AI Security and Privacy Solutions team and is a Research Staff Member at IBM's Almaden Research Center in San Jose, CA. Nathalie is passionate about delivering machine learning solutions that are highly accurate, withstand adversarial attacks, and protect data privacy. She led her team in the design of the IBM Federated Learning framework, which is now part of the Watson Machine Learning product. Nathalie is also the principal investigator for the DARPA program “Guaranteeing AI Robustness Against Deception” (GARD). In 2020, she received the IBM Master Inventor distinction for her contributions to IBM Intellectual Property and innovation. She also received the 2021 Corporate Technical Recognition, one of the highest recognitions given to IBMers for breakthrough technical achievements that have led to notable market and industry success for IBM; this recognition was awarded for her contribution to the Trusted AI Initiative. Nathalie has received multiple best paper awards and has published in top-tier conferences and journals. Her research interests include security and privacy, distributed systems, and machine learning. She received her Ph.D. degree from the University of Pittsburgh in 2016.



Dr. Endadul Hoque (Syracuse University)
Title:

Network (In)security: Leniency in Protocols' Design, Code and Configuration

Abstract:

Protocols are one of the founding pillars of network communication. Given their importance, protocols have received great attention not only from the research community but also from adversaries. Protocols, particularly their implementations, have been lucrative targets of adversarial attacks that induce network insecurity by compromising the guarantees these implementations should provide. Most of these attacks can be traced back to leniency in their design, code, or configuration. Finding leniency in implementations is challenging because lenient instances are primarily tied to the semantics of the protocol and thus demand techniques different from the existing approaches used to find low-level memory corruption bugs.

In this talk, I will discuss our experience and the lessons learned in detecting leniency in different layers of the TCP/IP protocol stack. First, I will show how leniency in the design of loss-based TCP congestion control schemes can be exploited by an attacker to manipulate the victim into taking actions favorable to the attacker. I will introduce our model-guided fuzzing approach for finding such manipulation attacks in the TCP implementations that are part of mainstream operating systems (e.g., Linux, Windows). Next, I will focus on leniency in code, where an implementation exhibits noncompliance with its design. Specifically, I will talk about how lenient implementations of X.509 certificate validation in SSL/TLS libraries can be exploited by an attacker to mount impersonation attacks. Finally, I will highlight that it is not always the protocol's design or code that is at fault; sometimes it is humans: the users and/or the IT (Information Technology) administrators. Specifically, I will present our multifaceted measurement study in which we examined the WPA2-Enterprise Wi-Fi configurations prescribed by tertiary education institutes (TEIs) around the world. I will share our findings about the widespread insecure practices that can leave users of these institutes open to credential theft.
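
To illustrate what "leniency in code" can look like at the API level, the short Python sketch below contrasts strict X.509 validation (chain verification plus host-name checking) with a lenient configuration that disables both, the kind of setting an attacker can exploit for impersonation. It uses only the standard ssl module; the function names and the example host are assumptions made for illustration and are unrelated to the speaker's measurement tooling.

    # Strict versus lenient X.509 certificate validation (illustration only).
    import socket
    import ssl

    def fetch_cert_strict(hostname, port=443):
        # Strict: verify the certificate chain against trusted CAs and
        # check that the certificate matches the host name.
        ctx = ssl.create_default_context()  # CERT_REQUIRED + host-name check
        with socket.create_connection((hostname, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                return tls.getpeercert()

    def fetch_cert_lenient(hostname, port=443):
        # Lenient (what attackers exploit): chain and host-name checks are
        # disabled, so the handshake succeeds with any certificate,
        # including a forged one.
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((hostname, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                return tls.getpeercert()  # empty when verification is disabled

    if __name__ == "__main__":
        print(fetch_cert_strict("example.org"))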

Speaker's Biography:

Dr. Endadul Hoque is an Assistant Professor at Syracuse University (SU) in the Department of Electrical Engineering and Computer Science. His research interests lie broadly in the security of computer networks and systems, with a focus on automated vulnerability detection, applied program analysis, and building resilient systems. At SU, he is the director of the Security of Networked Systems (SYNE) lab. Before joining SU, he was an Assistant Professor at Florida International University (FIU) and a Postdoctoral Research Associate at Northeastern University. He received his Ph.D. in Computer Science from Purdue University. He received the Teaching Fellowship Award and the Bilsland Dissertation Fellowship Award at Purdue, a distinguished paper award at NDSS 2018, and a Google Research Scholar Award in 2022.

