Friday May 23, 2025 12:35pm - 12:48pm EDT
Authors - Shahinul Hoque, Farhin Farhad Riya, Jinyuan Sun, Hairong Qi, Kevin Tomsovic
Abstract - Machine learning (ML) models hosted on cloud platforms are increasingly susceptible to security vulnerabilities, particularly due to their exposure to external queries in untrusted environments. In this paper, we explore this vulnerability by leveraging fuzzing techniques to systematically generate diverse input samples (X) with which to query cloud-hosted ML models. By capturing the corresponding outputs (y), we train a shadow model that mimics the behavior of the target model. This methodology allows us to systematically assess the security risks associated with such models, including information leakage, extraction of decision boundaries, and model inversion. The core of our study is to determine the feasibility of mimicking cloud-hosted ML models using shadow models trained via various fuzzing attacks. We focus on computationally efficient fuzzing methods to evaluate the practicality of these attacks. Our findings demonstrate that fuzzing effectively creates a comprehensive training dataset for the shadow model, minimizing the number of queries needed to mount a successful attack. Moreover, we discuss the broader implications of these security breaches for the confidentiality, integrity, and availability of the models, identifying significant security deficiencies in current deployment practices for cloud-hosted ML models. We conclude with proposed countermeasures designed to strengthen the security of these systems, underscoring the importance of robust defensive strategies in cloud-based ML frameworks.
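The query-and-mimic procedure the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the target model, the uniform-random fuzzing strategy, and all model and parameter choices here are assumptions for demonstration, with the attacker limited to black-box `predict()` queries.

```python
# Hypothetical sketch of a fuzzing-based shadow-model attack:
# fuzz random inputs X, capture the target's outputs y, and train
# a shadow model on the (X, y) query/response pairs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in for the cloud-hosted target model; the attacker never sees
# its private training data, only its prediction API.
X_priv, y_priv = make_classification(n_samples=500, n_features=4, random_state=0)
target = LogisticRegression().fit(X_priv, y_priv)

# Fuzzing step: generate diverse random queries over the input range.
X_fuzz = rng.uniform(X_priv.min(), X_priv.max(), size=(2000, 4))
y_fuzz = target.predict(X_fuzz)  # captured outputs from the black box

# Train the shadow model on the captured query/response pairs.
shadow = DecisionTreeClassifier(random_state=0).fit(X_fuzz, y_fuzz)

# Fidelity: how often the shadow agrees with the target on fresh queries.
X_test = rng.uniform(X_priv.min(), X_priv.max(), size=(500, 4))
fidelity = (shadow.predict(X_test) == target.predict(X_test)).mean()
print(f"shadow/target agreement: {fidelity:.2f}")
```

In practice the query budget, the fuzzing distribution, and whether the API returns labels or full confidence scores all strongly affect how few queries suffice, which is the trade-off the paper evaluates.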
Paper Presenter
Shahinul Hoque

United States of America
Room - 1235 NYC-ILR Conference Center, NY, USA
