-
Construction of all MDS and involutory MDS matrices arXiv.cs.CR Pub Date : 2024-03-15 Yogesh Kumar, P. R. Mishra, Susanta Samanta, Kishan Chand Gupta, Atul Gaur
In this paper, we propose two algorithms for a hybrid construction of all $n\times n$ MDS and involutory MDS matrices over a finite field $\mathbb{F}_{p^m}$, respectively. The proposed algorithms effectively narrow down the search space to identify $(n-1) \times (n-1)$ MDS matrices, facilitating the generation of all $n \times n$ MDS and involutory MDS matrices over $\mathbb{F}_{p^m}$. To the best
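As background, an $n\times n$ matrix over a finite field is MDS if and only if every square submatrix is nonsingular. A brute-force check of that property over a prime field $\mathbb{F}_p$ can be sketched as follows (this is only the definition-level test, not the paper's search-space-narrowing algorithms):

```python
from itertools import combinations

def det_mod_p(M, p):
    # Determinant over F_p via Gaussian elimination; inverses by Fermat's
    # little theorem (valid because p is prime).
    M = [row[:] for row in M]
    n = len(M)
    det = 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] % p != 0), None)
        if pivot is None:
            return 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det
        det = (det * M[col][col]) % p
        inv = pow(M[col][col], p - 2, p)
        for r in range(col + 1, n):
            f = (M[r][col] * inv) % p
            for c in range(col, n):
                M[r][c] = (M[r][c] - f * M[col][c]) % p
    return det % p

def is_mds(M, p):
    # MDS iff all square submatrices (all sizes) have nonzero determinant.
    n = len(M)
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                sub = [[M[r][c] for c in cols] for r in rows]
                if det_mod_p(sub, p) == 0:
                    return False
    return True
```

For example, `[[1, 1], [1, 2]]` over $\mathbb{F}_3$ passes (all entries and the full determinant are nonzero), while any matrix with a zero entry or repeated rows fails.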
-
Unveiling Wash Trading in Popular NFT Markets arXiv.cs.CR Pub Date : 2024-03-15 Yuanzheng Niu, Xiaoqi Li, Hongli Peng, Wenkai Li
As emerging digital assets, NFTs are susceptible to anomalous trading behaviors due to the lack of stringent regulatory mechanisms, potentially causing economic losses. In this paper, we conduct the first systematic analysis of four non-fungible token (NFT) markets. Specifically, we analyze more than 25 million transactions within these markets to explore the evolution of wash trading activities. Furthermore
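One simple wash-trading heuristic such an analysis can build on is flagging round-trip trades of the same token between the same pair of wallets. A hypothetical, simplified detector (the paper's methodology is richer than this sketch, and the trade tuples here are illustrative):

```python
from collections import defaultdict

def find_wash_candidates(trades):
    """Flag token IDs traded back and forth between the same wallet pair.

    `trades` is an iterable of (token_id, seller, buyer) tuples in
    chronological order -- a deliberately simplified trade record."""
    seen = defaultdict(set)            # token -> set of (seller, buyer) edges
    flagged = set()
    for token, seller, buyer in trades:
        if (buyer, seller) in seen[token]:   # reverse edge seen: round trip
            flagged.add(token)
        seen[token].add((seller, buyer))
    return flagged

trades = [
    ("nft#1", "alice", "bob"),
    ("nft#2", "carol", "dave"),
    ("nft#1", "bob", "alice"),   # nft#1 returns to alice -> suspicious
]
# find_wash_candidates(trades) → {"nft#1"}
```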
-
Unsupervised Threat Hunting using Continuous Bag-of-Terms-and-Time (CBoTT) arXiv.cs.CR Pub Date : 2024-03-15 Varol Kayhan, Shivendu Shivendu, Rouzbeh Behnia, Clinton Daniel, Manish Agrawal
Threat hunting is the process of sifting through system logs to detect malicious activities that might have bypassed existing security measures. It can be performed in several ways, one of which is based on detecting anomalies. We propose an unsupervised framework, called continuous bag-of-terms-and-time (CBoTT), and publish its application programming interface (API) to help researchers and cybersecurity analysts
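The abstract does not spell out the CBoTT formulation, so the sketch below shows only the generic bag-of-terms idea it builds on: represent each log line as term counts and score anomalies by cosine distance from the corpus centroid (the scoring rule here is an assumption, not the paper's method):

```python
import math
from collections import Counter

def vectorize(line):
    # Bag-of-terms: whitespace tokens and their counts.
    return Counter(line.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def anomaly_scores(log_lines):
    # Score each line by its distance from the term-count centroid:
    # lines sharing few terms with the bulk of the log score higher.
    vecs = [vectorize(line) for line in log_lines]
    centroid = Counter()
    for v in vecs:
        centroid.update(v)
    return [1.0 - cosine(v, centroid) for v in vecs]

logs = ["user login ok", "user login ok", "user login ok",
        "malware beacon detected"]
# anomaly_scores(logs) peaks at the last, unusual line.
```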
-
Formal Security Analysis of the AMD SEV-SNP Software Interface arXiv.cs.CR Pub Date : 2024-03-15 Petar Paradžik, Ante Derek, Marko Horvat
AMD Secure Encrypted Virtualization technologies enable confidential computing by protecting virtual machines from highly privileged software such as hypervisors. In this work, we develop the first, comprehensive symbolic model of the software interface of the latest SEV iteration called SEV Secure Nested Paging (SEV-SNP). Our model covers remote attestation, key derivation, page swap and live migration
-
Liquid Staking Tokens in Automated Market Makers arXiv.cs.CR Pub Date : 2024-03-15 Krzysztof Gogol, Robin Fritsch, Malte Schlosser, Johnnatan Messias, Benjamin Kraner, Claudio Tessone
This paper studies liquid staking tokens (LSTs) on automated market makers (AMMs), both theoretically and empirically. LSTs are tokenized representations of staked assets on proof-of-stake blockchains. First, we theoretically model LST-liquidity on AMMs. This includes categorizing suitable AMM types for LST liquidity, as well as deriving formulas for the necessary returns from trading fees to adequately
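For context, much LST liquidity sits on constant-product AMMs; a minimal sketch of such a swap (generic textbook mechanics, not tied to any specific venue or to the paper's models, and the fee value is illustrative):

```python
def amm_swap(x_reserve, y_reserve, dx, fee=0.003):
    """Constant-product swap: trade dx of asset X for dy of asset Y,
    preserving x * y = k on the post-fee input amount."""
    dx_after_fee = dx * (1 - fee)
    k = x_reserve * y_reserve
    new_x = x_reserve + dx_after_fee
    dy = y_reserve - k / new_x      # output that keeps the invariant
    return dy

# With 1000/1000 reserves and no fee, selling 100 X yields
# 100 * 1000 / 1100 ≈ 90.91 Y; the fee strictly reduces the output.
```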
-
Specification and Enforcement of Activity Dependency Policies using XACML arXiv.cs.CR Pub Date : 2024-03-15 Tanjila Mawla, Maanak Gupta, Ravi Sandhu
The evolving smart and interconnected systems are designed to operate with minimal human intervention. Devices within these smart systems often engage in prolonged operations based on sensor data and contextual factors. Recently, an Activity-Centric Access Control (ACAC) model has been introduced to regulate these prolonged operations, referred to as activities, which undergo state changes over extended
-
Time-Frequency Jointed Imperceptible Adversarial Attack to Brainprint Recognition with Deep Learning Models arXiv.cs.CR Pub Date : 2024-03-15 Hangjie Yi, Yuhang Ming, Dongjun Liu, Wanzeng Kong
EEG-based brainprint recognition with deep learning models has garnered much attention in biometric identification. Yet, studies have indicated vulnerability to adversarial attacks in deep learning models with EEG inputs. In this paper, we introduce a novel adversarial attack method that jointly attacks time-domain and frequency-domain EEG signals by employing wavelet transform. Different from most
-
Securing Federated Learning with Control-Flow Attestation: A Novel Framework for Enhanced Integrity and Resilience against Adversarial Attacks arXiv.cs.CR Pub Date : 2024-03-15 Zahir Alsulaimawi
The advent of Federated Learning (FL) as a distributed machine learning paradigm has introduced new cybersecurity challenges, notably adversarial attacks that threaten model integrity and participant privacy. This study proposes an innovative security framework inspired by Control-Flow Attestation (CFA) mechanisms, traditionally used in cybersecurity, to ensure software execution integrity. By integrating
-
Federated Learning with Anomaly Detection via Gradient and Reconstruction Analysis arXiv.cs.CR Pub Date : 2024-03-15 Zahir Alsulaimawi
In the evolving landscape of Federated Learning (FL), the challenge of ensuring data integrity against poisoning attacks is paramount, particularly for applications demanding stringent privacy preservation. Traditional anomaly detection strategies often struggle to adapt to the distributed nature of FL, leaving a gap our research aims to bridge. We introduce a novel framework that synergizes gradient-based
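A minimal sketch of the gradient-analysis half of such a framework: flag clients whose update norms are statistical outliers relative to the cohort. The z-score threshold and the use of plain L2 norms are assumptions for illustration; the paper combines gradient analysis with reconstruction analysis:

```python
import math

def flag_outliers(client_grads, z_thresh=2.0):
    """Return indices of clients whose gradient L2 norm deviates from the
    cohort mean by more than z_thresh standard deviations."""
    norms = [math.sqrt(sum(g * g for g in grad)) for grad in client_grads]
    mean = sum(norms) / len(norms)
    std = math.sqrt(sum((n - mean) ** 2 for n in norms) / len(norms))
    if std == 0:
        return []          # identical updates: nothing to flag
    return [i for i, n in enumerate(norms) if abs(n - mean) / std > z_thresh]

# Nine benign clients plus one sending a hugely scaled update:
clients = [[1.0, 1.0]] * 9 + [[50.0, 50.0]]
# flag_outliers(clients) → [9]
```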
-
Search-based Ordered Password Generation of Autoregressive Neural Networks arXiv.cs.CR Pub Date : 2024-03-15 Min Jin, Junbin Ye, Rongxuan Shen, Huaxing Lu
Passwords are the most widely used method of authentication, and password guessing is an essential part of password cracking and password security research. The progress of deep learning technology provides a promising way to improve the efficiency of password guessing. However, current research on neural network password guessing methods mostly focuses on model structure and has overlooked the generation
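The "ordered generation" idea — emitting candidates in descending model probability — can be illustrated with a best-first search over a toy per-position character model. The model here is a hypothetical stand-in for the paper's autoregressive network; only the search structure carries over:

```python
import heapq
import math

def ordered_passwords(position_probs, limit):
    """Enumerate fixed-length candidates in descending probability.

    Best-first search over (negative log-probability, prefix); since
    extending a prefix never lowers its negative log-probability,
    completed strings pop off the heap in globally sorted order."""
    heap = [(0.0, "")]
    out = []
    while heap and len(out) < limit:
        neg_logp, prefix = heapq.heappop(heap)
        if len(prefix) == len(position_probs):
            out.append(prefix)
            continue
        for ch, p in position_probs[len(prefix)].items():
            heapq.heappush(heap, (neg_logp - math.log(p), prefix + ch))
    return out

model = [{"p": 0.7, "q": 0.3}, {"1": 0.6, "2": 0.4}]
# ordered_passwords(model, 4) → ["p1", "p2", "q1", "q2"]
```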
-
How To Save Fees in Bitcoin Smart Contracts: a Simple Optimistic Off-chain Protocol arXiv.cs.CR Pub Date : 2024-03-14 Dario Maddaloni, Riccardo Marchesin, Roberto Zunino
We consider the execution of smart contracts on Bitcoin. There, every contract step corresponds to appending to the blockchain a new transaction that spends the output representing the old contract state, creating a new one for the updated state. This standard procedure requires the contract participants to pay transaction fees for every execution step. In this paper, we introduce a protocol that moves
-
Helpful or Harmful? Exploring the Efficacy of Large Language Models for Online Grooming Prevention arXiv.cs.CR Pub Date : 2024-03-14 Ellie Prosser, Matthew Edwards
Powerful generative Large Language Models (LLMs) are becoming popular tools amongst the general public as question-answering systems, and are being utilised by vulnerable groups such as children. With children increasingly interacting with these tools, it is imperative for researchers to scrutinise the safety of LLMs, especially for applications that could lead to serious outcomes, such as online child
-
Explainable Machine Learning-Based Security and Privacy Protection Framework for Internet of Medical Things Systems arXiv.cs.CR Pub Date : 2024-03-14 Ayoub Si-ahmed, Mohammed Ali Al-Garadi, Narhimene Boustia
The Internet of Medical Things (IoMT) transcends traditional medical boundaries, enabling a transition from reactive treatment to proactive prevention. This innovative method revolutionizes healthcare by facilitating early disease detection and tailored care, particularly in chronic disease management, where IoMT automates treatments based on real-time health data collection. Nonetheless, its benefits
-
What Was Your Prompt? A Remote Keylogging Attack on AI Assistants arXiv.cs.CR Pub Date : 2024-03-14 Roy Weiss, Daniel Ayzenshteyn, Guy Amit, Yisroel Mirsky
AI assistants are becoming an integral part of society, used to ask for advice or help with personal and confidential issues. In this paper, we unveil a novel side-channel that can be used to read encrypted responses from AI assistants over the web: the token-length side-channel. We found that many vendors, including OpenAI and Microsoft, have this side-channel. However, inferring the content of a response
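A simplified model of the token-length side-channel: if each streamed record carries one token plus a fixed framing overhead, then observed ciphertext sizes leak per-token character lengths, and each leaked length shrinks the attacker's candidate set. The one-token-per-record framing, the overhead value, and the tiny vocabulary below are illustrative assumptions:

```python
def infer_token_lengths(payload_sizes, overhead):
    """Ciphertext size minus fixed framing overhead = token length,
    under the one-token-per-streamed-record assumption."""
    return [size - overhead for size in payload_sizes]

def candidate_words(token_lengths, vocab):
    # The attacker's per-position search space: words matching each
    # leaked length.
    return [[w for w in vocab if len(w) == n] for n in token_lengths]

sizes = [10, 8]                              # observed record sizes
lengths = infer_token_lengths(sizes, 5)      # → [5, 3]
# candidate_words(lengths, ["hello", "yes", "hi", "no"])
#   → [["hello"], ["yes"]]
```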
-
A Sophisticated Framework for the Accurate Detection of Phishing Websites arXiv.cs.CR Pub Date : 2024-03-13 Asif Newaz, Farhan Shahriyar Haq, Nadim Ahmed
Phishing is an increasingly sophisticated form of cyberattack that is inflicting huge financial damage to corporations throughout the globe while also jeopardizing individuals' privacy. Attackers are constantly devising new methods of launching such assaults and detecting them has become a daunting task. Many different techniques have been suggested, each with its own pros and cons. While machine learning-based
-
PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps arXiv.cs.CR Pub Date : 2024-03-14 Ruixuan Liu, Tianhao Wang, Yang Cao, Li Xiong
The pre-training and fine-tuning paradigm has demonstrated its effectiveness and has become the standard approach for tailoring language models to various tasks. Currently, community-based platforms offer easy access to various pre-trained models, as anyone can publish without strict validation processes. However, a released pre-trained model can be a privacy trap for fine-tuning datasets if it is
-
RANDAO-based RNG: Last Revealer Attacks in Ethereum 2.0 Randomness and a Potential Solution arXiv.cs.CR Pub Date : 2024-03-14 Do Hai Son, Tran Thi Thuy Quynh, Le Quang Minh
Ethereum 2.0 is a major upgrade to improve its scalability, throughput, and security. In this version, RANDAO is the scheme used to randomly select the users who propose blocks, confirm blocks, and receive rewards. However, a vulnerability, referred to as the 'Last Revealer Attack' (LRA), compromises the randomness of this scheme by introducing bias to the Random Number Generator (RNG) process. This vulnerability
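The LRA follows from how RANDAO-style schemes accumulate reveals: the final participant can compute the outcome both with and without their own contribution and withhold whichever is unfavorable, gaining one bit of bias per withheld reveal. A toy model using XOR of hashed reveals (the real protocol mixes BLS signature reveals and penalizes withholding with a missed reward):

```python
import hashlib

def mix(reveals):
    """RANDAO-style accumulator: XOR of the hashes of all reveals."""
    acc = 0
    for r in reveals:
        acc ^= int.from_bytes(hashlib.sha256(r).digest(), "big")
    return acc

def last_revealer_bias(honest_reveals, my_reveal, favorable):
    """The last revealer previews both possible outcomes and keeps the
    one that `favorable` prefers (withholding forfeits only a reward)."""
    with_reveal = mix(honest_reveals + [my_reveal])
    without = mix(honest_reveals)
    return with_reveal if favorable(with_reveal) else without
```

For example, with `favorable = lambda x: x % 2 == 0`, the attacker guarantees the preferred parity whenever the two candidate outcomes differ in it.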
-
AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting arXiv.cs.CR Pub Date : 2024-03-14 Yu Wang, Xiaogeng Liu, Yu Li, Muhao Chen, Chaowei Xiao
With the advent and widespread deployment of Multimodal Large Language Models (MLLMs), the imperative to ensure their safety has become increasingly pronounced. However, with the integration of additional modalities, MLLMs are exposed to new vulnerabilities, rendering them prone to structure-based jailbreak attacks, where semantic content (e.g., "harmful text") has been injected into the images to
-
Covert Communication for Untrusted UAV-Assisted Wireless Systems arXiv.cs.CR Pub Date : 2024-03-14 Chan Gao, Linying Tian, Dong Zheng
Wireless systems are of paramount importance for providing ubiquitous data transmission for smart cities. However, due to the broadcasting and openness of wireless channels, such systems face potential security challenges. UAV-assisted covert communication is a supporting technology for improving covert performance and has become a prominent topic in wireless communication security research. This
-
REPQC: Reverse Engineering and Backdooring Hardware Accelerators for Post-quantum Cryptography arXiv.cs.CR Pub Date : 2024-03-14 Samuel Pagliarini, Aikata Aikata, Malik Imran, Sujoy Sinha Roy
Significant research efforts have been dedicated to designing cryptographic algorithms that are quantum-resistant. The motivation is clear: robust quantum computers, once available, will render current cryptographic standards vulnerable. Thus, we need new Post-Quantum Cryptography (PQC) algorithms, and, due to the inherent complexity of such algorithms, there is also a demand to accelerate them in
-
LDPRecover: Recovering Frequencies from Poisoning Attacks against Local Differential Privacy arXiv.cs.CR Pub Date : 2024-03-14 Xinyue Sun, Qingqing Ye, Haibo Hu, Jiawei Duan, Tianyu Wo, Jie Xu, Renyu Yang
Local differential privacy (LDP), which enables an untrusted server to collect aggregated statistics from distributed users while protecting the privacy of those users, has been widely deployed in practice. However, LDP protocols for frequency estimation are vulnerable to poisoning attacks, in which an attacker can poison the aggregated frequencies by manipulating the data sent from malicious users
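For background on the setting, here is a sketch of frequency estimation under Generalized Randomized Response (GRR), a standard LDP protocol: the server inverts the randomization to debias raw counts, and poisoning attacks skew exactly this inversion by submitting crafted reports. The parameters and data below are illustrative:

```python
import math

def grr_estimate(reports, domain, epsilon):
    """Unbiased frequency estimation under Generalized Randomized Response.

    Each user reports their true value with probability p and each other
    value with probability q; the server inverts this to debias counts."""
    k = len(domain)
    e = math.exp(epsilon)
    p = e / (e + k - 1)        # probability of reporting the true value
    q = 1.0 / (e + k - 1)      # probability of each specific other value
    n = len(reports)
    counts = {v: 0 for v in domain}
    for r in reports:
        counts[r] += 1
    return {v: (counts[v] / n - q) / (p - q) for v in domain}

# With a large epsilon (almost no noise), estimates track the
# empirical frequencies of the reports.
```

A poisoner controlling malicious users simply submits reports concentrated on a target value, and the same inversion amplifies its estimated frequency — the distortion LDPRecover aims to undo.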
-
Privacy Preserving Anomaly Detection on Homomorphic Encrypted Data from IoT Sensors arXiv.cs.CR Pub Date : 2024-03-14 Anca Hangan, Dragos Lazea, Tudor Cioara
IoT devices have become indispensable components of our lives, and the advancement of AI technologies will make them even more pervasive, increasing the vulnerability to malfunctions or cyberattacks and raising privacy concerns. Encryption can mitigate these challenges; however, most existing anomaly detection techniques decrypt the data to perform the analysis, potentially undermining the encryption
-
Graph-Based DDoS Attack Detection in IoT Systems with Lossy Network arXiv.cs.CR Pub Date : 2024-03-14 Arvin Hekmati, Bhaskar Krishnamachari
This study introduces a robust solution for the detection of Distributed Denial of Service (DDoS) attacks in Internet of Things (IoT) systems, leveraging the capabilities of Graph Convolutional Networks (GCN). By conceptualizing IoT devices as nodes within a graph structure, we present a detection mechanism capable of operating efficiently even in lossy network environments. We introduce various graph
-
Ciphertext-Only Attack on a Secure $k$-NN Computation on Cloud arXiv.cs.CR Pub Date : 2024-03-14 Shyam Murthy, Santosh Kumar Upadhyaya, Srinivas Vivek
The rise of cloud computing has spurred a trend of transferring data storage and computational tasks to the cloud. To protect confidential information such as customer data and business details, it is essential to encrypt this sensitive data before cloud storage. Implementing encryption can prevent unauthorized access, data breaches, and the resultant financial loss, reputation damage, and legal issues
-
Acoustic Side Channel Attack on Keyboards Based on Typing Patterns arXiv.cs.CR Pub Date : 2024-03-13 Alireza Taheritajar, Reza Rahaeimehr
Acoustic side-channel attacks on keyboards can bypass security measures in many systems that use keyboards as one of the input devices. These attacks aim to reveal users' sensitive information by targeting the sounds made by their keyboards as they type. Most existing approaches in this field ignore the negative impacts of typing patterns and environmental noise in their results. This paper seeks to
-
Review of Generative AI Methods in Cybersecurity arXiv.cs.CR Pub Date : 2024-03-13 Yagmur Yigit, William J Buchanan, Madjid G Tehrani, Leandros Maglaras
Large language models (LLMs) and generative artificial intelligence (GenAI) constitute paradigm shifts in cybersecurity that present hitherto unseen challenges as well as opportunities. In examining the state-of-the-art application of GenAI in cybersecurity, this work highlights how models like Google's Gemini and ChatGPT-4 potentially enhance security protocols, vulnerability assessment, and threat
-
A Comparison of SynDiffix Multi-table versus Single-table Synthetic Data arXiv.cs.CR Pub Date : 2024-03-13 Paul Francis
SynDiffix is a new open-source tool for structured data synthesis. It has anonymization features that allow it to generate multiple synthetic tables while maintaining strong anonymity. Compared to the more common single-table approach, multi-table leads to more accurate data, since only the features of interest for a given analysis need be synthesized. This paper compares SynDiffix with 15 other commercial
-
Tastle: Distract Large Language Models for Automatic Jailbreak Attack arXiv.cs.CR Pub Date : 2024-03-13 Zeguan Xiao, Yan Yang, Guanhua Chen, Yun Chen
Large language models (LLMs) have achieved significant advances recently. Extensive efforts have been made before the public release of LLMs to align their behaviors with human values. The primary goal of alignment is to ensure their helpfulness, honesty and harmlessness. However, even meticulously aligned LLMs remain vulnerable to malicious manipulations such as jailbreaking, leading to unintended
-
DONAPI: Malicious NPM Packages Detector using Behavior Sequence Knowledge Mapping arXiv.cs.CR Pub Date : 2024-03-13 Cheng Huang, Nannan Wang, Ziyan Wang, Siqi Sun, Lingzi Li, Junren Chen, Qianchong Zhao, Jiaxuan Han, Zhen Yang (Sichuan University), Lei Shi (Huawei Technologies)
With the growing popularity of modularity in software development comes the rise of package managers and language ecosystems. Among them, npm stands out as the most extensive package manager, hosting more than 2 million third-party open-source packages that greatly simplify the process of building code. However, this openness also brings security risks, as evidenced by numerous package poisoning incidents
-
Information Leakage through Physical Layer Supply Voltage Coupling Vulnerability arXiv.cs.CR Pub Date : 2024-03-12 Sahan Sanjaya, Aruna Jayasena, Prabhat Mishra
Side-channel attacks exploit variations in non-functional behaviors to expose sensitive information across security boundaries. Existing methods leverage side-channels based on power consumption, electromagnetic radiation, silicon substrate coupling, and channels created by malicious implants. Power-based side-channel attacks are widely known for extracting information from data processed within a
-
The Variant of Designated Verifier Signature Scheme with Message Recovery arXiv.cs.CR Pub Date : 2024-03-12 Hong-Sheng Huang, Yu-Lei Fu, Han-Yu Lin
In this work, we introduce a strong Designated Verifier Signature (DVS) scheme that incorporates a message recovery mechanism inspired by the concept of the Universal Designated Verifier Signature (UDVS) scheme. It is worth noting that Saeednia's strong designated verifier signature scheme fails to guarantee the privacy of the signature, making it unsuitable for certain applications such as medical
-
UniHand: Privacy-preserving Universal Handover for Small-Cell Networks in 5G-enabled Mobile Communication with KCI Resilience arXiv.cs.CR Pub Date : 2024-03-12 Rabiah Alnashwan, Prosanta Gope, Benjamin Dowling
Introducing Small Cell Networks (SCN) has significantly improved wireless link quality, spectrum efficiency and network capacity, which has been viewed as one of the key technologies in the fifth-generation (5G) mobile network. However, this technology increases the frequency of handover (HO) procedures caused by the dense deployment of cells in the network with reduced cell coverage, bringing new
-
Towards Model Extraction Attacks in GAN-Based Image Translation via Domain Shift Mitigation arXiv.cs.CR Pub Date : 2024-03-12 Di Mi, Yanjun Zhang, Leo Yu Zhang, Shengshan Hu, Qi Zhong, Haizhuan Yuan, Shirui Pan
Model extraction attacks (MEAs) enable an attacker to replicate the functionality of a victim deep neural network (DNN) model by only querying its API service remotely, posing a severe threat to the security and integrity of pay-per-query DNN-based services. Although the majority of current research on MEAs has primarily concentrated on neural classifiers, there is a growing prevalence of image-to-image
-
WannaLaugh: A Configurable Ransomware Emulator -- Learning to Mimic Malicious Storage Traces arXiv.cs.CR Pub Date : 2024-03-12 Dionysios Diamantopoulos, Roman Pletka, Slavisa Sarafijanovic, A. L. Narasimha Reddy, Haris Pozidis
Ransomware, a fearsome and rapidly evolving cybersecurity threat, continues to inflict severe consequences on individuals and organizations worldwide. Traditional detection methods, reliant on static signatures and application behavioral patterns, are challenged by the dynamic nature of these threats. This paper introduces three primary contributions to address this challenge. First, we introduce a
-
Backdoor Attack with Mode Mixture Latent Modification arXiv.cs.CR Pub Date : 2024-03-12 Hongwei Zhang, Xiaoyin Xu, Dongsheng An, Xianfeng Gu, Min Zhang
Backdoor attacks have become a significant security concern for deep neural networks in recent years. An image classification model can be compromised if malicious backdoors are injected into it. This corruption will cause the model to function normally on clean images but predict a specific target label when triggers are present. Previous research can be categorized into two genres: poisoning a portion
-
The order-theoretical foundation for data flow security arXiv.cs.CR Pub Date : 2024-03-12 Luigi Logrippo
Some theories on data flow security are based on order-theoretical concepts, most commonly on lattice concepts. This paper presents a correspondence between security concepts and partial order concepts, by which the former become an application of the latter. The formalization involves concepts of data flow, equivalence classes of entities that can access the same data, and labels. Efficient, well-known
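The core correspondence can be made concrete with the standard compartment lattice: labels are sets of compartments, the partial order is subset inclusion, join is union, and data may flow only upward in the order. This is a textbook sketch, not the paper's formalization:

```python
def can_flow(src, dst):
    """Data may flow from label `src` to label `dst` iff src ⊑ dst.
    With labels as compartment sets, ⊑ is subset inclusion."""
    return src <= dst

def join(a, b):
    # Least upper bound: the smallest label dominating both inputs.
    return a | b

secret_hr = frozenset({"HR"})
secret_fin = frozenset({"FIN"})

# HR data may flow to a {HR, FIN} context, but combined {HR, FIN}
# data may not flow back down to an HR-only context.
```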
-
A Model for Assessing Network Asset Vulnerability Using QPSO-LightGBM arXiv.cs.CR Pub Date : 2024-03-11 Xinyu Li, Yu Gu, Chenwei Wang, Peng Zhao
With the continuous development of computer and network technology, the scale of networks continues to expand, network space grows increasingly complex, and the application of computers and networks has extended deeply into politics, the military, finance, electricity, and other critical fields. Before security events occur, the vulnerability assessment of these high-risk network assets
-
Contemplating Secure and Optimal Design Practices for Information Infrastructure From a Human Factors Perspective arXiv.cs.CR Pub Date : 2024-03-09 Niroop Sugunaraj
Designing secure information infrastructure is a function of design and usability. However, security is seldom given priority when systems are being developed. Secure design practices should balance between functionality (i.e., proper design) to meet minimum requirements and user-friendliness. Design recommendations such as those with a user-centric approach (i.e., inclusive of only relevant information
-
Towards Incident Response Orchestration and Automation for the Advanced Metering Infrastructure arXiv.cs.CR Pub Date : 2024-03-11 Alexios Lekidis, Vasileios Mavroeidis, Konstantinos Fysarakis
The threat landscape of industrial infrastructures has expanded exponentially over the last few years. Such infrastructures include services such as the smart meter data exchange that should have real-time availability. Smart meters constitute the main component of the Advanced Metering Infrastructure, and their measurements are also used as historical data for forecasting the energy demand to avoid
-
Unprotected 4G/5G Control Procedures at Low Layers Considered Dangerous arXiv.cs.CR Pub Date : 2024-03-11 Norbert Ludant, Marinos Vomvas, Guevara Noubir
Over the years, several security vulnerabilities in the 3GPP cellular systems have been demonstrated in the literature. Most studies focus on higher layers of the cellular radio stack, such as the RRC and NAS, which are cryptographically protected. However, lower layers of the stack, such as PHY and MAC, are not as thoroughly studied, even though they are neither encrypted nor integrity protected.
-
Poisoning Programs by Un-Repairing Code: Security Concerns of AI-generated Code arXiv.cs.CR Pub Date : 2024-03-11 Cristina Improta
AI-based code generators have gained a fundamental role in assisting developers in writing software starting from natural language (NL). However, since these large language models are trained on massive volumes of data collected from unreliable online sources (e.g., GitHub, Hugging Face), AI models become an easy target for data poisoning attacks, in which an attacker corrupts the training data by
-
Stealing Part of a Production Language Model arXiv.cs.CR Pub Date : 2024-03-11 Nicholas Carlini, Daniel Paleka, Krishnamurthy Dj Dvijotham, Thomas Steinke, Jonathan Hayase, A. Feder Cooper, Katherine Lee, Matthew Jagielski, Milad Nasr, Arthur Conmy, Eric Wallace, David Rolnick, Florian Tramèr
We introduce the first model-stealing attack that extracts precise, nontrivial information from black-box production language models like OpenAI's ChatGPT or Google's PaLM-2. Specifically, our attack recovers the embedding projection layer (up to symmetries) of a transformer model, given typical API access. For under $20 USD, our attack extracts the entire projection matrix of OpenAI's Ada and Babbage
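One way to see why the projection layer is exposed: every logit vector is a hidden state times the projection matrix, so a stack of observed logit vectors has rank equal to the hidden width h. A toy illustration with exact arithmetic (the matrices below are made up; the real attack works from noisy API logprobs and uses SVD rather than exact elimination):

```python
from fractions import Fraction

def matrix_rank(M):
    # Exact rank via Gaussian elimination over the rationals.
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            for j in range(c, len(M[0])):
                M[i][j] -= f * M[r][j]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Toy model: hidden width h = 2, vocabulary of 5 tokens.
W = [[1, 2, 0, 3, 1],
     [0, 1, 4, 1, 2]]                                # 2x5 projection (secret)
hidden = [[1, 0], [0, 1], [1, 1], [2, 3], [5, 4]]    # states from 5 queries
logits = matmul(hidden, W)                           # what the API exposes
# The observed 5x5 logit matrix has rank h = 2, leaking the hidden width.
```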
-
Self-Sovereign Identity for Electric Vehicle Charging arXiv.cs.CR Pub Date : 2024-03-11 Adrian Kailus, Dustin Kern, Christoph Krauß
Electric Vehicles (EVs) are more and more charged at public Charge Points (CPs) using Plug-and-Charge (PnC) protocols such as the ISO 15118 standard which eliminates user interaction for authentication and authorization. Currently, this requires a rather complex Public Key Infrastructure (PKI) and enables driver tracking via the included unique identifiers. In this paper, we propose an approach for
-
Real is not True: Backdoor Attacks Against Deepfake Detection arXiv.cs.CR Pub Date : 2024-03-11 Hong Sun, Ziqiang Li, Lei Liu, Bin Li
The proliferation of malicious deepfake applications has ignited substantial public apprehension, casting a shadow of doubt upon the integrity of digital media. Despite the development of proficient deepfake detection mechanisms, they persistently demonstrate pronounced vulnerability to an array of attacks. It is noteworthy that the pre-existing repertoire of attacks predominantly comprises adversarial
-
Towards more accurate and useful data anonymity vulnerability measures arXiv.cs.CR Pub Date : 2024-03-11 Paul Francis, David Wagner
The purpose of anonymizing structured data is to protect the privacy of individuals in the data while retaining the statistical properties of the data. There is a large body of work that examines anonymization vulnerabilities. Focusing on strong anonymization mechanisms, this paper examines a number of prominent attack papers and finds several problems, all of which lead to overstating risk. First
-
DNNShield: Embedding Identifiers for Deep Neural Network Ownership Verification arXiv.cs.CR Pub Date : 2024-03-11 Jasper Stang, Torsten Krauß, Alexandra Dmitrienko
The surge in popularity of machine learning (ML) has driven significant investments in training Deep Neural Networks (DNNs). However, these models that require resource-intensive training are vulnerable to theft and unauthorized use. This paper addresses this challenge by introducing DNNShield, a novel approach for DNN protection that integrates seamlessly before training. DNNShield embeds unique identifiers
-
Asset-driven Threat Modeling for AI-based Systems arXiv.cs.CR Pub Date : 2024-03-11 Jan von der Assen, Jamo Sharif, Chao Feng, Gérôme Bovet, Burkhard Stiller
Threat modeling is a popular method to securely develop systems by achieving awareness of potential areas of future damage caused by adversaries. The benefit of threat modeling lies in its ability to indicate areas of concern, paving the way to consider mitigation during the design stage. However, threat modeling for systems relying on Artificial Intelligence is still not well explored. While conventional
-
Intra-Section Code Cave Injection for Adversarial Evasion Attacks on Windows PE Malware File arXiv.cs.CR Pub Date : 2024-03-11 Kshitiz Aryal, Maanak Gupta, Mahmoud Abdelsalam, Moustafa Saleh
Windows malware is predominantly available in cyberspace and is a prime target for deliberate adversarial evasion attacks. Although researchers have investigated the adversarial malware attack problem, a multitude of important questions remain unanswered, including (a) Are the existing techniques to inject adversarial perturbations in Windows Portable Executable (PE) malware files effective enough
-
A Zero Trust Framework for Realization and Defense Against Generative AI Attacks in Power Grid arXiv.cs.CR Pub Date : 2024-03-11 Md. Shirajum Munir, Sravanthi Proddatoori, Manjushree Muralidhara, Walid Saad, Zhu Han, Sachin Shetty
Understanding the potential of generative AI (GenAI)-based attacks on the power grid is a fundamental challenge that must be addressed in order to protect the power grid by realizing and validating risk in new attack vectors. In this paper, a novel zero trust framework for a power grid supply chain (PGSC) is proposed. This framework facilitates early detection of potential GenAI-driven attack vectors
-
Practically adaptable CPABE based Health-Records sharing framework arXiv.cs.CR Pub Date : 2024-03-11 Raza Imam, Faisal Anwer
With the recent elevated adoption of cloud services in almost every major public sector, the health sector emerges as a vulnerable segment, particularly in the exchange of sensitive health records. Determining how to retain, exchange, and efficiently use patient records without jeopardizing patient privacy, particularly in mobile applications, remains an open problem. In the existing scenarios
-
Refinement of MMIO Models for Improving the Coverage of Firmware Fuzzing arXiv.cs.CR Pub Date : 2024-03-10 Wei-Lun Huang, Kang G. Shin
Embedded systems (ESes) are now ubiquitous, collecting sensitive user data and helping the users make safety-critical decisions. Their vulnerability may thus pose a grave threat to the security and privacy of billions of ES users. Grey-box fuzzing is widely used for testing ES firmware. It usually runs the firmware in a fully emulated environment for efficient testing. In such a setting, the fuzzer
-
ABC-Channel: An Advanced Blockchain-based Covert Channel arXiv.cs.CR Pub Date : 2024-03-10 Xiaobo Ma, Pengyu Pan, Jianfeng Li, Wei Wang, Weizhi Meng, Xiaohong Guan
Establishing efficient and robust covert channels is crucial for secure communication within insecure network environments. With its inherent benefits of decentralization and anonymization, blockchain has gained considerable attention in developing covert channels. To guarantee a highly secure covert channel, channel negotiation should be contactless before the communication, carrier transaction features
-
Fluent: Round-efficient Secure Aggregation for Private Federated Learning arXiv.cs.CR Pub Date : 2024-03-10 Xincheng Li, Jianting Ning, Geong Sen Poh, Leo Yu Zhang, Xinchun Yin, Tianwei Zhang
Federated learning (FL) facilitates collaborative training of machine learning models among a large number of clients while safeguarding the privacy of their local datasets. However, FL remains susceptible to vulnerabilities such as privacy inference and inversion attacks. Single-server secure aggregation schemes were proposed to address these threats. Nonetheless, they encounter practical constraints
-
FedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning arXiv.cs.CR Pub Date : 2024-03-10 Zhuo Zhang, Jingyuan Zhang, Jintao Huang, Lizhen Qu, Hongzhi Zhang, Zenglin Xu
Instruction tuning has proven essential for enhancing the performance of large language models (LLMs) in generating human-aligned responses. However, collecting diverse, high-quality instruction data for tuning poses challenges, particularly in privacy-sensitive domains. Federated instruction tuning (FedIT) has emerged as a solution, leveraging federated learning from multiple data owners while preserving
-
SecureRights: A Blockchain-Powered Trusted DRM Framework for Robust Protection and Asserting Digital Rights arXiv.cs.CR Pub Date : 2024-03-10 Tiroshan Madushanka, Dhammika S. Kumara, Atheesh A. Rathnaweera
In the dynamic realm of digital content, safeguarding intellectual property rights poses critical challenges. This paper presents "SecureRights," an innovative Blockchain-based Trusted Digital Rights Management (DRM) framework. It strengthens the defence against unauthorized use and streamlines the claim of digital rights. Utilizing blockchain, digital watermarking, perceptual hashing, Quick Response
-
Federated Learning: Attacks, Defenses, Opportunities, and Challenges arXiv.cs.CR Pub Date : 2024-03-10 Ghazaleh Shirvani, Saeid Ghasemshirazi, Behzad Beigzadeh
Using dispersed data and training, federated learning (FL) moves AI capabilities to edge devices or performs tasks locally. Many consider FL the start of a new era in AI, yet it is still immature. FL has not garnered the community's trust since its security and privacy implications are controversial. FL's security and privacy concerns must be discovered, analyzed, and recorded before widespread usage and
-
MirrorAttack: Backdoor Attack on 3D Point Cloud with a Distorting Mirror arXiv.cs.CR Pub Date : 2024-03-09 Yuhao Bian, Shengjing Tian, Xiuping Liu
The widespread deployment of Deep Neural Networks (DNNs) for 3D point cloud processing starkly contrasts with their susceptibility to security breaches, notably backdoor attacks. These attacks hijack DNNs during training, embedding triggers in the data that, once activated, cause the network to make predetermined errors while maintaining normal performance on unaltered data. This vulnerability poses
-
Hufu: A Modality-Agnostic Watermarking System for Pre-Trained Transformers via Permutation Equivariance arXiv.cs.CR Pub Date : 2024-03-09 Hengyuan Xu, Liyao Xiang, Xingjun Ma, Borui Yang, Baochun Li
With the blossom of deep learning models and services, it has become an imperative concern to safeguard the valuable model parameters from being stolen. Watermarking is considered an important tool for ownership verification. However, current watermarking schemes are customized for different models and tasks, making them hard to integrate into a unified intellectual property protection service. We propose Hufu, a
-
Privacy-Preserving Diffusion Model Using Homomorphic Encryption arXiv.cs.CR Pub Date : 2024-03-09 Yaojian Chen, Qiben Yan
In this paper, we introduce a privacy-preserving stable diffusion framework leveraging homomorphic encryption, called HE-Diffusion, which primarily focuses on protecting the denoising phase of the diffusion process. HE-Diffusion is a tailored encryption framework specifically designed to align with the unique architecture of stable diffusion, ensuring both privacy and functionality. To address the
-
Inception Attacks: Immersive Hijacking in Virtual Reality Systems arXiv.cs.CR Pub Date : 2024-03-08 Zhuolin Yang, Cathy Yuanchen Li, Arman Bhalla, Ben Y. Zhao, Haitao Zheng
Recent advances in virtual reality (VR) systems provide fully immersive interactions that connect users with online resources, applications, and each other. Yet these immersive interfaces can make it easier for users to fall prey to a new type of security attack. We introduce the inception attack, where an attacker controls and manipulates a user's interaction with their VR environment and applications