Information Reuse and Security - In software engineering, addressing security requirements is considered of paramount importance. In most cases, however, security requirements for a system are treated as non-functional requirements (NFRs) and are addressed at the very end of the software development life cycle. The increasing number of security incidents in software systems around the world has made researchers and developers rethink and consider this issue at an earlier stage. An important and essential step in this process is the elicitation of relevant security requirements. In a recent work, Imtiaz et al. proposed a framework for creating a mapping between existing requirements and the vulnerabilities associated with them. The idea is that this mapping can be used by developers to predict potential vulnerabilities associated with new functional requirements and capture security requirements to avoid these vulnerabilities. However, to what extent such existing vulnerability information can be useful in security requirements elicitation is still an open question. In this paper, we design a human subject study to answer this question. We also present the results of a pilot study and discuss their implications. Preliminary results show that existing vulnerability information can be a useful resource in eliciting security requirements, laying the groundwork for a full-scale study.
Authored by Md Amin, Tanmay Bhowmik
Information Reuse and Security - New malware increasingly adopts novel fileless techniques to evade detection by antivirus programs. Process injection is one of the most popular fileless attack techniques. It makes malware stealthier by writing malicious code into the memory space of a host process and reusing that process's name and port. The stealthiness of this behavior makes it difficult for traditional security software to detect and intercept process injections. We propose a novel framework called ProcGuard for detecting process injection behaviors. The framework collects sensitive function call information typical of process injection. We then perform a fine-grained analysis of process injection behavior based on the function call chain characteristics of the program, and use an improved RCNN network to enhance API analysis on the tampered memory segments. We combine API analysis with deep learning to determine whether a process injection attack has been executed. We collect a large number of malicious samples with process injection behavior and construct a dataset for evaluating the effectiveness of ProcGuard. The experimental results demonstrate that it achieves an accuracy of 81.58% with a lower false-positive rate than other systems. We also evaluate ProcGuard's detection time and runtime performance loss, both of which improve on previous detection tools.
Authored by Juan Wang, Chenjun Ma, Ziang Li, Huanyu Yuan, Jie Wang
Information Reuse and Security - In Production System Engineering (PSE), domain experts from different disciplines reuse assets such as products, production processes, and resources. Therefore, PSE organizations aim at establishing reuse across engineering disciplines. However, the coordination of multi-disciplinary reuse tasks, e.g., the re-validation of related assets after changes, is hampered by the coarse-grained representation of tasks and by scattered, heterogeneous domain knowledge. This paper introduces the Multi-disciplinary Reuse Coordination (MRC) artifact to improve task management for multi-disciplinary reuse. For assets and their properties, the MRC artifact describes sub-tasks with progress and result states to provide references for detailed reuse task management across engineering disciplines. In a feasibility study on a typical robot cell in automotive manufacturing, we investigate the effectiveness of task management with the MRC artifact compared to traditional approaches. Results indicate that the MRC artifact is feasible and provides effective capabilities for coordinating multi-disciplinary re-validation after changes.
Authored by Kristof Meixner, Jürgen Musil, Arndt Lüder, Dietmar Winkler, Stefan Biffl
Information Reuse and Security - Evaluating the security gains brought by software diversity is a key issue in software diversity research, but existing evaluation methods are generally based on conventional code features and are relatively one-dimensional, making it difficult to accurately reflect the security gains that diversity provides. To address these problems, we present, from the perspective of return-oriented programming (ROP) attacks, a software diversity evaluation method that integrates metrics for the quality and distribution of gadgets. Based on the proposed evaluation method and the SpiderMonkey JavaScript engine, we implement a software diversity evaluation system for both compiled and scripting languages. Diversity techniques of different granularities are used for testing. The evaluation results show that the proposed method can accurately and comprehensively reflect the security gains brought by software diversity.
Authored by Genlin Xie, Guozhen Cheng, Hao Liang, Qingfeng Wang, Benwei He
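The abstract does not specify the paper's concrete gadget metrics, but a minimal sketch of what quality/distribution-style measurements over gadget addresses could look like (the addresses and region size below are hypothetical, not from the paper):

```python
import math
from collections import Counter

def gadget_survival(before, after):
    """Fraction of gadget addresses that remain valid after diversification."""
    return len(set(before) & set(after)) / len(set(before))

def gadget_entropy(addresses, region_size=0x1000):
    """Shannon entropy of gadget placement over fixed-size code regions:
    in this toy metric, higher entropy means gadgets are spread more
    evenly, which makes them harder to locate and chain."""
    regions = Counter(addr // region_size for addr in addresses)
    total = sum(regions.values())
    return -sum((n / total) * math.log2(n / total) for n in regions.values())

# hypothetical gadget addresses before and after a diversity transform
original    = [0x401000, 0x401020, 0x402100, 0x403500]
diversified = [0x401020, 0x40a100, 0x40b500, 0x40c000]

survival = gadget_survival(original, diversified)  # only 0x401020 survives
entropy = gadget_entropy(diversified)
```

A real evaluation system would extract gadget addresses with a tool such as ROPgadget and combine several such metrics; this sketch only shows the shape of the computation.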
Information Reuse and Security - Common Vulnerabilities and Exposures (CVE) databases contain information about vulnerabilities in software products and source code. If the individual elements of CVE descriptions can be extracted and structured, the data can be used to search and analyze CVE descriptions. Herein we propose a method to label each element in CVE descriptions by applying Named Entity Recognition (NER). For NER, we used BERT, a transformer-based natural language processing model. NER with machine learning can label information in CVE descriptions even when the data contains some distortions. An experiment involving manually prepared label information for 1,000 CVE descriptions shows that the labeling accuracy of the proposed method is about 0.81 precision and about 0.89 recall. In addition, we devise a way to train the model by dividing the data by label. Our proposed method can be used to label each element of CVE descriptions automatically.
Authored by Kensuke Sumoto, Kenta Kanakogi, Hironori Washizaki, Naohiko Tsuda, Nobukazu Yoshioka, Yoshiaki Fukazawa, Hideyuki Kanuka
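As a hedged illustration of the labeling scheme such a NER pipeline typically targets (the BERT model itself is omitted; the description, entity phrases, and label names below are hypothetical, not the paper's), BIO tags can be assigned to CVE description tokens like this:

```python
def bio_tags(tokens, entities):
    """Assign BIO tags to a token list given entity annotations.
    `entities` maps a label name to a list of entity phrases, each
    phrase being a tuple of tokens. Unmatched tokens stay "O"."""
    tags = ["O"] * len(tokens)
    for label, phrases in entities.items():
        for phrase in phrases:
            n = len(phrase)
            for i in range(len(tokens) - n + 1):
                if tuple(tokens[i:i + n]) == tuple(phrase) and tags[i] == "O":
                    tags[i] = f"B-{label}"          # beginning of entity
                    for j in range(i + 1, i + n):
                        tags[j] = f"I-{label}"      # inside of entity
    return tags

# hypothetical CVE-style description and annotations
desc = ("Buffer overflow in Foo Server 2.1 allows remote attackers "
        "to execute arbitrary code").split()
entities = {
    "VULN": [("Buffer", "overflow")],
    "PRODUCT": [("Foo", "Server")],
    "VERSION": [("2.1",)],
}
tags = bio_tags(desc, entities)
```

A BERT token classifier would then be trained to predict these tags from the tokens; the sketch covers only the label-preparation side.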
Information Reuse and Security - The successive approximation register analog-to-digital converter (SAR ADC) is widely adopted in Internet of Things (IoT) systems due to its simple structure and high energy efficiency. Unfortunately, a SAR ADC dissipates varied and distinctive power signatures as it converts different input signals, leaving it severely vulnerable to power side-channel attack (PSA). An adversary can accurately derive the input signal by merely measuring the power information from the analog supply pin (AVDD), digital supply pin (DVDD), and/or reference pin (Ref) and feeding it to trained machine learning models. This paper first presents a detailed mathematical analysis of PSA on the SAR ADC, concluding that the power information from AVDD is the most vulnerable to PSA compared with the other supply pins. Then, an LSB-reused protection technique is proposed, which utilizes the characteristics of the LSB of the SAR ADC itself to protect against PSA. Lastly, the technique is verified in a 12-bit 5 MS/s secure SAR ADC implemented in 65nm technology. Using the current waveform from AVDD, the adopted convolutional neural network (CNN) algorithms achieve >99% prediction accuracy from LSB to MSB in the unprotected SAR ADC. With the proposed protection, the bit-wise accuracy drops to around 50%.
Authored by Lele Fang, Jiahao Liu, Yan Zhu, Chi-Hang Chan, Rui Martins
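A toy model of the successive-approximation loop helps explain why the conversion leaks: each cycle resolves one bit, MSB first, and the bit-dependent capacitor switching draws input-correlated power. This sketch is illustrative only and does not model the paper's LSB-reused protection:

```python
def sar_convert(vin, vref=1.0, bits=12):
    """Toy successive-approximation conversion: each cycle compares the
    input against the current DAC level and keeps or drops one bit,
    MSB first. In hardware, each kept/dropped bit switches different
    capacitors, which is the input-correlated power a PSA observes."""
    code = 0
    for i in range(bits - 1, -1, -1):
        trial = code | (1 << i)
        # comparator decision: keep the bit if the DAC level <= vin
        if (trial / (1 << bits)) * vref <= vin:
            code = trial
    return code

mid_code = sar_convert(0.5, vref=1.0, bits=12)   # mid-scale input
zero_code = sar_convert(0.0, vref=1.0, bits=12)  # zero input
```

Because the sequence of kept bits is a deterministic function of the input, a model trained on per-cycle supply current can reconstruct the input bit by bit, which is the attack the paper defends against.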
Information Reuse and Security - With the development of cloud computing, more and more people use cloud computing for all kinds of tasks. For cloud computing, however, the most important thing is to ensure the stability of user data while improving security. Cloud computing makes extensive use of technical means such as computing virtualization, storage system virtualization, and network system virtualization; it abstracts the underlying physical facilities into unified external interfaces, maps several virtual networks with different topologies onto the underlying infrastructure, and provides differentiated services to external users. Virtualization technology will therefore be the main way to address cloud computing security. Virtualization introduces a virtual layer between software and hardware, provides an independent running environment for applications, shields the dynamics, distribution, and differences of hardware platforms, supports the sharing and reuse of hardware resources, provides each user with an independent and isolated computing environment, and facilitates the efficient and dynamic management and maintenance of the software and hardware resources of the whole system. Applying virtualization technology to cloud security reduces the hardware and management costs of "cloud security" enterprises to a certain extent, and likewise improves the security of "cloud security" technology. This paper outlines basic cloud computing security methods and focuses on the analysis of virtualized cloud security technology.
Authored by Jiaxing Zhang
Information Reuse and Security - Code-reuse attacks (including ROP/JOP) severely threaten computer security. Control-flow integrity (CFI), which restricts control flow to a legal scope, is recognised as an effective defence mechanism against code-reuse attacks. Hardware-based CFI uses Instruction Set Architecture (ISA) extensions with additional hardware modules to implement CFI and achieve better performance. However, hardware-based fine-grained CFI adds new instructions to the ISA, which cannot be executed on older processors and breaks program compatibility. Some coarse-grained CFI designs, such as Intel IBT, maintain program compatibility but cannot provide sufficient security guarantees. To balance the security and compatibility of hardware CFI, we propose Transparent Forward CFI (TFCFI). TFCFI implements hardware-based fine-grained CFI without changing the ISA. The software modification of TFCFI utilizes address information and hint instructions in RISC-V as transparent labels to mark the program. The hardware module of TFCFI monitors the control flow during execution. Programs modified by TFCFI can still be executed on older processors without TFCFI. Benefiting from transparent labels, TFCFI also solves the destination equivalence problem. Experiments on an FPGA show that TFCFI incurs negligible performance overhead (1.82% on average).
Authored by Cairui She, Liwei Chen, Gang Shi
Information Reuse and Security - With the development of software-defined networking and network function virtualization, network operators can deploy service function chains (SFCs) more flexibly than before to provide network security services according to the security requirements of business systems. At present, most research on verifying the correctness of an SFC checks whether the logical sequence between service functions (SFs) in the SFC is correct before deployment; there is less research on verifying correctness after SFC deployment. Therefore, this paper proposes a method that uses Colored Petri Nets (CPNs) to establish a verification model offline and then verifies whether each SF in the SFC is deployed correctly after online deployment. Once the SFC deployment is completed, the deployment information is obtained online and fed into the established model for verification. The experimental results show that the proposed method can effectively verify whether each SF in the deployed SFC is deployed correctly. In this process, the correctness of the SF models is verified using SF models from a model library, and model reuse techniques are preliminarily discussed.
Authored by Zhenyu Liu, Xuanyu Lou, Yajun Cui, Yingdong Zhao, Hua Li
Information Reuse and Security - At present, code-reuse attacks such as Return-Oriented Programming (ROP) execute attacks through the code of the application itself, bypassing traditional defense mechanisms and seriously threatening the security of computer software. Of the two existing mainstream defense mechanisms, Address Space Layout Randomization (ASLR) is vulnerable to information disclosure attacks, while Control-Flow Integrity (CFI) imposes high overhead on programs. At the same time, software of unknown origin is widely used, with no source code provided or available, so it is not always possible to secure the source code. In this paper, we propose FRCFI, an effective method based on binary rewriting to prevent code-reuse attacks. FRCFI first disrupts the program's memory layout through function shuffling and NOP insertion, then verifies the execution of the control-flow branch instruction ret and indirect call/jmp instructions to ensure that the target address has not been modified by attackers. Experiments show that FRCFI can effectively defend against code-reuse attacks. After randomization, the survival rate of gadgets is only 1.7%, and FRCFI adds on average 6.1% runtime overhead on SPEC CPU2006 benchmark programs.
Authored by Benwei He, Yunfei Guo, Hao Liang, Qingfeng Wang, Genlin Xie
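As an illustrative sketch only (not FRCFI's actual binary-rewriting implementation), function shuffling with NOP padding can be simulated to show how randomization relocates code, invalidating any gadget addresses an attacker hard-coded; the function names and sizes below are hypothetical:

```python
import random

def diversify(functions, seed):
    """Toy layout randomization: shuffle function order and insert a
    random amount of NOP padding before each function, returning the
    new base offset of every function. Any address an attacker learned
    from one layout is stale in another."""
    rng = random.Random(seed)
    order = list(functions)
    rng.shuffle(order)
    layout, offset = {}, 0
    for name in order:
        offset += rng.randrange(0, 16)  # padding NOPs shift addresses
        layout[name] = offset
        offset += functions[name]       # advance by function size
    return layout

# hypothetical functions (name -> size in bytes)
functions = {"main": 120, "parse": 64, "crypto": 200, "log": 48}
layout_a = diversify(functions, seed=1)
layout_b = diversify(functions, seed=2)
```

Real binary rewriting must also fix up relocations and verify indirect branches, which this toy omits entirely.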
Intrusion Intolerance - Low Power Wide Area Networks (LPWAN) offer a promising wireless communications technology for Internet of Things (IoT) applications. Among various existing LPWAN technologies, Long-Range WAN (LoRaWAN) consumes minimal power and provides virtual channels for communication through spreading factors. However, LoRaWAN suffers from the interference problem among nodes connected to a gateway that uses the same spreading factor. Such interference increases data communication time, thus reducing data freshness and suitability of LoRaWAN for delay-sensitive applications. To minimize the interference problem, an optimal allocation of the spreading factor is requisite for determining the time duration of data transmission. This paper proposes a game-theoretic approach to estimate the time duration of using a spreading factor that ensures on-time data delivery with maximum network utilization. We incorporate the Age of Information (AoI) metric to capture the freshness of information as demanded by the applications. Our proposed approach is validated through simulation experiments, and its applicability is demonstrated for a crop protection system that ensures real-time monitoring and intrusion control of animals in an agricultural field. The simulation and prototype results demonstrate the impact of the number of nodes, AoI metric, and game-theoretic parameters on the performance of the IoT network.
Authored by Preti Kumari, Hari Gupta, Tanima Dutta, Sajal Das
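The abstract's point that the spreading factor determines data transmission time can be made concrete with the standard LoRa time-on-air formula from the Semtech SX127x documentation; the payload size and parameters below are illustrative:

```python
import math

def lora_airtime(payload_bytes, sf, bw=125e3, cr=1, preamble=8,
                 crc=True, implicit_header=False):
    """LoRa packet time-on-air in seconds, per the Semtech SX127x
    formula. Each increment of the spreading factor doubles the symbol
    time, so packets occupy the channel longer and the age of
    information (AoI) of delivered data grows."""
    # low data rate optimization is mandated for SF11/12 at 125 kHz
    low_dr_opt = sf >= 11 and bw == 125e3
    t_sym = (2 ** sf) / bw
    num = (8 * payload_bytes - 4 * sf + 28
           + 16 * int(crc) - 20 * int(implicit_header))
    payload_symbols = 8 + max(
        math.ceil(num / (4 * (sf - 2 * int(low_dr_opt)))) * (cr + 4), 0)
    return (preamble + 4.25) * t_sym + payload_symbols * t_sym

t_sf7 = lora_airtime(20, sf=7)    # ~56.6 ms for a 20-byte payload
t_sf12 = lora_airtime(20, sf=12)  # ~1.32 s for the same payload
```

The roughly 23x gap between SF7 and SF12 for the same payload is why an allocation scheme such as the proposed game-theoretic one must trade range (higher SF) against data freshness.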
Intrusion Intolerance - While our society accelerates its transition to the Internet of Things, billions of IoT devices are now linked to the network. While these gadgets provide enormous convenience, they generate an amount of data that has already exceeded the network's capacity. To make matters worse, the data acquired by sensors on such IoT devices includes sensitive user data that must be handled appropriately. At the moment, the answer is to provide hub services for data storage in data centers. However, when data is housed in a centralized data center, data owners lose control of it, since data centers are centralized solutions that rely on the owners' faith in the service provider. In addition, although edge computing enables edge devices to collect, analyze, and act closer to the data source, data privacy near the edge is also a tough nut to crack. Numerous leaks of user information at both IoT hubs and the edge have made such systems untrusted all along. Accordingly, building a decentralized IoT system near the edge and bringing real trust to the edge is indispensable and significant. To eliminate the need for a centralized data hub, we present a prototype of a unique, secure, and decentralized IoT framework called Reja, which is built on a permissioned Blockchain and an intrusion-tolerant messaging system, ChiosEdge, whose critical components are reliable broadcast and BFT consensus. We evaluated the latency and throughput of Reja and its sub-module ChiosEdge.
Authored by Yusen Wu, Jinghui Liao, Phuong Nguyen, Weisong Shi, Yelena Yesha
Intrusion Intolerance - The cascaded multi-level inverter (CMI) is becoming increasingly popular for a wide range of applications in the power-electronics-dominated grid (PEDG). The increased number of semiconductor devices in this class of power converters leads to an increased need for fault detection, isolation, and self-healing. In addition, the PEDG's cyber and physical layers are exposed to malicious attacks. These malicious actions, if not detected and classified in a timely manner, can cause catastrophic events in the power grid. The inverters' internal failures make anomaly detection and classification in the PEDG a challenging task. The main objective of this paper is to address this challenge by implementing a recurrent neural network (RNN), specifically long short-term memory (LSTM), to detect and classify internal failures in the CMI and distinguish them from malicious activities in the PEDG. The proposed anomaly classification framework is a module in the primary control layer of the inverters that can provide information to intrusion detection systems in the secondary control layer of the PEDG for further analysis.
Authored by Matthew Baker, Hassan Althuwaini, Mohammad Shadmand
Intrusion Intolerance - In the world of increasing cyber threats, a compromised protective relay can put power grid resilience at risk by irreparably damaging costly power assets or by causing significant disruptions. We present the first architecture and protocols for the substation that ensure correct protective relay operation in the face of successful relay intrusions and network attacks while meeting the required latency constraint of a quarter power cycle (4.167ms). Our architecture supports other rigid requirements, including continuous availability over a long system lifetime and seamless substation integration. We evaluate our implementation in a range of fault-free and faulty operation conditions, and provide deployment tradeoffs.
Authored by Sahiti Bommareddy, Daniel Qian, Christopher Bonebrake, Paul Skare, Yair Amir
Intrusion Intolerance - Compound threats involving cyberattacks that are targeted in the aftermath of a natural disaster pose an important emerging threat for critical infrastructure. We introduce a novel compound threat model and data-centric framework for evaluating the resilience of power grid SCADA systems to such threats. We present a case study of a compound threat involving a hurricane and follow-on cyberattack on Oahu, Hawaii, and analyze the ability of existing SCADA architectures to withstand this threat model. We show that no existing architecture fully addresses this threat model, and demonstrate the importance of considering compound threats in planning system deployments.
Authored by Sahiti Bommareddy, Benjamin Gilby, Maher Khan, Imes Chiu, Mathaios Panteli, John van de Lindt, Linton Wells, Yair Amir, Amy Babay
Intrusion Intolerance - The Time-Triggered Architecture (TTA) presents a blueprint for building safe and real-time constrained distributed systems, based on a set of orthogonal concepts that make extensive use of the availability of a globally consistent notion of time and a priori knowledge of events. Although the TTA tolerates arbitrary failures of any of its nodes by architectural means (active node replication, a membership service, and bus guardians), the design of these means considers only accidental faults. However, distributed safety- and real-time critical systems have been evolving into more open and interconnected systems, operating autonomously for prolonged times and interfacing with other, possibly non-real-time systems. Therefore, the existence of vulnerabilities that adversaries may exploit to compromise system safety cannot be ruled out. In this paper, we discuss potential targeted attacks capable of bypassing TTA's fault-tolerance mechanisms and demonstrate how two well-known recovery techniques - proactive and reactive rejuvenation - can be incorporated into TTA to reduce the window of vulnerability for attacks without introducing extensive and costly changes.
Authored by Mohammad Alkoudsi, Gerhard Fohler, Marcus Völp
Intrusion Intolerance - Container-based virtualization has gained momentum over the past few years thanks to its lightweight nature and support for agility. However, its appealing features come at the price of a reduced isolation level compared to the traditional host-based virtualization techniques, exposing workloads to various faults, such as co-residency attacks like container escape. In this work, we propose to leverage the automated management capabilities of containerized environments to derive a Fault and Intrusion Tolerance (FIT) framework based on error detection-recovery and fault treatment. Namely, we aim at deriving a specification-based error detection mechanism at the host level to systematically and formally capture security state errors indicating breaches potentially caused by malicious containers. Although the paper focuses on security side use cases, results are logically extendable to accidental faults. Our aim is to immunize the target environments against accidental and malicious faults and preserve their core dependability and security properties.
Authored by Taous Madi, Paulo Esteves-Verissimo
Intrusion Intolerance - Redundant execution is one of the most effective ways to improve the safety and reliability of computer systems. By rationally configuring redundant resources, adding components with identical functions, and using well-defined redundant execution logic to coordinate efficient synchronous execution, high availability of the machine and system can be ensured. Fault tolerance is built on redundant execution and is the primary method for dealing with system hardware failures. Recently, multi-threaded redundancy has driven the continuous development of fault-tolerance technology, gradually reducing the processing granularity at which the system tolerates random failures. At the same time, intrusion-tolerance technology has developed alongside the emergence of multi-variant execution, which mainly uses the idea of dynamic heterogeneous redundancy to construct a set of variants with equivalent functions but different structures in order to detect and handle threats from outside the system. We summarize the critical redundant-execution technologies for achieving fault tolerance and intrusion tolerance in recent years, trace the role of redundant execution in the evolution from fault-tolerance to intrusion-tolerance technology, classify redundant-execution technologies at different levels, and finally point out the prospects of redundant execution in multiple application fields and future technical research directions.
Authored by Zijing Liu, Zheng Zhang, Ruicheng Xi, Pengzhe Zhu, Bolin Ma
Intrusion Intolerance - Network intrusion detection technology has been developing for more than ten years, but because network intrusions are complex and variable, it is impossible to fully characterize network intrusion behaviour. Combined with research on intrusion detection for cluster systems, network security intrusion detection and mass alarming are realized. Method: This article starts with the intrusion detection system, introducing its classification and workflow. The structure and working principle of an intrusion detection system based on protocol analysis are analysed in detail. Results: Using the existing network intrusion detection setup in our network laboratory, a SYN flood attack was successfully detected, which verified the flexibility, accuracy, and high reliability of the protocol analysis technique. Conclusion: The high-performance cluster-computing platform designed in this paper is already available. Future work will focus on strengthening the functions of the cluster-computing platform, enhancing its stability, and improving and optimizing its fault-tolerance mechanism.
Authored by Feng Li, Fei Shu, Mingxuan Li, Bin Wang
Malware Analysis and Graph Theory - A reliable database of Indicators of Compromise (IoCs) is a cornerstone of almost every malware detection system. Building the database and keeping it up to date is a lengthy and often manual process in which each IoC must be reviewed and labeled by an analyst. In this paper, we focus on an automatic way of identifying IoCs, intended to save analysts' time and scale to the volume of network data. We leverage the relations of each IoC to other entities on the internet to build a heterogeneous graph. We formulate a classification task on this graph and apply graph neural networks (GNNs) to identify malicious domains. Our experiments show that the presented approach provides promising results on identifying high-risk malware as well as on classifying legitimate domains.
Authored by Stepan Dvorak, Pavel Prochazka, Lukas Bajer
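A minimal sketch of the underlying idea, assuming hypothetical domain/IP co-occurrence edges: relations between entities form a heterogeneous graph, and a simple neighbor-averaging step (a stand-in for a trained GNN layer, not the paper's model) propagates maliciousness from known IoCs to related entities:

```python
from collections import defaultdict

def build_graph(relations):
    """Heterogeneous graph as adjacency sets: edges link domains to the
    entities (IPs, registrars, clients, ...) they are observed with."""
    adj = defaultdict(set)
    for a, b in relations:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def aggregate(adj, scores):
    """One message-passing step: each node's new score is the mean of
    its neighbors' maliciousness scores."""
    return {node: sum(scores.get(n, 0.0) for n in nbrs) / len(nbrs)
            for node, nbrs in adj.items()}

# hypothetical observations: (entity, entity) co-occurrence edges
relations = [
    ("bad.example", "203.0.113.7"),
    ("shady.example", "203.0.113.7"),
    ("benign.example", "198.51.100.1"),
]
adj = build_graph(relations)
seed_scores = {"bad.example": 1.0}     # one known IoC
step1 = aggregate(adj, seed_scores)    # IP inherits suspicion
step2 = aggregate(adj, step1)          # co-hosted domain inherits it too
```

After two steps, the domain sharing an IP with the known IoC is flagged while the unrelated domain stays at zero; a GNN learns the aggregation weights instead of using a plain mean.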
Malware Analysis and Graph Theory - The rapidly increasing number of malware threats must be met with new, effective malware detection methodologies. Current malware threats are not limited to daily personal transactions but dwell deep within large enterprises and organizations. This paper introduces a new methodology for detecting and discriminating malicious from normal applications. We employed ant colony optimization to generate two behavioural graphs that characterize the difference in execution behavior between malware and normal applications. Our proposed approach relies on the API call sequence generated when an application is executed; API calls are among the most widely used features in dynamic malware analysis. Our proposed method revealed distinctive behavioral differences between malicious and non-malicious applications, and our experimental results showed performance competitive with other machine learning methods. We can therefore employ our method as an efficient technique for capturing malicious applications.
Authored by Eslam Amer, Adham Samir, Hazem Mostafa, Amer Mohamed, Mohamed Amin
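A hedged sketch of the ant-colony flavor of such behavioural graphs (the paper's exact construction is not given in the abstract; the API traces below are hypothetical): observed API call transitions deposit pheromone on graph edges, while evaporation discounts edges seen only in earlier traces, so edges common to many executions dominate the graph:

```python
from collections import defaultdict

def build_behavior_graph(traces, evaporation=0.5):
    """Pheromone-weighted API transition graph (ant-colony-style):
    each observed API-to-API transition deposits pheromone on its edge,
    and before each new trace all existing pheromone evaporates by a
    fixed rate, emphasizing recurring behavior."""
    pheromone = defaultdict(float)
    for trace in traces:
        for key in list(pheromone):
            pheromone[key] *= (1 - evaporation)   # evaporate old trails
        for src, dst in zip(trace, trace[1:]):
            pheromone[(src, dst)] += 1.0          # deposit on transition
    return dict(pheromone)

# hypothetical dynamic-analysis traces of a process-injection pattern
malware_traces = [
    ["OpenProcess", "VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread"],
    ["OpenProcess", "VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread"],
]
graph = build_behavior_graph(malware_traces)
```

Building one such graph from malware traces and one from benign traces gives the two contrasting behavioural graphs the abstract describes; a classifier can then compare a new trace against both.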
Malware Analysis and Graph Theory - This paper provides an in-depth analysis of Android malware that bypassed the strictest defenses of the Google Play application store and penetrated the official Android market between January 2016 and July 2021. We systematically identified 1,238 such malicious applications, grouped them into 134 families, and manually analyzed one application from 105 distinct families. During our manual analysis, we identified malicious payloads the applications execute, conditions guarding execution of the payloads, hiding techniques applications employ to evade detection by the user, and other implementation-level properties relevant for automated malware detection. As most applications in our dataset contain multiple payloads, each triggered via its own complex activation logic, we also contribute a graph-based representation showing activation paths for all application payloads in form of a control- and data-flow graph. Furthermore, we discuss the capabilities of existing malware detection tools, put them in context of the properties observed in the analyzed malware, and identify gaps and future research directions. We believe that our detailed analysis of the recent, evasive malware will be of interest to researchers and practitioners and will help further improve malware detection tools.
Authored by Michael Cao, Khaled Ahmed, Julia Rubin
Malware Analysis and Graph Theory - With the ever-increasing threat of malware, extensive research effort has been put into applying deep learning to malware classification tasks. Graph Neural Networks (GNNs) that process malware as Control Flow Graphs (CFGs) have shown great promise for malware classification. However, these models are viewed as black boxes, which makes it hard to validate them and identify malicious patterns. To that end, we propose CFGExplainer, a deep learning based model for interpreting GNN-oriented malware classification results. CFGExplainer identifies the subgraph of the malware CFG that contributes most to classification and provides insight into the importance of the nodes (i.e., basic blocks) within it. To the best of our knowledge, CFGExplainer is the first work that explains GNN-based malware classification. We compared CFGExplainer against three explainers, namely GNNExplainer, SubgraphX, and PGExplainer, and showed that CFGExplainer identifies top equisized subgraphs with higher classification accuracy than the other three models.
Authored by Jerome Herath, Priti Wakodikar, Ping Yang, Guanhua Yan
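A minimal sketch of the interpretation step such an explainer produces, assuming node importance scores have already been learned (the CFG, basic-block names, and scores below are hypothetical): keep the k most important basic blocks and the CFG edges among them as an equisized explanatory subgraph:

```python
def top_subgraph(cfg_edges, importance, k):
    """Greedy explanation step: select the k highest-importance basic
    blocks and induce the subgraph over them. Real explainers learn
    the importance scores; here they are given."""
    keep = set(sorted(importance, key=importance.get, reverse=True)[:k])
    edges = [(u, v) for u, v in cfg_edges if u in keep and v in keep]
    return keep, edges

# hypothetical CFG over basic blocks, with learned importance scores
cfg_edges = [("entry", "decrypt"), ("decrypt", "inject"),
             ("inject", "exit"), ("entry", "exit")]
importance = {"entry": 0.1, "decrypt": 0.9, "inject": 0.8, "exit": 0.2}

nodes, edges = top_subgraph(cfg_edges, importance, k=2)
```

The retained `decrypt -> inject` fragment is the kind of compact, human-inspectable pattern that lets an analyst validate what the GNN keyed on.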
Malware Analysis and Graph Theory - The open set recognition (OSR) problem has been a challenge in many machine learning (ML) applications, such as security. As new and unknown malware families occur regularly, it is difficult to exhaust samples covering all classes for the training process in ML systems. An advanced malware classification system should classify the known classes correctly while remaining sensitive to the unknown class. In this paper, we introduce a self-supervised pre-training approach for the OSR problem in malware classification. We propose two transformations of function call graph (FCG) based malware representations to facilitate the pretext task. We also present a statistical thresholding approach to find the optimal threshold for the unknown class. The experimental results indicate that our proposed pre-training process improves the performance of different downstream loss functions on the OSR problem.
Authored by Jingyun Jia, Philip Chan
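The statistical thresholding idea can be sketched as follows, under the assumption (not stated in the abstract) that the score is something like maximum softmax confidence measured on held-out known-class samples; the scores and the k multiplier below are illustrative:

```python
import statistics

def open_set_threshold(known_scores, k=2.0):
    """Statistical threshold for the unknown class: fit the mean and
    standard deviation of confidence scores on known-class samples,
    then reject any test sample scoring below mean - k*stdev as
    belonging to an unknown class."""
    mu = statistics.mean(known_scores)
    sigma = statistics.stdev(known_scores)
    return mu - k * sigma

# hypothetical held-out confidences for correctly classified known malware
known_scores = [0.97, 0.95, 0.99, 0.96, 0.98]
tau = open_set_threshold(known_scores, k=2.0)

is_unknown = 0.55 < tau   # a low-confidence test sample is rejected
```

The choice of k trades off false rejections of known classes against missed unknowns; the paper's approach searches for an optimal threshold rather than fixing k by hand.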
Malware Analysis and Graph Theory - With the dramatic increase in malicious software, the sophistication and innovation of malware have grown over the years. In particular, dynamic analysis based on deep neural networks has shown high accuracy in malware detection. However, most existing methods employ only the raw API sequence feature, which cannot accurately reflect the actual behavior of malicious programs in detail. The relationships between API calls are critical for detecting suspicious behavior. Therefore, this paper proposes a malware detection method based on a graph neural network. We first connect the API sequences executed by different processes to build a directed process graph. Then, we apply BERT to encode the API sequence of each process into a node embedding, which captures the semantic execution information inside the process. Finally, we employ a GCN to mine deep semantic information based on the directed process graph and node embeddings. In addition to presenting the design, we have implemented and evaluated our method on datasets of 10,000 malware and 10,000 benign software samples. The results show that the precision and recall of our detection model reach 97.84% and 97.83%, verifying the effectiveness of our proposed method.
Authored by Zhenquan Ding, Hui Xu, Yonghe Guo, Longchuan Yan, Lei Cui, Zhiyu Hao
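A minimal sketch of the final propagation step, assuming toy process names and stand-in embeddings (the paper uses BERT embeddings of each process's API sequence and a trained GCN; this unweighted neighbor mean is only illustrative):

```python
def gcn_layer(adj, features):
    """One simplified GCN-style layer on a directed process graph:
    each process's new feature vector is the normalized sum of its own
    embedding and its successors' embeddings (learned weight matrices
    and nonlinearity omitted for brevity)."""
    out = {}
    for node, feat in features.items():
        nbrs = adj.get(node, [])
        agg = list(feat)                      # start with own features
        for n in nbrs:
            for i, x in enumerate(features[n]):
                agg[i] += x                   # add successor features
        out[node] = [x / (len(nbrs) + 1) for x in agg]
    return out

# hypothetical process graph: edges follow API-level interactions;
# feature vectors stand in for BERT embeddings of API sequences
adj = {"explorer.exe": ["payload.exe"],
       "payload.exe": ["svchost.exe"],
       "svchost.exe": []}
features = {"explorer.exe": [0.1, 0.2],
            "payload.exe": [0.9, 0.8],
            "svchost.exe": [0.4, 0.3]}

embeddings = gcn_layer(adj, features)
```

Stacking such layers lets suspicious behavior in one process (e.g., an injected payload) influence the representations of the processes it interacts with, which a final classifier then scores.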