A mail spoofing attack is a malicious activity that forges the source of an email, tricking users into believing that the message originated from a trusted sender when the actual sender is the attacker. Building on previous work, this paper analyzes the transmission process of an email. Our work identifies new attacks capable of bypassing SPF, DMARC, and the protection mechanisms of Mail User Agents. By conducting these attacks, we can forge far more realistic emails that penetrate well-known mail service providers such as Tencent. In a large-scale experiment on these well-known mail service providers, we find that some of them are affected by the related vulnerabilities, and some of the bypass methods differ from previous work. Our work shows that this security problem can only be mitigated effectively when all email service providers share a common view of security and configure appropriate security policies at each email delivery node. In addition, we propose a mitigation method to defend against these attacks. We hope our work draws the attention of email service providers and users and effectively reduces the risk of phishing email attacks on them.
Authored by Beiyuan Yu, Pan Li, Jianwei Liu, Ziyu Zhou, Yiran Han, Zongxiao Li
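The alignment gap that such spoofing attacks exploit can be illustrated with a small sketch: SPF validates the SMTP envelope sender (MAIL FROM), while the recipient sees the RFC 5322 From header, and DMARC only requires the two domains to align. The helper below is a hypothetical illustration, not the authors' tooling, and uses a crude registered-domain approximation.

```python
# Sketch of the identifier-alignment gap phishing abuses: SPF checks the
# SMTP envelope sender (MAIL FROM); the user sees the RFC 5322 From
# header. Relaxed DMARC alignment only compares registered domains.
# Hypothetical helper, not the authors' tooling.
from email.utils import parseaddr

def domain_of(address: str) -> str:
    """Extract the domain part of an email address."""
    return parseaddr(address)[1].rpartition("@")[2].lower()

def dmarc_aligned(envelope_from: str, header_from: str) -> bool:
    """Relaxed DMARC identifier alignment: registered domains must match."""
    reg = lambda d: ".".join(d.split(".")[-2:])  # crude eTLD+1 approximation
    return reg(domain_of(envelope_from)) == reg(domain_of(header_from))

# A spoofed mail can pass SPF for attacker.com yet display bank.com:
print(dmarc_aligned("bounce@attacker.com", "Alice <support@bank.com>"))  # False
print(dmarc_aligned("bounce@mail.bank.com", "support@bank.com"))         # True
```

When alignment is not enforced (or the MUA renders a misleading display name), the mismatch above is exactly what lets a forged From line reach the user.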
Worldwide, societies are rapidly becoming more connected, owing primarily to the growing number of intelligent things and smart applications (e.g., smart automobiles, smart wearable devices, etc.). This has occurred in tandem with the Internet of Things (IoT), a new method of connecting the physical and virtual worlds: a promising paradigm whereby every 'thing' can connect to anything via the Internet. However, with IoT systems being deployed at large scale, security concerns arise alongside other challenges; hence the need to allocate appropriate protection of resources. Secure IoT systems can only be realized with a comprehensive understanding of the particular needs of a specific system. However, this paradigm lacks a proper and exhaustive classification of security requirements. This paper presents an approach to understanding and classifying the security requirements of IoT devices, an effort expected to play a role in designing cost-efficient and purposefully secured future IoT systems. In deriving and classifying the requirements, we present a variety of set-ups and define possible attacks and threats within the scope of IoT. Considering the nature of IoT, and treating security weaknesses as manifestations of unrealized security requirements, we grouped possible attacks and threats into categories, assessed the IoT security requirements found in the literature, added more in accordance with the applied domains of IoT, and then classified the security requirements. An IoT system can be secure, scalable, and flexible by following the proposed security requirement classification.
Authored by Arafat Mukalazi, Ali Boyaci
Cloud computing provides customers with enormous compute power and storage capacity, allowing them to deploy computation- and data-intensive applications without having to invest in infrastructure. Many firms use cloud computing to relocate and maintain resources outside of their enterprise, regardless of the cloud server's location. However, preserving data in the cloud raises a number of issues related to data loss, accountability, security, etc., and such fears become a great barrier to the adoption of cloud services by users. Cloud computing offers large-scale storage for internet users, with cost based on the usage of the facilities provided. Protecting the privacy of a user's data is a challenge, since the internal operations of the service providers cannot be inspected by users; hence it becomes necessary to monitor the usage of the client's data in the cloud. In this research, we propose an effective cloud storage solution for accessing patient medical records across hospitals in different countries while maintaining data security and integrity. In the proposed system, multifactor authentication for user login to the cloud and homomorphic encryption for data storage with integrity verification have been implemented effectively. To illustrate the efficacy of the proposed strategy, an experimental investigation was conducted.
Authored by M. Rupasri, Anupam Lakhanpal, Soumalya Ghosh, Atharav Hedage, Manoj Bangare, K. Ketaraju
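As a rough illustration of how homomorphic encryption lets a cloud operate on records it cannot read, the sketch below implements textbook Paillier (an additively homomorphic scheme) with deliberately tiny, insecure demo parameters. It illustrates the general technique only; it is not the paper's actual construction.

```python
# Toy additively homomorphic scheme (textbook Paillier with tiny,
# insecure parameters): the cloud can add encrypted values without
# seeing the plaintexts. Illustrative sketch, not the paper's system.
import math
import random

p, q = 17, 19                      # demo primes only -- NOT secure
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)       # Carmichael function of n
g = n + 1

def L(u):                          # Paillier's L function
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(20), encrypt(22)
# Homomorphic addition: multiplying ciphertexts adds the plaintexts.
assert decrypt((c1 * c2) % n2) == 42
```

In a deployment like the one the abstract describes, the hospital would hold the private key (lam, mu) and the cloud would only ever see ciphertexts.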
Virtual Private Networks (VPNs) have become a communication medium for accessing information, exchanging data, and managing the flow of information. Many organizations require an Intranet or VPN for data access, access to servers, and sharing different types of data among their offices and users. A secure VPN environment is essential for organizations to protect their information, IT infrastructure, and assets, and every organization needs to protect its network from malicious cyber threats. This paper presents a comprehensive approach to network security management, including significant strategies and protective measures for managing a VPN in an organization. The paper also presents procedures and countermeasures for preserving the security of the VPN environment and discusses several identified security strategies and measures for VPNs. It further outlines network security and policy management for implementation, covering security measures in firewalls, virtualized security profiles, and the role of sandboxing in securing the network. In addition, a few identified security controls that strengthen organizational security and are useful in designing a secure, efficient, and scalable VPN environment are discussed.
Authored by Srinivasa Pedapudi, Nagalakshmi Vadlamani
The blockchain network is often considered a reliable and secure network. However, some security attacks, such as eclipse attacks, have a significant impact on blockchain networks. In order to perform an eclipse attack, the attacker must be able to control enough IP addresses. This type of attack can be mitigated by blocking incoming connections. Connected machines may only establish outbound connections to machines they trust, such as those on a whitelist that other network peers maintain. However, this technique is not scalable since the solution does not allow nodes with new incoming communications to join the network. In this paper, we propose a scalable and secure trust-based solution against eclipse attacks with a peer-selection strategy that minimizes the probability of eclipse attacks from nodes in the network by developing a trust point. Finally, we experimentally analyze the proposed solution by creating a network simulation environment. The analysis results show that the proposed solution reduces the probability of an eclipse attack and has a success rate of over 97%.
Authored by
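A trust-weighted peer-selection strategy of the kind the abstract describes can be sketched as below: favor high-trust peers for outbound connections, but always reserve a few slots for unknown newcomers so that, unlike a pure whitelist, the network stays open to joining nodes. The scoring and slot split are hypothetical, not the authors' exact algorithm.

```python
# Sketch of trust-based peer selection against eclipse attacks: prefer
# peers with high trust scores, but keep a few slots for newcomers so
# new nodes can still join. Hypothetical policy, not the paper's exact
# algorithm.
import random

def select_peers(candidates, trust, k, newcomer_slots=2, seed=None):
    """Pick k outbound peers: random newcomers plus top-trust known peers."""
    rng = random.Random(seed)
    known = sorted((p for p in candidates if p in trust),
                   key=lambda p: trust[p], reverse=True)
    newcomers = [p for p in candidates if p not in trust]
    rng.shuffle(newcomers)
    picked = newcomers[:newcomer_slots]          # keep the network joinable
    picked += known[: k - len(picked)]           # fill with most trusted peers
    picked += newcomers[newcomer_slots:][: k - len(picked)]  # backfill
    return picked

trust = {"a": 0.9, "b": 0.5, "c": 0.8}
peers = select_peers(["a", "b", "c", "x", "y"], trust, k=4, seed=1)
```

Because an attacker's fresh Sybil addresses start with no trust, they can occupy at most the newcomer slots, bounding the probability that all of a victim's connections are attacker-controlled.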
One of the key topics in network security research is the autonomous COA (Course-of-Action) attack search method. Traditional COA attack search methods that passively search for attacks can struggle, especially as the network grows. To address these issues, new autonomous COA techniques are being developed; among them, an intelligent spatial algorithm is designed in this paper for efficient operation in scalable networks. On top of the spatial search, a Monte-Carlo (MC)-based temporal approach is additionally considered to handle time-varying network behaviors. Therefore, we propose a spatio-temporal attack COA search algorithm for scalable and time-varying networks.
Authored by Haemin Lee, Seok Bin Son, Won Yun, Joongheon Kim, Soyi Jung, Dong Kim
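The Monte-Carlo temporal idea can be given a minimal flavor as follows: score a candidate attack path by repeated sampling, since link availability varies over time. The per-edge success probabilities are a hypothetical model for illustration, not the authors' formulation.

```python
# Minimal Monte-Carlo sketch of the temporal scoring step: estimate how
# likely a candidate attack path succeeds when each network edge is only
# intermittently available. Hypothetical model, not the paper's algorithm.
import random

def rollout_success(path_edges, p_up, trials=5000, seed=0):
    """Estimate probability that every edge on the path is up when traversed."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        if all(rng.random() < p_up[e] for e in path_edges):
            wins += 1
    return wins / trials

p_up = {("a", "b"): 0.9, ("b", "c"): 0.8}
est = rollout_success([("a", "b"), ("b", "c")], p_up)
# True value is 0.9 * 0.8 = 0.72; the estimate converges toward it.
```

A spatial search would enumerate candidate paths, and this temporal rollout would rank them under time-varying behavior.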
Since deep learning (DL) can automatically learn features from source code, it has been widely used to detect source code vulnerabilities. To achieve scalable vulnerability scanning, some prior studies process the source code directly by treating it as text. To achieve accurate vulnerability detection, other approaches distill the program semantics into graph representations and use them to detect vulnerabilities. In practice, text-based techniques are scalable but not accurate, due to the lack of program semantics; graph-based methods are accurate but not scalable, since graph analysis is typically time-consuming. In this paper, we aim to achieve both scalability and accuracy in scanning large-scale source code for vulnerabilities. Inspired by existing DL-based image classification, which can analyze millions of images accurately, we apply these techniques to accomplish our purpose. Specifically, we propose a novel idea that can efficiently convert the source code of a function into an image while preserving the program details. We implement VulCNN and evaluate it on a dataset of 13,687 vulnerable functions and 26,970 non-vulnerable functions. Experimental results show that VulCNN achieves better accuracy than eight state-of-the-art vulnerability detectors (i.e., Checkmarx, FlawFinder, RATS, TokenCNN, VulDeePecker, SySeVR, VulDeeLocator, and Devign). As for scalability, VulCNN is about four times faster than VulDeePecker and SySeVR, about 15 times faster than VulDeeLocator, and about six times faster than Devign. Furthermore, we conduct a case study on more than 25 million lines of code, and the results indicate that VulCNN can detect vulnerabilities at large scale. Through the scanning reports, we finally discover 73 vulnerabilities that are not reported in the NVD.
Authored by Yueming Wu, Deqing Zou, Shihan Dou, Wei Yang, Duo Xu, Hai Jin
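The core trick of turning a function's source code into a fixed-size "image" (a 2-D numeric array a CNN could consume) can be sketched as below. This is a simplified stand-in that only keeps per-line byte values; VulCNN's actual encoding preserves richer program semantics.

```python
# Sketch of the code-to-image idea: encode a function as a fixed-size
# 2-D array (one row per source line, one column per character byte).
# Simplified stand-in for VulCNN's semantics-preserving encoding.
def code_to_image(source: str, height: int = 8, width: int = 16):
    """Encode up to `height` lines as rows of `width` byte values (0-255)."""
    rows = []
    for line in source.splitlines()[:height]:
        vals = [ord(ch) % 256 for ch in line[:width]]
        rows.append(vals + [0] * (width - len(vals)))   # pad short lines
    while len(rows) < height:                            # pad short functions
        rows.append([0] * width)
    return rows

img = code_to_image("int main() {\n  return 0;\n}\n")
```

Once every function maps to the same fixed shape, millions of functions can be batched through a standard image classifier, which is where the scalability advantage comes from.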
In this study, the nature of human trust in communication robots was experimentally investigated in comparison with trust in other people and in artificial intelligence (AI) systems. The results of the experiment showed that trust in robots is basically similar to trust in AI systems in a calculation task where a single solution can be obtained, and partly similar to trust in other people in an emotion recognition task where multiple interpretations are acceptable. This study will contribute to designing smooth interactions between people and communication robots.
Authored by Akihiro Maehigashi
Domestic service robots are becoming increasingly prevalent and autonomous, which will make task priority conflicts more likely. The robot must be able to negotiate effectively and appropriately to gain priority if necessary. In previous human-robot interaction (HRI) studies, imitating human negotiation behavior was effective, but long-term effects have not been studied. Filling this research gap, an interactive online study (N = 103) with two sessions and six trials was conducted. In a conflict scenario, participants repeatedly interacted with a domestic service robot that applied three different conflict resolution strategies: appeal, command, and diminution of the request. The second manipulation was reinforcement (thanking) of compliance behavior (yes/no). This led to a 3×2×6 mixed design. User acceptance, trust, user compliance with the robot, and self-reported compliance with a household member were assessed. The diminution of a request combined with positive reinforcement was the most effective strategy, and perceived trustworthiness increased significantly over time. For this strategy only, self-reported compliance rates toward the human and the robot were similar. Applying this strategy therefore potentially makes a robot as effective a requester as a human. This paper contributes to the design of acceptable and effective robot conflict resolution strategies for long-term use.
Authored by Franziska Babel, Philipp Hock, Johannes Kraus, Martin Baumann
Trust is a cognitive ability that can depend on behavioral consistency. In this paper, a computational robot-to-human trust model based on a partially observable Markov decision process (POMDP) is proposed for hand-over tasks in human-robot collaborative contexts. The robot's trust in its human partner is evaluated based on estimates of human behavior and object detection during the hand-over task, and the hand-over process itself is parameterized as a POMDP. The proposed approach is verified in real-world human-robot collaborative tasks. Results show that our approach can be successfully applied to human-robot hand-over tasks to achieve high efficiency, reduce redundant robot movements, and realize predictability and mutual understanding of the task.
Authored by Pallavi Tilloo, Jesse Parron, Omar Obidat, Michelle Zhu, Weitian Wang
For designing interaction with robots in healthcare scenarios, it is important to understand how trust develops in situations characterized by vulnerability and uncertainty. The goal of this study was to investigate how technology-related user dispositions, anxiety, and robot characteristics influence trust. A second goal was to substantiate the association between hospital patients' trust and their intention to use a transport robot. In an online study, patients who were currently being treated in hospitals were introduced to the concept of a transport robot with both written and video-based material. Participants evaluated the robot several times. Technology-related user dispositions were found to be substantially associated with trust and the intention to use. Furthermore, hospital patients' anxiety was negatively associated with the intention to use, and this relationship was mediated by trust. Moreover, no effects of the manipulated robot characteristics were found. In conclusion, for a successful implementation of robots in hospital settings, patients' individual prior learning histories (e.g., existing attitudes toward robots) and anxiety levels should be considered during the introduction and implementation phase.
Authored by Mareike Schüle, Johannes Kraus, Franziska Babel, Nadine Reißner
As autonomous service robots will become increasingly ubiquitous in our daily lives, human-robot conflicts will become more likely when humans and robots share the same spaces and resources. This thesis investigates the conflict resolution of robots and humans in everyday conflicts in the domestic and public context. Hereby, the acceptability, trustworthiness, and effectiveness of verbal and non-verbal strategies for the robot to solve the conflict in its favor are evaluated. Based on the assumption of the Media Equation and CASA paradigm that people interact with computers as social actors, robot conflict resolution strategies from social psychology and human-machine interaction were derived. The effectiveness, acceptability, and trustworthiness of those strategies were evaluated in online, virtual reality, and laboratory experiments. Future work includes determining the psychological processes of human-robot conflict resolution in further experimental studies.
Authored by Franziska Babel, Martin Baumann
The success of human-robot interaction is strongly affected by people's ability to infer others' intentions and behaviours, and by the level of people's trust that others will abide by the same principles and social conventions to achieve a common goal. The ability to understand and reason about other agents' mental states is known as Theory of Mind (ToM). ToM and trust, therefore, are key factors in the positive outcome of human-robot interaction. We believe that a robot endowed with a ToM is able to gain people's trust, even when it occasionally makes errors. In this work, we present a user study in the field in which participants (N=123) interacted with a robot that may or may not have a ToM, and may or may not exhibit erroneous behaviour. Our findings indicate that a robot with ToM is perceived as more reliable, and participants trusted it more than a robot without a ToM even when the robot made errors. Finally, ToM proves to be a key driver for tuning people's trust in the robot even when the initial conditions of the interaction change (i.e., loss and regain of trust in a longer relationship).
Authored by Alessandra Rossi, Antonio Andriella, Silvia Rossi, Carme Torras, Guillem Alenyà
With the influx of technology use and human-robot teams, it is important to understand how swift trust develops within these teams. Given this influx, we plan to study how surface cues (i.e., observable characteristics) and imported information (i.e., knowledge from external sources or personal experiences) affect the development of swift trust. We hypothesize that human-like surface-level cues and positive imported information will yield higher swift trust. These findings will help inform the assignment of human-robot teams in the future.
Authored by Sabina Patel, Elizabeth Phillips, Elizabeth Lazzara
Robot co-workers, like human co-workers, make mistakes that undermine trust. Yet trust is just as important in promoting human-robot collaboration as it is in promoting human-human collaboration. In addition, individuals can significantly differ in their attitudes toward robots, which can also impact or hinder their trust in robots. To better understand how individual attitudes can influence trust repair strategies, we propose a theoretical model that draws from the theory of cognitive dissonance. To empirically verify this model, we conducted a between-subjects experiment with 100 participants assigned to one of four repair strategies (apologies, denials, explanations, or promises) over three trust violations. Individual attitudes did moderate the efficacy of repair strategies, and this effect differed over successive trust violations. Specifically, repair strategies were most effective relative to individual attitude during the second of the three trust violations, and promises were the trust repair strategy most impacted by an individual's attitude.
Authored by Connor Esterwood, Lionel Robert
Human safety has always been the main priority when working near an industrial robot. With the rise of human-robot collaborative environments, the physical barriers that prevent collisions have been disappearing, increasing the risk of accidents and the need for solutions that ensure safe human-robot collaboration. This paper proposes a safety system that implements the Speed and Separation Monitoring (SSM) type of operation. For this, safety zones are defined in the robot's workspace following current standards for industrial collaborative robots. A deep learning-based computer vision system detects, tracks, and estimates the 3D position of operators close to the robot. The robot control system receives the operators' 3D positions and generates 3D representations of them in a simulation environment. Depending on the zone where the closest operator is detected, the robot stops or changes its operating speed. Three different operation modes in which the human and robot interact are presented. Results show that the vision-based system can correctly detect and classify the safety zone in which an operator is located, and that the different proposed operation modes keep the robot's reaction and stopping time within the required limits to guarantee safety.
Authored by Lina Amaya-Mejía, Nicolás Duque-Suárez, Daniel Jaramillo-Ramírez, Carol Martinez
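The Speed and Separation Monitoring decision itself reduces to mapping the closest operator's distance onto a robot action, which can be sketched as below. The zone radii here are illustrative placeholders, not values from the standard or from the paper.

```python
# Sketch of the SSM decision logic: map the nearest detected operator's
# distance to a robot behavior. Zone radii are illustrative placeholders,
# not values from the collaborative-robot standards.
def ssm_action(distance_m: float,
               stop_radius: float = 0.5,
               slow_radius: float = 1.5) -> str:
    """Return the robot behavior for the nearest detected operator."""
    if distance_m <= stop_radius:
        return "stop"                # operator inside the protective zone
    if distance_m <= slow_radius:
        return "reduced_speed"       # operator inside the warning zone
    return "full_speed"              # workspace clear
```

In the described system, the vision pipeline supplies `distance_m` from the 3D operator position, and the controller applies the returned mode each cycle.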
Focusing on the human experience of vulnerability in everyday-life interaction scenarios is still a novel approach. So far, only a proof-of-concept online study has been conducted, and to extend this work we present a follow-up online study. We consider in more detail how the human experience of vulnerability caused by a trust violation through a privacy breach affects trust ratings in an interaction scenario with the PEPPER robot assisting with clothes shopping. We report results from 32 survey responses and 11 semi-structured interviews. Our findings reveal that the privacy paradox, the common observation that people's stated privacy concerns diverge from their behavior to safeguard privacy, also holds for studying trust in HRI. Moreover, we observe that participants considered only the added value of utility and entertainment when deciding whether or not to interact with the robot again, but not the privacy breach. We conclude that people might tolerate an untrustworthy robot even when they feel vulnerable in the everyday-life situation of clothes shopping.
Authored by Glenda Hannibal, Anna Dobrosovestnova, Astrid Weiss
This paper presents a program-code mutation technique that is applied in-field to embedded systems in order to create diversity in a population of systems that were identical at the time of deployment. With this diversity, it becomes more difficult for attackers to carry out the very popular Return-Oriented Programming (ROP) attack at large scale, since after code permutation the gadgets in different systems are located at different program addresses. To prevent a system crash after a failed ROP attack, we further propose combining the code mutation with return address checking. We report the overhead in time and memory along with a security analysis.
Authored by P. Tabatt, J. Jelonek, M. Schölzel, K. Lehniger, P. Langendörfer
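The effect of in-field code permutation can be illustrated with a toy model: shuffling the order of functions moves every gadget to a new address, so a ROP chain built against one device's layout fails on another. The function names, sizes, and base address below are hypothetical.

```python
# Toy illustration of in-field code permutation: shuffling function order
# relocates every gadget, so a ROP chain built for one device's layout
# breaks on a mutated device. Sizes and addresses are hypothetical.
import random

def layout(functions, base=0x1000, seed=None):
    """Assign start addresses to functions, optionally in permuted order."""
    order = list(functions)
    if seed is not None:
        random.Random(seed).shuffle(order)
    addrs, cur = {}, base
    for name, size in order:
        addrs[name] = cur
        cur += size
    return addrs

funcs = [("f", 0x40), ("g", 0x80), ("h", 0x20)]
original = layout(funcs)            # layout at deployment time
mutated = layout(funcs, seed=7)     # layout after in-field mutation
# A gadget address valid in `original` is unlikely to survive in `mutated`.
```

The return address check the paper adds would then catch the crash-inducing jump of a chain built for the wrong layout before it corrupts the system state.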
Side-channel attacks have been a constant threat to computing systems. In recent times, vulnerabilities in the architecture have been discovered and exploited to mount and execute state-of-the-art attacks such as Spectre. The Spectre attack exploits a vulnerability in Intel-based processors to leak confidential data through a covert channel. Some defenses exist to mitigate the Spectre attack. Among them, hardware-assisted attack/intrusion detection (HID) systems have received an overwhelming response due to their low overhead and efficient attack detection. HID systems deploy machine learning (ML) classifiers to perform anomaly detection and determine whether the system is under attack. For this purpose, a performance monitoring tool profiles the applications to record hardware performance counters (HPCs), which are utilized for anomaly detection. Previous HID systems assume that Spectre is executed as a standalone application. In contrast, we propose an attack that dynamically generates variations in the injected code to evade detection. The attack is injected into a benign application; in this manner, it conceals itself as a benign application and generates perturbations to avoid detection. For the attack injection, we exploit a return-oriented programming (ROP)-based code-injection technique that reuses code, called gadgets, present in the exploited victim's (host) memory to execute the attack, which, in our case, is the CR-Spectre attack to steal sensitive data from a target victim (target) application. Our work focuses on proposing a dynamic attack that can evade HID detection by injecting perturbations, and dynamically generated variations thereof, under the cloak of a benign application. We evaluate the proposed attack on the MiBench suite as the host. In our experiments, HID performance degrades from 90% to 16%, indicating that our CR-Spectre attack successfully avoids detection.
Authored by Abhijitt Dhavlle, Setareh Rafatirad, Houman Homayoun, Sai Dinakarrao
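The HID idea this attack evades can be reduced to a minimal sketch: classify windows of hardware performance counter (HPC) readings as benign or anomalous. Real HID systems use trained ML classifiers; the single-feature threshold below is a hypothetical stand-in for illustration.

```python
# Minimal stand-in for an HPC-based anomaly detector: flag sampling
# windows whose cache-miss ratio exceeds a threshold. Real HID systems
# use trained ML classifiers over many counters; this single-feature
# threshold is a hypothetical illustration.
def hpc_anomaly(samples, threshold=0.3):
    """samples: (cache_misses, cache_accesses) per window -> True if anomalous."""
    return [misses / max(accesses, 1) > threshold
            for misses, accesses in samples]

# A Spectre-style gadget hammering the cache stands out against a benign
# profile -- unless, as in CR-Spectre, perturbations blur the signature.
flags = hpc_anomaly([(10, 100), (60, 100)])
```

CR-Spectre's evasion works precisely because its injected perturbations pull such per-window features back under the detector's learned decision boundary.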
Control flow integrity (CFI) checks are used in desktop systems to protect them from various forms of attacks, but they are rarely investigated for embedded systems due to the overhead they introduce. The contribution of this paper is an efficient software implementation of a CFI check for ARM and Xtensa processors. Moreover, we propose combining this CFI check with another defense mechanism against return-oriented programming (ROP). We show that this combination significantly improves security. It also increases the safety of the system, since the combination can detect a failed ROP attack and bring the system into a safe state, which is not possible when using each technique separately. We also report on the introduced overhead in code size and run time.
Authored by Kai Lehniger, Mario Schölze, Jonas Jelonek, Peter Tabatt, Marcin Aftowicz, Peter Langendorfer
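A common way to realize a backward-edge CFI check of the kind discussed above is a shadow stack: record each return address at call time and validate it at return time, so a ROP-corrupted return is caught before it is used. The model below is an illustrative sketch, not the paper's ARM/Xtensa implementation.

```python
# Minimal software model of a shadow-stack CFI check: return addresses
# recorded at call time must match at return time, so a ROP-corrupted
# return is detected before control transfers. Illustrative sketch, not
# the paper's ARM/Xtensa implementation.
class ShadowStack:
    def __init__(self):
        self._stack = []

    def on_call(self, return_addr: int):
        """Record the legitimate return address when a call is made."""
        self._stack.append(return_addr)

    def on_return(self, return_addr: int) -> bool:
        """True iff the observed return address matches the recorded one."""
        return bool(self._stack) and self._stack.pop() == return_addr

ss = ShadowStack()
ss.on_call(0x4001)           # legitimate call site records its return
ok = ss.on_return(0x4001)    # intact return address -> True
```

A `False` result is the point where the combined defense can divert the system into a safe state instead of crashing on the corrupted return.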
Memory-based vulnerabilities are becoming more and more common in low-power, low-cost IoT devices. We study several low-level vulnerabilities that lead to memory corruption in C and C++ programs, and how stack corruption and format string attacks can be used to exploit them. We also study automatic defenses against memory attacks, such as stack canaries and address space layout randomization (ASLR), which do not require changing the source program; however, return-oriented programming (ROP) can bypass them. Control flow integrity (CFI) can in turn resist ROP-based attacks. In practice, security design must be holistic. Finally, we summarize rules for secure coding on embedded devices and propose two novel software anomaly detection processes for future IoT devices.
Authored by Qian Zhou, Hua Dai, Liang Liu, Kai Shi, Jie Chen, Hong Jiang
This paper shows that the modern, highly customizable Xtensa architecture for embedded devices is exploitable by Return-Oriented Programming (ROP) attacks. We used a simple Hello-World application written with the RIOT OS as an almost minimal code base to determine whether the number of gadgets that can be found in it is sufficient to build a reasonably complex attack. We found 859 gadgets, which are sufficient to create a gadget catalog for the Xtensa. Although the code base used is really small, the presented gadget catalog provides Turing completeness, which allows an exploit program to perform arbitrary computation.
Authored by Batyi Amatov, Kai Lehniger, Peter Langendorfer
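The gadget-harvesting step behind such a catalog can be sketched as a scan of the code for short byte sequences ending in a return instruction. For x86 a return is the single byte 0xC3; Xtensa uses different, variable encodings, so treat this as a generic illustration rather than the paper's tool.

```python
# Sketch of gadget harvesting: scan a code blob for short byte sequences
# ending in a return opcode. 0xC3 is x86 `ret`; Xtensa return encodings
# differ, so this is a generic illustration, not the paper's tool.
def find_gadgets(code: bytes, ret_byte: int = 0xC3, max_len: int = 4):
    """Return (offset, bytes) pairs for byte runs ending in `ret_byte`."""
    gadgets = []
    for i, b in enumerate(code):
        if b == ret_byte:
            for length in range(1, max_len + 1):
                start = i - length + 1
                if start >= 0:
                    gadgets.append((start, code[start:i + 1]))
    return gadgets

blob = bytes([0x58, 0x5F, 0xC3, 0x90])   # x86: pop rax; pop rdi; ret; nop
gads = find_gadgets(blob)
```

A real tool would additionally disassemble each candidate run and keep only sequences that decode to useful instructions; the catalog is then built by classifying gadgets by their effect (load, store, arithmetic, branch).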
This paper proposes a control flow integrity checking method based on the LBR register. Through static analysis of the target program's loaded binary modules, function attributes such as boundaries are obtained and an initial profile of legal control flow transfers is built; combined with control-transfer records maintained in real time during program execution, this yields a complete security profile of control flow transfers. The call sites of /bin/sh or system() in the program are located to build an internal monitor for control-flow integrity checks. During program execution, control flow transfers outside the profile are judged to be abnormal transfers carrying an attack threat, while abnormal transfers that stay within the profile are picked up by the internal detector. By identifying abnormal control flow transfers, attacks are detected before the attack code is executed, and attacks that bypass the coarse-grained verification of the security profile are captured by the refined internal control-flow-integrity detector. The coarse-grained profile check reduces the cost of control flow integrity checking, while the fine-grained internal detector around shell call sites improves the security of the system, achieving a good balance between security and performance.
Authored by Nige Li, Peng Zhou, Tengyan Wang, Jingnan Chen
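The profile check can be sketched as comparing recorded branch records, i.e., (source, destination) pairs as an LBR would capture them, against a whitelist of legal control-flow edges built offline, with extra scrutiny for transfers into sensitive targets such as system() call sites. The data model below is hypothetical, not the paper's implementation.

```python
# Sketch of the two-level check: recorded branch records (src, dst), as
# an LBR would capture, are validated against a whitelist of legal
# control-flow edges, and transfers into sensitive targets (e.g. the
# system() call site) are always flagged. Hypothetical data model.
def check_branch_trace(trace, legal_edges, sensitive_targets=frozenset()):
    """Return the list of flagged transfers: out-of-profile edges, plus
    any transfer into a sensitive target address."""
    flagged = []
    for src, dst in trace:
        if (src, dst) not in legal_edges or dst in sensitive_targets:
            flagged.append((src, dst))
    return flagged

legal = {(0x100, 0x200), (0x200, 0x300)}
trace = [(0x100, 0x200), (0x200, 0x666)]   # second edge is out of profile
bad = check_branch_trace(trace, legal)
```

The coarse profile check is cheap because it is a set lookup per recorded branch; the sensitive-target rule models the finer-grained internal detector.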
Microcontroller-based embedded systems have become ubiquitous with the emergence of IoT technology. Given their critical roles in many applications, their security is becoming increasingly important. Unfortunately, MCU devices are especially vulnerable. Code reuse attacks are particularly noteworthy, since the memory address of firmware code is static. This work seeks to combat code reuse attacks, including ROP and the more advanced JIT-ROP, via continuous randomization. Previous proposals are geared towards full-fledged OSs with rich runtime environments and therefore cannot be applied to MCUs. We propose the first solution for ARM-based MCUs. Our system, named HARM, comprises a secure runtime and a binary analysis tool with a rewriting module. The secure runtime, protected inside the secure world, proactively triggers and performs non-bypassable randomization on the firmware running in a sandbox in the normal world. Our system does not rely on any firmware feature and is therefore generally applicable to both bare-metal and RTOS-powered firmware. We have implemented a prototype on a development board. Our evaluation results indicate that HARM can effectively thwart code reuse attacks while keeping the performance and energy overhead low.
Authored by Jiameng Shi, Le Guan, Wenqiang Li, Dayou Zhang, Ping Chen, Ning Zhang
The spread of the Internet of Things (IoT) and the use of smart control systems in many mission-critical or safety-critical application domains, like automotive or aeronautics, make devices attractive targets for attackers. Nowadays, several of these are mixed-criticality systems, i.e., they run both high-criticality tasks (e.g., a car control system) and low-criticality ones (e.g., infotainment). High-criticality routines often employ Real-Time Operating Systems (RTOS) to enforce hard real-time requirements, while tasks with looser constraints can be delegated to general-purpose operating systems (GPOS). Much of the control code for these devices is written in memory-unsafe languages such as C and C++, which makes them susceptible to powerful binary attacks, such as the well-known Return-Oriented Programming (ROP). Control-Flow Integrity (CFI) is the most investigated security technique to protect against such threats. At present, CFI solutions for real-time embedded systems are not as mature as those for general-purpose systems, and there is moreover a lack of in-depth studies on how different operating systems with different security requirements and timing constraints can coexist on a single multicore platform. This paper aims to draw attention to the subject, discussing the current scientific proposals and in turn proposing a solution for an optimized asymmetric verification system for execution integrity. Using an embedded hypervisor, predefined cores can be dedicated to only high- or low-criticality tasks, with the high-priority core being monitored by the lower-criticality core, relying on offline binary instrumentation and a light exchange of information and signals at runtime. The work also presents preliminary results on a possible implementation for multicore ARM platforms running both an RTOS and a GPOS, in terms of both security and performance penalties.
Authored by Vahid Moghadam, Paolo Prinetto, Gianluca Roascio