With the upsurge of knowledge graph research, the multimedia field has been constructing domain knowledge graphs to support upper-layer multimedia services and improve their quality of user experience. However, the quality of a knowledge graph has a great impact on the performance of the multimedia services built on top of it, and existing methods for quantifying knowledge graph quality suffer from problems such as incomplete use of the available information. This paper therefore proposes a trustworthiness measurement method that jointly uses entity type embeddings and a Transformer encoder. It quantifies trustworthiness at both the local and the global level, makes full use of the rich semantic information in the knowledge graph, and comprehensively evaluates its quality. The experimental results show that, while keeping the error detection performance on ordinary triples essentially unchanged, the proposed method can detect entity/entity-type mismatch errors that traditional methods cannot.
Authored by Yujie Yan, Peng Yu, Honglin Fang, Zihao Wu
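As a rough illustration of the local-level scoring idea sketched in the abstract above (joint entity-type embedding plus a Transformer encoder), the following PyTorch snippet is a minimal sketch. The class name, layer sizes, pooling choice, and sequence layout are assumptions made for illustration, not the authors' actual model.

```python
# Hypothetical sketch: scoring a (head, relation, tail) triple together with
# entity-type embeddings using a Transformer encoder. All names and
# dimensions are illustrative, not the paper's model.
import torch
import torch.nn as nn

class TripleTrustScorer(nn.Module):
    def __init__(self, n_entities, n_relations, n_types, dim=128):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.typ = nn.Embedding(n_types, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.score = nn.Linear(dim, 1)

    def forward(self, h, r, t, h_type, t_type):
        # Token sequence: [head, head type, relation, tail, tail type]
        seq = torch.stack(
            [self.ent(h), self.typ(h_type), self.rel(r), self.ent(t), self.typ(t_type)],
            dim=1,
        )
        encoded = self.encoder(seq)               # contextualize the five tokens
        pooled = encoded.mean(dim=1)              # simple mean pooling
        return torch.sigmoid(self.score(pooled))  # trustworthiness score in (0, 1)
```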
As in many other research domains, Artificial Intelligence (AI) techniques have been increasing their footprint in Earth Sciences (ES) to extract meaningful information from the large amount of highly detailed data available from multiple sensor modalities. While on the one hand the existing success cases endorse the great potential of AI to help address open challenges in ES, on the other hand ongoing discussions and established lessons from studies on the sustainability, ethics, and trustworthiness of AI must be taken into consideration if the community is to ensure that its research efforts move in directions that effectively benefit society and the environment. In this paper, we discuss insights gathered from a brief literature review on the subtopics of AI Ethics, Sustainable AI, AI Trustworthiness, and AI for Earth Sciences in an attempt to identify some of the promising directions and key needs for successfully bringing these concepts together.
Authored by Philipe Dias, Dalton Lunga
The demo presents recent work on social robots that provide information from knowledge graphs in online graph databases. Sometimes more cooperative responses can be generated by using taxonomies and other semantic metadata that has been added to the knowledge graphs. Sometimes metadata about data provenance suggests higher or lower trustworthiness of the data. This raises the question whether robots should indicate trustworthiness when providing the information, and whether this should be done explicitly by meta-level comments or implicitly for example by modulating the robots’ tone of voice and generating positive and negative affect in the robots’ facial expressions.
Authored by Graham Wilcock, Kristiina Jokinen
Taking the sellers’ trustworthiness as a mediating variable, this paper examines the impact of online reviews on consumers’ purchase decisions. Through an online survey, we collect data and test our hypotheses using SPSS. We find that the quality of online reviews has a positive impact on consumers’ perceived value and on sellers’ trustworthiness. The timeliness of online reviews has a positive impact on consumers’ perceived value, which in turn has a positive impact on sellers’ trustworthiness. An interesting observation is that perceived value can indirectly influence consumers’ purchase decisions through sellers’ trustworthiness as a mediating variable. Sellers’ trustworthiness has a positive impact on consumers’ purchase decisions. We believe that our findings can help online sellers better manage online reviews.
Authored by Xiaohu Qian, Yunxia Li, Mingqiang Yin, Fan Yang
Software trustworthiness evaluation (STE) can be regarded as a Multi-Criteria Decision Making (MCDM) problem built on a set of criteria. However, most current STE methods do not consider the relationships between criteria. This paper presents a software trustworthiness evaluation method based on these relationships. First, the relationships between criteria are described by their cooperative and conflicting degrees. Then, a measure of the substitutivity between criteria is proposed, and the cooperative degree between criteria is taken as an approximation of this substitutivity. With the help of the substitutivity between criteria, the software trustworthiness measurement model obtained through axiomatic approaches is applied to aggregate the degree to which each candidate software product meets each objective. Finally, the candidate software products are ranked according to the aggregated trustworthiness results, and the optimal product is selected from the alternatives on the basis of this ranking. The effectiveness of the proposed method is demonstrated through a case study.
Authored by Hongwei Tao
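To make the aggregation and ranking step above concrete, here is a minimal Python sketch. The weighted geometric mean, the example criteria, and all numbers are placeholders for illustration; they are not the paper's axiomatic trustworthiness measurement model or its substitutivity-based weighting.

```python
# Hypothetical sketch of aggregating per-criterion trustworthiness scores for
# candidate software products and ranking them. The weighted geometric mean
# stands in for the paper's axiomatic aggregation model.
def aggregate(scores, weights):
    """Weighted geometric mean of per-criterion scores in (0, 1]."""
    assert abs(sum(weights) - 1.0) < 1e-9
    result = 1.0
    for s, w in zip(scores, weights):
        result *= s ** w
    return result

# Illustrative candidates with scores for three criteria, e.g.
# reliability, maintainability, security (values are made up).
candidates = {
    "product_A": [0.90, 0.70, 0.80],
    "product_B": [0.85, 0.95, 0.60],
    "product_C": [0.75, 0.80, 0.85],
}
weights = [0.5, 0.2, 0.3]

ranking = sorted(candidates, key=lambda p: aggregate(candidates[p], weights), reverse=True)
print(ranking)  # best-to-worst ordering of the alternatives
```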
The loose coupling and distributed nature of service-oriented architecture (SOA) can easily lead to trustworthiness problems in service composition. Current trustworthiness evaluation methods for Web service composition are biased towards the service provider itself, while ignoring the trustworthiness of the service discoverer and the service user. This paper studies a multi-angle trustworthiness evaluation method for Web service composition that comprehensively considers these three aspects and uses the service finder to expand the service center. Experiments show that this kind of trustworthiness evaluation of Web service composition can improve the accuracy and comprehensiveness of the evaluation.
Authored by Zhang Yanhong
Artificial intelligence (AI) technology is rapidly being introduced as a core technology across all industries, and concerns about unexpected social issues are emerging along with it. Therefore, individual countries, as well as standards bodies and international organizations, are developing and distributing guidelines to maximize the benefits of AI while minimizing its risks and side effects. However, several hurdles keep developers from using these guidelines in actual industrial settings, such as ambiguous terminology, a lack of domain-specific concreteness, and non-specific requirements. In this paper, approaches to address these problems are presented. If future recommendations or guidelines refer to the proposed approaches, they can serve as more developer-friendly guidance for assuring AI trustworthiness.
Authored by Jae Hwang
Network security isolation technology is an important means of protecting the internal information security of enterprises. Isolation is generally achieved with traditional network devices such as firewalls and gatekeepers, but their security rules are relatively rigid and cannot adequately meet flexible and changing business needs. In the approach described here, a double-sandbox structure is created for each user, so that users within the virtual machine environment are isolated from one another and security is ensured; a virtual disk inside the virtual machine serves as the user's storage sandbox, and its reads and writes are encrypted. On this basis, the shortcomings of traditional network isolation methods are discussed, and the application of cloud desktop network isolation technology based on VMware technology in universities is expounded.
Authored by Kai Ye
Hardware breakpoints are used to monitor the behavior of a program on a virtual machine (VM). Although a virtual machine monitor (VMM) can inspect programs on a VM at hardware breakpoints, the programs themselves can detect hardware breakpoints by reading the debug registers. Malicious programs may change their behavior to avoid introspection and other security mechanisms if a hardware breakpoint is detected. To prevent such introspection evasion, methods for hiding hardware breakpoints by returning a fake value to the VM have been proposed. These methods detect read and write operations on the debug registers from the VM and then return control to the VM as if the access had succeeded. However, VM introspection remains detectable from inside the VM by checking whether a debug exception is actually raised at an address where a breakpoint was set: while the previous work handles the read and write operations on the debug registers, the debug exception is not delivered to the VM program. To address this problem, this study presents a method for making hardware breakpoints compatible with VM introspection. The proposed method uses surplus debug address registers to deliver the debug exception at hardware breakpoints set by the VM program. If a VM program attempts to write a value to a debug register, the VMM detects this and stores the value in a real debug register that is not used for VM introspection. Because the debug exception at the hardware breakpoint is then delivered to the VM, hardware breakpoints set by the VM become compatible with VM introspection. The evaluation results show that the proposed method has a low performance overhead.
Authored by Masaya Sato, Ryosuke Nakamura, Toshihiro Yamauchi, Hideo Taniguchi
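The mechanism described above can be pictured with a small conceptual simulation. The Python sketch below only models the bookkeeping a VMM might do, shadowing guest writes to debug registers into surplus registers and routing debug exceptions; the class and method names are invented, and the authors' actual implementation operates on real hardware debug registers inside the hypervisor.

```python
# Conceptual simulation of the described approach: the VMM keeps introspection
# breakpoints in some debug address registers and maps guest-set breakpoints
# onto the surplus ones, so the guest still receives its debug exceptions.
NUM_DEBUG_REGS = 4  # e.g. DR0-DR3 on x86

class VmmDebugRegShadow:
    def __init__(self, introspection_addrs):
        # Registers consumed by VM introspection breakpoints (assumed to fit).
        self.introspection = list(introspection_addrs)
        # Remaining ("surplus") physical registers are available for the guest.
        self.surplus = list(range(len(self.introspection), NUM_DEBUG_REGS))
        self.guest_map = {}  # guest-visible register index -> (phys reg, address)

    def guest_write(self, guest_reg, addr):
        """Trap on a guest write: place the value in a surplus physical register."""
        if guest_reg in self.guest_map:
            phys, _ = self.guest_map[guest_reg]
        elif self.surplus:
            phys = self.surplus.pop()
        else:
            raise RuntimeError("no surplus debug register available")
        self.guest_map[guest_reg] = (phys, addr)

    def guest_read(self, guest_reg):
        """Return the value the guest expects; introspection state stays hidden."""
        return self.guest_map.get(guest_reg, (None, 0))[1]

    def on_debug_exception(self, fault_addr):
        """Route the exception: introspection callback or delivery to the guest."""
        if fault_addr in self.introspection:
            return "handled-by-vmm"      # hidden from the guest
        if any(addr == fault_addr for _, addr in self.guest_map.values()):
            return "delivered-to-guest"  # guest-set breakpoint keeps working
        return "ignored"
```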
Distributed Ledger Technology (DLT), originally aimed at moving digital assets, now allows more advanced approaches such as smart contracts executed on distributed computation-enabling nodes such as the Ethereum Virtual Machine (EVM), initially available only on the Ethereum ledger. Since the release of different EVM-based ledgers, the use cases that incentivize the integration of smart contracts into other domains, such as IoT environments, have increased. In this paper, we analyze the quantitative metrics of various popular EVM-enabling ledgers that are most expedient for IoT environments, to provide an overview of their potential EVM-enabling characteristics.
Authored by Sandi Gec, Dejan Lavbič, Vlado Stankovski, Petar Kochovski
Pen testing, or penetration testing, is an exercise undertaken to identify and exploit all possible weaknesses in a system or network. It verifies the adequacy or deficiency of the security measures that have been put in place. Our manuscript presents an outline of pen testing: we examine the systems involved, the advantages, and the respective procedures of pen testing. The pen testing process comprises the following stages: planning of the tests, execution of the tests, and analysis of the results. The testing stage itself includes weakness investigation, data gathering, and weakness exploitation. This manuscript furthermore shows the application of this procedure to conduct pen testing on a model web application.
Authored by Sarthak Baluni, Shivansu Dutt, Pranjal Dabral, Srabanti Maji, Anil Kumar, Alka Chaudhary
With the development of information networks, cloud computing, big data, and virtualization technologies have promoted the emergence of various new network applications to meet the needs of diverse Internet services. This article proposes a security protection system for virtual hosts in a cloud computing center. The system takes "security as a service" as its starting point, takes virtual machines as the core, and uses virtual machine clusters as the unit to provide unified protection against the borderless characteristics of virtualized computing. The paper builds a network security protection system against APT attacks, establishes a system capability model using the system dynamics method, and conducts a simulation analysis. The simulation results demonstrate the validity and rationality of the proposed network communication security system framework and of the modeling and analysis method. Compared with traditional methods, this method covers more comprehensive modeling and analysis elements, and the derived results are more instructive.
Authored by Xin Nie, Chengcheng Lou
Python continues to be one of the most popular programming languages and has been used in many safety-critical fields such as medical treatment, autonomous driving systems, and data science. These fields impose higher security requirements on Python ecosystems. However, existing studies on machine learning systems in Python concentrate on data security, model security, and model privacy, and simply assume that the underlying Python virtual machines (PVMs) are secure and trustworthy. Unfortunately, whether such an assumption really holds is still unknown.
Authored by Xinrong Lin, Baojian Hua, Qiliang Fan
Java-based applications are widely used by companies, government agencies, and financial institutions. Every day, these applications process a considerable amount of sensitive data, such as people’s credit card numbers and passwords. Research has found that the Java Virtual Machine (JVM), an essential component for executing Java-based applications, keeps data in memory for an unknown period of time even after the data are no longer used. This mismanagement by the JVM puts all data, sensitive or not, in danger and raises a huge concern for Java-based applications globally. The problem has serious implications for many “secure” applications that employ Java-based frameworks or libraries, with the severe security risk that attackers can access sensitive data after the data are thought to have been cleared. This paper presents a prototype of a secure Java API that we designed through an undergraduate student research project. The API is implemented using direct ByteBuffers, so that sensitive data are not managed by JVM garbage collection, and it also applies obfuscation so that the data are encrypted. In an initial experimental evaluation, the proposed secure API successfully protects sensitive data from being accessed by malicious users.
Authored by Lin Deng, Bingyang Wei, Jin Guo, Matt Benke, Tyler Howard, Matt Krause, Aman Patel
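The underlying idea, keeping secrets in a buffer the program can overwrite explicitly instead of relying on garbage collection to clear them, can be illustrated outside Java as well. The sketch below is a Python analogy using a mutable bytearray; it is not the paper's Java API and omits the direct ByteBuffer and obfuscation details.

```python
# Python analogy of the idea behind the secure API: hold secrets in a mutable
# buffer that can be overwritten immediately after use, rather than in an
# immutable object whose copies linger until the garbage collector runs.
class SensitiveBuffer:
    def __init__(self, secret: bytes):
        self._buf = bytearray(secret)   # mutable, so it can be wiped in place

    def use(self, fn):
        """Pass the secret to fn, then zero the buffer before returning."""
        try:
            return fn(self._buf)
        finally:
            for i in range(len(self._buf)):
                self._buf[i] = 0        # explicit wipe; no reliance on GC

# Example: check a password, then confirm the plaintext is gone.
buf = SensitiveBuffer(b"hunter2")
ok = buf.use(lambda pw: pw == b"hunter2")
print(ok, bytes(buf._buf))  # True b'\x00\x00\x00\x00\x00\x00\x00'
```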
A huge number of cloud users and cloud providers are threatened by security issues that come with cloud computing adoption. Cloud computing is a hub of virtualization, providing virtualization-based infrastructure over physically connected systems. With the rapid advancement of cloud computing technology, data protection is becoming increasingly necessary, and it is important to weigh the advantages and disadvantages when deciding whether to move to the cloud. Because of security and other problems, cloud clients need more time to consider transitioning to cloud environments. Cloud computing, like any other technology, faces numerous challenges, especially in terms of security, and many prospective customers are wary of cloud adoption because of this. Virtualization technologies facilitate the sharing of resources among multiple users. Cloud services are protected using various models such as type-I and type-II hypervisors, OS-level virtualization, and unikernel virtualization, but these also present a variety of security issues. Unfortunately, several attacks have been developed in recent years to compromise the hypervisor and take control of all virtual machines running above it. It is extremely difficult to reduce the size of a hypervisor because of the functions it offers, and it is not acceptable for a secure system design to include a large hypervisor in the Trusted Computing Base (TCB). Cloud computing service providers use virtualization to provide their services; however, using these methods entails handing over complete control of data to a third party. This paper covers a variety of topics related to virtualization protection, including a summary of various solutions and of risk mitigation in the VMM (virtual machine monitor). We discuss the issues a malicious virtual machine can cause and the security precautions required to handle malicious behavior, examine the challenges of investigating malicious behavior in cloud computing, give a scientific categorization, and outline future directions. We have identified: i) security specifications for virtualization in cloud computing, which can be used as a starting point for securing cloud virtual infrastructure; ii) attacks that can be conducted against cloud virtual infrastructure; and iii) security solutions to protect the virtualization environment from DDoS attacks.
Authored by Tahir Alyas, Karamath Ateeq, Mohammed Alqahtani, Saigeeta Kukunuru, Nadia Tabassum, Rukshanda Kamran
Virtualization is essential in helping businesses lower operational costs while still ensuring increased productivity, better hardware utilization, and flexibility. According to Patrick Lin, Senior Director of Product Management for VMware, "virtualization is both an opportunity and a threat." This survey reviews the literature on major virtualization technology security concerns, focusing primarily on the open security flaws that virtualization introduces into the environment. Virtual machines (VMs) are overtaking physical machine infrastructures due to their capacity to simulate hardware environments, share hardware resources, and make use of a range of operating systems (OS). By offering a higher level of hardware abstraction and isolation, efficient external monitoring and recording, and on-demand access, VMs offer a more effective security architecture than traditional machines. The survey concentrates on virtual machine-specific security concerns; the security risks mentioned apply to all virtualization technologies now on the market and are not unique to any one particular technology. After a brief review of the various virtualization technologies currently available, along with some security advantages that come with virtualization, the survey concludes by examining in depth a number of security gaps in the virtualized environment.
Authored by N.B. Kadu, Pramod Jadhav, Santosh Pawar
The world has seen a quick transition from local storage on physical devices to massive virtual data centers, made possible by cloud storage technology. Businesses have grown to be scalable, meeting consumer demands at every turn. Cloud computing has transformed the way we do business, making IT more efficient and cost-effective, but it also leads to new types of cybercrime. Securing data in the cloud is a challenging task. Cloud security is a mixture of art and science: art, because you have to create your own techniques and technologies in such a way that users are properly authenticated; science, because you have to come up with systematic ways of securing your applications. Data security refers to a broad set of policies, technologies, and controls deployed to protect data, applications, and the associated infrastructure of cloud computing, and it ensures that data cannot be accessed by any unauthorized person. Cloud storage systems are considered to be networks of distributed data centers that typically use cloud computing technologies like virtualization and offer some kind of interface for storing data. Virtualization here is the process of grouping the physical storage from multiple network storage devices so that it looks like a single storage device.
Authored by Jeevitha K, Thriveni J
Cloud computing has become one of the most significant technological developments. This creative invention provides technology and software assistance to companies, and it is a crucial concept for the distribution of information over the internet. Virtualization is a focal point for supporting the sharing of cloud resources. The confidentiality of data management is an essential concern for the assurance of computer security, since cloud processing does not by itself guarantee effective privacy protection, and the details of data migration to the cloud remain hidden from clients. In this review, effective mobility techniques for private and secure cloud computing are studied to support infrastructure as a service.
Authored by Betty Samuel, Saahira Ahamed, Padmanayaki Selvarajan
We have seen a tremendous expansion of machine learning (ML) technology in Artificial Intelligence (AI) applications, including computer vision, voice recognition, and many others. The availability of vast amounts of data has spurred the rise of ML technologies, especially Deep Learning (DL). Traditional ML systems consolidate all data into a central location, usually a data center, which may breach privacy and confidentiality rules. The Federated Learning (FL) concept has recently emerged as a promising solution for mitigating problems of data privacy, legality, scalability, and unwanted bandwidth consumption. This paper outlines a vision for leveraging FL for better traffic steering predictions. Specifically, we propose a hierarchical FL framework that dynamically updates service function chains in a network by predicting future user demand and network state using the FL method.
Authored by Abdullah Bittar, Changcheng Huang
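A minimal sketch of the hierarchical aggregation idea is shown below: edge aggregators first average their clients' model parameters, and a global server then averages the edge-level results, weighted by sample counts. The plain FedAvg-style rule, the NumPy vectors, and all numbers are assumptions for illustration; the paper's traffic-steering model and update schedule are not reproduced here.

```python
# Minimal sketch of hierarchical federated averaging (FedAvg-style):
# clients -> edge aggregators -> global server. Model parameters are
# represented as NumPy vectors; the actual traffic-steering model is omitted.
import numpy as np

def weighted_average(params, weights):
    """Average parameter vectors weighted by local sample counts."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, params))

# Each edge aggregates its own clients' updates first...
edges = [
    [(np.array([0.9, 1.1]), 120), (np.array([1.0, 0.8]), 80)],   # edge 1 clients
    [(np.array([1.2, 1.0]), 200), (np.array([0.8, 1.2]), 100)],  # edge 2 clients
]
edge_updates, edge_sizes = [], []
for clients in edges:
    params = [p for p, _ in clients]
    sizes = [n for _, n in clients]
    edge_updates.append(weighted_average(params, sizes))
    edge_sizes.append(sum(sizes))

# ...then the global server averages the edge-level models.
global_model = weighted_average(edge_updates, edge_sizes)
print(global_model)
```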
From financial transactions to digital voting systems, identity management, and asset monitoring, blockchain technology is increasingly being developed for use in a wide range of applications. This study discusses the problem of security and privacy in the blockchain ecosystem, which is now a hot topic in the blockchain community. The survey’s goal is to investigate this issue by considering several sorts of attacks on the blockchain network in relation to the algorithms offered. A preliminary literature assessment suggests that some attention has been paid to the first use case; however, the second use case, to the best of our knowledge, deserves more attention when blockchain is used to investigate it. Because of the subsequent government-mandated secrecy around the implementation of DES, and the resulting distrust of the academic community, a movement was spawned that put a premium on individual privacy and decentralized control. This movement brought together the top minds in encryption and spawned the technology we know today as blockchain. This survey paper also explores the genesis of encryption, its early adoption, and the government meddling that eventually spawned the movement that gave birth to the ideas behind blockchain. It closes with a demonstration of blockchain technology used in a novel way to refactor the traditional design paradigms of databases.
Authored by Mohammed Mahmood, Osman Ucan, Abdullahi Ibrahim
With the rapid development of Internet of Things technology, the requirements for edge-node data processing capability are increasing, and GPU processors are becoming more widely applied in edge nodes. Current research on GPU virtualization mainly focuses on cloud data centers, with little work on embedded GPU virtualization in scenarios where edge-node resources are limited. In contrast to cloud data centers, embedded GPUs in edge nodes typically do not expose GPU utilization and video memory usage within each container, so traditional GPU virtualization technologies are ineffective for resource virtualization on embedded devices. This paper presents sGPU, a method for virtualizing embedded GPU resources in an edge computing environment. We integrate edge nodes with embedded GPUs into Kubernetes via the device-plugin mechanism, so users can package GPU applications in containers and deploy them with Kubernetes on edge nodes that have embedded GPUs. sGPU allows containers to share embedded GPU computing resources and divides a physical GPU into multiple virtual GPUs that can be allocated to containers on demand. To divide GPU computing power, we propose a multi-container GPU-sharing algorithm and implement it in sGPU; it ensures the most accurate computing-power segmentation when multiple containers compete for the computing resources of the same GPU at the same time. According to the experimental results, the average overhead of sGPU is 3.28%, and the accuracy of computing-power segmentation is 92.7% when a single container uses the GPU.
Authored by Xinyu Yang, Xin Wang, Lei Yan, Suzhi Cao
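The computing-power division can be pictured with a toy allocator like the one below, which proportionally scales down containers' requested compute shares when the GPU is oversubscribed. The function name, the proportional rule, and the numbers are assumptions for illustration; they are not the sGPU multi-container sharing algorithm or its Kubernetes device-plugin integration.

```python
# Toy illustration of dividing one physical GPU's compute capacity among
# containers by their requested shares. This is not the sGPU algorithm;
# it only shows the kind of bookkeeping such a scheduler needs.
def allocate_gpu_shares(requests, total_capacity=100):
    """requests: dict container -> requested compute share (arbitrary units).
    Returns dict container -> granted share, scaled down if oversubscribed."""
    demanded = sum(requests.values())
    scale = min(1.0, total_capacity / demanded) if demanded else 0.0
    return {c: round(share * scale, 2) for c, share in requests.items()}

# Three containers compete for the same embedded GPU (130 units requested,
# only 100 available), so every request is scaled proportionally.
print(allocate_gpu_shares({"infer-a": 50, "infer-b": 50, "train-c": 30}))
# {'infer-a': 38.46, 'infer-b': 38.46, 'train-c': 23.08}
```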
The 5G technology ensures reliable and affordable broadband access worldwide, increases user mobility, and provides reliable and affordable connectivity for a wide range of electronic devices such as the Internet of Things (IoT). SDN (Software Defined Networking), NFV (Network Function Virtualization), and cloud computing are three technologies that every technology provider or enabler tries to incorporate into their products to capitalize on the usability of the 5th generation. The emergence of 5G networks and services expands the range of security threats and leads to many challenges in terms of user privacy and security. The purpose of this research paper is to define the security challenges and threats associated with implementing this technology, particularly those affecting user privacy. The paper discusses some solutions to the challenges that occur when implementing 5G and also provides some guidance for the further development and implementation of a secure 5G system.
Authored by Aysha Alfaw, Alauddin Al-Omary
The incredible speed with which Information Technology (IT) has evolved in recent decades has brought about a major change in people's daily lives and in practically all areas of knowledge. The diversification of means of access using mobile devices and the evolution of technologies such as virtualization, combined with a growing demand from users for new systems and services adapted to these market trends, were the fuel for the emergence of a new paradigm, Cloud Computing. The general objective of this paper is to enable a privacy-preservation system, provided by third parties, through which Cloud Data Storage Services customers can continuously monitor the integrity of their files.
Authored by Zahraa Lafta, Muhammad Ilyas
In the era of big data, more and more smart-device applications are computing-intensive, raising a strong demand for offloading tasks to cloud data centers. However, this gives rise to network delay and data privacy leakage issues. Edge computing can effectively address latency, bandwidth occupation, and data privacy problems, but the deployment of applications is also limited by hardware architectures and resources, i.e., computing and storage resources. Therefore, combining virtualization technology with edge computing becomes important in order to rapidly deploy intelligent applications on an edge server or edge node. The traditional virtual machine (VM) is no longer suitable for resource-constrained devices, and container techniques such as Docker can ease these problems but still depend on an operating system. Unikernels are a state-of-the-art virtualization technology. In this paper, we combine Unikernel with edge computing to explore its application in an edge computing system, and we propose a Unikernel-based application architecture suitable for edge computing.
Authored by Shichao Chen, Ruijie Xu, Wenqiao Sun
At present, storing digital health records in the cloud for immediate use by patients and treatment providers is the most convenient and economical option for patients. Cloud-based Electronic Health Records contain information about patients and also provide updates to treatment providers. From the treatment providers’ perspective, it is easy to see their patients' previous health records, so the duplication of health records is eliminated. However, the major issues in this system are storing health records and protecting the privacy of patients' details in the cloud. Currently, many researchers are working constantly to maintain and update existing electronic health records in the cloud. The aim of this paper is to create virtual storage to secure electronic health records and to provide privacy and backups to customers.
Authored by Ramana B, Indiramma M