Protocol Verification - PCI Express (PCIe, Peripheral Component Interconnect Express) is a point-to-point, high-performance, serial interconnect protocol. PCIe outperforms older buses and offers greater bandwidth, making it well suited to a wide range of applications. PCIe features a layered architecture with three distinct layers, and packets are used to convey information between these layers. This paper implements verification IP (VIP) for the physical layer of PCI Express. The VIP is developed using the Universal Verification Methodology (UVM) and written in SystemVerilog.
Authored by Viraj Vaidya, Vaishali Ingale, Ashlesha Gokhale
Protocol Verification - Transactions involving multiple blockchains are implemented by cross-chain protocols. These protocols are based on smart contracts, programs that run on blockchains, executed by a network of computers. Verifying the runtime correctness of smart contracts is a problem of compelling practical interest since smart contracts can automatically transfer ownership of cryptocurrencies, electronic securities, and other valuable assets among untrusting parties. Such verification is challenging since smart contract execution is time sensitive, and the clocks on different blockchains may not be perfectly synchronized. This paper describes a method for runtime monitoring of blockchain executions. First, we propose a generalized runtime verification technique for verifying partially synchronous distributed computations against metric temporal logic (MTL) by exploiting bounded-skew clock synchronization. Second, we introduce a progression-based formula rewriting scheme for monitoring MTL specifications that employs SMT solving techniques, and we report experimental results.
Authored by Ritam Ganguly, Yingjie Xue, Aaron Jonckheere, Parker Ljung, Benjamin Schornstein, Borzoo Bonakdarpour, Maurice Herlihy
Protocol Verification - As a new generation of high-performance chip-to-chip packet transmission protocol, Interlaken is usually embedded in FPGA, ARM, DSP and other chips as an IP core. For a high-bandwidth network communication protocol, testing and verification are key to supporting users' highly reliable applications. This paper analyzes the framing layer and protocol layer of an Interlaken IP core embedded in an FPGA, encapsulates predefined data at the client end of the protocol layer, tests operation at a transmission rate of 100 Gbps, and injects errors into one of the four user-side segments to simulate packet encapsulation errors. The study found that when any control word in any segment is lost during packet transmission, about 1/2 of the packets are received and about 1/4 are received correctly.
Authored by Chao Zhang, Huabo Sun, Zichen Zhang, Lipei Zhang, Sun Lei
Protocol Verification - Symbolic protocol verification generally abstracts probabilities away, considering computations that succeed only with negligible probability, such as guessing random numbers or breaking an encryption scheme, as impossible. This abstraction, sometimes referred to as the perfect cryptography assumption, has proven very useful as it simplifies automation of the analysis. However, probabilities may also appear in the control flow, where they are generally not negligible. In this paper we consider a framework for symbolic protocol analysis with a probabilistic choice operator: the probabilistic applied pi calculus. We define and explore the relationships between several behavioral equivalences. In particular, we show the need for randomized schedulers and exhibit a counterexample to a result in a previous work that relied on nonrandomized ones. As in other frameworks that mix both non-deterministic and probabilistic choices, schedulers may sometimes be unrealistically powerful. We therefore consider two subclasses of processes that avoid this problem. In particular, when considering purely non-deterministic protocols, as is done in classical symbolic verification, we show that a probabilistic adversary has—maybe surprisingly—a strictly superior distinguishing power for may testing, which, when the number of sessions is bounded, we show to coincide with purely possibilistic similarity.
Authored by Vincent Cheval, Raphaëlle Crubillé, Steve Kremer
Protocol Verification - Microservices is an architectural style that enables the design of applications as a series of services that are independent from each other in function, maintainability, deployability and scalability. Classical distributed transaction protocols such as two-phase commit (2PC) are not suitable for this pattern, because they require all participants in a transaction to commit or roll back before the transaction can proceed. This is why the saga pattern was chosen to replace 2PC. Saga is a mechanism belonging to the 'database per service' pattern that guarantees distributed transactions spanning multiple services. The saga pattern provides transaction management using a sequence of local transactions. A saga can be rolled back by compensating transactions, which ensure one of two outcomes: either all operations complete successfully, or the corresponding compensating transactions are run to undo the changes. In this paper, we verify the saga pattern using the Maude language. We use the orchestration-based saga protocol and present a way to verify it.
Authored by Manel Djerou, Okba Tibermacine
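The rollback mechanism described above can be sketched in a few lines. This is an illustrative sketch only: the paper itself specifies and checks the protocol in Maude, not in executable code, and all names below (`SagaStep`, `run_saga`, the order-saga steps) are hypothetical.

```python
# Minimal sketch of an orchestration-based saga with compensating
# transactions. The orchestrator runs each local transaction in order;
# on failure it runs the compensations of the completed steps in
# reverse order, so either everything commits or everything is undone.

class SagaStep:
    def __init__(self, name, action, compensation):
        self.name = name
        self.action = action              # the local transaction
        self.compensation = compensation  # undoes the local transaction

def run_saga(steps):
    """Run local transactions in order; on failure, compensate the
    already-completed steps in reverse order (the saga rollback)."""
    completed = []
    for step in steps:
        try:
            step.action()
        except Exception:
            for done in reversed(completed):
                done.compensation()
            return False   # saga rolled back
        completed.append(step)
    return True            # every local transaction committed

# Example: an order saga whose second local transaction fails.
log = []

def reserve_stock():      log.append("reserved")
def cancel_reservation(): log.append("unreserved")
def charge_payment():     raise RuntimeError("payment declined")
def refund_payment():     log.append("refunded")

saga = [
    SagaStep("reserve", reserve_stock, cancel_reservation),
    SagaStep("pay", charge_payment, refund_payment),
]
committed = run_saga(saga)
```

Because the payment step fails before completing, only the reservation is compensated, leaving the system as if the saga had never started.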
Protocol Verification - Bus protocols are critical to the operation of a system, as all communications are handled through the bus by following a predetermined structure. A verification IP is designed to check whether the system follows the specified protocol for seamless communication between the multiple blocks in the system. As process technology shrinks, the number of sub-blocks in a system increases, and thus so does the verification complexity. In a traditional verification architecture, the design under test (DUT) signals are individually connected to the verification environment; by binding an interface to the sub-blocks, the signals are encapsulated and simpler to handle. In this work, an AHB verification module is designed by employing the interface-binding technique.
Authored by Vignesh Mahendra, D.S. Sam, A Hernisha, Josephine Atchaya
Protocol Verification - Attribute-based encryption (ABE) is an extension of identity-based encryption and public-key encryption. It can achieve fine-grained access control and a one-to-many encryption mode, making it suitable for practical applications. In addition to mathematical proofs, it is essential to verify its security properties using different protocols. This paper proposes two key exchange protocols to securely exchange the user secret keys in ABE. ProVerif, an automated cryptographic protocol verifier, is used for the protocol verification. We specifically aim to protect the confidentiality of the generated keys. The proposed protocols are formally analysed and their security property has been formally proved.
Authored by Baasansuren Bat-Erdene, Yuping Yan, Mohammed Kamel, Peter Ligeti
Protocol Verification - System-on-Chip (SoC) designs are becoming more complex nowadays. Multiple Intellectual Properties (IPs) are integrated into a single SoC, and these IPs communicate via various bus protocols. Verification takes almost 70\% of the design cycle; hence, a reusable verification environment for these commonly used protocols is very important. In this paper, the AXI4-Lite protocol is verified using a UVM-based testbench structure. To verify all channels of the AXI protocol, data is written into a 4-bit shift register and read back. The UVM testbench acts as a master device that sends all control information, data and addresses to the register through the AXI interface. To assess verification goal achievement, coverpoints are written and functional and code coverage reports are analyzed. The Synopsys VCS tool is used for the simulation.
Authored by Hardi Sangani, Usha Mehta
Protocol Verification - Display Serial Interface (DSI) is a high-speed serial interface standard. It supports display and touch screens in mobile devices such as smartphones, laptops, tablets, and other platforms. DSI describes several layers that define the detailed interconnect between a host processor and a peripheral device in a mobile system. The targeted DSI protocol layer is the low-level layer in the standard, responsible for byte organization, addition of error-checking information, and packet formulation. In this paper, we propose a constrained-random Coverage-Driven Verification (CDV) approach for the DSI low-level protocol layer using the Universal Verification Methodology (UVM). This approach could be the basis of a standardized Verification Intellectual Property (VIP) to test and verify the standardized DSI layer. We provide a well-established base of a reusable and scalable UVM environment that can verify the DSI protocol layer using various techniques, such as an error injection mechanism, SystemVerilog Assertions (SVA), and directed UVM sequences that aim to cover the different real-life scenarios between the host processor and the peripheral device. Our results show that we can detect all inserted errors, assuring the functionality of the DSI low-level protocol layer. Different real-life scenarios between the host processor and the peripheral device are covered with 100\% functional coverage and 93\% code coverage.
Authored by Ahmed Ali, Ahmed Shalaby, Sherif Saif, Mohamed Taher
Privacy Policies and Measurement - The fundamental goal of tone mapping is to reproduce a given scene or image within a display device so that it closely matches real-world brightness as perceived by the human eye. Medical imaging uses such procedures to reduce noise and sharpen details, enhancing the visual representation of the image. Because details play such an important role in identifying and treating disease, it is critical to concentrate on the most important features when displaying medical images. Principal component analysis offers a method for reducing the complexity of high-dimensional data: it approximates high-dimensional data with fewer dimensions, each principal component being a linear combination of the underlying components ordered by the amount of information it carries. This approach can be applied to a wide range of recurring problems, and it also highlights how Big Data may be used to analyse Internet and image data sources effectively, raising concerns of privacy and methods for securing the components of pattern environments and systems. Edges, on the other hand, which are almost always interesting features of an image, are characterized by sharp transitions in gray levels; averaging filters therefore have the undesirable side effect of blurring edges. Another application of this kind of method is the smoothing of false contours that result from using an insufficient number of gray levels.
Authored by Rajeev Kumar, Neha Sharma, Sandeep Kumar
Privacy Policies and Measurement - The Function-as-a-Service cloud computing paradigm has made large-scale application development convenient and efficient as developers no longer need to deploy or manage the necessary infrastructure themselves. However, as a consequence of this abstraction, developers lose insight into how their code is executed and data is processed. Cloud providers currently offer little to no assurance of the integrity of customer data. One approach to robust data integrity verification is the analysis of data provenance—logs that describe the causal history of data, applications, users, and non-person entities. This paper introduces ProProv, a new domain-specific language and graphical user interface for specifying policies over provenance metadata to automate provenance analyses.
Authored by Kevin Dennis, Shamaria Engram, Tyler Kaczmarek, Jay Ligatti
Privacy Policies and Measurement - The emergence of mobile edge computing (MEC) puts unprecedented pressure on privacy protection, even though computation offloading improves computation performance in terms of energy consumption and computation delay. To this end, we propose a deep reinforcement learning (DRL)-based computation offloading scheme to jointly optimize privacy protection and computation performance. The privacy exposure risk caused by offloading history is investigated, and an analysis metric is defined to evaluate the privacy level. To find the optimal offloading strategy, an algorithm combining actor-critic, off-policy, and maximum entropy is proposed to accelerate the learning rate. Simulation results show that the proposed scheme outperforms other benchmarks.
Authored by Zhengjun Gao, Guowen Wu, Yizhou Shen, Hong Zhang, Shigen Shen, Qiying Cao
Privacy Policies and Measurement - Email is one of the oldest and most popular applications on today’s Internet and is used for business and private communication. However, most emails are still susceptible to being intercepted or even manipulated by the servers transmitting the messages. Users with S/MIME certificates can protect their email messages. In this paper, we investigate the market for S/MIME certificates and analyse the impact of the ordering and revocation processes on the users’ privacy. We complete those processes for each vendor and investigate the number of requests, the size of the data transfer, and the number of trackers on the vendor’s Web site. We further collect all relevant documents, including privacy policies, and report on their number of words, readability, and quality. Our results show that users must make at least 86 HTTP requests and transfer at least 1.35 MB to obtain a certificate, and 178 requests and 2.03 MB to revoke a certificate. All but one vendor employ third-party tracking during these processes, which causes between 43 and 354 third-party requests. Our results further show that the vendors’ privacy policies are at least 1701 words long, which takes a user approximately 7 minutes to read; the longest policy takes approximately half an hour. Measurements of the readability of all vendors’ privacy policies indicate that users need a level of education nearly equivalent to a bachelor’s degree to comprehend the texts. We also report on the quality of the policies and find that the vendors achieve compliance scores between 45\% and 90\%. With our method, vendors can measure their impact on the users’ privacy and create better products. On the other hand, users benefit from an analysis of all S/MIME certificate vendors in that they can make an informed choice of their vendor based on the objective metrics obtained by our study.
Ultimately, the results help to increase the prevalence of encrypted emails and render society less susceptible to surveillance.
Authored by Tobias Mueller, Max Hartenstein
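The reading-time figures in the abstract above follow from word counts and an assumed average reading speed. The sketch below uses 250 words per minute, a commonly cited average; the exact rate the authors used is not stated in the abstract, so treat the constant as an assumption.

```python
# Reading-time estimate behind figures like "1701 words ~ 7 minutes".
# WORDS_PER_MINUTE = 250 is an assumed average reading speed, not a
# value taken from the paper.

WORDS_PER_MINUTE = 250

def reading_time_minutes(word_count, wpm=WORDS_PER_MINUTE):
    """Estimated minutes needed to read `word_count` words."""
    return word_count / wpm

shortest = reading_time_minutes(1701)   # the shortest policy in the study
# At this rate, a policy taking half an hour would be about
# 30 * 250 = 7500 words long.
```

Plugging in the study's minimum of 1701 words yields roughly 6.8 minutes, consistent with the "approximately 7 minutes" reported above.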
Privacy Policies and Measurement - With increased reliance on digital storage for personal, financial, medical, and policy information comes a greater demand for robust digital authentication and cybersecurity protection measures. Current security options include alpha-numeric passwords, two-factor authentication, and biometric options such as fingerprint or facial recognition. However, each of these methods has drawbacks. This project leverages the fact that physical handwritten signatures are still prevalent in society, and that the thoroughly trained process and motion of a handwritten signature is unique to every individual. Thus, a writing stylus that can authenticate its user via inertial signature detection is proposed, which classifies inertial measurement features for user identification. The current prototype consists of two triaxial accelerometers, one mounted at each of the stylus’ terminal ends. Features extracted from how the pen is held, stroke styles, and writing speed affect the stylus tip accelerations, which enables unique signature detection and deters forgery attacks. Novel, manual spatiotemporal features based on such metrics were proposed, and a multi-layer perceptron was utilized for binary classification. Results of a preliminary user study are promising, with an overall accuracy of 95.7\%, sensitivity of 100\%, and recall rate of 90\%.
Authored by Divas Subedi, Isabella Yung, Digesh Chitrakar, Kevin Huang
Privacy Policies and Measurement - Although the number of smart Internet of Things (IoT) devices has grown in recent years, the public's perception of how effectively these devices secure IoT data has been questioned. Many IoT users do not have a good level of confidence in the security or privacy procedures implemented within IoT smart devices for protecting personal IoT data. Moreover, determining the level of confidence end users have in their smart devices is becoming a major challenge. In this paper, we present a study that focuses on identifying the privacy concerns IoT end users have when using IoT smart devices. We investigated multiple smart devices and conducted a survey to identify users' privacy concerns. Furthermore, we identify five IoT privacy-preserving (IoTPP) control policies that we define and employ in comparing the privacy measures implemented by various popular smart devices. Results from our study show that over 86\% of participants are very or extremely concerned about the security and privacy of their personal data when using smart IoT devices such as Google Nest Hub or Amazon Alexa. In addition, our study shows that a significant number of IoT users may not be aware that their personal data is collected, stored or shared by IoT devices.
Authored by Daniel Joy, Olivera Kotevska, Eyhab Al-Masri
Privacy Policies and Measurement - We report on the ideas and experiences of adapting Brane, a workflow execution framework, for use cases involving medical data exchange and processing. These use cases impose new requirements on the system to enforce policies encoding safety properties, ranging from access control to legal regulations pertaining to data privacy. Our approach emphasizes users’ control over the extent to which they cooperate in distributed execution, at the cost of revealing information about their policies.
Authored by Christopher Esterhuyse, Tim Muller, Thomas Van Binsbergen, Adam Belloum
Privacy Policies and Measurement - It is estimated that over 1 billion Closed-Circuit Television (CCTV) cameras are operational worldwide. The advertised main benefits of CCTV cameras have always been the same: physical security, safety, and crime deterrence. The current scale and rate of deployment of CCTV cameras bring additional research and technical challenges for CCTV forensics, as well as for privacy enhancements.
Authored by Hannu Turtiainen, Andrei Costin, Timo Hämäläinen, Tuomo Lahtinen, Lauri Sintonen
Privacy Policies and Measurement - Modelling and analyzing massive policy discourse networks is of great importance in critical policy studies and has recently attracted increasing research interest. Yet effective representation schemes, quantitative policymaking metrics and proper analysis methods remain largely unexplored. To address the above challenges, using Latent Dirichlet Allocation embeddings, we propose a government policy network that models multiple entity types and the complex relationships between them. Specifically, we constructed the government policy network from approximately 700 rural innovation and entrepreneurship policies released by the Chinese central government and eight provincial governments over the past eight years. We verified that the entity degree in the policy network follows a power-law distribution. Moreover, we propose a metric, coordination strength, to evaluate the coordination between central and local departments, and we find that this metric effectively reflects their coordination relationship. This study could serve as a theoretical basis for applications such as policy discourse relationship prediction and departmental collaborative analysis.
Authored by Yilin Kang, Renwei Ou
Privacy Policies and Measurement - First introduced as a way of recording client-side state, third-party services have proliferated widely on the Web and have become a fundamental part of the Web ecosystem. However, there is widespread concern that third-party vendors abuse this position to track and profile individuals online for commercial, analytical and various other purposes. This paper builds a platform called “PRIVIS”, which aims to provide unique insights into how the privacy ecosystem is structured and affected, through the analysis of data that stems from real users. First, to showcase what can be learned from this ecosystem through a data-driven analysis across countries, time and first-party categories, PRIVIS visualises data gathered from over 10K Chrome installations. It also equips participants with the means to collect and analyze their own data so that they can assess, from their own perspective, how their browsing habits are shared with third parties. Based on real-user datasets, third-party quantity is not the only measure of web privacy risk; the measure proposed in this paper is how well third-party providers know their users. Second, PRIVIS studies the interplay between user location, special periods (after the epidemic outbreak) and the overall number of third parties observed. The visualisation suggests that lockdown policies facilitated growth in the number of third parties: collectively, there was more third-party activity compared with both before the lockdowns and the corresponding periods of the previous year, and of all the lockdown stages, the first showed the most aggressive growth.
Authored by Xuehui Hu
Privacy Policies - Authentication, authorization, and trust verification are central parts of an access control system. The conditions for granting access in such a system are collected in access policies. Since access conditions are often complex, dedicated languages – policy languages – for defining policies are in use.
Authored by Stefan More, Sebastian Ramacher, Lukas Alber, Marco Herzl
Privacy Policies - Companies and organizations inform users of how they handle personal data through privacy policies on their websites. Particular information, such as the purposes of collecting personal data and what data are provided to third parties, is required to be disclosed by laws and regulations. An example of such a law is the Act on the Protection of Personal Information in Japan. In addition to privacy policies, an increasing number of companies are publishing security policies to express compliance and transparency of corporate behavior. However, it is challenging to keep these policies up to date with legal requirements due to periodic law revisions and rapid business changes. In this study, we developed a method for analyzing privacy policies to check whether companies comply with legal requirements. In particular, the proposed method classifies policy contents using a convolutional neural network and evaluates privacy compliance by comparing the classification results with legal requirements. In addition, we analyzed security policies using the proposed method, to confirm whether the combination of privacy and security policies contributes to privacy compliance. In this study, we collected and evaluated 1,304 privacy policies and 140 security policies of Japanese companies. The results revealed that over 90\% of privacy policies sufficiently describe the handling of personal information by first parties, user rights, and security measures, while over 90\% insufficiently describe data retention and the specific audience. These differences in the number of descriptions depend on industry guidelines and business characteristics. Moreover, security policies were found to improve the compliance rates of 46 out of 140 companies by describing security practices not included in privacy policies.
Authored by Keika Mori, Tatsuya Nagai, Yuta Takata, Masaki Kamizono
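The compliance-evaluation step described above, comparing classification results against legal requirements, can be sketched very simply once the classifier has labeled each policy segment. This is a minimal sketch: the paper uses a CNN for the classification stage, and the category names and requirement set below are illustrative, not the paper's actual label set.

```python
# Sketch of compliance evaluation: given policy segments already labeled
# with a content category (the paper obtains labels from a CNN), check
# that every legally required category is covered by some segment.
# REQUIRED_CATEGORIES is a hypothetical stand-in for the legal
# requirements derived from a law such as Japan's APPI.

REQUIRED_CATEGORIES = {
    "purpose_of_collection",
    "third_party_provision",
    "data_retention",
}

def evaluate_compliance(classified_segments, required=REQUIRED_CATEGORIES):
    """classified_segments: iterable of (segment_text, predicted_category).
    Returns whether all required categories are described, and which are
    missing."""
    described = {category for _, category in classified_segments}
    missing = required - described
    return {"compliant": not missing, "missing": sorted(missing)}

policy = [
    ("We collect data to provide the service.", "purpose_of_collection"),
    ("Data may be shared with partners.", "third_party_provision"),
]
report = evaluate_compliance(policy)   # data_retention is not described
```

On this toy input the policy is flagged non-compliant because no segment was classified as describing data retention, mirroring the paper's finding that retention is among the most frequently under-described topics.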
Privacy Policies - A privacy policy is a legal document in which users are informed about the data practices used by an organization. Past research indicates that privacy policies are long, include incomplete information, and are hard to read. Research also shows that users are not inclined to read these long and verbose policies. The solution we propose in this paper is to build tools that can assist users in finding content relevant to their queries in privacy policies using a semantic approach. This paper presents the development of a domain ontology for privacy policies so that the sentences relevant to a privacy question can be automatically identified. For this study, we built an ontology and validated and evaluated it using qualitative and quantitative methods, including competency questions, data-driven evaluation, and user evaluation. Results from the evaluation of the ontology showed that the amount of text to read was significantly reduced, as users had to read only selected text ranging from 1\% to 30\% of a privacy policy. The amount of content selected for reading depended on the query and its associated keywords. This finding shows that the time required to read a policy was significantly reduced, as the ontology directed the user to the content related to a given query. This was also confirmed by the user study session, whose results indicated that users found the ontology helpful in finding relevant sentences compared to reading the entire policy.
Authored by Jasmin Kaur, Rozita Dara, Ritu Chaturvedi
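The retrieval idea above, using an ontology to direct a reader to only the sentences relevant to a query, can be sketched with a tiny keyword map. The concept-to-keyword table below is hypothetical (the paper's actual ontology is far richer); the sketch only illustrates how selection shrinks the text a user must read.

```python
# Illustrative sketch of ontology-directed reading: map a user's query
# concept to its associated keywords, then surface only the policy
# sentences containing those keywords. ONTOLOGY_KEYWORDS is a made-up
# miniature stand-in for the paper's domain ontology.

ONTOLOGY_KEYWORDS = {
    "data_sharing": {"share", "third party", "partners", "disclose"},
    "retention": {"retain", "store", "delete", "keep"},
}

def relevant_sentences(policy_sentences, query_concept):
    """Return only the sentences matching the concept's keywords."""
    keywords = ONTOLOGY_KEYWORDS[query_concept]
    return [s for s in policy_sentences
            if any(k in s.lower() for k in keywords)]

policy = [
    "We collect your email address when you register.",
    "We may disclose information to our partners.",
    "Data is retained for two years and then deleted.",
]
hits = relevant_sentences(policy, "data_sharing")
fraction_to_read = len(hits) / len(policy)   # 1 of 3 sentences selected
```

Even on this three-sentence toy policy the reader's load drops to a third, echoing the paper's observation that ontology-selected text ranged from 1\% to 30\% of a full policy.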
Privacy Policies - In the era of the Internet of Things (IoT), smart logistics is quietly rising, but user privacy security has become an important factor hindering its development. Because privacy policies play a positive role in protecting user privacy and improving corporate reputation, they have become an important part of smart logistics and a focus of express companies. In this paper, we construct a privacy policy evaluation index system for express companies and, for qualitative indicators that are difficult to evaluate, introduce the cloud model evaluation method, which combines qualitative and quantitative analysis, to comprehensively evaluate the privacy policies of five express companies in China across four indicators: general situation, user informed consent, information security control and personal rights protection. The results show that, overall, the privacy policies of the five express companies have not reached the "good" level, and there are certain gaps between the privacy policies of different companies. Comparing indicators, the five express companies generally score relatively well; however, the overall score on the information security control index is relatively poor, and the other two indexes vary considerably. The cloud model evaluation method is highly applicable to the evaluation of express company privacy policies, providing a reference for improving privacy policy formulation and raising the privacy protection level of China's express delivery industry in the IoT era.
Authored by Qian Zhang, Weihong Xie, Xinxian Pan
Privacy Policies - Data privacy laws like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) provide guidelines for collecting personal information from individuals and processing it. These frameworks also require service providers to inform their customers how clients' data is gathered, used, protected, and shared with other parties. A privacy policy is a legal document used by service providers to inform users about how their personal information is collected, stored, and shared with other parties. It is expected that privacy policies adhere to the data privacy regulations. However, it has been observed that some policies may deviate from the practices recommended by data protection regulations. Detecting instances where a policy may violate a certain regulation is quite challenging because privacy policy text is long and complex, and there are numerous regulations. To address this problem, we have designed an approach to automatically detect whether a policy violates the articles of GDPR. This paper demonstrates how we have used Natural Language Inference (NLI) tasks to compare privacy content against the GDPR and detect privacy policy text in violation of GDPR. We provide two designs using the Stanford Natural Language Inference (SNLI) and the Multi-Genre Natural Language Inference (MultiNLI) datasets. The results from both designs are promising, as our approach detected the deviations with 76\% accuracy.
Authored by Abdullah Alshamsan, Shafique Chaudhry
Privacy Policies - Privacy policies, despite the important information they provide about the collection and use of one’s data, tend to be skipped over by most Internet users. In this paper, we seek to make privacy policies more accessible by automatically classifying text samples into web privacy categories. We use natural language processing techniques and multiple machine learning models to determine the effectiveness of each method on the classification task. We also explore the effectiveness of these methods in classifying the privacy policies of Internet of Things (IoT) devices.
Authored by Jasmine Carson, Lisa DiSalvo, Lydia Ray
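The classification task described above, assigning a policy text sample to a web-privacy category, can be illustrated with a deliberately tiny pipeline. The paper trains multiple machine learning models on real data; the nearest-centroid bag-of-words toy below, with made-up training snippets and category names, is only a sketch of the pipeline's shape, not the authors' method.

```python
# Toy sketch of privacy-policy text classification: build a bag-of-words
# "centroid" per category from a few training snippets, then assign a new
# sample to the category whose centroid it overlaps most. TRAIN and the
# category labels are hypothetical examples, not the paper's dataset.

from collections import Counter

TRAIN = {
    "first_party_collection": [
        "we collect your name and email address",
        "information we collect when you register",
    ],
    "third_party_sharing": [
        "we share data with advertising partners",
        "information disclosed to third parties",
    ],
}

def vectorize(text):
    """Bag-of-words vector as a word -> count mapping."""
    return Counter(text.lower().split())

def similarity(a, b):
    """Unnormalized dot product of two word-count vectors."""
    return sum(a[w] * b[w] for w in a)

CENTROIDS = {label: vectorize(" ".join(texts))
             for label, texts in TRAIN.items()}

def classify(sample):
    vec = vectorize(sample)
    return max(CENTROIDS, key=lambda label: similarity(vec, CENTROIDS[label]))

label = classify("your data may be shared with our partners")
```

A real system would replace the word-overlap score with trained models (the paper compares several) and far larger labeled corpora, but the interface, text in and category label out, is the same.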