Privacy Policies - Smart contracts running on a blockchain potentially disclose all of their data to the participants of the chain. Because privacy is essential in many application areas, smart contracts may therefore not be considered a good option. To overcome this limitation, this paper introduces Stone, a privacy preservation system for smart contracts. With Stone, an arbitrary Solidity smart contract can be combined with a separate privacy policy written in JSON, which prevents the contract's storage data from being made public. Because this approach is convenient for policy developers as well as smart contract programmers, we envision that it will be practically acceptable for real-world applications.
Authored by Jihyeon Kim, Dahyeon Jeong, Jisoo Kim, Eun-Sun Cho
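As a rough illustration of the idea of pairing a contract with a separate JSON privacy policy, the following Python sketch models a hypothetical policy and a default-deny read check. The field names ("contract", "rules", "variable", "readers") and the enforcement function are assumptions made for illustration, not the format defined in the Stone paper.

```python
# Hypothetical sketch of a per-contract privacy policy and an off-chain read check.
# The policy structure and role names are illustrative, not Stone's actual format.
import json

policy_json = """
{
  "contract": "Auction",
  "rules": [
    {"variable": "highestBid",    "readers": ["owner", "bidder_self"]},
    {"variable": "highestBidder", "readers": ["owner"]}
  ]
}
"""

def may_read(policy: dict, variable: str, role: str) -> bool:
    """Return True if the given role is allowed to read the storage variable."""
    for rule in policy["rules"]:
        if rule["variable"] == variable:
            return role in rule["readers"]
    return False  # default-deny for variables without an explicit rule

policy = json.loads(policy_json)
print(may_read(policy, "highestBid", "owner"))           # True
print(may_read(policy, "highestBidder", "bidder_self"))  # False
```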
Privacy Policies - The motive behind this research paper is to outline recently introduced social media encryption policies and the impact they will have on user privacy. With close to no data protection laws in the country, all social media platforms pose a threat to one's privacy. The various new privacy policies that have been put in place across different social media platforms tend to take away the user's choice over whether they want their data shared with other social media apps or not. Since WhatsApp, Facebook, and Instagram are all Facebook-owned, any data shared on one platform crosses over into the database of another, regardless of whether you have an account there, completely undermining the concept of consensual sharing of data. This paper will further discuss how the nature of encryption in India will significantly affect India's newly recognised fundamental right, the Right to Privacy. The various policy developments raise various user-violation concerns, and these will be the focus of this research paper.
Authored by Akshit Talwar, Alka Chaudhary, Anil Kumar
Privacy Policies - Privacy policies inform users of the data practices and access protocols employed by organizations and their digital counterparts. Research has shown that users often feel that these privacy policies are lengthy and complex to read and comprehend. However, it is critical for people to be aware of the data access practices employed by organizations. Hence, much research has focused on automatically extracting privacy-specific artifacts from the policies, predominantly by using natural language classification tools. However, these classification tools are designed primarily for the classification of paragraphs or segments of the policies. In this paper, we report on our research where we identify the gap in classifying policies at a segment level, and provide an alternate definition of segment classification using sentence classification. To this end, we train and evaluate sentence classifiers for privacy policies using BERT and XLNet. Our approach demonstrates improvements in prediction quality over existing models and hence surpasses the current baselines for classification models, without requiring additional parameter or model tuning. Using our sentence classifiers, we also study topical structures in the Alexa top 5000 website policies, in order to identify and quantify the diffusion of information pertaining to privacy-specific topics in a policy.
Authored by Andrick Adhikari, Sanchari Das, Rinku Dewri
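To make the sentence-classification setup concrete, the following is a minimal sketch of scoring privacy-policy sentences with a BERT encoder via the Hugging Face transformers library. The label set, the checkpoint, and the (untrained) classification head are assumptions for illustration; the authors' fine-tuned models and category taxonomy may differ.

```python
# Minimal sketch: sentence-level classification of privacy-policy text with BERT.
# The labels and the freshly initialized classification head are illustrative only.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

LABELS = ["First Party Collection", "Third Party Sharing", "Data Retention", "Other"]  # assumed

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)

sentences = [
    "We share your email address with advertising partners.",
    "Data is retained for no longer than 90 days.",
]

inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # one row of class scores per sentence
predictions = logits.argmax(dim=-1)
for sentence, label_id in zip(sentences, predictions):
    print(f"{LABELS[label_id]}: {sentence}")
```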
Privacy Policies - Privacy policy statements are an essential approach to self-regulation by website operators in the area of personal privacy protection. However, these policies are often lengthy and difficult to understand, and users appear to actually read the privacy policy in only a few cases. To address these obstacles, we propose a framework, the Privacy Policy Analysis Framework for Automatic Annotation and User Interaction (PPAI), that stores, classifies, and categorizes queries over natural language privacy policies. At the core of PPAI is a privacy-centric language model that consists of a smaller fine-grained dataset of privacy policies and a new hierarchy of neural network classifiers that take into account privacy practices at the level of both high-level aspects and fine-grained details. Our experimental results show that the eight readability metrics of the dataset exhibit a strong correlation. Furthermore, PPAI's neural network classifier achieves an accuracy of 0.78 in the multi-class classification task. The robustness experiments reached higher accuracy than the baseline and remained robust even with a small amount of labeled data.
Authored by Han Ding, Shaohong Zhang, Lin Zhou, Peng Yang
Predictive Security Metrics - Most IoT systems involve IoT devices, communication protocols, remote clouds, IoT applications, mobile apps, and the physical environment. However, existing IoT security analyses only focus on a subset of all the essential components, such as device firmware or communication protocols, and ignore IoT systems' interactive nature, resulting in limited attack detection capabilities. In this work, we propose IOTA, a logic-programming-based framework to perform system-level security analysis for IoT systems. IOTA generates attack graphs for IoT systems, showing all of the system resources that can be compromised and enumerating potential attack traces. In building IOTA, we design novel techniques to scan IoT systems for individual vulnerabilities and further create generic exploit models for IoT vulnerabilities. We also identify and model physical dependencies between different devices as they are unique to IoT systems and are employed by adversaries to launch complicated attacks. In addition, we utilize NLP techniques to extract IoT app semantics based on app descriptions. IOTA automatically translates vulnerabilities, exploits, and device dependencies to Prolog clauses and invokes MulVAL to construct attack graphs. To evaluate vulnerabilities' system-wide impact, we propose two metrics based on the attack graph, which provide guidance on fortifying IoT systems. Evaluation on 127 IoT CVEs (Common Vulnerabilities and Exposures) shows that IOTA's exploit modeling module achieves over 80% accuracy in predicting vulnerabilities' preconditions and effects. We apply IOTA to 37 synthetic smart home IoT systems based on real-world IoT apps and devices. Experimental results show that our framework is effective and highly efficient. Among the 27 shortest attack traces revealed by the attack graphs, 62.8% are not anticipated by the system administrator. It only takes 1.2 seconds to generate and analyze the attack graph for an IoT system consisting of 50 devices.
Authored by Zheng Fang, Hao Fu, Tianbo Gu, Pengfei Hu, Jinyue Song, Trent Jaeger, Prasant Mohapatra
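For intuition about what an attack graph and a "shortest attack trace" look like, here is a small Python sketch using networkx with a made-up smart-home graph. It is only an analogy for the analysis step: IOTA itself derives its graphs from Prolog clauses and MulVAL, and the node names and CVEs below are invented.

```python
# Illustrative sketch only: an IoT attack graph as a directed graph, plus extraction
# of the shortest attack trace from the Internet-facing entry point to a target.
# Nodes and edges are fabricated; IOTA builds its graphs via Prolog/MulVAL.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("internet", "router:CVE-XXXX-1"),        # hypothetical vulnerability nodes
    ("router:CVE-XXXX-1", "hub:weak-auth"),
    ("hub:weak-auth", "smart-lock:unlock"),
    ("internet", "camera:CVE-XXXX-2"),
    ("camera:CVE-XXXX-2", "smart-lock:unlock"),
])

trace = nx.shortest_path(g, source="internet", target="smart-lock:unlock")
print(" -> ".join(trace))   # one candidate shortest attack trace
print("number of attack traces:",
      sum(1 for _ in nx.all_simple_paths(g, "internet", "smart-lock:unlock")))
```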
Predictive Security Metrics - With the emergence of Zero Trust (ZT) Architecture, industry leaders have been drawn to the technology because of its potential to handle a high level of security threats. The Zero Trust Architecture (ZTA) is paving the path for a security industrial revolution by eliminating location-based implicit access and focusing on asset, user, and resource security. Software Defined Perimeter (SDP) is a secure overlay network technology that can be used to implement a Zero Trust framework. SDP is a next-generation network technology that allows the network architecture to be hidden from the outside world. It also hides the overlay communication from the underlay network by employing encrypted communications. With encrypted traffic, detecting abnormal behavior of entities on an overlay network becomes exceedingly difficult, so an automated system is required. In this paper, we propose a method for understanding the normal behavior of deployed policies by mapping network usage behavior to the policy. To perform this mapping, an Apache Spark pipeline collects and processes the streaming overlay monitoring data generated by the built-in fabric API. It sends the extracted metrics to Prometheus for storage, and the data is then used for machine learning training and prediction. The model predicts the cluster-id of the link to which it belongs, and the cluster-ids are mapped onto the policies. To validate a policy's legitimacy, the labeled policy's hash is compared to the actual policy hash obtained from the blockchain. Unverified policies are reported to the SDP controller for additional action, such as defining new policy behavior or marking uncertain policies.
Authored by Waleed Akbar, Javier Rivera, Khan Ahmed, Afaq Muhammad, Wang-Cheol Song
Predictive Security Metrics - Network security personnel are expected to provide uninterrupted services by handling attacks irrespective of the modus operandi. Multiple defensive approaches to prevent, curtail, or mitigate an attack are the primary responsibilities of security personnel. Since predicting security attacks is an additional technique currently used by most organizations to accurately measure the security risks related to overall system performance, several approaches have been used to predict network security attacks. However, achieving high prediction accuracy, analyzing very large datasets, and obtaining reliable datasets remain the major constraints. Uncertain behavior would be subjected to verification and validation by the network administrator. The KDD CUP 99 and NSL-KDD datasets were both used in this research. NSL-KDD yields 0.997 average micro and macro accuracy, with an average LogLoss of 0.16 and an average LogLossReduction of 0.976. Log-loss reduction ranges from negative infinity to 1, where 1 represents perfect prediction and 0 represents mean prediction; it should be as close to 1 as possible for a good model. Log-loss is an evaluation metric that characterizes the accuracy of a classifier whose predictions are probability values between 0.00 and 1.00; it should be as close to zero as possible. This paper proposes a FastTree model for predicting network security incidents. The ML.NET framework and the FastTree regression technique provide high prediction accuracy and the ability to analyze large datasets of normal, abnormal, and uncertain behaviors.
Authored by Marcus Magaji, Abayomi Jegede, Nentawe Gurumdimma, Monday Onoja, Gilbert Aimufua, Ayodele Oloyede
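As a worked example of the two metrics discussed in the abstract above, the following Python snippet computes log-loss and log-loss reduction from scratch on a toy binary-classification example. The labels and predicted probabilities are fabricated for illustration; ML.NET computes the same quantities internally.

```python
# Worked example of log-loss and log-loss reduction on toy data.
# Log-loss is 0 for a perfect classifier; log-loss reduction is 1 for a perfect
# classifier and 0 for a classifier that always predicts the class prior.
import math

def log_loss(y_true, p_pred, eps=1e-15):
    """Mean negative log-likelihood for binary labels and predicted probabilities."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1 - eps)          # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

y_true = [1, 0, 1, 1, 0]
p_model = [0.9, 0.2, 0.8, 0.7, 0.1]            # hypothetical model outputs
prior = sum(y_true) / len(y_true)              # "mean prediction" baseline
p_prior = [prior] * len(y_true)

ll_model = log_loss(y_true, p_model)
ll_prior = log_loss(y_true, p_prior)
log_loss_reduction = 1 - ll_model / ll_prior   # 1 = perfect, 0 = no better than the prior
print(ll_model, ll_prior, log_loss_reduction)
```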
Predictive Security Metrics - Predicting vulnerabilities through source code analysis and using the results to guide software maintenance can effectively improve software security. One effective way to predict vulnerabilities is by analyzing the library references and function calls used in code. In this paper, we extract library references and function calls from project files through source code analysis and generate sample sets for statistical learning based on these data. We then design and train an integrated learning model that can be used for prediction. The designed model has a high accuracy rate and accomplishes the prediction task well. It also demonstrates the correlation between vulnerabilities and library references and function calls.
Authored by Yiyi Liu, Minjie Zhu, Yilian Zhang, Yan Chen
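One possible realization of the feature-extraction step described above, for Python sources, is sketched below using the standard-library ast module. The abstract does not specify the authors' tooling or target language, so this is only an illustration of extracting library references and function-call names from code.

```python
# Illustrative feature extraction: library references and function-call names
# from Python source via ast. The paper's actual tooling/languages may differ.
import ast
from collections import Counter

source = """
import os, hashlib
from urllib.request import urlopen

def fetch(url):
    data = urlopen(url).read()
    return hashlib.md5(data).hexdigest()
"""

tree = ast.parse(source)
imports, calls = [], Counter()
for node in ast.walk(tree):
    if isinstance(node, ast.Import):
        imports.extend(alias.name for alias in node.names)
    elif isinstance(node, ast.ImportFrom):
        imports.append(node.module)
    elif isinstance(node, ast.Call):
        func = node.func
        name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", None)
        if name:
            calls[name] += 1

print("library references:", imports)   # ['os', 'hashlib', 'urllib.request']
print("function calls:", dict(calls))   # {'urlopen': 1, 'read': 1, 'md5': 1, 'hexdigest': 1}
```

These per-file counts can then serve as the sample features for a statistical learning model, as the abstract describes.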
Predictive Security Metrics - Across the globe, renewable generation integration has increased in recent decades to meet ever-increasing power demand and emission targets. Wind power has dominated among the various renewable sources due to its widespread availability and advanced low-cost technologies. However, the stochastic nature of wind power results in power system reliability and security issues, because the uncertain variability of wind power creates challenges for various system operations such as unit commitment and economic dispatch. Such problems can be addressed to a great extent by accurate wind power forecasts. This has attracted wide investigation into obtaining accurate power forecasts using various forecasting models such as time series, machine learning, probabilistic, and hybrid models. These models use different types of inputs, produce forecasts over different time horizons, and have different applications. Also, different investigations report forecasting performance using different performance metrics. Only limited classification reviews are available for these areas, and a detailed classification will help researchers and system operators to develop new, accurate forecasting models. Therefore, this paper proposes a detailed review of those areas. It concludes that, even though a substantial body of investigations exists, improving wind power forecasting accuracy remains an ever-present research problem. Also, expressing forecasting performance in financial terms, such as deviation charges, can be used to represent the economic impact of forecasting accuracy improvements.
Authored by Sandhya Kumari, Arjun Rathi, Ayush Chauhan, Nigarish Khan, Sreenu Sreekumar, Sonika Singh
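To illustrate the kinds of performance metrics the review surveys, the sketch below computes MAE, RMSE, and MAPE for a toy wind power forecast and converts the absolute deviation into a simple financial "deviation charge". The measurements, forecasts, and the per-MWh penalty rate are hypothetical values chosen only for the example.

```python
# Sketch of common forecast error metrics plus a toy deviation charge in financial
# terms. All numbers, including the penalty tariff, are purely hypothetical.
import math

actual   = [120.0, 95.0, 130.0, 80.0]   # MW, hypothetical measurements
forecast = [110.0, 100.0, 140.0, 70.0]  # MW, hypothetical forecasts

errors = [f - a for f, a in zip(forecast, actual)]
mae  = sum(abs(e) for e in errors) / len(errors)
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
mape = 100 * sum(abs(e) / a for e, a in zip(errors, actual)) / len(errors)

PENALTY_PER_MWH = 25.0                  # assumed tariff, currency units per MWh of deviation
deviation_charge = sum(abs(e) for e in errors) * PENALTY_PER_MWH

print(f"MAE={mae:.1f} MW, RMSE={rmse:.1f} MW, MAPE={mape:.1f}%")
print(f"deviation charge={deviation_charge:.0f}")
```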
Predictive Security Metrics - A computer operating system vulnerability is a weakness in an information system, its security procedures, internal controls, or implementation that a threat source might exploit. Since information security is a problem for everyone, predicting such vulnerabilities is crucial. The typical method of vulnerability prediction involves manually identifying traits that might be related to unsafe code. The Common Vulnerability Scoring System (CVSS) is an open framework for describing the characteristics and severity of software vulnerabilities; it has three metric groups: Base, Temporal, and Environmental. In this research, neural networks are utilized to build a predictive model of Windows 10 vulnerabilities using the published vulnerability data in the National Vulnerability Database. Different variants of neural networks, trained with backpropagation, are used to predict the operating system vulnerability scores, which range from 0 to 10. Additionally, the research identifies the influential factors using Loess variable importance in neural networks, which shows that access complexity and polarity are only marginally important for predicting operating system vulnerabilities, while confidentiality impact, integrity impact, and availability impact are highly important.
Authored by Freeh Alenezi, Tahir Mehmood
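As a rough sketch of a backpropagation-trained regressor mapping CVSS-style metric encodings to a 0-10 severity score, the following uses scikit-learn's MLPRegressor on a tiny synthetic dataset. The feature encoding and scores are invented for illustration; the paper trains on NVD data for Windows 10 with its own network variants.

```python
# Sketch: a small neural-network regressor from CVSS-style features to a 0-10 score.
# The encoding and the synthetic training data are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Columns: access complexity, confidentiality impact, integrity impact,
# availability impact (each encoded numerically, e.g. 0 / 0.5 / 1).
X = np.array([
    [1.0, 1.0, 1.0, 1.0],
    [0.5, 1.0, 0.5, 0.5],
    [0.0, 0.5, 0.0, 0.5],
    [1.0, 0.0, 0.5, 0.0],
    [0.5, 0.5, 0.5, 1.0],
])
y = np.array([9.8, 7.5, 4.3, 5.0, 6.8])    # hypothetical CVSS base scores

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict([[1.0, 1.0, 0.5, 1.0]]))   # predicted score on the 0-10 scale
```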
Predictive Security Metrics - This paper belongs to a sequence of manuscripts that discuss generic and easy-to-apply security metrics for Strong PUFs. These metrics cannot and shall not fully replace in-depth machine learning (ML) studies in the security assessment of Strong PUF candidates. But they can complement the latter, serve in initial PUF complexity analyses, and are much easier and more efficient to apply: They do not require detailed knowledge of various ML methods, substantial computation times, or the availability of an internal parametric model of the studied PUF. Our metrics also can be standardized particularly easily. This avoids the sometimes inconclusive or contradictory findings of existing ML-based security tests, which may result from the usage of different or non-optimized ML algorithms and hyperparameters, differing hardware resources, or varying numbers of challenge-response pairs in the training phase.
Authored by Fynn Kappelhoff, Rasmus Rasche, Debdeep Mukhopadhyay, Ulrich Rührmair
Predictive Security Metrics - Software developers mostly focus on writing functioning code while developing their software, paying little attention to software security issues. Nowadays, security is given priority not only during the development phase but also during the other phases of the software development life cycle (from requirement specification through the maintenance phase). To that end, research has expanded towards dealing with security issues in various phases. Current research has mostly focused on developing different prediction models, most of which are based on software metrics. The metrics-based models showed high precision but a poor recall rate in prediction. Moreover, they did not separately analyze the role of each individual software metric in the occurrence of vulnerabilities. In this paper, we track the evolution of metrics within the life cycle of a vulnerability, starting from the version in which it is born, through the last affected version, to the fixed version. In particular, we studied a total of 250 files from three major releases of Apache Tomcat (8, 9, and 10). We found that four metrics (AvgCyclomatic, AvgCyclomaticStrict, CountDeclMethod, and CountLineCodeExe) show significant changes over the vulnerability history of Tomcat. In addition, we discovered that the Tomcat team prioritizes fixing threatening vulnerabilities, such as Denial of Service, over less severe vulnerabilities. The results of our research will potentially motivate further research on building more accurate vulnerability prediction models based on the appropriate software metrics. They will also help to assess developers' mindsets about fixing different types of vulnerabilities in open source projects.
Authored by Erik Maza, Kazi Sultana
Predictive Security Metrics - Security metrics for software products give a quantifiable assessment of a software system's trustworthiness. Metrics can also help detect vulnerabilities in systems, prioritize corrective actions, and raise the level of information security within the business. There is a lack of studies that identify the measurements, metrics, and internal design properties used to assess software security. Therefore, this paper aims to survey the security measurements used to assess and predict security vulnerabilities. We identified the internal design properties that have been used to measure software security based on the internal structure of the software, as well as the security metrics used in the studies we examined. We also discussed how software refactoring has been used to improve software security. We observed that a software system with low coupling, low complexity, and high cohesion is more secure, and vice versa. Current research directions are identified and discussed.
Authored by Abdullah Almogahed, Mazni Omar, Nur Zakaria, Abdulwadood Alawadhi
Outsourced Database Security - The growing power of cloud computing prompts data owners to outsource their databases to the cloud. In order to meet the demand for multi-dimensional data processing in the big data era, multi-dimensional range queries, especially over cloud platforms, have received extensive attention in recent years. However, since third-party clouds are not fully trusted, data owners commonly encrypt sensitive data before outsourcing, which has promoted research on encrypted data retrieval. Nevertheless, most existing works suffer from single-dimensional privacy leakage, which severely puts the data at risk. Although a few solutions have been proposed to handle the problem of single-dimensional privacy, they are unsuitable in some practical scenarios due to inefficiency, inaccuracy, and a lack of support for diverse data. Aiming at these issues, this paper focuses on secure range queries over encrypted data. We first propose an efficient and private range query scheme for encrypted data based on homomorphic encryption, which can effectively protect data privacy. By using the dual-server model as the framework of the system, we not only achieve multi-dimensional privacy-preserving range queries but also innovatively realize similarity search based on MinHash over ciphertext domains. We then perform a formal security analysis and evaluate our scheme on real datasets. The results show that our proposed scheme is efficient and privacy-preserving. Moreover, we apply our scheme to a shopping website; the low latency demonstrates that our proposed scheme is practical.
Authored by Wentao Wang, Yuxuan Jin, Bin Cao
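For readers unfamiliar with the similarity-search building block mentioned above, the following sketch shows plain (unencrypted) MinHash estimation of Jaccard similarity between two sets. It only illustrates the primitive; the paper's contribution is evaluating this kind of similarity over ciphertexts under a dual-server model, which is not shown here.

```python
# Plain MinHash sketch of Jaccard similarity, shown only to illustrate the
# similarity-estimation primitive; the paper runs it over encrypted data.
import hashlib

def minhash_signature(items, num_hashes=64):
    """One minimum hash value per seeded hash function."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int(hashlib.sha256(f"{seed}:{x}".encode()).hexdigest(), 16)
            for x in items
        ))
    return sig

def estimated_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = {"alice", "bob", "carol", "dave"}
b = {"alice", "bob", "carol", "erin"}
print(estimated_jaccard(minhash_signature(a), minhash_signature(b)))  # close to the true Jaccard of 0.6
```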
Outsourced Database Security - The Dynamic Spectrum Access (DSA) paradigm, enabled through Cognitive Radio (CR) appliances, is extremely well suited to solving the spectrum shortage problem. Crowd-sensing has been effectively used for dynamic spectrum access sensing by leveraging the power of the masses. Specifically, in the DSA context, crowd-sensing allows end users to query a DSA database that is updated through crowd-sensing workers. Despite recent research proposals that address the privacy and confidentiality concerns of the querying user and the crowd-sensing workers, personalized privacy-preserving database updates through crowd-sensing workers remain an open problem. To this end, we propose a personalized privacy-preserving database update scheme for the crowd-sensing model based on lightweight homomorphic encryption. We provide substantial experiments based on real-life mobility datasets, which show that the proposed protocol provides realistic efficiency and security.
Authored by Laura Truong, Erald Troja, Nikhil Yadav, Syed Bukhari, Mehrdad Aliasgari
Outsourced Database Security - With the rapid development of information technology, the use of electronic information systems in medical institutions has become more and more popular. To protect the confidentiality of private EHRs, attribute-based encryption (ABE) schemes, which can provide one-to-many encryption, are often used as a solution. At the same time, blockchain technology makes it possible to build distributed databases without relying on trusted third-party institutions. This paper proposes a secure and efficient attribute-based encryption scheme with outsourced decryption based on blockchain, which can realize flexible and fine-grained access control and further improve the security of blockchain data sharing.
Authored by Fugeng Zeng, Qigang Deng, Dongxiu Wang
Outsourced Database Security - Efficient sequencing methods produce a large amount of genetic data and make it accessible to researchers, which has led genomics to be considered a legitimate big data field. Hence, outsourcing data to the cloud is necessary, as genomic datasets are large. Data owners encrypt sensitive data before outsourcing to maintain data confidentiality, and outsourcing also aids data owners in resolving the issue of local storage management. Because genomic data is so enormous, safely and effectively answering researchers' queries is challenging. In this paper, we propose a method, PRESSGenDB, for securely performing string and substring searches on an encrypted dataset of genomic sequences. We leverage searchable symmetric encryption (SSE) and design a new method to handle these queries. In comparison to state-of-the-art methods, PRESSGenDB supports various types of queries over genomic sequences, such as string searches and substring searches with and without a given requested start position. Moreover, it supports strings over general alphabets as sequences rather than just binary sequences of 0s and 1s, and it can search for substrings (patterns) over a whole dataset of genomic sequences rather than just one sequence. Furthermore, by comparing PRESSGenDB's search complexity analytically with the state-of-the-art, we show that it outperforms recent efficient works.
Authored by Sara Jafarbeiki, Amin Sakzad, Shabnam Kermanshahi, Ron Steinfeld, Raj Gaire
Outsourced Database Security - Verifiable Dynamic Searchable Symmetric Encryption (VDSSE) enables users to securely outsource databases (document sets) to cloud servers and perform searches and updates. The verifiability property prevents users from accepting incorrect search results returned by a malicious server. However, we discover that the community currently focuses only on preventing malicious behavior from the server and ignores incorrect updates from the client, which are very likely to happen since the client keeps no record to check against. Indeed, most existing VDSSE schemes are not sufficient to tolerate incorrect updates from the client. For instance, deleting a nonexistent keyword-identifier pair can break their correctness and soundness.
Authored by Dandan Yuan, Shujie Cui, Giovanni Russello
Outsourced Database Security - Applications today rely on cloud databases for storing and querying time-series data. While outsourcing storage is convenient, this data is often sensitive, making data breaches a serious concern. We present Waldo, a time-series database with rich functionality and strong security guarantees: Waldo supports multi-predicate filtering, protects data contents as well as query filter values and search access patterns, and provides malicious security in the 3-party honest-majority setting. In contrast, prior systems such as Timecrypt and Zeph have limited functionality and security: (1) these systems can only filter on time, and (2) they reveal the queried time interval to the server. Oblivious RAM (ORAM) and generic multiparty computation (MPC) are natural choices for eliminating leakage from prior work, but both of these are prohibitively expensive in our setting due to the number of roundtrips and bandwidth overhead, respectively. To minimize both, Waldo builds on top of function secret sharing, enabling Waldo to evaluate predicates non-interactively. We develop new techniques for applying function secret sharing to the encrypted database setting where there are malicious servers, secret inputs, and chained predicates. With 32-core machines, Waldo runs a query with 8 range predicates over 2^18 records in 3.03s, compared to 12.88s for an MPC baseline and 16.56s for an ORAM baseline. Compared to Waldo, the MPC baseline uses 9–82× more bandwidth between servers (for different numbers of records), while the ORAM baseline uses 20–152× more bandwidth between the client and server(s) (for different numbers of predicates).
Authored by Emma Dauterman, Mayank Rathee, Raluca Popa, Ion Stoica
Outsourced Database Security - Data outsourced to a data-distribution middle server remains insecure when compared with current methods and security measures, and the loss of client access-privilege control leads to insecure data sharing within the warehouse. Existing login authentication is performed with a unique username and password in text form, but this system faces enormous challenges from hackers, network intruders, or irregular activities, where a user's password can easily be obtained through a number of hacking techniques. This paper therefore proposes a multilevel secured login authentication system using OTP, image hotspot security, and captcha methodologies. The image hotspot mechanism is used to prevent unauthorized clients from browsing the system and to prevent password hacking and unusual activities within the warehouse. We propose a methodology based on guidelines such as a multilevel secured authentication system to protect against malicious clients, secured client access privileges for data reads and writes, and tracking of client behavior patterns based on activity logs. If any unusual activity is carried out by those accessing the data warehouse, the admin is informed, and the irregular activity is recorded by maintaining a log file of all clients. The proposed approach provides four levels of security, including the image hotspot check. The AES algorithm is used to encrypt and decrypt the login details in the database for greater information security and to govern information ownership and confidentiality, while mixed data storage in the data warehouse system is handled using advanced file security and a data-privilege official.
Authored by Gunasekar M, Vishva C
Outsourced Database Security - Inference attacks on statistical databases represent a complex issue for institutions and corporations, since they are hard to detect and prevent, especially when committed by an internal adversary. The issue has become more pronounced with the widespread use of data analytics techniques in industry and academia, as well as outsourced services. Even when released statistical data has been anonymized and the identifying attributes have been removed, targeted individuals can be spotted in such data. Therefore, preventing sensitive statistical data leakage is crucial for protecting the privacy of individuals or events, but such measures should not form utilization obstacles or degrade data utility. This paper proposes an anti-inference technique for preserving the privacy of sensitive data in statistical databases. Unlike existing solutions, which either require considerable computing resources or trade off statistical data accuracy against privacy, our solution is designed to maintain accuracy while ensuring privacy.
Authored by Amer Aljaedi
Outsourced Database Security - Cyber attacks are causing tremendous damage around the world. To protect against attacks, many organizations have established or outsourced Security Operation Centers (SOCs) to check a large number of logs daily. Since there is no perfect countermeasure against cyber attacks, it is necessary to detect signs of intrusion quickly to mitigate the damage they cause. However, it is challenging to analyze the large volume of logs obtained from PCs and servers inside an organization, so a method for analyzing logs efficiently is needed. In this paper, we propose a recommendation system using ATT&CK techniques, which predicts and visualizes attackers' behaviors using collaborative filtering so that security analysts can analyze logs efficiently.
Authored by Masaki Kuwano, Momoka Okuma, Satoshi Okada, Takuho Mitsunaga
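To make the collaborative-filtering idea concrete, the toy Python sketch below scores unobserved ATT&CK techniques for an ongoing incident from an incident-by-technique co-occurrence matrix. The matrix, the technique IDs, and the item-based cosine-similarity weighting are illustrative assumptions; the paper's system operates on real SOC analysis results and its own recommendation pipeline.

```python
# Toy item-based collaborative filtering over an incident x ATT&CK-technique matrix
# (1 = technique observed). All data and the weighting scheme are illustrative.
import numpy as np

techniques = ["T1566", "T1059", "T1053", "T1071", "T1486"]
observations = np.array([       # rows: past incidents, cols: techniques
    [1, 1, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 1, 1, 1, 1],
    [1, 0, 1, 0, 1],
])
current = np.array([1, 1, 0, 0, 0])   # techniques seen so far in the ongoing incident

# Cosine similarity between the current incident and each past incident.
norms = np.linalg.norm(observations, axis=1) * np.linalg.norm(current)
sims = observations @ current / norms

# Score unseen techniques by similarity-weighted frequency in past incidents.
scores = sims @ observations
for technique, score, seen in zip(techniques, scores, current):
    if not seen:
        print(f"{technique}: predicted relevance {score:.2f}")
```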
Outsourced Database Security - The outsourcing of databases is very popular among IT companies and industries, as it allows businesses to ensure the availability of data for their users. A common approach when outsourcing a database is to encrypt the data in a form that allows the database service provider to perform relational operations over the encrypted database. At the same time, the associated security risk of data leakage prevents many potential industries from deploying it. In this paper, we present a secure outsourced database search scheme (BASDB) that uses a smart contract for search operations over an index of the encrypted database and stores the encrypted relational database in the cloud. Our proposed scheme BASDB is a simple and practical solution for effective search over encrypted relations and is highly resistant to information leakage from attacks such as search and access pattern leakage.
Authored by Partha Chakraborty, Puspesh Kumar, Mangesh Chandrawanshi, Somanath Tripathy
Oscillating Behaviors - A single-axis Microelectromechanical system gravimeter has recently been developed at the University of Glasgow. The sensitivity and stability of this device were demonstrated by measuring the Earth tides. The success of this device was enabled in part by its extremely low resonant frequency. This low frequency was achieved with a geometric anti-spring design, fabricated using well-established photolithography and dry etch techniques. Analytical models can be used to calculate the results of these non-linear oscillating systems, but the power of finite element analysis has not been fully utilised to explore the parameter space before now. In this article, finite element models are used to investigate the behaviour of geometric anti-springs. These computer models provide the ability to investigate the effect of the fabrication material of the device: anisotropic <100> crystalline silicon. This is a parameter that is difficult to investigate analytically, but finite element modelling is used to take anisotropy into account. The finite element models are then used to demonstrate the design of a three-axis gravimeter enabling the gravity tensor to be measured, a significantly more powerful tool than the original single-axis device.
Authored by Richard Middlemiss, Paul Campsie, William Cunningham, Rebecca Douglas, Victoria McIvor, Vinod Belwanshi, James Hough, Sheila Rowan, Douglas Paul, Abhinav Prasad, Giles Hammond
Oscillating Behaviors - In this paper, we examine the asymptotic behavior of an equation that describes two rotors installed on a common oscillating platform. Namely, we establish analytic criteria for self-synchronization of the rotors by means of the Popov method of “a priori integral indices”.
Authored by Vera Smirnova, Anton Proskurnikov, Natalia Utina