Rolling bearings are prone to failure under actual working conditions, and it is difficult to classify both the fault category and the fault degree, so a deep belief network is introduced to diagnose rolling bearing faults. First, principal component analysis is used to reduce the dimensionality of the original input data and remove redundant input information. The dimension-reduced data are then fed into the deep belief network to extract a low-dimensional fault feature representation, and the extracted features are passed to a classifier for rolling bearing fault pattern recognition. Finally, the diagnostic performance of the proposed network is compared with that of existing common shallow neural networks. Simulation experiments are carried out on bearing data from the United States.
Authored by Pengjuan Liu, Jindou Ma
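The pipeline described above (PCA for dimensionality reduction, a DBN for feature extraction, and a classifier on top) can be approximated with off-the-shelf components. The sketch below is illustrative only: it uses scikit-learn's stacked BernoulliRBM layers as a stand-in for the paper's deep belief network, and random placeholder data in place of the bearing measurements; layer sizes and hyperparameters are assumptions.

```python
# Illustrative sketch (not the authors' code): PCA removes redundant input
# dimensions, two stacked RBMs act as a DBN-style feature extractor, and a
# softmax classifier performs fault pattern recognition.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

# Placeholder vibration features: 1000 samples x 512 raw features, 4 fault classes
X = np.random.rand(1000, 512)
y = np.random.randint(0, 4, size=1000)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

model = Pipeline([
    ("scale", MinMaxScaler()),
    ("pca", PCA(n_components=64)),          # delete redundant input information
    ("rescale", MinMaxScaler()),            # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20)),
    ("rbm2", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20)),
    ("clf", LogisticRegression(max_iter=1000)),  # fault-pattern classifier
])
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```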
Higher education management struggles to produce graduates who fully meet the needs of industry, while industry struggles to find qualified graduates who meet its needs, partly because of inefficient problem evaluation and weaknesses in the assessment of problem-solving capabilities. The objective of this paper is to propose an appropriate classification model for predicting and evaluating the attributes of a student dataset in order to meet the selection criteria required by industry in the academic field. The dataset required for this analysis was obtained from a private firm, the experiments were carried out using a Chimp Optimization Algorithm (COA) based Deep Belief Neural Network (COA-DBNN), and the results were compared with various classifiers such as Logistic Regression (LR), Decision Tree (DT) and Random Forest (RF). The proposed model outperforms the other classifiers in terms of various performance metrics. This critical analysis will help college management make better long-term plans for producing graduates who are skilled, knowledgeable and able to fulfill industry needs.
Authored by N. Premalatha, S. Sujatha
Historically, energy resources have been of strategic importance for social welfare and economic growth, so predicting crude oil price fluctuations is an important problem. Since crude oil prices are affected by many market risk factors, they exhibit more complicated nonlinear behavior and expose investors to greater risk than in the past. We propose a new crude oil price prediction method to model these nonlinear dynamics. The experimental results show that the superior performance of the proposed model over previous statistical works is statistically significant. In general, we found that the combination of the IDBN or LSTM model lowered the MSE to 4.65, which is 0.81 lower than the related work (the protocol of Chen et al.), indicating an improvement in prediction accuracy.
Authored by Mohammad Heravi, Mahsa Khorrampanah, Monireh Houshmand
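As one concrete reference point for the LSTM side of such a forecaster, the sketch below trains a minimal univariate LSTM on sliding windows of past prices with an MSE objective. The window length, network size and data are placeholder assumptions, not the authors' IDBN-based configuration.

```python
# Minimal sketch: a generic LSTM next-price forecaster trained with MSE,
# using random placeholder windows instead of real crude oil prices.
import torch
import torch.nn as nn

class PriceLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict the next price from the last step

# Placeholder data: 200 windows of 30 past prices each
x = torch.randn(200, 30, 1)
y = torch.randn(200, 1)

model = PriceLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(model(x), y)           # MSE is the metric reported above
    loss.backward()
    opt.step()
```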
The topological structure of network relationships is described by a network diagram, and the formation and evolution of the network are analyzed using a cost-benefit method. Assuming that self-interested member nodes can form or break connections, a network topology model is established based on a dynamic random-pairing evolution model, and the static structure of the network is studied. Respecting the laws of psychological cognition of college students and innovating the model for cultivating core values can reverse young people's difficulty in identifying with those values and create a good political environment for their normal, healthy, civilized and orderly online participation. In this setting, an automatic Bayesian network structure learning algorithm that effectively integrates expert knowledge with data-driven methods is realized.
Authored by Lan Ming
Aim: Object detection is one of the latest topics in today's world, here addressed by detecting real-time objects using Deep Belief Networks. Methods & Materials: Real-time object detection is performed using Deep Belief Networks (N=24) and Convolutional Neural Networks (N=24), with a training/testing split of 70% and 30% respectively. Results: Deep Belief Networks have significantly better accuracy (81.2%) compared to Convolutional Neural Networks (47.7%), with an attained significance value of p = 0.083. Conclusion: Deep Belief Networks achieved significantly better object detection than Convolutional Neural Networks for identifying real-time objects in traffic surveillance.
Authored by G. Vinod, Dr. G. Padmapriya
In recent years, radar automatic target recognition (RATR) technology based on high-resolution range profiles (HRRP) has received extensive attention in various fields. However, insufficient data on non-cooperative targets seriously affects the recognition performance of this technique. For HRRP target recognition under few-shot conditions, we propose a novel Gaussian deep belief network based on model-agnostic meta-learning (GDBN-MAML). In the proposed method, the GDBN allows real-valued data to be transmitted through the entire network, which effectively avoids the feature loss caused by the binarization requirement of the conventional deep belief network (DBN). In addition, we optimize the initial parameters of the GDBN by multi-task learning based on MAML, so the number of training samples the model requires for new recognition tasks can be reduced. We applied the proposed method to HRRP recognition experiments on 3 types of 3D simulated aircraft models. The experimental results show that the proposed method achieves higher recognition accuracy and better generalization under few-shot conditions than conventional deep learning methods.
Authored by Zuyu Ren, Weidong Jiang, Xinyu Zhang
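The MAML-based initialization mentioned above can be illustrated with a generic inner/outer meta-learning loop. The sketch below uses a small fully connected network and randomly generated few-shot tasks as stand-ins for the GDBN and the HRRP data; it only shows the structure of one adaptation step per task followed by a meta-update.

```python
# Rough sketch of MAML-style meta-initialization (illustrative assumptions: a
# small fully connected recognizer stands in for the GDBN, and tasks are
# synthetic few-shot classification problems rather than real HRRP data).
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 3))
meta_opt = torch.optim.Adam(net.parameters(), lr=1e-3)
inner_lr, tasks_per_batch = 0.01, 4

def sample_task(shots=5, classes=3, dim=128):
    # Placeholder few-shot task: random support/query sets with matching labels
    ys = torch.arange(classes).repeat(shots)
    return torch.randn(len(ys), dim), ys, torch.randn(len(ys), dim), ys.clone()

def forward_with(params, x):
    # Functional forward pass of `net` using explicitly supplied parameters
    h = F.relu(F.linear(x, params[0], params[1]))
    return F.linear(h, params[2], params[3])

for step in range(100):
    meta_opt.zero_grad()
    for _ in range(tasks_per_batch):
        xs, ys, xq, yq = sample_task()
        params = list(net.parameters())
        # Inner loop: one gradient step on the support set (graph kept for meta-gradients)
        grads = torch.autograd.grad(F.cross_entropy(net(xs), ys), params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]
        # Outer loop: query-set loss of the adapted parameters drives the meta-update
        (F.cross_entropy(forward_with(adapted, xq), yq) / tasks_per_batch).backward()
    meta_opt.step()
```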
To detect human behaviour and measure the accuracy of the classification rate. Materials and Methods: A novel deep belief network with a sample size of 10 and a support vector machine with a sample size of 10 were iterated at different times to predict the accuracy of human behaviour detection. Results: Human behaviour detection using the novel deep belief network achieved 87.9% accuracy, compared with 87.0% for the support vector machine. Deep belief networks seem to perform essentially better than support vector machines (p = 0.55, p > 0.05). The deep belief algorithm in computer vision appears to perform significantly better than the support vector machine algorithm. Conclusion: Within this human behaviour detection task, the novel deep belief network has more precision than the support vector machine.
Authored by D Ankita, Rashmita Khilar, Naveen Kumar
Automatic speech recognition (ASR) models are widely used in applications for voice navigation and voice control of domestic appliances. ASRs have been misused by attackers to generate malicious outputs by attacking the deep learning component within them. To assess the security and robustness of ASRs, we propose techniques within our framework SPAT that generate blackbox (DNN-agnostic) adversarial attacks that are portable across ASRs. This is in contrast to existing work that focuses on whitebox attacks, which are time consuming and lack portability. Our techniques generate adversarial attacks with no human-audible difference by manipulating the input speech signal using a psychoacoustic model that keeps the audio perturbations below the thresholds of human perception. SPAT combines three attack generation techniques based on this psychoacoustic concept with frame selection techniques to selectively target the attack. We evaluate the portability and effectiveness of our techniques using three popular ASRs and two input audio datasets, with the following metrics: Word Error Rate (WER) of the output transcription, similarity to the original audio, attack success rate on different ASRs, and detection score by a defense system. We found that our adversarial attacks were portable across ASRs, were not easily detected by a state-of-the-art defense system, and produced significantly different output transcriptions while sounding similar to the original audio.
Authored by Xiaoliang Wu, Ajitha Rajan
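The core constraint described above, keeping perturbations under perceptual thresholds, can be approximated very crudely in the frequency domain. The sketch below is not SPAT and uses no real psychoacoustic model; it simply scales an additive perturbation so that, in each frequency bin, it stays a fixed number of decibels below the original signal's spectrum, which conveys the general idea of masking-limited noise. All names and parameters are assumptions.

```python
# Crude, illustrative proxy (not SPAT itself): keep an additive perturbation
# below a per-frequency budget derived from the original signal's spectrum.
import numpy as np

def mask_limited_perturbation(signal, noise, margin_db=-20.0):
    S = np.fft.rfft(signal)
    N = np.fft.rfft(noise)
    budget = np.abs(S) * (10 ** (margin_db / 20.0))     # per-bin amplitude budget
    scale = np.minimum(1.0, budget / (np.abs(N) + 1e-12))
    return np.fft.irfft(N * scale, n=len(signal))       # scaled-back perturbation

audio = np.random.randn(16000)            # placeholder 1-second clip at 16 kHz
raw_noise = 0.01 * np.random.randn(16000)
adversarial = audio + mask_limited_perturbation(audio, raw_noise)
```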
Classic black-box adversarial attacks can take advantage of transferable adversarial examples generated by a similar substitute model to successfully fool the target model. However, these substitute models need to be trained on the target model's training data, which is hard to acquire due to privacy or transmission constraints. Recognizing the limited availability of real data for adversarial queries, recent works proposed training substitute models in a data-free black-box scenario. However, their generative adversarial network (GAN) based frameworks suffer from convergence failure and mode collapse, resulting in low efficiency. In this paper, by rethinking the collaborative relationship between the generator and the substitute model, we design a novel black-box attack framework. The proposed method can efficiently imitate the target model through a small number of queries and achieve a high attack success rate. Comprehensive experiments on six datasets demonstrate the effectiveness of our method against state-of-the-art attacks. In particular, we conduct both label-only and probability-only attacks on the Microsoft Azure online model and achieve a 100% attack success rate with only 0.46% of the query budget of the SOTA method [49].
Authored by Jie Zhang, Bo Li, Jianghe Xu, Shuang Wu, Shouhong Ding, Lei Zhang, Chao Wu
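One generic way to realize the generator/substitute interplay described above is a distillation-style loop: the generator proposes query inputs, the black-box target is queried for its outputs, the substitute is trained to imitate those outputs, and the generator is pushed toward inputs where the substitute still disagrees. The sketch below is a schematic of that loop with placeholder models and a stubbed-out target API (`query_target` is hypothetical), not the paper's framework.

```python
# Schematic sketch of data-free substitute training via black-box queries.
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, num_classes = 100, 10
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, 3 * 32 * 32), nn.Tanh())
substitute = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
                           nn.Linear(256, num_classes))
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
s_opt = torch.optim.Adam(substitute.parameters(), lr=1e-4)

def query_target(x):
    # Placeholder for the remote black-box model's probability output
    return torch.softmax(torch.randn(x.size(0), num_classes), dim=1)

for step in range(1000):
    z = torch.randn(64, latent_dim)
    fake = generator(z).view(64, 3, 32, 32)

    # Substitute step: imitate the target's outputs on the generated queries
    with torch.no_grad():
        target_probs = query_target(fake)
    s_loss = F.kl_div(F.log_softmax(substitute(fake.detach()), dim=1),
                      target_probs, reduction="batchmean")
    s_opt.zero_grad(); s_loss.backward(); s_opt.step()

    # Generator step: seek samples where the substitute still disagrees
    g_loss = -F.kl_div(F.log_softmax(substitute(fake), dim=1),
                       target_probs, reduction="batchmean")
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```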
Black-box adversarial attacks have aroused much research attention because of their difficulty: almost no information about the attacked model is available, and there is an additional constraint on the query budget. A common way to improve attack efficiency is to transfer the gradient information of a white-box substitute model trained on an extra dataset. In this paper, we deal with a more practical setting in which a pre-trained white-box model, with its network parameters, is provided without extra training data. To solve the model mismatch problem between the white-box and black-box models, we propose a novel algorithm, EigenBA, by systematically integrating a gradient-based white-box method with the zeroth-order optimization used in black-box methods. We theoretically show that the optimal perturbation directions at each step are closely related to the right singular vectors of the Jacobian matrix of the pre-trained white-box model. Extensive experiments on ImageNet, CIFAR-10 and WebVision show that EigenBA can consistently and significantly outperform state-of-the-art baselines in terms of success rate and attack efficiency.
Authored by Linjun Zhou, Peng Cui, Xingxuan Zhang, Yinan Jiang, Shiqiang Yang
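The stated relationship between perturbation directions and the Jacobian's right singular vectors can be made concrete with a toy computation. The sketch below builds a small stand-in white-box model, computes the Jacobian of its outputs with respect to one input, and takes the leading right singular vector as a candidate perturbation direction; it is an illustration of the quantity involved, not the EigenBA algorithm itself.

```python
# Toy illustration: right singular vectors of the white-box model's Jacobian
# at the current input, used as candidate perturbation directions.
import torch
from torch.autograd.functional import jacobian

model = torch.nn.Sequential(torch.nn.Linear(3 * 32 * 32, 128), torch.nn.ReLU(),
                            torch.nn.Linear(128, 10))
x = torch.randn(3 * 32 * 32)

J = jacobian(lambda inp: model(inp), x)         # shape: (10, 3072)
U, S, Vh = torch.linalg.svd(J, full_matrices=False)
top_direction = Vh[0]                           # right singular vector with largest singular value
# In a black-box step, the input would be probed along +/- this direction and
# the better of the two (by queried loss) kept, as in zeroth-order methods.
x_adv = x + 0.01 * top_direction / top_direction.norm()
```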
Recent studies show that the state-of-the-art deep neural networks are vulnerable to model inversion attacks, in which access to a model is abused to reconstruct private training data of any given target class. Existing attacks rely on having access to either the complete target model (whitebox) or the model's soft-labels (blackbox). However, no prior work has been done in the harder but more practical scenario, in which the attacker only has access to the model's predicted label, without a confidence measure. In this paper, we introduce an algorithm, Boundary-Repelling Model Inversion (BREP-MI), to invert private training data using only the target model's predicted labels. The key idea of our algorithm is to evaluate the model's predicted labels over a sphere and then estimate the direction to reach the target class's centroid. Using the example of face recognition, we show that the images reconstructed by BREP-MI successfully reproduce the semantics of the private training data for various datasets and target model architectures. We compare BREP-MI with the state-of-the-art white-box and blackbox model inversion attacks, and the results show that despite assuming less knowledge about the target model, BREP-MI outperforms the blackbox attack and achieves comparable results to the whitebox attack. Our code is available online at https://github.com/m-kahla/Label-Only-Model-Inversion-Attacks-via-Boundary-Repulsion
Authored by Mostafa Kahla, Si Chen, Hoang Just, Ruoxi Jia
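The sphere-sampling idea at the heart of the boundary-repelling step can be sketched generically: sample points on a sphere around the current estimate, query only the hard labels, and move away from the average direction of points that fall outside the target class. The toy code below does exactly that against a placeholder label oracle (`predict_label` is hypothetical); it is a simplified illustration, not the BREP-MI implementation.

```python
# Toy label-only sketch: sphere samples whose predicted label is not the target
# indicate the boundary, so the estimate is pushed away from them.
import torch

def predict_label(x):
    # Placeholder for the target model's hard-label output
    return int(x.sum() > 0)

def brep_style_step(z, target_label, radius=0.5, n_samples=32, step=0.1):
    dirs = torch.randn(n_samples, z.numel())
    dirs = dirs / dirs.norm(dim=1, keepdim=True)           # points on the unit sphere
    off_target = [d for d in dirs if predict_label(z + radius * d) != target_label]
    if not off_target:
        return z, radius * 1.5                              # whole sphere on target: grow the radius
    repel = torch.stack(off_target).mean(dim=0)             # average boundary direction
    return z - step * repel, radius                         # move away from the boundary

z = torch.randn(64)                                          # current search-space estimate
z, radius = brep_style_step(z, target_label=1)
```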
Adversarial attacks have recently been proposed to scrutinize the security of deep neural networks. Most black-box adversarial attacks, which have partial access to the target through queries, are target-specific; e.g., they require a well-trained surrogate that accurately mimics a given target. In contrast, target-agnostic black-box attacks are developed to attack any target; e.g., they learn a generalized surrogate that can adapt to any target via fine-tuning on samples queried from the target. Despite their success, current state-of-the-art target-agnostic attacks require tremendous numbers of fine-tuning steps and consequently an immense number of queries to the target to generate successful attacks. The high query complexity of these attacks makes them easily detectable and thus defendable. We propose a novel query-efficient target-agnostic attack that trains a generalized surrogate network to output the adversarial directions w.r.t. the inputs, and equip it with an effective fine-tuning strategy that only fine-tunes the surrogate when it fails to provide useful directions for generating the attacks. In particular, we show that to effectively adapt to any target and generate successful attacks, it is sufficient to fine-tune the surrogate with informative samples that help it get out of the failure mode, together with additional information on the target's local behavior. Extensive experiments on the CIFAR-10 and CIFAR-100 datasets demonstrate that the proposed target-agnostic approach can generate highly successful attacks for any target network with very few fine-tuning steps and thus a significantly smaller number of queries (reduced by several orders of magnitude) compared to the state-of-the-art baselines.
Authored by Raha Moraffah, Huan Liu
Most existing deep neural networks (DNNs) are inexplicable and fragile, and can easily be deceived by carefully designed adversarial examples with tiny, undetectable noise. This allows attackers to cause serious consequences in many DNN-assisted scenarios without human perception. In the field of speaker recognition, attacks on speaker recognition systems are relatively mature. Most works focus on white-box attacks that assume the information of the DNN is obtainable, and only a few works study gray-box attacks. In this paper, we study black-box attacks on speaker recognition systems, which can be applied in the real world since we do not need to know the system's internals. By combining the ideas of transfer-based and query-based attacks, our proposed method, NMI-FGSM-Tri, can achieve the targeted goal of misleading the system into recognizing any audio as a registered person. Specifically, our method combines the Nesterov accelerated gradient (NAG), an ensemble attack and a restart trigger to design an attack that generates adversarial audio that performs well against black-box DNNs. The experimental results show that the proposed method is superior to extant methods, and the attack success rate can reach as high as 94.8% even when only one query is allowed.
Authored by Junjian Zhang, Hao Tan, Binyue Deng, Jiacen Hu, Dong Zhu, Linyi Huang, Zhaoquan Gu
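The Nesterov-accelerated ingredient mentioned above follows the usual NI-FGSM pattern: take a momentum look-ahead step, compute the gradient there, update the momentum, and take a signed step within an L-infinity budget. The sketch below shows that update loop with a placeholder model and loss; the ensemble and restart-trigger components of the full method are omitted, and all parameter values are assumptions.

```python
# Generic sketch of a Nesterov-accelerated iterative FGSM update on a waveform.
import torch

def ni_fgsm(audio, loss_fn, eps=0.002, alpha=0.0004, steps=10, mu=1.0):
    x_adv, g = audio.clone(), torch.zeros_like(audio)
    for _ in range(steps):
        x_nes = (x_adv + alpha * mu * g).detach().requires_grad_(True)  # look-ahead point
        loss = loss_fn(x_nes)
        grad = torch.autograd.grad(loss, x_nes)[0]
        g = mu * g + grad / grad.abs().mean()              # accumulated, normalized momentum
        x_adv = x_adv + alpha * g.sign()                   # signed ascent step
        x_adv = audio + (x_adv - audio).clamp(-eps, eps)   # stay within the L-infinity budget
    return x_adv.detach()

# Placeholder usage: maximize one output score of a stand-in model
audio = torch.randn(16000)
model = torch.nn.Linear(16000, 10)
adv = ni_fgsm(audio, lambda x: model(x)[3])
```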
The widespread adoption of eCommerce, iBanking, and eGovernment institutions has resulted in an exponential rise in the use of web applications. Due to the large number of users, web applications have become a prime target of cybercriminals who want to steal Personally Identifiable Information (PII) and disrupt business activities. Hence, there is a dire need to audit websites and ensure information security. In this regard, several web vulnerability scanners are employed for vulnerability assessment of web applications, but attacks are still increasing day by day. Therefore, a considerable amount of research has been carried out to measure the effectiveness and limitations of publicly available web scanners. It has been found that most publicly available scanners possess weaknesses and do not generate the desired results. In this paper, publicly available web vulnerability scanners are evaluated against the top ten vulnerabilities identified by OWASP (the Open Web Application Security Project, an online community that produces articles, documentation, methodologies, and tools for web and mobile security), and their performance is measured by the precision of their results. Based on these results, we propose an Integrated Multi-Agent Blackbox Security Assessment Tool (SAT) for the security assessment of web applications. The research shows that the vulnerability assessment results of the SAT are more extensive and accurate.
Authored by Jahanzeb Shahid, Zia Muhammad, Zafar Iqbal, Muhammad Khan, Yousef Amer, Weisheng Si
Speech recognition technology has been applied to all aspects of our daily life, but it faces many security issues. One of the major threats is adversarial audio examples, which may tamper with the recognition results of an automatic speech recognition (ASR) system. In this paper, we propose an adversarial detection framework to detect adversarial audio examples. The method is based on the transformer self-attention mechanism: spectrogram features are extracted from the audio and divided into patches, position information is embedded, and the result is fed into a transformer encoder. Experimental results show that the method achieves good performance, with detection accuracy above 96.5% under white-box attacks, black-box attacks, and noisy conditions. Even when detecting adversarial examples generated by unknown attacks, it achieves satisfactory results.
Authored by Yunchen Li, Da Luo
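The detector's structure as described, spectrogram patches, embedded position information, a transformer encoder, and a classification head, can be sketched directly in a few lines. The code below is a structural illustration with assumed patch and embedding sizes, not the authors' trained model.

```python
# Structural sketch of a patch-based transformer detector for adversarial audio.
import torch
import torch.nn as nn

class AdvAudioDetector(nn.Module):
    def __init__(self, patch=16, n_patches=64, dim=128):
        super().__init__()
        self.embed = nn.Linear(patch * patch, dim)                  # patch embedding
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))     # position information
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, 2)                               # benign vs adversarial

    def forward(self, patches):             # patches: (batch, n_patches, patch*patch)
        h = self.embed(patches) + self.pos
        h = self.encoder(h)
        return self.head(h.mean(dim=1))     # pool over patches, then classify

# Placeholder batch: 8 spectrograms, each cut into 64 patches of 16x16 bins
x = torch.rand(8, 64, 16 * 16)
logits = AdvAudioDetector()(x)
```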
Co-salient object detection (CoSOD) has recently achieved significant progress and plays a key role in retrieval-related tasks. However, it inevitably poses an entirely new safety and security issue: highly personal and sensitive content can potentially be extracted by powerful CoSOD methods. In this paper, we address this problem from the perspective of adversarial attacks and identify a novel task: adversarial co-saliency attack. Specifically, given an image selected from a group of images containing some common and salient objects, we aim to generate an adversarial version that can mislead CoSOD methods into predicting incorrect co-salient regions. Note that, compared with general white-box adversarial attacks for classification, this new task faces two additional challenges: (1) a low success rate due to the diverse appearance of images in the group; (2) low transferability across CoSOD methods due to the considerable differences between CoSOD pipelines. To address these challenges, we propose the first black-box joint adversarial exposure and noise attack (Jadena), in which we jointly and locally tune the exposure and additive perturbations of the image according to a newly designed high-feature-level, contrast-sensitive loss function. Our method, without any information on the state-of-the-art CoSOD methods, leads to significant performance degradation on various co-saliency detection datasets and makes the co-salient objects undetectable. This can have strong practical benefits in properly securing the large number of personal photos currently shared on the Internet. Moreover, our method can potentially be utilized as a metric for evaluating the robustness of CoSOD methods.
Authored by Ruijun Gao, Qing Guo, Felix Juefei-Xu, Hongkai Yu, Huazhu Fu, Wei Feng, Yang Liu, Song Wang
Secure data transmission from one sender to many authorized users is achieved through Broadcast Encryption (BE). In BE, the source transmits encrypted data to multiple registered users who already hold their decryption keys. Untrustworthy users, known as traitors, can give their secret keys to a hacker to form a pirate decoding system that decrypts the original message on the sly. The process of detecting the traitors is known as traitor tracing in cryptography. This paper presents a new black-box tracing method that is fully collusion resistant, designated Traitor Tracing in Broadcast Encryption using Vector Keys (TTBE-VK). The proposed method uses integer vectors in the finite field Z_p as encryption/decryption/tracing keys, reducing the computational cost compared to existing methods.
Authored by Sahana S, Sridhar Venugopalachar
Smartphones have currently become the preferred means of Internet access for Chinese users. Mobile phone traffic is large, and, from the operating system's perspective, it is mainly generated by services. In the context of universal traffic encryption, classifying and identifying mobile encrypted services can effectively reduce the analytical difficulty caused by the diversity of mobile terminals and operating systems, identify user access targets more accurately, and thus improve service quality and network security management. Existing mobile encrypted service classification methods have two shortcomings in feature selection. First, the deep learning (DL) model is used as a black box, and high-dimensional features are fed into the classification model without discrimination, which sharply increases computational complexity and limits practical application. Second, existing feature selection makes insufficient use of the temporal and spatial information associated with traffic, resulting in lower robustness and accuracy of classification. In this paper, we propose a feature enhancement method based on adjacent-flow contextual features and evaluate it on Apple encrypted service traffic collected from the real world. Across 5 DL classification models, the refined classification accuracy of Apple services is significantly improved. Our work can provide an effective solution for the fine-grained management of mobile encrypted services.
Authored by Hui Zhang, Jianing Ding, Jianlong Tan, Gaopeng Gou, Junzheng Shi
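The adjacent-flow contextual feature idea can be illustrated with a simple table transformation: for each flow, attach statistics from the flows immediately before and after it in time. The pandas sketch below uses an assumed, made-up flow table; the real feature set and the five DL classifiers are not specified here.

```python
# Illustrative sketch: augment each flow's features with its neighbours'
# statistics, a simple form of adjacent-flow context.
import pandas as pd

flows = pd.DataFrame({
    "start_time": [0.0, 0.4, 0.9, 1.3],
    "bytes_up":   [1200, 300, 5100, 800],
    "bytes_down": [8400, 150, 22000, 400],
})
flows = flows.sort_values("start_time")
for col in ["bytes_up", "bytes_down"]:
    flows[f"prev_{col}"] = flows[col].shift(1).fillna(0)    # previous flow's value
    flows[f"next_{col}"] = flows[col].shift(-1).fillna(0)   # next flow's value
flows["gap_prev"] = flows["start_time"].diff().fillna(0)    # inter-flow timing
print(flows)
```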
A long-standing open question in computational learning theory is to prove NP-hardness of learning efficient programs, the setting of which is in between proper learning and improper learning. Ko (COLT’90, SICOMP’91) explicitly raised this open question and demonstrated its difficulty by proving that there exists no relativizing proof of NP-hardness of learning programs. In this paper, we overcome Ko’s relativization barrier and prove NP-hardness of learning programs under randomized polynomial-time many-one reductions. Our result is provably non-relativizing, and comes somewhat close to the parameter range of improper learning: We observe that mildly improving our inapproximability factor is sufficient to exclude Heuristica, i.e., show the equivalence between average-case and worst-case complexities of NP. We also make progress on another long-standing open question of showing NP-hardness of the Minimum Circuit Size Problem (MCSP). We prove NP-hardness of the partial function variant of MCSP as well as other meta-computational problems, such as the problems MKTP* and MINKT* of computing the time-bounded Kolmogorov complexity of a given partial string, under randomized polynomial-time reductions. Our proofs are algorithmic information (a.k.a. Kolmogorov complexity) theoretic. We utilize black-box pseudorandom generator constructions, such as the Nisan-Wigderson generator, as a one-time encryption scheme secure against a program which “does not know” a random function. Our key technical contribution is to quantify the “knowledge” of a program by using conditional Kolmogorov complexity and show that no small program can know many random functions.
Authored by Shuichi Hirahara
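For orientation around the meta-computational problems mentioned above, the following are the standard definitions of time-bounded and conditional Kolmogorov complexity relative to a fixed universal machine U; they are stated here for reference, are not quoted from the paper, and the paper's notation may differ.

```latex
K^{t}(x) \;=\; \min\bigl\{\, |p| \;:\; U(p) \text{ outputs } x \text{ within } t \text{ steps} \,\bigr\}
\qquad
K(x \mid y) \;=\; \min\bigl\{\, |p| \;:\; U(p, y) = x \,\bigr\}
```

Roughly, MINKT asks, given x, 1^t and 1^s, whether K^t(x) <= s; the starred variant MINKT* relaxes x to a partial string, matching the partial-string setting described above.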
Verifiable Dynamic Searchable Symmetric Encryption (VDSSE) enables users to securely outsource databases (document sets) to cloud servers and perform searches and updates. The verifiability property prevents users from accepting incorrect search results returned by a malicious server. However, we observe that the community currently focuses only on preventing malicious behavior by the server and ignores incorrect updates from the client, which are very likely to happen since there is no record on the client to check against. Indeed, most existing VDSSE schemes cannot tolerate incorrect updates from the client; for instance, deleting a nonexistent keyword-identifier pair can break their correctness and soundness. In this paper, we demonstrate vulnerabilities of a class of existing VDSSE schemes that cause them to fail to ensure correctness and soundness under incorrect updates. We propose an efficient fault-tolerant solution that can treat any DSSE scheme as a black box and turn it into a fault-tolerant VDSSE in the malicious model. Forward privacy is an important property of DSSE that prevents the server from linking an update operation to previous search queries, and our approach can also turn any forward-secure DSSE scheme into a fault-tolerant VDSSE without breaking the forward security guarantee. In this work, we take FAST [1] (TDSC 2020), a forward-secure DSSE, as an example, implement a prototype of our solution, and evaluate its performance. Even when compared with the previous fastest forward-private construction that does not support fault tolerance, the experiments show that our construction saves 9× in client storage and has better search and update efficiency.
Authored by Dandan Yuan, Shujie Cui, Giovanni Russello
Although the public cloud is known for its incredible capabilities, consumers cannot fully depend on cloud service providers to keep personal data safe because of the lack of client control. To protect privacy, data owners outsource encrypted data rather than the information itself. Ciphertext-Policy Attribute-Based Encryption (CP-ABE) may be employed to carry out fine-grained, owner-centric access control over the ciphertext and to share decryption capability with others. This, however, falls short of effectively protecting against new threats. With a number of older methods, the public cloud is unable to verify whether a downloader can actually decrypt, so the stored files are effectively accessible to anyone with access to the storage. A malicious attacker may download hundreds of files in order to launch Economic Denial of Sustainability (EDoS) attacks, greatly depleting cloud resources, while the cloud storage user is responsible for paying the fee. Additionally, the public cloud serves as both the accountant and the charging party for resource consumption costs, without offering data owners any insight. Cloud storage infrastructure should assuage these concerns in practice. In this study, we provide a technique for resource accountability and defense against EDoS attacks for encrypted cloud storage. It uses CP-ABE techniques in a black-box manner and complies with arbitrary CP-ABE access policies. After presenting two methods for different settings, performance and security evaluations are given.
Authored by Ankur Biswas, K V, Pradeep, Arvind Pandey, Surendra Shukla, Tej Raj, Abhishek Roy
With the advent of the Internet of Things (IoT) era, increasing data volumes have made storage outsourcing a new trend for enterprises and individuals. However, data breaches occur frequently, bringing significant challenges to the privacy protection of outsourced data management systems. There is an urgent need for efficient and secure data sharing schemes for outsourced data management infrastructure such as the cloud. Therefore, this paper designs a dual-server data sharing scheme with data privacy and high efficiency for the cloud, enabling internal members to exchange their data efficiently and securely. By adopting secure two-party computation, the dual servers guarantee that neither server can obtain the complete data independently. In our proposed scheme, if the data is destroyed while being sent to the user, it cannot be restored; to prevent malicious deletion, the data owner adds a random number to verify identity during the uploading procedure. To ensure data security, the data is transmitted in ciphertext throughout the process using searchable encryption. Finally, a black-box leakage analysis and a theoretical performance evaluation demonstrate that our proposed data sharing scheme provides solid security and high efficiency in practice.
Authored by Xingqi Luo, Haotian Wang, Jinyang Dong, Chuan Zhang, Tong Wu
This article analyzes a joint data security architecture that integrates artificial intelligence and cloud computing in the era of big data, and discusses the integrated applications of big data, artificial intelligence and cloud computing. As an important part of big data security protection, the joint data security protection technical architecture is not only related to the security of joint data in the big data era but also has an important impact on the overall development of the data era. Based on this, the paper takes big data security and the joint data security protection technical architecture as its research content: after a brief explanation of big data security, it studies the architecture in detail from five aspects.
Authored by Jikui Du
Big Data (BD) is the combination of several technologies that address the gathering, analysis and storage of massive heterogeneous data. The tremendous growth of the Internet of Things (IoT) and related technologies is the fundamental driver behind this enduring development. Moreover, the analysis of this data requires high-performance servers for advanced and parallel data analytics. Thus, data owners with their limited capabilities may outsource their data to a powerful but untrusted environment, i.e., the Cloud. Furthermore, data analytics performed on an external cloud may raise various security threats regarding the confidentiality and integrity of the transferred, analyzed and stored data. To counter these security issues and challenges, several techniques have been proposed. This survey paper aims to summarize and emphasize the security threats within the Big Data framework, and it also covers research work related to Big Data Analytics (BDA).
Authored by Hany Habbak, Khaled Metwally, Ahmed Mattar
Cloud computing has become an integral part of medical big data, and the cloud's capability to store large data volumes has attracted increasing attention. The integrity and privacy of patient data are among the issues that cloud-based medical big data must address. This research work introduces a data integrity auditing scheme for cloud-based medical big data, which helps minimize the risk of unauthorized access to the data. Multiple copies of the data are stored to ensure that it can be recovered quickly in case of damage. The scheme can also be used to enable doctors to easily track changes in patients' conditions through data blocks. The simulation results demonstrate the effectiveness of the proposed scheme.
Authored by A. Vineela, N. Kasiviswanath, Shoba Bindu
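Block-level integrity checking of the kind described above can be illustrated with plain hash comparisons. The sketch below is illustrative only, since the abstract does not specify the scheme's cryptographic construction: it splits a record into fixed-size blocks, keeps their SHA-256 digests as a reference, and reports which blocks of a stored copy no longer match.

```python
# Simplified sketch of block-level integrity auditing via hash comparison.
import hashlib

def block_digests(data: bytes, block_size: int = 4096):
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def audit(stored_copy: bytes, reference_digests, block_size: int = 4096):
    # Report the indices of blocks whose digest no longer matches the reference
    current = block_digests(stored_copy, block_size)
    return [i for i, (a, b) in enumerate(zip(reference_digests, current)) if a != b]

record = b"patient-record-v1" * 1000
digests = block_digests(record)                  # kept by the auditor as the reference
tampered = record[:5000] + b"X" + record[5001:]  # simulate damage to one stored copy
print("damaged blocks:", audit(tampered, digests))
```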