Data Compression: Recently Published Documents
Design and development of learning model for compression and processing of deoxyribonucleic acid genome sequence
Owing to the substantial volume of human genome sequence data files (ranging from 30 to 200 GB), genomic data compression has received considerable attention, and storage costs are one of the major problems faced by genomics laboratories. This calls for modern data compression technology that reduces not only the storage requirements but also the cost of operation. There have been few attempts to solve this problem independently of both hardware and software. A systematic analysis of associations between genes provides techniques for recognizing operative connections among genes and their respective products, as well as insights into essential biological events that are most important for understanding health and disease phenotypes. This research proposes a reliable and efficient deep learning system for learning embedded projections that combine gene interactions and gene expression for prediction, comparing the deep embeddings to strong baselines. In this paper we perform data processing operations and predict gene function, along with gene ontology reconstruction and gene interaction prediction. The three major steps of genomic data compression are extraction, storage, and retrieval of the data. Hence, we propose a deep learning approach based on computational optimization techniques that is efficient in all three stages of data compression.
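The abstract gives no implementation details, but a classical baseline helps put learned genomic compressors in perspective: a genome over the four-letter alphabet A, C, G, T can be packed at two bits per base, a fourfold reduction over one-byte-per-character text. The Python sketch below shows only this illustrative baseline, not the proposed deep learning system.

```python
# Minimal 2-bit packing baseline for DNA sequences (illustrative only;
# not the deep-learning method proposed in the paper above).
BASE_TO_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BITS_TO_BASE = {v: k for k, v in BASE_TO_BITS.items()}

def pack(seq: str) -> bytes:
    """Pack a DNA string into 2 bits per base (length stored separately)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASE_TO_BITS[base]
        # Left-align the last (possibly partial) group of bases.
        byte <<= 2 * (4 - len(seq[i:i + 4]))
        out.append(byte)
    return bytes(out)

def unpack(data: bytes, n_bases: int) -> str:
    """Recover the DNA string from the packed bytes."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BITS_TO_BASE[(byte >> shift) & 0b11])
    return "".join(bases[:n_bases])

seq = "ACGTACGTTG"
packed = pack(seq)
assert unpack(packed, len(seq)) == seq
print(f"{len(seq)} bytes of text -> {len(packed)} bytes packed")
```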
A combination of least significant bit and deflate compression for image steganography
Steganography is a technique, complementary to cryptography, by which secret information can be hidden in multimedia files such as images and videos. It offers a way of exchanging secret and encrypted information through an unobtrusive channel in which only the communicating parties can interpret the secret message. The literature has shown great interest in the least significant bit (LSB) technique, which embeds the secret message bits into the least significant bits of the image pixels. Although LSB has shown stable performance for image steganography, much work remains to be done on the message part. This paper proposes a combination of LSB and the Deflate compression algorithm for image steganography. Deflate utilizes both LZ77 and Huffman coding. After compressing the message text, LSB embedding is applied to hide the compressed text within the cover image. On benchmark images, the proposed method outperformed the state of the art, demonstrating the efficacy of applying Deflate data compression prior to LSB embedding.
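A minimal sketch of the compress-then-embed pipeline described above, assuming a flat 8-bit pixel buffer and a 32-bit length header (both illustrative choices, not details from the paper). Python's zlib implements Deflate, i.e. LZ77 plus Huffman coding.

```python
import zlib

def embed(pixels: bytearray, message: bytes) -> bytearray:
    """Deflate-compress the message, then hide it in the pixels' LSBs."""
    payload = zlib.compress(message)                 # Deflate = LZ77 + Huffman
    # Illustrative framing: 32-bit big-endian length header, then the payload.
    frame = len(payload).to_bytes(4, "big") + payload
    bits = [(byte >> (7 - i)) & 1 for byte in frame for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for this message")
    stego = bytearray(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit           # overwrite the LSB only
    return stego

def extract(pixels: bytearray) -> bytes:
    """Read LSBs, parse the length header, and inflate the payload."""
    def read_bytes(start_bit: int, count: int) -> bytes:
        out = bytearray()
        for b in range(count):
            byte = 0
            for i in range(8):
                byte = (byte << 1) | (pixels[start_bit + b * 8 + i] & 1)
            out.append(byte)
        return bytes(out)
    n = int.from_bytes(read_bytes(0, 4), "big")
    return zlib.decompress(read_bytes(32, n))

cover = bytearray(range(256)) * 40                   # stand-in for image pixels
secret = b"meet at dawn" * 10
assert extract(embed(cover, secret)) == secret
```

Compressing first shrinks the payload, and therefore the number of pixel LSBs that must be disturbed, which is the efficiency argument the paper makes.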
DDCA-WSN: A Distributed Data Compression and Aggregation Approach for Low Resources Wireless Sensors Networks
Developing an efficient secure query processing algorithm on encrypted databases using data compression
Distributed computing involves storing data with third-party storage and being able to access this information from anywhere at any time. Due to the advancement of distributed computing and databases, highly critical data are placed in databases. However, because the information is saved in outsourced services such as Database as a Service (DaaS), security issues arise on both the server and the client side. Also, query processing on the database by different clients, through time-consuming methods in a shared-resource environment, may cause inefficient data processing and retrieval. Secure and efficient data retrieval can be achieved with the help of an efficient data processing algorithm shared among different clients. This paper proposes an Efficient Secure Query Processing Algorithm (ESQPA) that processes queries efficiently by applying data compression before sending the encrypted results from the server to the clients. We address security issues by securing the data on the server side in an encrypted database using CryptDB. Encryption techniques have recently been proposed to provide clients with confidentiality in cloud storage. This method allows queries to be processed over encrypted data without decryption. To analyze the performance of ESQPA, it is compared with the current query processing algorithm in CryptDB. Results show that ESQPA is more space-efficient, saving up to 63% of storage space.
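The paper's CryptDB integration is not shown in the abstract, so the sketch below only illustrates the core idea of compressing results before encrypting and sending them; Fernet is a stand-in cipher and the function names are hypothetical. The ordering matters: well-encrypted ciphertext looks random and will not compress, so compression must precede encryption.

```python
import json, zlib
from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical server-side response path: compress the query result before
# encrypting it, so less ciphertext travels from the server to the client.
key = Fernet.generate_key()             # shared with the client out of band
cipher = Fernet(key)

def send_result(rows: list) -> bytes:
    """Serialize, Deflate-compress, then encrypt the query result."""
    plain = json.dumps(rows).encode()
    return cipher.encrypt(zlib.compress(plain))

def receive_result(token: bytes) -> list:
    """Decrypt, decompress, and deserialize on the client."""
    return json.loads(zlib.decompress(cipher.decrypt(token)))

rows = [{"id": i, "status": "active"} for i in range(1000)]
token = send_result(rows)
assert receive_result(token) == rows
raw = cipher.encrypt(json.dumps(rows).encode())
print(f"uncompressed token: {len(raw)} B, compressed token: {len(token)} B")
```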
Telemetry Data Compression Algorithm Using Balanced Recurrent Neural Network and Deep Learning
Telemetry data is large in volume, requiring substantial storage space and transmission time, which poses a significant obstacle to storing or sending it. Lossless data compression (LDC) algorithms have evolved to process telemetry data effectively and efficiently, with a high compression ratio and a short processing time. Telemetry data can be compressed to limit the required storage space and link bandwidth. Although various studies on the compression of telemetry data have been conducted, the nature of telemetry data makes compression extremely difficult. The purpose of this study is to offer a subsampled and balanced recurrent neural lossless data compression (SB-RNLDC) approach for increasing the compression rate while decreasing the compression time. This is accomplished through the development of two models: one for subsampled averaged telemetry data preprocessing and another for balanced recurrent neural lossless data compression (BRN-LDC). Subsampling and averaging are conducted at the preprocessing stage using an adjustable sampling factor. A balanced compression interval (BCI) is used to encode the data depending on the probability measurement during the LDC stage. This work also compares differential compression techniques directly. The results demonstrate that balancing-based LDC can reduce compression time and ultimately improve dependability. The final experimental results show that the proposed model can enhance computing capabilities in data compression compared to existing methodologies.
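The SB-RNLDC models themselves are not reproduced here, but the subsample-and-average preprocessing step the abstract describes can be sketched. Non-overlapping averaging windows are an assumed interpretation of the paper's adjustable sampling factor.

```python
import numpy as np

def subsample_average(signal: np.ndarray, factor: int) -> np.ndarray:
    """Average non-overlapping windows of `factor` samples (assumed semantics
    of the adjustable-sampling-factor preprocessing; illustrative only)."""
    n = len(signal) - len(signal) % factor        # drop the ragged tail
    return signal[:n].reshape(-1, factor).mean(axis=1)

telemetry = np.sin(np.linspace(0, 8 * np.pi, 10_000)) + 0.05 * np.random.randn(10_000)
reduced = subsample_average(telemetry, factor=4)  # 4x fewer samples to encode
print(telemetry.shape, "->", reduced.shape)       # (10000,) -> (2500,)
```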
Lossless Genome Data Compression Using V-Gram
A deep learning scheme for efficient multimedia IoT data compression
Data compression algorithms for sensor networks with periodic transmission schemes
The operating state of the switch cabinet is significant for the reliability of the whole power system, and collecting and monitoring its data through a wireless sensor network is an effective way to avoid accidents. This paper proposes a data compression method based on a periodic transmission model under the constraints of limited energy consumption and memory space in the complex environment of switch cabinet sensor networks. The proposed method is presented rigorously and intuitively through theoretical derivation and an algorithm flowchart. Finally, numerical simulations are carried out and compared with the original data. The comparisons of compression ratio and error indicate that the improved algorithm performs better on periodic sensing data with interference and can preserve the data's trend while maintaining the timing sequence.
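The abstract does not specify the algorithm, so the following is only a generic sketch of the underlying idea: encode periodic sensor data as one reference cycle plus small residuals, zeroing residuals inside a dead band so noise is dropped while the data's trend and timing are kept. All parameter names are illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative delta-from-reference-period scheme (not the paper's algorithm):
# periodic sensor data becomes one reference cycle plus sparse residuals,
# which compress well and keep reconstruction error within the dead band.
def encode(signal: np.ndarray, period: int, deadband: float):
    cycles = signal[: len(signal) // period * period].reshape(-1, period)
    reference = cycles[0].copy()
    residuals = cycles - reference
    residuals[np.abs(residuals) < deadband] = 0.0   # drop in-band noise
    return reference, residuals

def decode(reference: np.ndarray, residuals: np.ndarray) -> np.ndarray:
    return (reference + residuals).ravel()

t = np.arange(4000)
signal = np.sin(2 * np.pi * t / 100) + 0.01 * np.random.randn(4000)
ref, res = encode(signal, period=100, deadband=0.05)
error = np.max(np.abs(decode(ref, res) - signal))
print(f"max reconstruction error: {error:.3f}, zeroed residuals: {np.mean(res == 0):.0%}")
```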
A New Transparent Cloud-Based Model for Sharing Medical Images with Data Compression and Proactive Resource Elasticity
Damage detection and localization under variable environmental conditions using compressed and reconstructed Bayesian virtual sensor data
Structural health monitoring (SHM) with a dense sensor network and repeated vibration measurements produces a large amount of data that must be stored. If the sensor network is redundant, data compression is possible by storing the signals of selected Bayesian virtual sensors only, from which the omitted signals can be reconstructed with higher accuracy than the actual measurement. The selection of the virtual sensors for storage is made individually for each measurement based on the reconstruction accuracy. Data compression and reconstruction for SHM are the main novelty of this paper. The stored and reconstructed signals are used for damage detection and localization in the time domain using spatial or spatiotemporal correlation. A whitening transformation is applied to the training data to take environmental and operational influences into account. The first principal component of the residuals is used to localize damage and also to design the extreme value statistics control chart for damage detection. The proposed method was studied with a numerical model of a frame structure with a dense accelerometer or strain sensor network. Only five acceleration or three strain signals out of the total of 59 signals were stored. The stored and reconstructed data outperformed the raw measurement data in damage detection and localization.
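As a rough illustration of the storage-and-reconstruction idea (a simplified stand-in, not the paper's Bayesian virtual sensor formulation), a linear map from the few stored channels to the omitted ones can be fitted on training data and then used to reconstruct the full sensor set; the 59-channel, five-stored-signal setup mirrors the numbers quoted above.

```python
import numpy as np

# Simplified stand-in for Bayesian virtual sensors: learn a least-squares map
# from the stored channels to the omitted ones, then reconstruct the omitted
# signals from the few signals that were actually kept in storage.
rng = np.random.default_rng(0)
modes = rng.standard_normal((3, 59))                 # redundant 59-channel network
train = rng.standard_normal((5000, 3)) @ modes + 0.01 * rng.standard_normal((5000, 59))
test = rng.standard_normal((200, 3)) @ modes + 0.01 * rng.standard_normal((200, 59))

stored = [0, 12, 25, 37, 51]                         # keep 5 of 59 signals
omitted = [i for i in range(59) if i not in stored]
W, *_ = np.linalg.lstsq(train[:, stored], train[:, omitted], rcond=None)

reconstructed = test[:, stored] @ W                  # recover the other 54
rmse = np.sqrt(np.mean((reconstructed - test[:, omitted]) ** 2))
print(f"reconstruction RMSE: {rmse:.4f}")            # small if network is redundant
```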
An Introduction to Neural Data Compression
Neural compression is the application of neural networks and other machine learning methods to data compression. Recent advances in statistical machine learning have opened up new possibilities for data compression, allowing compression algorithms to be learned end-to-end from data using powerful generative models such as normalizing flows, variational autoencoders, diffusion probabilistic models, and generative adversarial networks. The present article aims to introduce this field of research to a broader machine learning audience by reviewing the necessary background in information theory (e.g., entropy coding, rate-distortion theory) and computer vision (e.g., image quality assessment, perceptual metrics), and providing a curated guide through the essential ideas and methods in the literature thus far.
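As a small taste of the information-theory background the article reviews, the empirical Shannon entropy of a source gives the bits-per-symbol floor that any entropy coder, learned or classical, aims to approach. The snippet below computes it for a byte string; the example text is arbitrary.

```python
import math
from collections import Counter

# The entropy-coding bound: no lossless code can use fewer than H(X) bits per
# symbol on average, so H(X) is the target an entropy coder works against.
def entropy_bits_per_symbol(data: bytes) -> float:
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

text = b"abracadabra abracadabra abracadabra"
h = entropy_bits_per_symbol(text)
print(f"empirical entropy: {h:.2f} bits/symbol "
      f"(vs. 8 bits/symbol raw; lower bound ~{h * len(text) / 8:.1f} bytes)")
```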