The compressibility and the randomness of compressed data based on Fibonacci code : a novel approach

Bibliographic Details
Main Author: Al-Khayyat, Kamal Ahmed Mulhi (Author)
Format: Thesis
Language:English
Published: Kuala Lumpur : Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, 2021
Subjects:
Online Access:http://studentrepo.iium.edu.my/handle/123456789/11044
Description
Summary:The tremendous growth of data generated daily has made data compression an important and continually evolving field. Compression has become the primary means of reducing data volume, optimizing the use of storage and accelerating data transfer across all kinds of networks, chiefly the World Wide Web, and thereby lowering the cost of transport and storage. Compressed data grows at the same rate as the data itself, which creates an urgent need to understand and analyze the compressed files themselves; yet, because most efforts focus on inventing and refining new compression algorithms, few attempt to understand and analyze the compressed output. This research investigates compressed files and introduces a new way to analyze and understand compressed data from new angles. The analysis contributes to solutions to practical problems, including the problem faced by servers of classifying files before actually compressing them, known as compressibility. Studying compressibility on server systems is a sensitive and important issue, since it supports optimal use of the server's hardware and software resources. This research presents a new method, within a single framework, by which server systems can distinguish compressed files from uncompressed files and, among compressed files, distinguish those that can be compressed further from those that cannot. Moreover, since existing randomness-testing programs cannot, in most cases, distinguish compressed data from uncompressed data, this study provides an integrated package of methods for studying the randomness of compressed files, called RTCD. The package analyzes the randomness of compressed files from new practical angles, makes it possible to compare compressed files with one another, and distinguishes between them successfully; it includes quantitative and graphical measures intended to serve as practical standards. The analysis relies on the Fibonacci code as a strong analytical basis capable of capturing the common characteristics of compressed files and thus distinguishing them from uncompressed files, while differences in these characteristics among compressed files reveal which files can still be compressed further. Compared with well-known techniques for studying compressibility and data randomness, this analysis demonstrates its distinctiveness and its ability to overcome the deficiencies of those methods.
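To make the underlying idea concrete, the sketch below is a minimal Python illustration, not code from the thesis: it shows how the Fibonacci universal code assigns self-delimiting codewords to positive integers, together with a hypothetical toy statistic (fibonacci_bits_per_byte, an assumed name) that reacts differently to high-entropy bytes, such as already-compressed data, and to heavily skewed, compressible bytes. The thesis's actual compressibility and RTCD randomness measures are richer than this probe.

    import os

    def fibonacci_encode(n: int) -> str:
        """Fibonacci (Zeckendorf) codeword for a positive integer n.

        The codeword marks which Fibonacci numbers (1, 2, 3, 5, 8, ...) sum to n,
        least significant term first, and appends a final '1' so that every
        codeword ends in '11' and is self-delimiting.
        """
        if n < 1:
            raise ValueError("Fibonacci coding is defined for positive integers")
        fibs = []                      # Fibonacci numbers <= n
        a, b = 1, 2
        while a <= n:
            fibs.append(a)
            a, b = b, a + b
        bits = []
        remainder = n
        for f in reversed(fibs):       # greedy Zeckendorf decomposition
            if f <= remainder:
                bits.append("1")
                remainder -= f
            else:
                bits.append("0")
        bits.reverse()                 # least significant Fibonacci term first
        return "".join(bits) + "1"     # trailing '1' forms the '11' terminator

    def fibonacci_bits_per_byte(data: bytes) -> float:
        """Toy probe (an assumption, not the thesis's metric): average Fibonacci
        codeword length per byte, with byte values shifted to the range 1..256."""
        if not data:
            return 0.0
        return sum(len(fibonacci_encode(b + 1)) for b in data) / len(data)

    if __name__ == "__main__":
        print(fibonacci_encode(1), fibonacci_encode(4), fibonacci_encode(11))  # 11 1011 001011
        high_entropy = os.urandom(4096)            # stands in for already-compressed data
        skewed = bytes([1, 2, 3, 5, 8] * 800)      # stands in for highly compressible data
        print(f"high-entropy: {fibonacci_bits_per_byte(high_entropy):.2f} bits/byte")
        print(f"skewed:       {fibonacci_bits_per_byte(skewed):.2f} bits/byte")

On typical runs the high-entropy block averages roughly 10 to 11 bits per byte while the skewed block averages about 4 to 5, which is the kind of Fibonacci-code-based separation between incompressible and compressible data that the abstract alludes to; the thesis itself should be consulted for the actual quantitative and graphical measures.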
Item Description:Abstracts in English and Arabic.
"A thesis submitted in fulfilment of the requirement for the degree of Doctor of Philosophy in Computer Science." --On title page.
Physical Description:xviii, 234 leaves : colour illustrations ; 30 cm.
Bibliography:Includes bibliographical references (leaves 223-233).