Kanwar Ahmad Mustafa
Department Of Electronics and Communication
The University of Lahore
1-Km Thokar Raiwand Road, Lahore, Pakistan
punjabians50@hotmail.com

Abstract—Signal compression is concerned with reducing the amount of data to be stored or transmitted, so that higher effective transmission speeds can be achieved. Redundant data is removed during compression and restored during decompression. Several techniques accomplish this; they fall broadly into lossless and lossy compression, and can be applied to text, video, audio, and other data.

Keywords—Lossy Compression, Lossless Compression, Huffman Algorithm, DCT, JPEG.

I. INTRODUCTION
Compression is a process by which the size of data is reduced. There are two main types of data compression: lossless and lossy. These are further divided into methods such as Huffman coding, run-length encoding, and Lempel-Ziv. JPEG file compression is applied with the help of the DCT (discrete cosine transform). Data sometimes contains portions that carry no relevant information, or that restate or repeat already-known information; such data is said to contain redundancy. The rest of this paper is organized as follows: Section II briefly describes compression and its principles, Section III describes lossless compression, Section IV describes lossy compression, and Section V presents the conclusion.
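Of the methods named above, run-length encoding is the simplest to illustrate. The following is a minimal Python sketch (the function names `rle_encode` and `rle_decode` are our own, not from any library): each run of a repeated character is replaced by a (character, count) pair, and decoding expands the pairs back, losslessly.

```python
def rle_encode(s: str) -> list:
    """Run-length encode: collapse each run of repeated characters
    into a (character, count) pair."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            # Extend the current run.
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            # Start a new run.
            runs.append((ch, 1))
    return runs

def rle_decode(runs: list) -> str:
    """Invert rle_encode: repeat each character by its count."""
    return "".join(ch * n for ch, n in runs)

encoded = rle_encode("AAAABBBCCD")
print(encoded)                     # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
assert rle_decode(encoded) == "AAAABBBCCD"   # exact (lossless) recovery
```

Note that run-length encoding only pays off when the input actually contains long runs; on data with no repeated neighbors, the (character, count) pairs can be larger than the original.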
II. DATA COMPRESSION

Data compression is the representation of an information source (e.g., a data file, a speech signal, an image, or a video signal) as accurately as possible using the fewest bits. In other words, it is about storing and sending a smaller number of bits. Although many methods are used for this purpose, in general they can be divided into two broad categories: lossless and lossy methods. Compression is possible because information usually contains redundancy, i.e., information that is often repeated. Examples include recurring letters, numbers, or