ABSTRACT
The development of multimedia and digital imaging has led to a high quantity of data being required to represent modern imagery. This requires large disk space for storage and long transmission times over computer networks, both of which are relatively expensive. These factors demonstrate the need for image compression. Image compression addresses the problem of reducing the amount of space required to represent a digital image, yielding a compact representation and thereby reducing the image storage and transmission time requirements. The key idea is to remove the redundancy present within an image so as to reduce its size without affecting the essential information it carries. In this paper we are concerned with lossless image compression. Our proposed approach is a combination of a number of existing techniques and works as follows: first, we apply the well-known Lempel-Ziv-Welch (LZW) algorithm to the image in hand; the output of this step is then forwarded to a second step in which the Bose, Chaudhuri and Hocquenghem (BCH) error detection and correction algorithm is applied. To improve the compression ratio, the proposed approach applies the BCH algorithm repeatedly until inflation is detected. The experimental results show that the proposed algorithm achieves an excellent compression ratio without losing data when compared to standard compression algorithms.
Keywords: Image Compression; LZW; BCH
1. Introduction
Image applications are widely used, driven by recent advances in technology and breakthroughs in the price and performance of hardware and firmware. This leads to an enormous increase in the storage space and transmission time required for images, which emphasizes the need for efficient and effective image compression techniques. In this paper we provide a method that is capable of compressing images without degrading their quality. This is achieved by minimizing the number of bits required to represent each pixel, which in turn reduces the amount of memory required to store images and facilitates transmitting them in less time. Image compression techniques fall into two categories: lossless and lossy. Choosing between these two categories depends on the application and on the degree of compression required [1,2]. Lossless image compression is used to compress images in critical applications, as it allows the exact original image to be reconstructed from the compressed one without any loss of image data. Lossy image compression, on the other hand, suffers from the loss of some data; thus, repeatedly compressing and decompressing an image results in poor image quality. An advantage of this technique is that it allows a higher compression ratio than lossless compression [3,4]. Compression is achieved by removing one or more of the three basic data redundancies: 1) coding redundancy, which is present when less than optimal code words are used; 2) interpixel redundancy, which results from correlations between the pixels of an image; 3) psychovisual redundancy, which is due to data that are ignored by the human visual system [5]. Image compression is thus a solution for many imaging applications that require a vast amount of data to represent images, such as document imaging management systems, facsimile transmission, image archiving, remote sensing, medical imaging, entertainment, HDTV, broadcasting, education and video teleconferencing [6]. One major difficulty that faces lossless image compression is how to protect the quality of the image in a
way that the decompressed image appears identical to the original one. In this paper we are concerned with lossless image compression based on the LZW and BCH algorithms, which compresses different types of image formats. The proposed method repeats the compression three times in order to increase the compression ratio. The steps of our approach are as follows: first, we perform a preprocessing step to convert the image in hand into binary. Next, we apply the LZW algorithm to compress the image. In this step, the codes from 0 to 255 represent 1-character sequences consisting of the corresponding 8-bit character, and the codes from 256 through 4095 are created in a dictionary for sequences encountered in the data as it is encoded; the code for the sequence (without its final character) is emitted, and a new code (for the sequence with that character) is added to the dictionary [7]. Finally, we use the BCH algorithm to increase the image compression ratio. An error correction method is used in this step, where we store the normal data and first parity data in a memory cell array; the normal data and first parity data form the BCH-encoded data. We also generate second parity data from the stored normal data, and to check for errors we compare the first parity data with the second parity data, as in [8,9]. Notice that we repeat the BCH compression until the required level of compression is achieved. Decompression applies the same steps in reverse order and produces an image identical to the original one.
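The dictionary-based LZW step described above can be sketched as follows. This is a minimal illustrative encoder written for this article, not the implementation used in the paper (the function name `lzw_compress` is our own):

```python
def lzw_compress(data: bytes) -> list[int]:
    """Minimal LZW encoder: codes 0-255 stand for single bytes; new
    codes from 256 upward (capped at 4095, i.e. 12-bit codes, as in
    the paper) are added for each new sequence encountered."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            out.append(dictionary[w])       # emit code for the sequence without this byte
            if next_code <= 4095:           # 12-bit code limit
                dictionary[wc] = next_code  # add the sequence with this byte
                next_code += 1
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out
```

For example, `lzw_compress(b"ABABABA")` emits the codes [65, 66, 256, 258], where 256 stands for "AB" and 258 for "ABA".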
2. Literature Review
A large number of data compression algorithms have been developed and used throughout the years. Some are of general use, i.e., can be used to compress files of different types (e.g., text files, image files, video files, etc.); others are developed to compress a particular type of file efficiently, according to the representation form of the data on which the compression is performed. Below we review some of the literature in this field. In [10], the authors present lossless image compression with four modular components: pixel sequence, prediction, error modeling, and coding. They use two methods that clearly separate these four components, called the Multi-Level Progressive Method (MLP) and the Partial Precision Matching Method (PPMM), both involving linear prediction, modeling prediction errors by estimating the variance of a Laplace (symmetric exponential) distribution, and coding using arithmetic coding applied to pre-computed distributions [10]. In [11], a composite modeling method (a hybrid compression algorithm for binary images) is used to reduce
the number of data items coded by arithmetic coding; it codes the uniform areas with little computation and applies arithmetic coding to the remaining areas. Each image block is classified into one of three categories: all-white, all-black, and mixed; the image is processed 16 rows at a time and then operated on in two stages, global and local [11]. In [12], the authors propose an algorithm that works by applying a reversible transformation to the fourteen commonly used files of the Calgary Compression Corpus. It does not process its input sequentially; instead it processes a block of text as a single unit, forming a new block that contains the same characters but is easier to compress by simple compression algorithms, grouping characters together based on their contexts. This technique makes use of the context on only one side of each character, so that the probability of finding a character close to another instance of the same character is increased substantially. The transformation does not itself compress the data, but reorders it so that it is easy to compress with simple algorithms such as move-to-front coding in combination with Huffman or arithmetic coding [12]. In [13], the authors present a lossless grayscale image compression method, TMW, based on the use of linear predictors and implicit segmentation. The compression process is split into an analysis step and a coding step. In the analysis step, a set of linear predictors and other parameters suitable for the image are calculated in a way that minimizes the length of the encoded image; this parameter set is included in the compressed file and subsequently used for the coding step. To do the actual encoding, the chosen parameter set has to be considered part of the encoded image and stored or transmitted alongside the result of the coding stage [13].
In [14], the authors propose a lossless compression scheme for binary images which consists of a novel encoding algorithm and a new edge tracking algorithm. The scheme consists of two major steps: the first encodes the binary image data using the proposed encoding method, which encodes the image data into only the characteristic vector information of the objects in the image by using a new edge tracing method. Unlike existing approaches, this method encodes information about edge lines obtained using the modified edge tracing method instead of directly encoding the whole image data. The second step compresses the encoded image with Huffman and Lempel-Ziv-Welch (LZW) coding [14]. In [15], the author presents an algorithm for lossless binary image compression which consists of two modules, called the Two Modules Based Algorithm (TMBA): the first module performs direct redundancy exploitation and the second improved arithmetic coding [15]. In [16], a two-dimensional dictionary-based lossless image compression scheme for grayscale images is
introduced. The proposed scheme reduces correlation in the image data by finding two-dimensional blocks of pixels that are approximately matched throughout the data and replacing them with short codewords. The resulting two-dimensional Lempel-Ziv image compression scheme (denoted GS-2D-LZ) is designed to take advantage of the two-dimensional correlations in the image data; it relies on three different compression strategies, namely two-dimensional block matching, prediction, and statistical encoding [16]. In [17], the authors present a lossless image compression method based on Multiple-Tables Arithmetic Coding (MTAC). To encode a gray-level image f, it first classifies the data and then encodes each cluster of data using a distinct code table. The MTAC method employs a median edge detector (MED) to reduce the entropy rate of f, since the gray levels of two adjacent pixels in an image are usually similar. A base-switching transformation approach is then used to reduce the spatial redundancy of the image, since the gray levels of some pixels in an image are more common than those of others. Finally, arithmetic encoding is applied to reduce the coding redundancy of the image [17]. In [18], a lossless method of image compression and decompression is proposed that uses a simple coding technique called Huffman coding. A software algorithm was developed and implemented to compress and decompress a given image using Huffman coding on a MATLAB platform. The authors are concerned with compressing images by reducing the number of bits per pixel required to represent them, and with decreasing the transmission time for images; the image is reconstructed by decoding it using the Huffman codes [18].
In [19], an adaptive bit-level text compression schema based on Hamming-code data compression is used. The schema consists of six steps repeated to increase the compression rate; the overall compression ratio is found by multiplying the compression ratio of each loop, and the schema is referred to as HCDC(k), where k represents the number of repetitions [19]. In [20], the authors present a lossless image compression method based on BCH combined with the Huffman algorithm [20].
(Example image block pixel values: 30, 237, 52, 44, 160, 70, 249, 149, 133, 149)
After dividing the image into blocks of 7 bits, the system applies the BCH code, checking whether each block is a codeword by matching the block against the 16 standard codewords of the BCH code. In the first iteration, four codewords are found; each such block is compressed by the BCH algorithm, which converts it into a block of 4 bits.
1101010 1110111 1101010 1110111 → 1000 1111 1011 1111
When the BCH algorithm runs, a map file (Map 1) is initialized: if a block is a codeword, a 1 is added to the file, and a 0 is added if the block is a non-codeword. In this example: Map 1 = 0 1 0 0 1 0 0 0 1 0 1. This operation is repeated three times. The map files (Map 3, Map 2, and Map 1) are compressed by RLE before being attached to the header of the image to gain a better compression ratio.
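Because the (7, 4) single-error-correcting BCH code coincides with the Hamming(7,4) code, the codeword test above can be sketched as a polynomial-division check in GF(2). This is our own illustrative sketch (the names `is_codeword`, `encode_block` and `compress_block` are ours), not the MATLAB `bchenc`/`bchdec` routines used in the paper:

```python
G = 0b1011  # generator polynomial x^3 + x + 1 of the (7, 4) BCH/Hamming code

def is_codeword(block: int) -> bool:
    """A 7-bit block is one of the 16 valid codewords iff the
    generator polynomial divides it in GF(2) (zero remainder)."""
    r = block
    for shift in range(6, 2, -1):       # reduce degrees 6..3
        if r & (1 << shift):
            r ^= G << (shift - 3)
    return r == 0

def encode_block(msg: int) -> int:
    """Systematic (7, 4) encoding: the 4 message bits go in the high
    positions, followed by the 3-bit remainder of m(x)*x^3 mod g(x)."""
    r = msg << 3
    for shift in range(6, 2, -1):
        if r & (1 << shift):
            r ^= G << (shift - 3)
    return (msg << 3) | r

def compress_block(block: int):
    """Return (map_bit, bits): a valid codeword shrinks to its 4
    message bits (the high 4 bits) with map bit 1; any other block
    passes through as 7 bits with map bit 0."""
    if is_codeword(block):
        return 1, block >> 3
    return 0, block
```

A block passes the test exactly when it is one of the 16 valid codewords; only its 4 message bits are then kept and a 1 is written to the map file, otherwise all 7 bits are kept and a 0 is written.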
Out1 = compress matrix by LZW algorithm: norm2lzw (Bn)
Convert the LZW-compressed matrix into binary
SET N = 7, k = 4
WHILE (there is a codeword) AND (round <= 3)
    xxx = size of Out1
    remd = matrix size mod N
    div = matrix size / N
    FOR i = 1 TO xxx - remd STEP N
        FOR R = i TO i + N - 1
            // divide the image into blocks of size 7
            msg = Out1[R]
        END FOR R
        c2 = convert msg to a Galois field
        origin = c2
        d2 = decode with the BCH decoder: bchdec (c2, n, k)
        c2 = re-encode with the BCH encoder for the test: bchenc (d2, n, k)
        IF c2 == origin THEN    // the original block is a codeword
            INCREMENT test (the number of codewords found) by 1
            add the compressed block d2 to the matrix CmprsImg
            add 1 to the map[round] matrix
        ELSE
            add the original block origin to the matrix CmprsImg
            add 0 to the map[round] matrix
        END IF
    END FOR i
    Pad and add the remd bits to the matrix CmprsImg and encode them
    Final map file = map[round]    // reuse the map file in the next iteration
    FOR stp = 1 TO 3
        map_RLE[stp] = RLE (map[stp])    // compress the map with the RLE encoder
    END FOR stp
    INCREMENT round by 1
END WHILE
END
The decoder proceeds according to the values in the map file. Where the value is 1, the system reads 4 bits from the compressed image; this indicates a codeword, and by BCH processing it reconstructs the 7-bit block from the 16 valid BCH codewords that match it. Where the value is 0, the block is a non-codeword and the system reads 7 bits from the compressed image. The compressed image is:
1 1 0 0 0 0 1 1 0 1 1 1 0 1 0 0 1 0 0 1 0 0 0 1 1 1 0 1 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 1 0 0 0 0 1
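Reading the compressed stream under the control of the map can be sketched as follows. This is an illustrative helper of our own; `expand` stands in for the BCH re-encoding step that restores the 7-bit codeword from 4 message bits:

```python
def read_blocks(bitstream, map_bits, expand):
    """Walk the compressed bitstream: where the map bit is 1, read a
    4-bit block and expand it back to its 7-bit codeword; where it is
    0, read 7 bits verbatim. `expand` maps a 4-bit message (as a list
    of bits) to the matching 7-bit codeword; here it is a
    caller-supplied stand-in for the BCH re-encoding step."""
    out, pos = [], 0
    for m in map_bits:
        if m == 1:
            msg = bitstream[pos:pos + 4]
            pos += 4
            out.extend(expand(msg))
        else:
            out.extend(bitstream[pos:pos + 7])
            pos += 7
    return out
```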
3.4. Decompression
Decompression reverses the steps of the compression stage to reconstruct the images. First, the system decompresses the attached map file with the RLE decoder, because its values indicate which blocks in the compressed image are codewords to be decompressed. If the value in the map file is 1, the system reads a 4-bit block from the compressed image, meaning it is a codeword, and decompresses it with the BCH encoder; if the value is 0, it reads a 7-bit block from the compressed image, meaning it is not a codeword. This operation is repeated three times, after which the output of the BCH stage is decompressed using the LZW algorithm. The example below explains these steps.
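A note on the map files: the paper does not spell out its RLE variant, so a minimal run-length encoder/decoder pair for the map bit-streams might look like this (our own sketch):

```python
def rle_encode(bits):
    """Run-length encode a bit sequence as (bit, run_length) pairs."""
    out = []
    for b in bits:
        if out and out[-1][0] == b:
            out[-1] = (b, out[-1][1] + 1)  # extend the current run
        else:
            out.append((b, 1))             # start a new run
    return out

def rle_decode(pairs):
    """Invert rle_encode, recovering the original bit sequence."""
    return [b for b, n in pairs for _ in range(n)]
```

For instance, the Map 1 bit-stream from the example above, 0 1 0 0 1 0 0 0 1 0 1, encodes to the pairs (0,1) (1,1) (0,2) (1,1) (0,3) (1,1) (0,1) (1,1) and decodes back exactly.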
Copyright 2012 SciRes.
The decompression procedure shown in Figure 3 recovers the original image from the compressed image and is performed as follows:

Input: compressed image, attached map files mapi
Output: original image
BEGIN
    SET P = ( )          // empty output matrix
    SET j = 1
    SET n = 7
    SET k = 4
    SET round = 3        // number of iterations
    Rle_matrix = RLE decoder (mapi)
    WHILE round > 0
        FOR i = 0 TO length of (Rle_matrix)
            IF Rle_matrix[i] = 1 THEN      // block is a compressed codeword
                FOR s = j TO j + k - 1
                    // re-encode the 4-bit block with the BCH encoder
                    c2 = bchenc (CmprsImg (s), n, k)
                    INCREMENT j by 4
                    add c2 to matrix P
                END FOR s
            ELSE                           // block was not compressed; read it as is
                FOR s1 = j TO j + n - 1
                    add the uncompressed block CmprsImg[s1] to matrix P
                    INCREMENT j by 7
                END FOR s1
            END IF
        END FOR i
        DECREMENT round by 1
    END WHILE
    LZW_dec = decompress matrix P by LZW
    // post-processing: convert from binary to decimal to reconstruct the original image
    Original_image = bin2dec (LZW_dec)
END

The above steps describe the implementation of the compression and decompression of the proposed method, using a combination of the LZW and BCH algorithms, arrived at after extensive testing.
The next section shows the results of using this method on the MATLAB platform, with the compression ratio calculated by the following equation:
Cr = S0 / Sc
We use the same dataset, with the same image sizes, to compare the proposed method with LZW, RLE and Huffman, and then compare them by the number of bits needed to represent each pixel, according to the equation below:
BPP = (1 / Cr) × q, with q = 8
Here q is the number of bits representing each pixel in the uncompressed image, S0 is the size of the original data, and Sc is the size of the compressed data. We also compare the proposed method with standard compression techniques, and finally report the test results (the number of codewords found in the image) compared with the original size of the image in bits.
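The two evaluation measures reduce to two one-line helpers (our own sketch of the formulas above):

```python
def compression_ratio(original_size: float, compressed_size: float) -> float:
    """Cr = S0 / Sc: original data size divided by compressed size."""
    return original_size / compressed_size

def bits_per_pixel(cr: float, q: int = 8) -> float:
    """BPP = (1 / Cr) * q for a source image of q bits per pixel."""
    return q / cr
```

For example, an image compressed to half its size has Cr = 2.0 and, at q = 8, needs 4.0 bits per pixel.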
Table 1. Comparison with typical compression methods based on compression ratio, i.e., original image size divided by compressed image size.
| Image | RLE | LZW | HUFF | LZW & BCH |
|---|---|---|---|---|
| Airplane | 1.2162 | 1.6052 | 1.1857 | 1.8193 |
| Barbara | 1.0871 | 1.2894 | 1.4187 | 1.461 |
| Lenna | 1.0892 | 1.3811 | 1.0795 | 1.5719 |
| U3 | 1.0098 | 1.0098 | 1.1009 | 1.1406 |
| Peppers2 | 1.0969 | 1.3311 | 1.4187 | 1.5095 |
| Gold-hill | 1.1149 | 1.3812 | 1.0749 | 1.572 |
| Zelda28_tif | 1.0701 | 1.3925 | 1.1017 | 1.57 |
| Boat | 1.1023 | 1.3893 | 1.1192 | 1.5637 |
| house28.tiff | 1.1867 | 1.4515 | 1.1039 | 1.6493 |
| F-128.jpg | 2.1014 | 2.5747 | 1.4219 | 2.9018 |
| Camera.jpg | 1.1441 | 1.483 | 1.1285 | 1.6779 |
| Woman blonde | 1.0701 | 1.4074 | 1.1145 | 1.5953 |
| Walk bridge | 1.0398 | 1.2212 | 1.0475 | 1.3845 |
| Pirate | 1.064 | 1.3462 | 1.0911 | 1.5249 |
| Lake | 1.0875 | 1.3151 | 1.0633 | 1.4949 |
| Living room | 1.0629 | 1.3425 | 1.0845 | 1.5186 |
| Woman darkhair | 1.1333 | 1.5197 | 1.0964 | 1.7234 |
| Mini-fenn0043 | 1.1337 | 1.571 | 1.2339 | 1.7763 |
| AVG | 1.156111 | 1.445106 | 1.160267 | 1.636383 |
Lossless Image Compression Technique Using Combination Methods

Table 2. Compression results of images in bits per pixel.
| Image | RLE | LZW | HUFF | LZW & BCH |
|---|---|---|---|---|
| Airplane | 6.577865 | 4.983803 | 6.747069 | 4.397766 |
| Barbara | 7.359028 | 6.204436 | 5.638965 | 5.47622 |
| Lenna | 7.34484 | 5.792484 | 7.410838 | 5.090454 |
| U3 | 7.92236 | 7.92236 | 7.266782 | 7.038986 |
| Peppers2 | 7.293281 | 6.010066 | 5.638965 | 5.300354 |
| Goldhill | 7.175531 | 5.792065 | 7.442553 | 5.089417 |
| Zelda28_tif | 7.475936 | 5.745062 | 7.261505 | 5.096191 |
| Boat | 7.257552 | 5.758295 | 7.147963 | 5.116516 |
| House28.tiff | 6.741383 | 5.511539 | 7.247033 | 4.851074 |
| F-128.jpg | 3.806985 | 3.107158 | 5.626274 | 2.9018 |
| Camera.jpg | 6.992396 | 5.394471 | 7.089056 | 4.768494 |
| Woman_Blonde | 7.475937 | 5.68424 | 7.178107 | 5.015198 |
| Walkbridge | 7.693787 | 6.550934 | 7.637232 | 5.778625 |
| Pirate | 7.518797 | 5.942653 | 7.33205 | 5.24707 |
| Lake | 7.356322 | 6.083188 | 7.523747 | 5.352234 |
| Livingroom | 7.526578 | 5.959032 | 7.376671 | 5.268677 |
| Woman_darkhair | 7.059031 | 5.264197 | 7.296607 | 4.642334 |
| Mini-fenn0043 | 7.056541 | 5.092298 | 6.483508 | 4.504769 |
| AVG | 7.090786 | 5.711016 | 6.963607 | 5.05201 |
3.6. Discussion

In this section we show the efficiency of the proposed system, implemented in MATLAB. In order to demonstrate the compression performance of the proposed method, we compared it with some representative lossless image compression techniques on the set of ISO test images made available to us, listed in the first column of all tables. Figure 4 illustrates the comparison, based on compression ratio, between the proposed algorithm (BCH and LZW) and the standard image compression algorithms (RLE, Huffman and LZW), distinguished by color; Figure 5 compares the size of the original image with its size after compression by the standard algorithms and by the proposed method. Table 2 shows the compression results in terms of bits per pixel for the proposed method and the standard compression algorithms, and Figure 6 plots the results of Table 2. Table 1 lists the compression ratio results for the tested images, calculated as the size of the original image divided by the size of the image after compression; the second column lists the compression ratios obtained with the RLE algorithm, and columns three and four list the compression ratios obtained
by the LZW and Huffman algorithms respectively, while the last column lists the compression ratio achieved by the proposed method. In addition, the average compression ratio of each method over all tested images is given (RLE 1.2017, LZW 1.4808, Huffman 1.195782, and BCH with LZW 1.676091). The average compression ratio achieved by the proposed method is the best, meaning that the image size is reduced more when compressed with the combined LZW and BCH method than with the standard lossless compression algorithms; Figure 4 makes clear that the proposed method has a higher compression ratio than RLE, LZW and Huffman. Figure 5 displays the original image size and the size of the image after compression by RLE, LZW, Huffman, and the proposed method, which yields the smallest image size; this achieves the goal of this paper of reducing the storage needed for images and, therefore, the time for transmission. The second comparison, based on bits per pixel, is shown in Table 2. The goal of image compression is to reduce the size as much as possible while maintaining image quality; smaller files use less storage space, so fewer bits per pixel is better. The table covers the same image sets and shows that the proposed method needs fewer bits per pixel than the other standard image compression methods; the average bits per pixel over all tested images are 6.904287,
Figure 4. Comparing the proposed method with (RLE, LZW and Huffman) based on compression ratio.
Figure 5. Comparing the proposed method with (RLE, LZW and Huffman) based on image size.
5.656522, 6.774273 and 5.01157 for RLE, LZW, Huffman and the proposed method, respectively.
4. Conclusions
This paper was motivated by the desire to improve the effectiveness of lossless image compression by combining BCH and LZW. We provided an overview of various existing lossless image compression
techniques. We have proposed a highly efficient algorithm implemented using the BCH coding approach. The proposed method combines the advantages of the BCH algorithm with those of the LZW algorithm, which is known for its simplicity and speed. The ultimate goal is to give a relatively good compression ratio while keeping the time and space complexity minimal. The experiments were carried out on a collection of 20 test images. The results were evaluated using
Figure 6. Comparing the proposed method with (RLE, LZW and Huffman) based on bit per pixel.
compression ratio and bits per pixel. The experimental results show that the proposed algorithm improves the compression of images compared with the RLE, Huffman and LZW algorithms; the proposed method's average compression ratio is 1.636383, which is better than the standard lossless image compression algorithms.
5. Future Work

In this paper we developed a method for improving image compression based on BCH and LZW. For future work we suggest combining BCH with other compression methods, enabling the compression to be repeated more than three times, investigating how to provide a higher compression ratio for given images, and finding an algorithm that reduces the size of the map file. The experimental dataset in this paper was somewhat limited, so applying the developed methods to a larger dataset could be a subject for future research. Finally, extending the work to video compression is also very interesting. Video data is basically a three-dimensional array of color pixels that contains spatial and temporal redundancy. Similarities can thus be encoded by registering differences within a frame (spatial) and/or between frames (temporal), where a frame is the set of all pixels that correspond to a single moment in time; basically, a frame is the same as a still picture. Spatial encoding in video compression takes advantage of the fact that the human eye is unable to distinguish small differences in color as easily as it can perceive changes in brightness, so very similar areas of color can be averaged out, in a similar way to JPEG images. With temporal compression, only the changes from one frame to the next are encoded, as often a large number of the pixels will be the same in a series of frames.

REFERENCES

[1] R. C. Gonzalez, R. E. Woods and S. L. Eddins, "Digital Image Processing Using MATLAB," Pearson Prentice Hall, Upper Saddle River, 2003.
[2] K. D. Sonal, "Study of Various Image Compression Techniques," Proceedings of COIT, RIMT Institute of Engineering & Technology, Pacific, 2000, pp. 799-803.
[3] M. Rabbani and W. P. Jones, "Digital Image Compression Techniques," SPIE, Washington. doi:10.1117/3.34917
[4] D. Shapira and A. Daptardar, "Adapting the Knuth-Morris-Pratt Algorithm for Pattern Matching in Huffman Encoded Texts," Information Processing and Management, Vol. 42, No. 2, 2006, pp. 429-439. doi:10.1016/j.ipm.2005.02.003
[5] H. Zha, "Progressive Lossless Image Compression Using Image Decomposition and Context Quantization," Master Thesis, University of Waterloo, Waterloo.
[6] W. Walczak, "Fractal Compression of Medical Images," Master Thesis, School of Engineering, Blekinge Institute of Technology, Sweden.
[7] R. Rajeswari and R. Rajesh, "WBMP Compression," International Journal of Wisdom Based Computing, Vol. 1, No. 2, 2011. doi:10.1109/ICIIP.2011.6108930
[8] M. Poolakkaparambil, J. Mathew, A. M. Jabir, D. K. Pradhan and S. P. Mohanty, "BCH Code Based Multiple Bit Error Correction in Finite Field Multiplier Circuits,"
Proceedings of the 12th International Symposium on Quality Electronic Design (ISQED), Santa Clara, 14-16 March 2011, pp. 1-6. doi:10.1109/ISQED.2011.5770792
[9] B. Ranjan, "Information Theory, Coding and Cryptography," 2nd Edition, McGraw-Hill Book Company, India, 2008.
[10] P. G. Howard and J. S. Vitter, "New Methods for Lossless Image Compression Using Arithmetic Coding," Information Processing & Management, Vol. 28, No. 6, 1992, pp. 749-763. doi:10.1016/0306-4573(92)90066-9
[11] P. Franti, "A Fast and Efficient Compression Method for Binary Images," 1993.
[12] M. Burrows and D. J. Wheeler, "A Block-Sorting Lossless Data Compression Algorithm," Digital Systems Research Center, 1994.
[13] B. Meyer and P. Tischer, "TMW — A New Method for Lossless Image Compression," Australia, 1997.
[14] M. F. Talu and I. Turkoglu, "Hybrid Lossless Compression Method for Binary Images," University of Firat, Elazig, Turkey, 2003.
[15] L. Zhou, "A New Highly Efficient Algorithm for Lossless Binary Image Compression," Master Thesis, University of Northern British Columbia, Prince George, 2004.
[16] N. J. Brittain and M. R. El-Sakka, "Grayscale True Two-Dimensional Dictionary-Based Image Compression," Journal of Visual Communication and Image Representation, Vol. 18, No. 1, pp. 35-44.
[17] R.-C. Chen, P.-Y. Pai, Y.-K. Chan and C.-C. Chang, "Lossless Image Compression Based on Multiple-Tables Arithmetic Coding," Mathematical Problems in Engineering, Vol. 2009, 2009, Article ID: 128317. doi:10.1155/2009/128317
[18] J. H. Pujar and L. M. Kadlaskar, "A New Lossless Method of Image Compression and Decompression Using Huffman Coding Technique," Journal of Theoretical and Applied Information Technology, Vol. 15, No. 1, 2010.
[19] H. Bahadili and A. Rababaa, "A Bit-Level Text Compression Scheme Based on the HCDC Algorithm," International Journal of Computers and Applications, Vol. 32, No. 3, 2010.
[20] R. Al-Hashemi and I. Kamal, "A New Lossless Image Compression Technique Based on Bose, Chaudhuri and Hocquenghem (BCH) Codes," International Journal of Software Engineering and Its Applications, Vol. 5, No. 3, 2011, pp. 15-22.