Archive for July, 2006

Loading JP2 and J2K images works!

July 31, 2006

I was finally able to find the error in my JPEG 2000 plug-in code. The cause was that I was not reading the filename properly (it was basically NULL), which led to segmentation faults when the plug-in tried to load images. I had always assumed that the run function would automatically receive the filename and never bothered to check it! I solved the problem by declaring the necessary arguments in the query function.
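For reference, the registration ends up looking roughly like the sketch below (GIMP 2.x plug-in API). The procedure name, strings, and menu label here are placeholders rather than my exact code; the point is simply that run() only gets a filename because query() declares one.

/* Sketch of the query() registration for a load plug-in (GIMP 2.x API).
 * Names and descriptions are placeholders, not my exact code. */
static void
query (void)
{
  static GimpParamDef load_args[] =
  {
    { GIMP_PDB_INT32,  "run-mode",     "Interactive, non-interactive" },
    { GIMP_PDB_STRING, "filename",     "The name of the file to load" },
    { GIMP_PDB_STRING, "raw-filename", "The name entered"             }
  };
  static GimpParamDef load_return_vals[] =
  {
    { GIMP_PDB_IMAGE, "image", "Output image" }
  };

  gimp_install_procedure ("file_jp2_load",
                          "Loads JPEG 2000 (JP2/J2K) images",
                          "Loads JPEG 2000 (JP2/J2K) images",
                          "author", "copyright", "2006",
                          "JPEG 2000 image",
                          NULL,                  /* load plug-ins take no image type */
                          GIMP_PLUGIN,
                          G_N_ELEMENTS (load_args),
                          G_N_ELEMENTS (load_return_vals),
                          load_args, load_return_vals);

  gimp_register_load_handler ("file_jp2_load", "jp2,j2k", "");
}

With a registration like this, run() then receives the filename as param[1].data.d_string instead of NULL.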

The good news is that the load plug-in is working smoothly on the handful of images I tested it on. It may be a little slow, but I can definitely speed it up using row-by-row processing. I used the OpenJPEG library, although I could have used JasPer as well, since the problem I was hitting was common to both. But OpenJPEG is geared specifically towards JPEG 2000 development and its functions are much easier to use than JasPer's. The final code is also very clean.
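The decode itself is short with OpenJPEG. Below is a minimal sketch using the 1.x API I am building against; error handling and the copy into the GIMP drawable are left out, and the details differ a bit from my actual plug-in.

/* Sketch of decoding a .jp2 file with the OpenJPEG 1.x API.
 * Error handling and the copy into a GIMP drawable are omitted. */
#include <stdio.h>
#include <stdlib.h>
#include <openjpeg.h>

opj_image_t *
load_jp2 (const char *filename)
{
  FILE *f = fopen (filename, "rb");
  if (!f)
    return NULL;

  /* read the whole file into memory for opj_cio_open() */
  fseek (f, 0, SEEK_END);
  long length = ftell (f);
  fseek (f, 0, SEEK_SET);
  unsigned char *src = malloc (length);
  fread (src, 1, length, f);
  fclose (f);

  opj_dparameters_t parameters;
  opj_set_default_decoder_parameters (&parameters);

  opj_dinfo_t *dinfo = opj_create_decompress (CODEC_JP2);   /* CODEC_J2K for .j2k files */
  opj_setup_decoder (dinfo, &parameters);

  opj_cio_t *cio = opj_cio_open ((opj_common_ptr) dinfo, src, length);
  opj_image_t *image = opj_decode (dinfo, cio);             /* decoded samples, per component */

  opj_cio_close (cio);
  opj_destroy_decompress (dinfo);
  free (src);

  /* image->numcomps and image->comps[i].w/h/data hold the decoded pixels */
  return image;
}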

A screenshot of how the image was loaded is shown below.

screenshot

I will be out attending a family function for the next two days. After that I will work on refining the current load code and then on saving images as JP2 or J2K.


Inverse Halftoning – contd

July 28, 2006

This update comes a little late, as I was busy working on the JPEG 2000 plug-in and forgot to post my final status on inverse halftoning. I decided on the algorithm to use, implemented it in C, and finally wrote the plug-in, which seems to work well except on images of around 1024×1024 and larger. My guess is that I am not handling memory well, and hopefully I can resolve this issue soon. In the meantime I started working on the JPEG 2000 plug-in, as I was getting a bit bored with halftoning!

Here are some scanned newspaper images on which I tried the algorithm using the plug-in.

news_1: halftoned newspaper image

news_1_ihalf: inverse halftoned image

news_2: halftoned newspaper image

news_2_ihalf: inverse halftoned image

The basic algorithm goes as follows:

If we take a one-step wavelet transform of a halftoned image, the low-pass part will be less noisy, whereas the high-frequency part will be extremely noisy, since it picks up the many edges in the halftoned image. The basic idea is to remove this noise while preserving the real edges. To preserve the edges, I take the high-frequency content from blurred versions of the image; this reduces the noise while keeping the useful edge information. I get the blurred image using a 3×3 kernel; if the radius of the kernel is increased, the final image becomes very blurry.

It can be argued that this process is almost the same as simply blurring the image, but in actuality there is a slight difference: I use the high-frequency content from the blurred image and the low-frequency content from the original halftoned image. Further work could improve the sharpness of the results, but for now the method works, so I will concentrate on making it memory-efficient. Once that is done, I will work on further improving the quality of the inverse halftoned images. Another update on my JPEG 2000 work will follow soon…
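Before signing off, here is a rough sketch of how the blending fits together. subbands_t, dwt2(), idwt2(), and blur3x3() are hypothetical stand-ins for my actual wavelet and convolution routines, so treat this as pseudocode in C clothing rather than my real code.

/* Rough sketch of the blending step. subbands_t, dwt2(), idwt2() and
 * blur3x3() are hypothetical stand-ins for my actual routines. */
#include <stdlib.h>

typedef struct { float *s, *wh, *wv, *wd; } subbands_t;

extern void dwt2    (const float *img, int w, int h, subbands_t *out);
extern void idwt2   (const subbands_t *in, float *img, int w, int h);
extern void blur3x3 (const float *in, float *out, int w, int h);

void
inverse_halftone (const float *halftone, float *out, int w, int h)
{
  float *blurred = malloc (w * h * sizeof (float));
  blur3x3 (halftone, blurred, w, h);     /* 3x3 blur of the halftone */

  subbands_t orig, soft;
  dwt2 (halftone, w, h, &orig);          /* S, W_H, W_V, W_D of the original */
  dwt2 (blurred,  w, h, &soft);          /* same sub-bands of the blurred copy */

  /* keep the low-frequency band S from the original halftone, but take the
   * high-frequency (edge) bands from the blurred image, where the halftone
   * noise has already been knocked down */
  orig.wh = soft.wh;
  orig.wv = soft.wv;
  orig.wd = soft.wd;

  idwt2 (&orig, out, w, h);              /* reconstruct the continuous-tone image */
  free (blurred);
}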

Inverse Halftoning – problems

July 16, 2006

Inverse halftoning is tough!!! Yeah, that's my final conclusion. No, I have not given up… I am just frustrated that things are not working properly. I pretty much explained the steps of the wavelet-based inverse halftoning process in my last post. I realized later that I should not pass the ‘S’ component through a Gaussian filter, as this makes the final image even blurrier. After all, the whole point of the process is to recover edges from the halftoned image.

After recovering the edges, the more difficult part is blending them back into the image. One paper suggested passing the horizontal and vertical edge bands (from the DWT) through a low-pass filter, which should reduce the noise and highlight the main edges. Well, it does highlight the main edges, but it also highlights a lot of extra stuff. Supposedly (as suggested in another paper) this does not happen with error-diffused halftones; in fact, the low-pass filtering approach only works well when the halftoned image is error-diffused. A real bummer, because the authors of the paper claimed the algorithm worked for any kind of halftoned image, and that is what got me to implement it. Anyway, I now have a pretty good grasp of what is going on and should be able to come up with a more general algorithm.

It's getting late now. My plan for tomorrow is to test the algorithm on a newspaper image that I have scanned. That should be interesting because it's a more practical situation.

Inverse Halftoning

July 14, 2006

For the past week I have been working on implementing an inverse halftoning algorithm from Rice University, for which a Matlab implementation had been published. The results were good and I was motivated to implement the same thing in C. That turned out to be a bad decision because of the various complicated steps involved. My plan was to start from the denoising implementation and reuse the DWT code provided in the same toolkit, but this did not work and I got some random-looking images, which was definitely not what I had in mind for the final result. I realized that I should have carefully tested and understood the halftoning process first.

The work done at Rice University was based on the original work by Xiong, Orchard, and Ramchandran in a paper titled “Inverse Halftoning Using Wavelets”. This was the first paper on inverse halftoning using wavelets, and most of the subsequent research papers used some form of the algorithm proposed in it. Unfortunately, there is no implementation of that algorithm available online; fortunately, the method is simple enough to implement in Matlab. My goal was to first test how the algorithm works and then implement it in C. So far I have results from a Matlab implementation.

Here are the steps involved in the method (a rough C sketch of the whole pipeline follows the list):

Step 1: Take the DWT of the halftoned image. This produces four sub-images, say S, W_H, W_V, and W_D. S contains the low-frequency information of the image, W_H is the horizontal high-pass image, W_V is the vertical high-pass image, and W_D is the diagonal high-pass image.

Step 2: Pass W_H and W_V through a low-pass Gaussian filter and ignore W_D.

Step 3: Pass S through an edge-preserving filter so the result does not come out blurry. For now, I instead passed S through a Gaussian filter as well; the result was blurry, as expected, but not by much. I will implement the proper noise-removal filter soon.

Step 4: Compute the inverse DWT using the modified wavelet coefficients, ignoring W_D.
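Putting the four steps together, here is that rough sketch. dwt2(), idwt2(), gaussian_lowpass(), and edge_preserving_filter() are hypothetical stand-ins for the actual Matlab/C routines, so this is only an outline of the pipeline.

/* Outline of the four steps above. The helpers are hypothetical
 * stand-ins for the actual Matlab/C routines. */
#include <string.h>

typedef struct { float *s, *wh, *wv, *wd; } subbands_t;

extern void dwt2                   (const float *img, int w, int h, subbands_t *out);
extern void idwt2                  (const subbands_t *in, float *img, int w, int h);
extern void gaussian_lowpass       (float *band, int w, int h);
extern void edge_preserving_filter (float *band, int w, int h);

void
inverse_halftone_wavelet (const float *halftone, float *out, int w, int h)
{
  subbands_t sb;
  dwt2 (halftone, w, h, &sb);                  /* Step 1: S, W_H, W_V, W_D */

  gaussian_lowpass (sb.wh, w / 2, h / 2);      /* Step 2: smooth the horizontal band */
  gaussian_lowpass (sb.wv, w / 2, h / 2);      /*         and the vertical band */

  edge_preserving_filter (sb.s, w / 2, h / 2); /* Step 3: denoise S without blurring edges */

  memset (sb.wd, 0, (w / 2) * (h / 2) * sizeof (float)); /* ignore the diagonal band */

  idwt2 (&sb, out, w, h);                      /* Step 4: inverse DWT */
}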

The sequence of images, starting from the original, is shown below:

halftoned image1: original image

pepper_s: S wavelet image

pepper_h: W_H wavelet image

pepper_v: W_V wavelet image

Passing W_H, W_V, and S through Gaussian filtering and taking the inverse transform, we get:

inverse_pepper: final inverse halftoned image

If you compare this with the image I got in my previous post, you will notice a huge difference in quality. I will improve on this by changing the filter used for the S wavelet coefficients. Once I have the Matlab code ready, I will implement it in C and then make it work as a GIMP plug-in. This should take about 2-3 days, because most of the code for the wavelet transforms is already available; I just have to get the filtering code working (which I have actually written, though I realized there is a better and faster version online!) and specify the necessary kernel. I should have the final plug-in done within 3 days. I will test the Matlab code on some pictures that I have scanned from newspapers. That should give a good idea of how this method can actually be used, because newspaper images are halftoned images.
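For what it's worth, the straightforward (not the fast) version of a 3×3 Gaussian filtering pass is only a few lines. The sketch below assumes a clamped border and is not necessarily the exact kernel I will end up with:

/* Straightforward 3x3 Gaussian convolution with clamped borders;
 * not the fast version, just the idea. */
static void
gaussian3x3 (const float *in, float *out, int w, int h)
{
  /* 3x3 Gaussian kernel; weights sum to 16 */
  static const float k[3][3] = {
    { 1.f, 2.f, 1.f },
    { 2.f, 4.f, 2.f },
    { 1.f, 2.f, 1.f }
  };

  for (int y = 0; y < h; y++)
    for (int x = 0; x < w; x++)
      {
        float sum = 0.f;
        for (int dy = -1; dy <= 1; dy++)
          for (int dx = -1; dx <= 1; dx++)
            {
              int sx = x + dx, sy = y + dy;
              if (sx < 0) sx = 0;           /* clamp at the image borders */
              if (sx >= w) sx = w - 1;
              if (sy < 0) sy = 0;
              if (sy >= h) sy = h - 1;
              sum += k[dy + 1][dx + 1] * in[sy * w + sx];
            }
        out[y * w + x] = sum / 16.f;
      }
}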

Starting Halftoning

July 7, 2006

Having finished image denoising, I have now started implementing inverse halftoning. This might get a little tricky because I do not have a C implementation, only some Matlab code. But I think I can reuse most of the image denoising code, because the concepts are almost the same. The paper I was reading on halftoning using wavelets mentions that the approach is similar to image denoising. Since a halftone is basically a very noisy but elegant-looking image, I decided to try the current denoising plug-in on a halftoned image. The results, as expected, were not great, but they showed that the concepts could be used with some slight modification. I will be working on that over the next couple of days. Here are the results I got using the denoising plug-in:

halftoned image1: halftoned image

Halftoned: image obtained after denoising

Image Denoising – contd

July 4, 2006

Today I finally completed the image denoising plug-in that I started writing last week. Previously the code could only handle grayscale images, and only at power-of-two resolutions. Now the implementation is generalized and gave pretty good results when tested under various levels of noise. The main problem I faced was understanding how GIMP presents color images as a single array. I had the concept right, but I was reading the data in Matlab's order, which swaps the x and y coordinates. I didn't realize what I was doing wrong until I asked Simon (my mentor) about it.
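The indexing issue boils down to one line: GIMP hands the pixel region back as a single interleaved, row-major buffer, so channel c of the pixel at (x, y) lives at (y * width + x) * channels + c, whereas Matlab's column-major convention effectively swaps x and y. A tiny illustrative helper (the function name is mine, not GIMP's):

#include <glib.h>

/* Channel c of the pixel at (x, y) in an interleaved, row-major buffer
 * such as the one filled by gimp_pixel_rgn_get_rect(). */
static guchar
get_sample (const guchar *buf, gint width, gint channels,
            gint x, gint y, gint c)
{
  return buf[(y * width + x) * channels + c];
}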

Once I fixed that, I was getting results, but they did not seem satisfactory for images of dimensions like 240×260. I was storing such an image in a 512×512 array, setting the relevant values to the image pixels and the rest to 0. As you can see, a large part of the array is then 0, so the overall noise in the array becomes small, and for this reason the image was not being denoised properly. The solution was to extend the image in each direction, creating a 2×(240×260) image, and fill the 512×512 array from that. This way the noise level stays the same and the denoising works. Some examples of how the denoising performs are shown below.
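One way to do this kind of extension is to mirror the image about its borders before filling the 512×512 working array. My actual code may differ in the details, but a minimal sketch of the idea looks like this:

/* Fill a size x size working array by mirroring the w x h image about its
 * borders instead of zero-padding it (a sketch, not my exact code). */
static int
mirror_index (int i, int n)
{
  i = i % (2 * n);        /* a mirrored signal repeats with period 2n */
  if (i >= n)
    i = 2 * n - i - 1;    /* reflect the second half back into [0, n) */
  return i;
}

static void
fill_extended (const float *img, int w, int h, float *ext, int size)
{
  for (int y = 0; y < size; y++)
    for (int x = 0; x < size; x++)
      ext[y * size + x] = img[mirror_index (y, h) * w + mirror_index (x, w)];
}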

Noisy Image 2: noisy image
Noisy Image 2: denoised image

Blurred 2: blurred image
Noisy Image: noisy image 2
Denoised Image: denoised image 2

There is a lot of noise in the first image, and the corresponding denoised image is not that good, but this is understandable considering the amount of information lost to noise. Notice what happens when the first noisy image is simply blurred: the noise is gone, but the image is severely blurred, which is not good. Clearly this denoising method is superior, restoring the basic image even when the noise is heavy. The noise in the second image is much lower, and hence the denoised image looks much better.