Friday, August 27, 2010

Entropy Filter v1

I have implemented an entropy filter.

I used the same distribution model that I had used for the mode filter. I do not think this is the appropriate model to use and I may want to revisit my mode filter as well.

I sort the pixel values in the 3x3 neighborhood around a pixel. Each pixel value receives a weight of 1/9; repeated values get the sum of their weights. The observations divide the domain into a set of spans, with the observations at the endpoints of the spans (plus two potential spans at the ends of the domain). I let the probability over a span equal the average weight of the endpoints of the span. The endpoints of the domain default to a weight of zero unless there is an observation at those points. The probability density is then the probability divided by the width of the span.
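In code, the construction looks roughly like this (a sketch; the function name and the default [0, 255] domain are just for illustration):

```python
import numpy as np

def span_density(neighborhood, lo=0, hi=255):
    """Piecewise-constant density from a neighborhood, as described above.

    Each observation gets weight 1/9 (repeats summed).  The observations
    split [lo, hi] into spans; a span's probability is the average of its
    two endpoint weights (domain ends default to weight zero), and its
    density is that probability divided by the span width.
    Returns a list of (left, right, density) tuples.
    """
    values, counts = np.unique(np.asarray(neighborhood), return_counts=True)
    weights = dict(zip(values.tolist(), (counts / counts.sum()).tolist()))
    # Endpoints of all spans: the domain ends plus the observations.
    points = sorted({float(lo), float(hi)} | set(weights))
    spans = []
    for left, right in zip(points[:-1], points[1:]):
        w = 0.5 * (weights.get(left, 0.0) + weights.get(right, 0.0))
        spans.append((left, right, w / (right - left)))
    return spans
```

For an all-1's neighborhood this produces exactly the two spans discussed below: [0, 1] and [1, 255], each carrying probability 0.5.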

The problem with this approach becomes obvious if we have a region that is all 1's on a domain of [0, 255]. The span to the left of the observation has a density of 0.5/1, while the span to the right of the observation has a density of 0.5/254.

Calculating the entropy for this, we get
E = Sum over i=[0..255] ( -p_i * log2(p_i) )
E = -(0.5 * 1/255) * log2(0.5) - 254 * (0.5/254) * log2(0.5/254)
E = -(1/510) * log2(0.5) - 0.5 * log2(1/508)
E ~= 0.002 + 4.494
E ~= 4.5

for a distribution that should probably have zero entropy.
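Plugging the numbers in (just the arithmetic above, nothing new):

```python
import math

# Evaluating the two-term sum numerically: the span left of the
# observation, and the 254 unit steps to its right at density 0.5/254.
e_left = -(0.5 * 1 / 255) * math.log2(0.5)
e_right = -254 * (0.5 / 254) * math.log2(0.5 / 254)
print(f"{e_left + e_right:.3f}")  # prints 4.496
```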

The result of this entropy filter has a lot of noise. The edges stand out beautifully, but they stand out with low entropy instead of high entropy.


I think it is back to the drawing board. I am considering two options:
1) Redefine the spans to be the points closest to a unique observation.
2) Convolve the observation impulses with a Gaussian.
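Something like this is what I mean by option 1 (a sketch, not tested in the filter; names and the [0, 255] domain are just for illustration). One nice consequence: if each span carries the full weight of its observation, the span widths cancel out of the entropy sum and it reduces to the plain histogram entropy of the unique values, which is zero for a constant patch.

```python
import numpy as np

def option1_spans(neighborhood, lo=0, hi=255):
    """Spans owned by each unique observation (nearest-point rule).

    Boundaries sit midway between consecutive unique values; each span
    keeps the full weight of its observation.
    Returns (boundaries, weights).
    """
    values, counts = np.unique(np.asarray(neighborhood, dtype=float),
                               return_counts=True)
    weights = counts / counts.sum()
    mids = (values[:-1] + values[1:]) / 2
    boundaries = np.concatenate(([lo], mids, [hi]))
    return boundaries, weights

def entropy(weights):
    # -sum p log2 p over span probabilities; a constant patch gives 0.
    return float(-(weights * np.log2(weights)).sum())
```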

The second approach requires tuning of the blurring Gaussian. I do not have any idea how one might determine an optimal Gaussian width through rigorous means.
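A sketch of option 2 (names and grid resolution are choices I would have to make anyway): put the weighted impulses on the [0, 255] grid, convolve with a Gaussian, renormalize, and take the entropy of the result. The kernel density estimation literature does offer bandwidth rules of thumb such as Silverman's, though I have not tried them here.

```python
import numpy as np

def kde_entropy(neighborhood, sigma, lo=0, hi=255):
    """Entropy of the neighborhood impulses smoothed with a Gaussian.

    Each observation becomes an impulse of weight 1/9 on the [lo, hi]
    grid; convolving with a Gaussian of width sigma and renormalizing
    gives a smooth distribution whose Shannon entropy is returned.
    """
    grid = np.arange(lo, hi + 1)
    impulses = np.zeros(grid.size)
    values, counts = np.unique(np.asarray(neighborhood, dtype=int),
                               return_counts=True)
    impulses[values - lo] = counts / counts.sum()
    # Gaussian kernel centered on the grid; width is the free parameter.
    kernel = np.exp(-0.5 * ((grid - grid.mean()) / sigma) ** 2)
    smoothed = np.convolve(impulses, kernel / kernel.sum(), mode="same")
    p = smoothed / smoothed.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

With any reasonable sigma, a constant patch now scores lower than a spread-out one, which is the behavior the v1 filter fails to deliver.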
