
Sub-image Searching Using Genetically-Evolved Wavelet Transforms
By Chris Wedge

Evolved Transform Coefficients

• Wavelets are capable of lossless compression under perfect conditions

• Dr. Frank Moore showed superior coefficients could be evolved in imperfect conditions

• Very time-intensive

Evolution on Sub-images

• Don Tinsley and Jason Kettell explored evolution with sub-images

• Similar performance gains but with computation time drastically reduced

• Noted performance disparity between sub-image and super-image

• Can disparity be exploited?

Where’s Waldo?

Search Algorithms

• Four algorithms used, though basically all the same: iteratively apply the transform (a minimal sketch follows this list)
  – Strict repeated transform application
    • Interesting side-effects with quantization
  – Repeated transform application, but quantize only once
  – Repeated transform application, only on Y
    • Again, side-effects present
  – Repeated transform application, only on Y, but quantize only once
• Search focus placed on developing high-performance transforms
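The slides do not reproduce the algorithms themselves, so here is a minimal sketch of the first variant (strict repeated transform application), assuming NumPy and PyWavelets with a stock D4 wavelet standing in for the evolved coefficients; the pass count, quantization step, and per-block MSE map used to localize the sub-image are illustrative choices, not the project's code.

import numpy as np
import pywt

def degrade(image, wavelet="db4", q=32, passes=4):
    # Repeatedly forward-transform, quantize the detail bands, and reconstruct.
    out = image.astype(float)
    for _ in range(passes):
        cA, (cH, cV, cD) = pywt.dwt2(out, wavelet)
        if q:  # Q = 0 means no quantization
            cH, cV, cD = (np.round(c / q) * q for c in (cH, cV, cD))
        out = pywt.idwt2((cA, (cH, cV, cD)), wavelet)[:image.shape[0], :image.shape[1]]
    return out

def block_mse_map(original, degraded, block=32):
    # Per-block MSE: a transform evolved for one 32x32 sub-image should make
    # that block stand out against the rest of the 512x512 super-image.
    rows, cols = original.shape[0] // block, original.shape[1] // block
    mse = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            a = original[r*block:(r+1)*block, c*block:(c+1)*block].astype(float)
            b = degraded[r*block:(r+1)*block, c*block:(c+1)*block]
            mse[r, c] = np.mean((a - b) ** 2)
    return mse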

Performance Evaluation

• Two methods of evaluation
  – Quantitative
    • Compare mean-squared error (MSE) values between the sub-image and the super-image (see the sketch below)
  – “Anecdotal”
    • MSE comparisons may indicate improvement, but the search output is ultimately meant for human consumption
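As a concrete reading of the quantitative measure, a minimal sketch is below; the helper names and the slice-based region selection are assumptions, not code from the project.

import numpy as np

def mse(a, b):
    # Mean-squared error between two equally sized grayscale images.
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def sub_vs_super(original, reconstructed, sub_region):
    # sub_region is e.g. (slice(r, r + 32), slice(c, c + 32)) for the 32x32 sub-image.
    sub_mse = mse(original[sub_region], reconstructed[sub_region])
    super_mse = mse(original, reconstructed)
    return sub_mse, super_mse  # the search wants sub_mse low and super_mse high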

Meet The Crew

Simple Evolution

• Evolve versus non-representative sub-image

• Repeatedly apply resulting transforms using search algorithms

Simple Evolution - Parameters

• 24 total runs
  – Fixed parameters
    • Daubechies-4 (D4) wavelet used as basis wavelet
    • Population (M) = 5000
    • Generations (G) = 2500
    • Multi-resolution (MR) = 1
    • Sub-image size 32x32 pixels (super-image is 512x512)
  – Variable parameters
    • 4 images
    • 3 quantization (Q) levels: 0, 32, 64
    • 2 threshold (T) levels: 0, 16

Simple Evolution - Results

• 24 runs
  – 1 run each per Q, T, image combination
  – Mean runtime 63min 57sec, St Dev 0.00144
  – MSE reductions over D4 on the sub-image
    • Q=64: 3.5%, 15.0%, 8.8%, 16.2%*
    • Q=32: 4.8%, 11.0%, 5.2%, 22.3%*
    • Q=0:
      – T=16: 7.9%, 6.7%, 1.8%, 12.4%*
      – T=0: 25.6%, 30.3%, 19.3%, 49.4%*
  – MSE reductions over D4 on the super-image
    • Q=64: 7.9%, -1.3%, 6.8%, 83%*
    • Q=32: 3.9%, 6.7%, 4.6%, 84.1%*
    • Q=0:
      – T=16: 7.6%, 1.9%, 1.4%, 95.6%*
      – T=0: 24.9%, 44.8%, 18.9%, 99.6%*
  – *(lenna, goldhill, monet, and dissimilar, respectively)

Simple Evolution - Results

Simple Evolution - Conclusions

• MSE reduction of the sub-image over the parent image seems somewhat arbitrary
  – More runs needed to get a better picture, but we want a general method that always works, so there is little point exploring that
• T has no effect if it is set to a value less than Q
  – Not surprising; this should have been obvious before running the tests
• MSE reduction at Q=0 is by far the highest
  – Very surprising! Wavelets are theoretically capable of lossless compression; attributed to the imprecision of floating-point arithmetic (see the check below)
• Dissimilar, the toy image, had some impressive results! Unfortunately, they are far from the desired results, and not exactly realistic
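To see why there is anything to optimize at Q = 0 at all, the quick check below (using PyWavelets for illustration, not the code from these experiments) shows that a D4 analysis/synthesis round trip, lossless in exact arithmetic, leaves a small floating-point residual.

import numpy as np
import pywt

x = np.random.default_rng(0).integers(0, 256, size=512).astype(float)
cA, cD = pywt.dwt(x, "db4")
rec = pywt.idwt(cA, cD, "db4")
print(np.max(np.abs(x - rec)))  # tiny but nonzero, e.g. on the order of 1e-13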

Detour!

• The non-representative sub-image did not perform as well as expected
• Representative sub-image performance
  – Revisiting Tinsley's and Kettell's work to concretize it
    • More runs
    • More fixed parameters (size, etc.)
  – But what is representative?
    • No determination algorithm or criteria mentioned; largely subjective
    • Ended up using miniature versions of the image

Detour! - Parameters

• 63 total runs
  – Fixed parameters
    • D4 wavelet used as basis wavelet
    • M = 5000
    • G = 2500
    • MR = 1
    • T = 0
    • Mini-image size 32x32 pixels (full image is 512x512)
  – Variable parameters
    • 4 images
    • 3 Q levels: 0, 32, 64

Detour! - Results

• Evolution versus single image
  – 60 runs
    • 5 per combination of Q level, image
    • Mean runtime 63min 51sec, St Dev 0.00065
    • Mean MSE reductions over D4
      – Q=64: 5.5% - 8.3%
      – Q=32: 2.4% - 2.7%
      – Q=0: 16.7% - 25.1%

Detour! - Results

• Evolution versus all images
  – 3 runs
    • 1 for each Q level
      – Wanted 5 each, but bugs and time constraints got in the way
    • Mean runtime 250min 57sec, St Dev 0.00212
    • MSE reduction over D4
      – Q=64: 6.4% - 9.5%
      – Q=32: 3.9% - 5.2%
      – Q=0: 17.3% - 27.2%

Detour! - Results

• 4.5% improvement

Detour! - Conclusions

• MSE improved over D4 in every run
  – Excluding Q=0, higher Q values may increase the improvement, but more Q levels need to be tested
  – Improvement was even noticed when the transform was applied to different images
• Single-image mean runtime approximately 98% lower than in Dr. Moore's runs
• Four-image evolution outperformed single-image evolution, but more runs are needed
• Four-image mean runtime approximately 91% lower than in Dr. Moore's runs

Back to searching…

• Simple evolution failed to do the trick

• Want very good performance on desired sub-image

• Also want very poor performance on the super-image as a whole

• But how to do both simultaneously?

Co-evolution

• Alter the GA to use a weighted fitness function
  – Total fitness = sub-image “goodness” + super-image “badness”
  – Can “weight” the importance of each aspect to drive evolution

SUBIMAGE_WEIGHT = 50;
SUPERIMAGE_WEIGHT = 100;

fitness[M] = (subMSE / minSubImageMSE) * SUBIMAGE_WEIGHT +
             (maxParentMSE / parentMSE) * SUPERIMAGE_WEIGHT;
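A sketch of how that weighted fitness might be evaluated across a population is below, assuming the GA minimizes this value and that an evaluate(member, image) helper returns the reconstruction MSE of an image under a member's coefficients; the helper and loop structure are assumptions, not the presenter's implementation.

SUBIMAGE_WEIGHT = 50
SUPERIMAGE_WEIGHT = 100

def coevolution_fitness(population, sub_image, super_image, evaluate):
    # evaluate(member, image) -> reconstruction MSE (hypothetical helper).
    sub_mses = [evaluate(m, sub_image) for m in population]
    super_mses = [evaluate(m, super_image) for m in population]
    min_sub, max_super = min(sub_mses), max(super_mses)
    fitness = []
    for sub_mse, super_mse in zip(sub_mses, super_mses):
        # Lower is better: rewards a low sub-image MSE and a high super-image MSE.
        fitness.append((sub_mse / min_sub) * SUBIMAGE_WEIGHT +
                       (max_super / super_mse) * SUPERIMAGE_WEIGHT)
    return fitness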

Co-evolution - Parameters

• 25 total runs so far
  – Fixed parameters
    • D4 basis wavelet
    • M = 500
    • G = 200
    • MR = 1
    • T = 0
    • Sub-image size 32x32 pixels (super is 512x512)
  – Variable parameters
    • 3 images
    • 3 Q levels: 0, 32, 64
    • 3 weightings used (sub vs super): 50 vs 100, 100 vs 50, 100 vs 100

Co-evolution - Results

• Mean runtime 242min 26sec, St Dev 0.00149
• 25 runs
  – 1 for each image, Q level, weight combination*
  – Results vary wildly!
  – *Two Monet runs unfinished

Co-evolution - Results

Co-evolution - Conclusions

• There does seem to be a slight trend in favor of the weightings

• The D4 MSEs of the sub-image versus the super-image are inconsistent

• Low M and G do not give the population much time to evolve

Finishing Up…

• The last two Monet co-evolution runs

• Co-evolution using mini-images as the super-image
  – Runtime reduction
  – Increased G, M

That Which Could Not Be

• Finishing additional four-image mini-image runs

• In general, more runs

• More parameter combinations explored

• Unless mini-image co-evolution proves fruitful, the search could not be made to work

Extensions

• Forward transforms

• Variable-length transforms

• Evolve versus Y, U, V instead of just Y

• Initially seed coefficients randomly

• Adapt for distributed / parallel computing

Fin

Questions?