The Difference Analyzer compares two images and points out their dissimilarities. Given any two images, it can spot significant differences much faster than the human eye.
The naive approach is a direct pixel-by-pixel comparison of the two images, flagging each pixel as different or not-different. However, this direct delta comparison is only useful if the images are lined up perfectly. With even the slightest misalignment, the pixel-by-pixel algorithm fails badly, as seen below.
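To make the failure concrete, here is a minimal sketch of the naive comparison on synthetic images (the `pixel_diff` helper and the test images are my own illustration, not the project's code). A one-pixel shift of an otherwise identical image is enough to flag a whole band of "differences":

```python
import numpy as np

def pixel_diff(a, b, tol=0):
    """Flag each pixel as different (True) when any channel differs by more than tol."""
    return np.any(np.abs(a.astype(int) - b.astype(int)) > tol, axis=-1)

# Two identical synthetic images, one shifted right by a single pixel.
img = np.zeros((64, 64, 3), dtype=np.uint8)
img[16:48, 16:48] = [200, 50, 50]   # a red square
shifted = np.roll(img, 1, axis=1)   # 1-pixel horizontal shift

identical = pixel_diff(img, img)
misaligned = pixel_diff(img, shifted)

print(identical.sum())    # 0: identical images show no differences
print(misaligned.sum())   # 64: the shift alone flags two full columns of the square
```

The flagged pixels are not real differences at all, just the edges of the square sliding over by one pixel.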
We developed a solution as visualized below.
Each image is first converted to a solid red, green, and blue image using our algorithm, open sourced on GitHub. This step reduces the effect of slight translations and rotations. The two converted images are then compared pixel-by-pixel, with differences colored in cyan. The density of the cyan color represents the likelihood of dissimilarity between the images.
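The open-sourced conversion step itself isn't reproduced here, so the sketch below stands in for it with a simple box blur (a hypothetical substitute: smoothing likewise dampens small misalignments before the pixel comparison). The `box_blur` and `highlight_differences` names and the tolerance value are my own, not from the project:

```python
import numpy as np

def box_blur(img, k=5):
    """Simple per-channel box blur. Smoothing stands in for the
    conversion step: it reduces sensitivity to small translations
    and rotations before the pixel-by-pixel comparison."""
    pad = k // 2
    padded = np.pad(img.astype(float), ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def highlight_differences(a, b, tol=60):
    """Blur both images, then paint pixels whose blurred values still
    differ by more than tol in cyan (R=0, G=255, B=255)."""
    mask = np.abs(box_blur(a) - box_blur(b)).max(axis=-1) > tol
    result = a.copy()
    result[mask] = [0, 255, 255]
    return result, mask

# A 1-pixel shift is now tolerated, while a genuine change is still caught.
img = np.zeros((64, 64, 3), dtype=np.uint8)
img[16:48, 16:48] = [200, 50, 50]
shifted = np.roll(img, 1, axis=1)    # small misalignment only
changed = img.copy()
changed[5:12, 5:12] = [50, 50, 200]  # a genuine new feature

_, mask_shift = highlight_differences(img, shifted)
_, mask_real = highlight_differences(img, changed)
print(mask_shift.sum())  # 0: the misalignment is no longer flagged
print(mask_real.sum())   # > 0: the real change is still marked cyan
```

The design trade-off is the same one the visualization above illustrates: the pre-processing throws away fine detail in exchange for robustness, so the cyan output indicates likely dissimilarity rather than exact pixel deltas.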
I would like to take this project further by supporting video streams. Running this algorithm once per second on a live video stream could offer interesting results. Applications of such an enhancement include security cameras, head tracking, and machine learning.
May 13, 2012