As mentioned earlier I have been playing around with Principal Component Analysis (PCA)
for photo forensics. The results of this have now landed in my Photo Forensics Tool.
In essence, PCA offers a different perspective on the data, which allows us to find outliers more easily.
For instance, colors that just don't quite fit into the image will often be more apparent
when looking at the principal components of an image.
Compression artifacts also tend to be far more visible, especially in the
second and third principal components. Now before you fall asleep, let me give you an example.
This is a photo that I recently took:
To the naked eye this photo does not show any clear signs of manipulation.
Let’s see what we can find by looking at the principal components.
First Principal Component
Still nothing suspicious, let’s check the second one:
Second Principal Component
And indeed this is where I have removed an insect flying in front of the lens
using the inpainting algorithm (content-aware fill in Photoshop speak) provided by G’MIC.
If you are interested, Pat David has a nice tutorial on how to use this
in the GIMP.
Resistance to Compression
This technique does still work with more heavily compressed images.
To illustrate this I have run the same analysis I did above on the smaller & more compressed
version of the photo used in this article rather than the original.
As you can see, the anomaly caused by the manipulation is still clearly present,
though not as pronounced as when analyzing a less compressed version of the image.
You can also see that the PCA is quite good at revealing the artifacts caused by (re)compression.
If you found this interesting you should consider reading my article Black and White Conversion using PCA
which introduces a tool which applies the very same techniques to create beautiful black and white conversions of photographs.
If you want another image to play with, try the one in this post
by Neal Krawetz. It can be quite revealing. :)
While experimenting with Black and White Conversion using PCA
I also investigated and played with dithering algorithms.
I found that Stucki Dithering would yield rather pleasant results.
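In case you want to play with the algorithm itself, here is a rough sketch of Stucki error diffusion (my own hypothetical helper, not the code the application actually uses). It thresholds each pixel to black or white and spreads the rounding error to the neighbors using the Stucki weights. `gray` is a Float32Array of luminance values in [0, 255], modified in place:

function stucki(gray, width, height) {
  // Stucki diffusion weights relative to the current pixel (*), divisor 42:
  //        *  8  4
  //  2  4  8  4  2
  //  1  2  4  2  1
  const kernel = [
    [1, 0, 8], [2, 0, 4],
    [-2, 1, 2], [-1, 1, 4], [0, 1, 8], [1, 1, 4], [2, 1, 2],
    [-2, 2, 1], [-1, 2, 2], [0, 2, 4], [1, 2, 2], [2, 2, 1]
  ];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = y * width + x;
      const value = gray[i] < 128 ? 0 : 255; // threshold to black or white
      const error = gray[i] - value;         // rounding error to diffuse
      gray[i] = value;
      for (const [dx, dy, w] of kernel) {
        const nx = x + dx, ny = y + dy;
        if (nx >= 0 && nx < width && ny < height) {
          gray[ny * width + nx] += (error * w) / 42;
        }
      }
    }
  }
}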
So I created a little application for just that:
Photo by Tuncay (CC BY)
I hope you enjoy playing with it. :)
I have been hacking on my photo forensics tool lately.
I came across the suggestion that performing PCA on the colors of an image might reveal interesting information hidden to the naked eye.
When implementing this feature I noticed that it did a quite good job at doing black & white conversions of photos.
Thinking about it, this actually makes some sense: the first principal component
maximizes the variance of the values, so it should result in a wide tonal range in the resulting photograph.
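For the curious, here is a minimal sketch of such a conversion under my own assumptions, not the tool's actual code: estimate the covariance of the colors, find the first principal component with a few power iterations, and project every pixel onto it.

// pixels is an array of [r, g, b] triples.
function pcaGrayscale(pixels) {
  const n = pixels.length;
  const mean = [0, 0, 0];
  for (const p of pixels) {
    for (let i = 0; i < 3; i++) mean[i] += p[i] / n;
  }
  // 3x3 covariance matrix of the colors.
  const cov = [[0, 0, 0], [0, 0, 0], [0, 0, 0]];
  for (const p of pixels) {
    const d = [p[0] - mean[0], p[1] - mean[1], p[2] - mean[2]];
    for (let i = 0; i < 3; i++) {
      for (let j = 0; j < 3; j++) cov[i][j] += d[i] * d[j] / n;
    }
  }
  // First principal component (largest eigenvector) via power iteration.
  let v = [1, 1, 1];
  for (let iter = 0; iter < 50; iter++) {
    const w = [0, 0, 0];
    for (let i = 0; i < 3; i++) {
      for (let j = 0; j < 3; j++) w[i] += cov[i][j] * v[j];
    }
    const len = Math.hypot(w[0], w[1], w[2]) || 1;
    v = [w[0] / len, w[1] / len, w[2] / len];
  }
  // Project every pixel onto the component; the result has maximal variance,
  // i.e. the widest tonal range a linear projection can give.
  return pixels.map(p =>
    (p[0] - mean[0]) * v[0] + (p[1] - mean[1]) * v[1] + (p[2] - mean[2]) * v[2]
  );
}

The projected values then still need to be rescaled (and possibly flipped in sign) to the usual 0–255 range.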
This led me to develop a tool to explore this idea in more detail.
This experimental tool is now available for you to play with:
To give you a quick example let’s start with one of my own photographs:
While the composition with so much empty space is debatable,
I find this photo a fairly good example of an image where a straight luminosity conversion fails.
This is because the heavily saturated colors in the sky look bright and intense even though their
luminosity values do not suggest that.
Hover over it to see the result of a straight luminosity conversion instead.
In this case the PCA conversion does (in my opinion) a better job at reflecting the tonality in the sky.
I’d strongly suggest that you experiment with the tool yourself.
If you want a bit more detail on how exactly the conversions work please have a look at the help page.
Do I think this is the best technique for black and white conversions? No.
You will always be able to get better results by manually tweaking the conversion
to fit your vision. Is it an interesting result? I’d say so.
I’ve just released version 1.0 of smartcrop.js, my JavaScript library for content-aware image cropping,
mainly aimed at generating good thumbnails.
The new version includes much better support for node.js by dropping the canvas dependency
as well as support for face detection by providing annotations.
The API has been cleaned up a little bit and is now using Promises.
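Using it now looks roughly like this (a sketch based on the README; check it for the full set of options):

// image can be an HTML image or canvas element in the browser.
smartcrop.crop(image, { width: 100, height: 100 }).then(function(result) {
  // result.topCrop is the suggested crop: { x, y, width, height }
  console.log(result.topCrop);
});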
Another little takeaway from this release is that I should set up CI even for my little
open source projects. I came to this conclusion after creating a
dependency mess using
npm link locally, which led to everything working
fine on my machine while the published modules were broken. I’ve already set
up travis for smartcrop-gm.
More of my projects are likely to follow.
Back in 2010 I did a little experiment with normal mapping and the canvas element.
The normal mapping technique makes it possible to create interactive lighting effects based on textures.
Looking for an excuse to dive into computer graphics again,
I created a new version of this demo.
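If you haven't seen the technique before, its core fits in a few lines. A minimal sketch of simple Lambertian shading with a normal map (my simplification, not the normalmap.js internals):

// Decode a normal map texel (RGB in [0, 255]) into a normal in [-1, 1].
function decodeNormal(r, g, b) {
  return [r / 127.5 - 1, g / 127.5 - 1, b / 127.5 - 1];
}

// Diffuse (Lambertian) term: the more the decoded normal faces the
// (normalized) light direction, the brighter the pixel.
function shade(normal, light, albedo) {
  const nDotL = Math.max(0,
    normal[0] * light[0] + normal[1] * light[1] + normal[2] * light[2]);
  return albedo.map(c => c * nDotL);
}

Evaluating this for every pixel with a light direction derived from, say, the mouse position is what makes the lighting interactive.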
This time I used WebGL shaders and a more advanced, physically inspired material
system based on published work.
I also implemented FXAA 3.11 to smooth out some of the aliasing produced by the normal maps.
The results of this experiment are now available as a library called normalmap.js. Check out the demos.
It’s a lot faster and better looking than the old canvas version. Maybe you find a use for it. :)
You can view larger and sharper versions of these demos on 29a.ch/sandbox/2016/normalmap.js/.
You can get the source code for this library on github.
I plan to create some more demos as well as tutorials on creating normalmaps in the future.
I migrated this website to HTTPS using certificates from Let’s Encrypt.
This has several benefits. The one I’m most excited about is being able to use
Service Workers to provide offline support
for my little apps.
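Registering one is a single call, and it only works on secure origins, hence the excitement. A minimal sketch (assuming a service worker script at /sw.js, a hypothetical path):

// Service workers are only available over HTTPS (or on localhost).
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}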
Let’s Encrypt is an amazing new certificate authority which
allows you to obtain and install an SSL/TLS certificate automatically and for free.
This means getting a certificate installed can be as little work as running a single command on your server.
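With the current beta client that looks roughly like this (a hypothetical invocation; the exact client, plugin and flags depend on your setup):

# hypothetical example, adjust the plugin and domain to your setup
./letsencrypt-auto --apache -d example.com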
The service is currently still in beta but as you can hopefully see the certificates it produces are working just fine.
I encourage you to give it a try.
If anything on this website got broken because of the move to HTTPS, please let me know!
Seven years ago I wrote a piece of software called Play it Slowly.
It allows the user to change the speed and pitch of an audio file independently.
This is useful for example for practicing an instrument or doing transcriptions.
Now I created a new web based version of Play it Slowly called TimeStretch Player.
Open TimeStretch Player
It features (in my opinion) much better audio quality for larger time stretches as well as a 3.14159 × cooler looking user interface. But please note that this is beta-stage software; it’s still far from perfect and polished.
How the time stretching works
The time stretching algorithm that I use is based on a Phase Vocoder with some simple improvements.
It works by cutting the audio input into overlapping chunks, decomposing those into their individual components using a FFT, adjusting their phase and then resynthesizing them with a different overlap.
Suppose we have a simple wave like this:
We can cut it into overlapping pieces like this:
By changing the overlap between the pieces we can do the time stretching:
This messes up the phases so we need to fix them up:
Now we can just combine the pieces to get a longer version of the original:
In practice things are of course a bit more complicated and there are a lot of compromises to be made. ;)
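To make the handwaving a little more concrete, here is a heavily condensed sketch of such a loop. fft and ifft are stand-ins for whatever FFT library you use (hypothetical helpers working on { re, im } arrays), and details like window gain normalization are left out; this is not the player's actual code:

function timeStretch(input, stretch, frameSize = 2048) {
  const analysisHop = frameSize / 4;
  const synthesisHop = Math.round(analysisHop * stretch);
  const numFrames = Math.floor((input.length - frameSize) / analysisHop) + 1;
  const win = hann(frameSize);
  const output = new Float32Array((numFrames - 1) * synthesisHop + frameSize);
  const lastPhase = new Float32Array(frameSize);
  const sumPhase = new Float32Array(frameSize);

  for (let f = 0; f < numFrames; f++) {
    const pos = f * analysisHop;
    // 1. Cut out an overlapping, windowed chunk and decompose it with an FFT.
    const frame = new Float32Array(frameSize);
    for (let i = 0; i < frameSize; i++) frame[i] = input[pos + i] * win[i];
    const { re, im } = fft(frame); // hypothetical helper

    // 2. Fix up the phase of every bin so its sinusoid still lines up
    //    when the chunks are laid out with the new overlap.
    for (let k = 0; k < frameSize; k++) {
      const mag = Math.hypot(re[k], im[k]);
      const phase = Math.atan2(im[k], re[k]);
      const expected = 2 * Math.PI * k * analysisHop / frameSize;
      // Deviation from the expected phase advance, wrapped to [-pi, pi].
      let delta = phase - lastPhase[k] - expected;
      delta -= 2 * Math.PI * Math.round(delta / (2 * Math.PI));
      lastPhase[k] = phase;
      // Advance the output phase by the bin's true frequency times the new hop.
      sumPhase[k] += (expected + delta) * synthesisHop / analysisHop;
      re[k] = mag * Math.cos(sumPhase[k]);
      im[k] = mag * Math.sin(sumPhase[k]);
    }

    // 3. Resynthesize the chunk and overlap-add it at the synthesis hop.
    const resynth = ifft({ re, im }); // hypothetical helper
    const outPos = f * synthesisHop;
    for (let i = 0; i < frameSize; i++) output[outPos + i] += resynth[i] * win[i];
  }
  return output;
}

function hann(n) {
  const w = new Float32Array(n);
  for (let i = 0; i < n; i++) w[i] = 0.5 - 0.5 * Math.cos(2 * Math.PI * i / n);
  return w;
}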
Much better explanation
If you want a more precise description of the phase vocoder technique than the handwaving above, I recommend reading the paper Improved phase vocoder time-scale modification of audio by Jean Laroche and Mark Dolson.
It explains the basic phase vocoder as well as some simple improvements.
If you do read it and are wondering what the eff the angle sign used in an expression like ∠X means:
it denotes the phase — in practice, the argument of the complex number of the term.
I think it took me about an hour to figure that out with any certainty.
YAY mathematical notation.
Pure CSS User Interface
I had some fun by creating all the user interface elements using pure CSS, no images are used except for the logo.
It’s all gradients, borders and shadows.
Do I recommend this approach?
Not really, but it sure is a lot of fun!
Feel free to poke around with your browser’s dev tools to see how it was done.
While developing the time stretching functionality I also experimented with
some other features like a karaoke mode that can cancel individual parts of an audio file while keeping the rest of the stereo field intact.
This can be useful to remove or isolate parts of a song, for instance the vocals or a guitar solo.
However, the quality of the results was not comparable to the time stretching so I decided to remove the feature for now.
But you might get to see that in another app in the future. ;)
I might release the phase vocoder code in a standalone node library in the future but it needs a serious cleanup before that can happen.
People seem to be liking Forensically, and I enjoy hacking on it.
So I upgraded it with a new tool: noise analysis.
I also updated the help page and implemented histogram equalization for the magnifier/ELA.
Open the noise analysis tool
The basic idea behind this tool is very simple. Natural images are full
of noise, and modifications often leave visible traces in that noise.
But seeing the noise in an image can be hard. This
is where the new tool in Forensically comes in. It takes a very simple
noise reduction filter
(a separable median filter) and inverts its result.
Rather than removing the noise, it removes the rest of the image.
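In code the idea looks roughly like this (a sketch, not the exact filter used by Forensically):

// Median of three values; see the end of this post for a faster version.
function median3(a, b, c) {
  return [a, b, c].sort(function(x, y) { return x - y; })[1];
}

// Separable 3-tap median: filter horizontally, then vertically, and keep
// only what the filter removed, i.e. the noise.
// gray is a Float32Array of luminance values, width * height.
function noiseResidual(gray, width, height) {
  const horiz = Float32Array.from(gray);
  for (let y = 0; y < height; y++) {
    for (let x = 1; x < width - 1; x++) {
      const i = y * width + x;
      horiz[i] = median3(gray[i - 1], gray[i], gray[i + 1]);
    }
  }
  const noise = new Float32Array(gray.length);
  for (let y = 1; y < height - 1; y++) {
    for (let x = 0; x < width; x++) {
      const i = y * width + x;
      noise[i] = gray[i] - median3(horiz[i - width], horiz[i], horiz[i + width]);
    }
  }
  return noise;
}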
One of the benefits of this tool is that it can recognize modifications
like airbrushing, warps, deforms, and transformed clones that the
clone detection and error level analysis might not catch.
Please be aware that this is still a work in progress.
Enough talk, let me show you an example. I gave myself a nose job with the warp tool in GIMP, just for you.
As you can see the effect is relatively subtle.
Not so the traces it leaves in the noise!
The resampling done by the warp tool loses some of the high frequency noise, creating a black halo around the region.
Can you find any anomalies in the demo image using noise analysis?
A bit of code
I guess that many of the readers of this blog are fellow coders and hackers.
So here is a cool hack for you.
I found this in some old sorting code I wrote for a programming competition, but I don't
remember where I got it from originally. In order to make the median filter fast,
we need a fast way to find the median of three variables.
A stupidly slow way to go about this could look like this:
// super slow: sort the three values numerically and take the middle one
[a, b, c].sort((x, y) => x - y)[1]
Now the obvious way to optimize this would be to transform it into a bunch of ifs.
That's going to be faster. But can we do even better? Can we do it without branching?
let max = Math.max(Math.max(a, b), c),
    min = Math.min(Math.min(a, b), c),
    // the max and the min value cancel out of the xor chain, leaving the median
    // (note that this trick only works on integer values)
    median = a ^ b ^ c ^ max ^ min;
Min and max can be computed without branching on most if not all CPU architectures.