29a.ch experiments by Jonas Wagner


Recent Articles

Normalmap.js Javascript Lighting Effects

Back in 2010 I did a little experiment with normal mapping and the canvas element. The normal mapping technique makes it possible to create interactive lighting effects based on textures. Looking for an excuse to dive into computer graphics again, I created a new version of this demo.

This time I used WebGL shaders and a more advanced, physically inspired material system based on publications by Epic Games and Disney. I also implemented FXAA 3.11 to smooth out some of the aliasing produced by the normal maps. The results of this experiment are now available as a library called normalmap.js. Check out the demos. It's a lot faster and better looking than the old canvas version. Maybe you'll find a use for it. :)
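
If you have never seen the technique before, here is a minimal sketch of the basic idea using nothing but the 2D canvas API and simple Lambertian diffuse shading. It only illustrates the general approach; the function and parameter names are made up, and this is not the shader code normalmap.js actually uses.

// Shade an image with a normal map and a single directional light.
// albedo and normalMap are ImageData objects of the same size.
function shade(albedo, normalMap, lightX, lightY, lightZ) {
  const out = new Uint8ClampedArray(albedo.data.length);
  // normalize the light direction
  const len = Math.hypot(lightX, lightY, lightZ);
  const lx = lightX / len, ly = lightY / len, lz = lightZ / len;
  for (let i = 0; i < albedo.data.length; i += 4) {
    // decode the surface normal from the normal map ([0, 255] -> [-1, 1])
    const nx = normalMap.data[i] / 127.5 - 1;
    const ny = normalMap.data[i + 1] / 127.5 - 1;
    const nz = normalMap.data[i + 2] / 127.5 - 1;
    // Lambertian term: how directly the surface faces the light
    const diffuse = Math.max(0, nx * lx + ny * ly + nz * lz);
    out[i] = albedo.data[i] * diffuse;
    out[i + 1] = albedo.data[i + 1] * diffuse;
    out[i + 2] = albedo.data[i + 2] * diffuse;
    out[i + 3] = 255;
  }
  return new ImageData(out, albedo.width, albedo.height);
}

Redrawing this with the mouse position as the light direction already gives an interactive lighting effect; the WebGL version essentially moves this per-pixel work into a fragment shader.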

Demos

You can view larger and sharper versions of these demos on 29a.ch/sandbox/2016/normalmap.js/.

You can get the source code for this library on github.

Future

I plan to create some more demos as well as tutorials on creating normal maps in the future.

Let's encrypt 29a.ch

I migrated this website to HTTPS using certificates from Let's Encrypt. This has several benefits. The one I'm most excited about is being able to use Service Workers to provide offline support for my little apps.
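
To give a rough idea of what that enables, a minimal service worker can cache a handful of files at install time and serve them when the network is gone. This is a generic sketch, not code from this site, and the file names are placeholders; service workers only run on HTTPS origins, which is why the certificate matters.

// main.js: register the worker
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// sw.js: cache a few files at install time and serve them when offline
const CACHE = 'app-shell-v1';
self.addEventListener('install', function(event) {
  event.waitUntil(
    caches.open(CACHE).then(function(cache) {
      return cache.addAll(['/', '/app.js', '/app.css']);
    })
  );
});
self.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request).then(function(cached) {
      return cached || fetch(event.request);
    })
  );
});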

Let's Encrypt

Let's Encrypt is an amazing new certificate authority which allows you to install an SSL/TLS certificate automatically and for free. This means getting a certificate installed can be as little work as running a single command on your server:

./letsencrypt-auto --apache

The service is currently still in beta, but as you can hopefully see, the certificates it produces work just fine. I encourage you to give it a try.

If anything on this website broke because of the move to HTTPS, please let me know!

Time Stretching Audio in Javascript

Seven years ago I wrote a piece of software called Play it Slowly. It allows the user to change the speed and pitch of an audio file independently. This is useful, for example, for practicing an instrument or doing transcriptions.

Now I have created a new web-based version of Play it Slowly called TimeStretch Player. It's written in JavaScript and uses the Web Audio API.

Screenshot

Open TimeStretch Player

It features (in my opinion) much better audio quality for larger time stretches, as well as a 3.14159× cooler looking user interface. But please note that this is beta-stage software; it's still far from perfect and polished.

How the time stretching works

The time stretching algorithm that I use is based on a Phase Vocoder with some simple improvements.

It works by cutting the audio input into overlapping chunks, decomposing those into their individual frequency components using an FFT, adjusting their phases and then resynthesizing them with a different overlap.

Oversimplified Explanation

Suppose we have a simple wave like this:

[illustration: a simple wave]

We can cut it into overlapping pieces like this:

[illustration: the wave cut into overlapping pieces]

By changing the overlap between the pieces we can do the time stretching:

[illustration: the pieces with a different overlap]

This messes up the phases so we need to fix them up:

[illustration: the pieces with corrected phases]

Now we can just combine the pieces to get a longer version of the original:

[illustration: the recombined, stretched wave]

In practice things are of course a bit more complicated and there are a lot of compromises to be made. ;)
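
To make the chunking and overlapping a little more concrete, here is a stripped-down overlap-add sketch in JavaScript. It only does the cutting, windowing and re-overlapping; a real phase vocoder additionally takes an FFT of each windowed chunk, corrects the phases to match the new hop size and runs an inverse FFT before overlap-adding, and it also normalizes the overlapping windows. The function and parameter values are made up for illustration.

// Stripped-down overlap-add time stretch (illustrative only).
function stretchOLA(input, ratio, frameSize = 2048) {
  const analysisHop = frameSize / 4;                     // hop while reading
  const synthesisHop = Math.round(analysisHop * ratio);  // hop while writing
  const frames = Math.floor((input.length - frameSize) / analysisHop) + 1;
  const output = new Float32Array(frames * synthesisHop + frameSize);
  const window = new Float32Array(frameSize);
  for (let i = 0; i < frameSize; i++) {
    window[i] = 0.5 - 0.5 * Math.cos(2 * Math.PI * i / frameSize); // Hann window
  }
  for (let f = 0; f < frames; f++) {
    const readPos = f * analysisHop;    // where the chunk comes from
    const writePos = f * synthesisHop;  // where it goes, with a new overlap
    for (let i = 0; i < frameSize; i++) {
      output[writePos + i] += input[readPos + i] * window[i];
    }
  }
  return output;
}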

Much better explanation

If you want a more precise description of the phase vocoder technique than the handwaving above, I recommend reading the paper Improved phase vocoder time-scale modification of audio by Jean Laroche and Mark Dolson. It explains the basic phase vocoder as well as some simple improvements.

If you do read it and are wondering what the eff the angle sign used in an expression like ∠X(t_u^a, Ω_k) means: it denotes the phase, in practice the argument of the complex number of the term. I think it took me about an hour to figure that out with any certainty. YAY mathematical notation.

Pure CSS User Interface

I had some fun creating all the user interface elements using pure CSS; no images are used except for the logo. It's all gradients, borders and shadows. Do I recommend this approach? Not really, but it sure is a lot of fun! Feel free to poke around with your browser's dev tools to see how it was done.

Future Features

While developing the time stretching functionality I also experimented with some other features, like a karaoke mode that can cancel individual parts of an audio file while keeping the rest of the stereo field intact. This can be useful to remove or isolate parts of songs, for instance the vocals or a guitar solo. However, the quality of the results was not comparable to the time stretching, so I decided to remove the feature for now. But you might get to see it in another app in the future. ;)

Library Release

I might release the phase vocoder code as a standalone node library in the future, but it needs a serious cleanup before that can happen.

Noise Analysis for Image Forensics

People seem to like Forensically, and I enjoy hacking on it. So I upgraded it with a new tool: noise analysis. I also updated the help page and implemented histogram equalization for the magnifier/ELA.

Screenshot of the noise analysis tool
Open the noise analysis tool

Noise Analysis

The basic idea behind this tool is very simple. Natural images are full of noise. When they are modified, this often leaves visible traces in the noise of an image. But seeing the noise in an image can be hard. This is where the new tool in Forensically comes in. It takes a very simple noise reduction filter (a separable median filter) and reverses its result: rather than removing the noise, it removes the rest of the image.
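
In code, the core of that idea could look roughly like the following sketch, which works on a single grayscale channel and ignores the image borders. It is a hypothetical illustration, not the actual Forensically implementation.

// median of three values without sorting
function median3(a, b, c) {
  return Math.max(Math.min(a, b), Math.min(Math.max(a, b), c));
}

// "Reverse" a separable median filter: return what the denoiser removes.
function noiseResidual(pixels, width, height) {
  const rows = new Float32Array(pixels.length);
  const smooth = new Float32Array(pixels.length);
  // separable median: filter rows first, then columns
  for (let y = 0; y < height; y++) {
    for (let x = 1; x < width - 1; x++) {
      const i = y * width + x;
      rows[i] = median3(pixels[i - 1], pixels[i], pixels[i + 1]);
    }
  }
  for (let y = 1; y < height - 1; y++) {
    for (let x = 0; x < width; x++) {
      const i = y * width + x;
      smooth[i] = median3(rows[i - width], rows[i], rows[i + width]);
    }
  }
  // subtract the denoised image from the original: only the noise remains
  const residual = new Float32Array(pixels.length);
  for (let i = 0; i < pixels.length; i++) {
    residual[i] = pixels[i] - smooth[i];
  }
  return residual;
}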

One of the benefits of this tool is that it can recognize modifications like airbrushing, warps, deforms and transformed clones that the clone detection and error level analysis might not catch.

Please be aware that this is still a work in progress.

Example

Enough talk, let me show you an example. I gave myself a nose job with the warp tool in GIMP, just for you.

nose manipulation animation

As you can see the effect is relatively subtle. Not so the traces it leaves in the noise!

noise analysis of nose manipulation

The resampling done by the warp tool loses some of the high-frequency noise, creating a black halo around the region.

Can you find any anomalies in the demo image using noise analysis?

A bit of code

I guess that many of the readers of this blog are fellow coders and hackers. So here is a cool hack for you. I found it in some old sorting code I wrote for a programming competition, but I don't remember where I had it from originally. In order to make the median filter fast we need a fast way to find the median of three variables. A stupidly slow way to go about this could look like this:

// super slow (sort() also needs a comparator to sort numbers correctly)
[a, b, c].sort((x, y) => x - y)[1]

Now the obvious way to optimize this would be to transform it into a bunch of ifs. That's going to be faster. But can we do even better? Can we do it without branching?

// fast
let max = Math.max(Math.max(a,b),c),
    min = Math.min(Math.min(a,b),c),
    // the max and the min value cancel out because of the xor, the median remains
    // (note: the xor trick only works on integer values, which is fine for 8-bit pixel data)
    median = a^b^c^max^min;

Min and max can be computed without branching on most if not all CPU architectures. I didn't check how it's handled in JavaScript engines, but I can show you that the second approach is ~100x faster with this little benchmark.

Forensically, Photo Forensics for the Web

Back in 2012 I hacked together a little tool for performing Error Level Analysis on images. Despite being such a simple tool with, frankly, a bad UI, it has been used by over 250'000 people.

A few days ago I randomly stumbled across the paper Detection of Copy-Move Forgery in Digital Images by Jessica Fridrich, David Soukal, and Jan Lukáš. I wanted to see if I could do something similar and make it run in a browser. It took a good bit of tweaking but I ended up with something that works. I took a copy of my photo film emulator as a base for the UI, adapted it a bit, ported the old ELA code and added some new tools. The result is called Forensically.

Screenshot of Forensically
Open Forensically

How to use Forensically

If you want some guidance on how to use Forensically, you get to pick your poison. On offer is a 12-minute monologue in the form of a tutorial video or a whole bunch of cryptic text on the help page. I'm sorry that neither is very good.

How the Clone Detection works

I guess the most interesting feature of this new tool is the clone detection. So let me reveal to you how I made it work. I will try to keep the explanation simple. If there is interest in it I might still write a more technical description of the algorithm later.

The basic idea

Create a Table
Move a window over the image, for each position of the window
    Use all of the pixels in the window as a key
    If the key is already in the table
        We found a clone! Mark it.
    Else
        Add the key to the table

This does actually work, but it will only find perfect copies. We want the matching to be more fuzzy.
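
In JavaScript the exact-match version might look roughly like this; a hypothetical sketch over a single grayscale channel, not the code Forensically actually runs.

// Slide a block-sized window over the image and remember every
// block we have seen; a repeated key means a (perfect) clone.
function findExactClones(pixels, width, height, block = 8) {
  const seen = new Map();   // key -> position of the first occurrence
  const clones = [];
  for (let y = 0; y + block <= height; y++) {
    for (let x = 0; x + block <= width; x++) {
      const values = [];
      for (let by = 0; by < block; by++) {
        for (let bx = 0; bx < block; bx++) {
          values.push(pixels[(y + by) * width + (x + bx)]);
        }
      }
      const key = values.join(',');   // all pixels in the window as the key
      if (seen.has(key)) {
        clones.push({from: seen.get(key), to: {x: x, y: y}});  // found a clone!
      } else {
        seen.set(key, {x: x, y: y});
      }
    }
  }
  return clones;
}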

Compression

So the next key step is to make the matching more fuzzy. We do this by compressing the key to make it less unique. You can think of this step as converting each of the little blocks into a tiny JPEG and then using those pixels as a key. The actual implementation is using Haar wavelets for this step. You can see the compressed blocks that are used by clicking on Show Quantized Image in the Clone Detection Tool.
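
To give a flavour of that compression step, here is one level of a 2D Haar transform of an n × n block, with the low-frequency quadrant quantized into a fuzzy key. The parameters are made up and the real implementation differs, but the principle is the same: blocks that look alike end up with the same key.

// One level of a 2D Haar transform of a flat, row-major n x n block.
function haarLevel(block, n) {
  const tmp = new Float32Array(n * n);
  const out = new Float32Array(n * n);
  // rows: averages go to the left half, details to the right half
  for (let y = 0; y < n; y++) {
    for (let x = 0; x < n / 2; x++) {
      const a = block[y * n + 2 * x], b = block[y * n + 2 * x + 1];
      tmp[y * n + x] = (a + b) / 2;
      tmp[y * n + n / 2 + x] = (a - b) / 2;
    }
  }
  // columns: averages go to the top half, details to the bottom half
  for (let x = 0; x < n; x++) {
    for (let y = 0; y < n / 2; y++) {
      const a = tmp[2 * y * n + x], b = tmp[(2 * y + 1) * n + x];
      out[y * n + x] = (a + b) / 2;
      out[(n / 2 + y) * n + x] = (a - b) / 2;
    }
  }
  return out;
}

// Fuzzy key: coarsely quantize the low-frequency (top-left) quadrant.
function fuzzyKey(block, n, step = 16) {
  const h = haarLevel(block, n);
  const key = [];
  for (let y = 0; y < n / 2; y++) {
    for (let x = 0; x < n / 2; x++) {
      key.push(Math.round(h[y * n + x] / step));
    }
  }
  return key.join(',');
}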

This works too but now we have too many results!

Filtering

So the next step is to filter all of the blocks and to throw away the boring ones. This is done by comparing the amount of detail in the high frequencies to a threshold. You can think of it as subtracting a blurred image of the block from the block and then looking at how much is left of the pixels. In practice the blurring is not required because the wavelet step has already done it for us. You can see the rejected blocks as black spots in the quantized image.
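
Continuing the sketch above, the filtering step could simply sum the energy of the Haar detail coefficients of a block and reject it if it falls below a threshold (the threshold value here is arbitrary):

// A block is "interesting" if its high-frequency detail coefficients
// (everything outside the top-left averages quadrant) carry enough energy.
function hasEnoughDetail(haarBlock, n, threshold = 50) {
  let energy = 0;
  for (let y = 0; y < n; y++) {
    for (let x = 0; x < n; x++) {
      if (x < n / 2 && y < n / 2) continue; // skip the low-frequency quadrant
      energy += haarBlock[y * n + x] * haarBlock[y * n + x];
    }
  }
  return energy > threshold;
}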

At this stage the algorithm works but it does still show a lot of uninteresting copies of blocks that just happen to look similar.

Clustering

So now we take another look at all of the clones that we found. If the distance between the source and destination is too small, we reject them. Next we look at clones that start from a similar place and are copied in a similar direction. If we find fewer than Minimal Cluster Size other similar clones, we discard the clone as noise.
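
A much simplified version of that last step could look like the sketch below. It clusters clones by their copy offset only, which coarsens the "similar place, similar direction" criterion described above, and all thresholds are made-up illustration values.

// Drop clone pairs that are too close together, bucket the rest by
// their copy offset, and keep only sufficiently large clusters.
function filterClones(clones, minDistance = 16, minClusterSize = 4) {
  const clusters = new Map();
  for (const clone of clones) {
    const dx = clone.to.x - clone.from.x;
    const dy = clone.to.y - clone.from.y;
    if (dx * dx + dy * dy < minDistance * minDistance) continue; // source too close
    const key = Math.round(dx / 8) + ',' + Math.round(dy / 8);   // coarse offset bucket
    if (!clusters.has(key)) clusters.set(key, []);
    clusters.get(key).push(clone);
  }
  let kept = [];
  for (const cluster of clusters.values()) {
    if (cluster.length >= minClusterSize) kept = kept.concat(cluster);
  }
  return kept;
}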

Source Code

I haven't figured out how I want to license the code and assets yet. But I do plan to release it in some form.

Feedback

As always, feedback is appreciated both on the app and on the post. Would you like future posts to be more in depth and technical or do you like the current format?

Light Leaks in the Film Emulator

Inspired by the light-leak feature in the recent release of G'MIC, I added a new feature to my Film Emulator: light leaks.

Photo with light leak

G'MIC seems to use predefined images for the light leaks. I decided to go another route and created a procedural version of it. The benefits are clear: no big images to download, and an infinite variation of light leaks.

The implementation is also rather straightforward; in fact it shares most of the code with the already existing grain code. It uses simplex noise at a fairly low frequency to create a colored plasma. Three octaves of simplex noise are used for the luminance, and a single octave of simplex noise is used to randomize each color channel. This was just my first approximation, but in my opinion it works better than it has any right to, so I'll stick with it for now.
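
As a rough sketch of that recipe, assuming some noise2D(x, y) function in the range [-1, 1] from a simplex noise library (the constants are arbitrary and this is not the emulator's actual code):

// Procedural light leak colour for a pixel at (x, y).
function lightLeak(x, y, noise2D, scale = 0.004) {
  // three octaves of low-frequency noise for the luminance
  let lum = 0;
  for (let octave = 0; octave < 3; octave++) {
    const f = scale * Math.pow(2, octave);
    lum += noise2D(x * f, y * f) / Math.pow(2, octave);
  }
  lum = Math.max(0, lum);
  // a single extra octave per channel to tint the leak
  const r = lum * (0.8 + 0.2 * noise2D(x * scale + 100, y * scale));
  const g = lum * (0.8 + 0.2 * noise2D(x * scale + 200, y * scale));
  const b = lum * (0.8 + 0.2 * noise2D(x * scale + 300, y * scale));
  return [r, g, b]; // added onto the photo, e.g. with a screen blend
}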

Javascript Film Emulation

I hacked together a little analog film emulation tool in Javascript. It's based on the awesome work of Pat David. I wrote it mainly to play with some new tech but I liked the result enough to share it with you. You can try it here:

example image
View the Film Emulator

It also works on Android phones running Chrome, so give it a try!

How the Film Emulation works

I guess the most interesting part for most people is the actual film emulation code. It's using Color Lookup Tables (cluts).

So in simplistic terms:

For every pixel in the image
    Take its color values r, g, b
    Look up its new color value in the lookup table:
        r', g', b' = colorLookupTable[r, g, b]
    Set the pixel to the color values (r', g', b')

In practice there are a few more considerations. Most cluts don't contain values for all 16 777 216 (2^24) colors of the RGB space. A simplistic solution to this problem would be to always just use the closest color (nearest-neighbor interpolation). This is fast but results in very ugly banding artifacts.
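
As a sketch of the nearest-neighbor variant, assuming the clut is stored as a flat array of r, g, b triples with size entries per axis (a hypothetical layout, not necessarily the one the tool uses):

// Nearest-neighbor lookup in a 3D colour lookup table.
function applyClutNearest(r, g, b, clut, size) {
  const scale = (size - 1) / 255;
  const ri = Math.round(r * scale);  // snap each channel to the nearest grid point
  const gi = Math.round(g * scale);
  const bi = Math.round(b * scale);
  const i = 3 * (ri + gi * size + bi * size * size);
  return [clut[i], clut[i + 1], clut[i + 2]];
}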

So to keep things fast I use random dithering for the previews and trilinear filtering for the final output. The random dithering is probably a suboptimal choice, but it was easy to implement.

You can find more details about how the lookup tables were created on Pat David's website.

Technology

As stated at the beginning I wrote this application to play with new technology, so there is a lot going on in this little application.

The entire code is written in JavaScript (ES6 to be precise), which is then converted to more mainstream JavaScript using babel.js.

It uses the canvas API to access the pixel data of images and then processes it in web workers for parallelism, using transferable objects to avoid copies.
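
The transferable-objects part looks roughly like this; the worker file name and the message format are made up for illustration:

// Send pixel data to a worker without copying it by transferring
// the underlying ArrayBuffer.
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
const worker = new Worker('filter-worker.js');

const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
worker.postMessage(
  {pixels: imageData.data.buffer, width: imageData.width, height: imageData.height},
  [imageData.data.buffer]  // transfer list: the buffer is moved, not copied
);

worker.onmessage = function(e) {
  // the worker is expected to transfer a same-sized buffer back
  const result = new ImageData(
    new Uint8ClampedArray(e.data.pixels), e.data.width, e.data.height);
  ctx.putImageData(result, 0, 0);
};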

WebGL would obviously also be suitable for this task; I might even write an implementation in the future.

The css makes heavy use of flexible boxes and is written in scss. The icon font was generated using fontello.

The whole thing is built using grunt and browserify.

Of course these are just a few of the bits of tech that I played with to make this app. If you want to know even more, just look at the source.

Source Code

You can find the source code of this tool on GitHub. The code is not licensed under an open source license and does not come with all the data files, in order to prevent lazy people from just copying everything and pretending it is their own work. You are of course free to study the code and take bits and pieces; I consider this fair use. Just attribute them to me properly. If you have grander plans for it and the lack of a license prevents you from following up on them, feel free to contact me.

Full-text search example using lunr.js

I did a little experiment today. I added full-text search to this website using lunr.js. Lunr is a simple full-text search engine that can run inside a web browser using JavaScript.

Lunr is a bit like solr, but much smaller and not as bright, as the author Oliver beautifully puts it.

With it I was able to add full text search to this site in less than an hour. That's pretty cool if you ask me. :)

You can try out the search function I built on the articles page of this website.

I also enabled source maps so you can see how I hacked together the search interface. But let me give you a rough overview.

Indexing

The indexing is performed when I build the static site. It's pretty simple.

// dependencies used by the indexing script (it runs in node)
var lunr = require('lunr');
var cheerio = require('cheerio');
var fs = require('fs');

// create the index
var index = lunr(function(){
    // boost increases the importance of words found in this field
    this.field('title', {boost: 10});
    this.field('abstract', {boost: 2});
    this.field('content');
    // the id
    this.ref('href');
});

// this is a store with some document meta data to display
// in the search results.
var store = {};

entries.forEach(function(entry){
    index.add({
        href: entry.href,
        title: entry.title,
        abstract: entry.abstract,
        // hacky way to strip html, you should do better than that ;)
        content: cheerio.load(entry.content.replace(/<[^>]*>/g, ' ')).root().text()
    });
    store[entry.href] = {title: entry.title, abstract: entry.abstract};
});

fs.writeFileSync('public/searchIndex.json', JSON.stringify({
    index: index.toJSON(),
    store: store
}));

The resulting index is 1.3 MB; gzipping brings it down to a more reasonable 198 KB.

Search Interface

The other part of the equation is the search interface. I went for some simple jQuery hackery.

jQuery(function($) {
    var index,
        store,
        data = $.getJSON(searchIndexUrl);

    data.then(function(data){
        store = data.store;
        // create index
        index = lunr.Index.load(data.index);
    });

    $('.search-field').keyup(function() {
        var query = $(this).val();
        if(query === ''){
            jQuery('.search-results').empty();
        }
        else {
            // perform search
            var results = index.search(query);
            data.then(function(data) {
                $('.search-results').empty().append(
                    results.length ?
                    results.map(function(result){
                        var el = $('<p>')
                            .append($('<a>')
                                .attr('href', result.ref)
                                .text(store[result.ref].title)
                            );
                        if(store[result.ref].abstract){
                            el.after($('<p>').text(store[result.ref].abstract));
                        }
                        return el;
                    }) : $('<p><strong>No results found</strong></p>')
                );
            }); 
        }
    }); 
});

Learn More

If you want to learn more about how lunr works, I recommend reading this article by the author.

If you still want to learn more about search, then I can recommend this great free book on the subject called Introduction to Information Retrieval by Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze.

 View & search all my articles