Texture Transfer Based on Texture Descriptor Variations

This month, together with Romain Vergne, Thomas Hurtut and Joëlle Thollot, we published a research report on our work on texture transfer based on textural variations. This report is the third chapter of my thesis manuscript and can be found here.

The idea of this work is to transfer a reference texture onto an input texture, reproducing the reference texture's patterns while preserving the input texture's global variations, such as illumination changes or perspective deformations.

This is done using only the input texture and a single reference texture: a texture descriptor characterizes the input texture's global variations, and a texture synthesis algorithm reproduces the reference texture's patterns while preserving those variations.
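To make the general principle concrete, here is a minimal Python sketch, not our actual method: the "global variation" is reduced to a blurred luminance map, and the synthesis step is replaced by simple tiling of the reference. The file names, blur size and tiling are all illustrative assumptions.

```python
import numpy as np
import imageio.v3 as iio
from scipy.ndimage import gaussian_filter

# Hypothetical file names; images are converted to grayscale in [0, 1].
inp = iio.imread("input.png").astype(float) / 255.0
ref = iio.imread("reference.png").astype(float) / 255.0
if inp.ndim == 3:
    inp = inp.mean(axis=2)
if ref.ndim == 3:
    ref = ref.mean(axis=2)

# Stand-in for the descriptor: a heavily blurred luminance map of the input.
guide = gaussian_filter(inp, sigma=30)

# Stand-in for synthesis: tile the reference to the input size.
h, w = inp.shape
reps = (h // ref.shape[0] + 1, w // ref.shape[1] + 1)
patterns = np.tile(ref, reps)[:h, :w]

# Remove the reference's own low-frequency variation, then re-impose the
# input's, so the result shows reference patterns with input illumination.
ref_low = gaussian_filter(patterns, sigma=30)
result = np.clip(patterns - ref_low + guide, 0, 1)
iio.imwrite("result.png", (result * 255).astype(np.uint8))
```

In the actual approach, the descriptor also captures scale and orientation, and a proper texture synthesis algorithm replaces the tiling.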

Examples of results are given below. From left to right, the images are the input, the reference and the result. Note that for each result we manually chose which of the input's global properties (luminance, scale and orientation) to preserve in the result.


Local texture-based color transfer and colorization

An extension of our Expressive 2016 paper was published in a special issue of Computers & Graphics this month. This extended paper can be found here.

The main contribution of this extension is the ability to use simple strokes to define image regions and apply color transfer or colorization between these regions only. The image regions are automatically computed from the user-provided strokes using our edge-aware texture descriptor.
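As a rough illustration of how strokes can define regions, here is a sketch that replaces our edge-aware descriptor with a much simpler per-pixel descriptor (local mean and standard deviation) and assigns every pixel to the closest stroke. The function names and the descriptor are illustrative assumptions, not what is used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_features(gray, sigma=5):
    """Very simple per-pixel descriptor: local mean and local std."""
    mean = gaussian_filter(gray, sigma)
    var = gaussian_filter(gray ** 2, sigma) - mean ** 2
    return np.stack([mean, np.sqrt(np.maximum(var, 0))], axis=-1)

def masks_from_strokes(gray, strokes):
    """`strokes` is a list of boolean arrays, one per user stroke.
    Returns one boolean mask per stroke, assigning each pixel to the
    stroke whose average descriptor is closest."""
    feats = local_features(gray)
    centers = [feats[s].mean(axis=0) for s in strokes]
    dists = np.stack([np.linalg.norm(feats - c, axis=-1) for c in centers])
    labels = dists.argmin(axis=0)
    return [labels == i for i in range(len(strokes))]
```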

Here is an example of using these strokes to quickly refine the result of the automatic color transfer approach. First, we compute the automatic, texture-based color transfer result using the house image as input and the sunset image as reference.

We are happy with the purplish color of the house and hedge; however, the sky remains blue because of the blue sky in the reference. To get a better sunset color in the sky, we use another reference image and two strokes to match the skies of the two images.

As we can see, the masks automatically computed from the strokes (bottom right) accurately separate the sky in the two images. In the final result, only the sky is changed to the purple color of the new reference's sky, producing a more convincing sunset feel.


Music Visuals III : Particle Animation Again

In this video, I again used Adobe After Effects and its Trapcode plug-ins, this time trying to make the visuals more interesting. I wanted a more complex particle animation, closer to a particle simulation, with different variations to add diversity to the visuals.

For the music analysis, I focused this time on low frequencies to isolate the different beat and percussion types that were prominent in this track.

An interesting thing I noticed while trying to extract various sound features and link them to different visual features is that it gets confusing very quickly, especially when several visualized quantities vary at the same time; it actually hurt the readability I was aiming for. I found that as the visuals got more complex, the music features represented had to get simpler (more intuitive) to compensate. This does not mean that having various music and visual features to play with is bad, but if maximum readability is the goal, I think it is important that only one (or very few) things vary at a time and that variations are introduced one by one.

Automatic Texture Guided Color Transfer and Colorization

This is an image gallery for my work on color transfer and colorization, described here. Additional results can be found here. This work was done at INRIA Grenoble during my PhD thesis and was published at Expressive 2016 where it received an honorable mention. You can download my presentation slides here.

The algorithm uses a reference or example image (bottom left) to recolor the input image (top left) and produce the result image (right).
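For readers unfamiliar with color transfer, here is a sketch of the classic global baseline (Reinhard-style mean and standard deviation matching in Lab space). This is only a stand-in to show what "transferring colors from a reference" means; our method instead guides the transfer per region using texture descriptors.

```python
import numpy as np
from skimage import color

def transfer_colors(input_rgb, reference_rgb):
    """Match the mean and standard deviation of each Lab channel of the
    input to those of the reference (global, Reinhard-style baseline)."""
    src = color.rgb2lab(input_rgb)
    ref = color.rgb2lab(reference_rgb)
    out = np.empty_like(src)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std()
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mean) / (s_std + 1e-8) * r_std + r_mean
    return np.clip(color.lab2rgb(out), 0, 1)
```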

Color Transfer Results

 

Colorization Results

Music Visuals II : Particle Animation

In this video, I used Adobe After Effects and the Trapcode plug-ins Sound Keys and Particular to generate particles based on the music frequencies.

The spectrogram is divided into three frequency bands corresponding to low, mid and high frequencies. Each of these bands is linked to a particle emitter and the band’s amplitude controls the spread of the particle emission.
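Outside of After Effects, the same band split could be sketched in Python along these lines; the cut-off frequencies and the file name are arbitrary assumptions, and Sound Keys does the equivalent work in the actual project.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

# Placeholder file name; any mono or stereo WAV works.
rate, samples = wavfile.read("track.wav")
if samples.ndim > 1:
    samples = samples.mean(axis=1)          # mix down to mono

freqs, times, spec = stft(samples, fs=rate, nperseg=2048)
mag = np.abs(spec)

# Three bands; the cut-off frequencies are arbitrary choices.
bands = {
    "low":  freqs < 200,
    "mid":  (freqs >= 200) & (freqs < 2000),
    "high": freqs >= 2000,
}
spread = {name: mag[mask].sum(axis=0) for name, mask in bands.items()}
# Each curve in `spread` would drive one emitter's emission spread over time.
```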

As observed in previous spectrogram-based visuals, beats related to low frequencies are easily recognized in the red emitter's spread. However, I find it hard to tell exactly what the other two emitters are reacting to, since they respond to a mix of harmonics from instruments and the human voice.

Music Visuals I : Spectrogram

The idea of these music visuals experiments is to explore different ways to generate visuals, as automatically as possible, directly from a music track. Ideally, those visuals should be entertaining and reflect what stands out in the music from a listener's point of view.

In this video, I used Matlab to compute the spectrogram of a music track and display it as concentric color rings. Small rings in the center correspond to low frequencies, whereas larger rings show higher frequencies. The ring brightness is directly linked to the corresponding frequency amplitude in the spectrogram.
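The original visuals were computed in Matlab; a rough Python equivalent for a single time frame could look like the sketch below. The file name, bin subsampling and colormap are arbitrary choices.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import stft

rate, samples = wavfile.read("track.wav")   # placeholder file name
if samples.ndim > 1:
    samples = samples.mean(axis=1)

freqs, times, spec = stft(samples, fs=rate, nperseg=4096)
mag = np.log1p(np.abs(spec))                # log scale for visibility

frame = mag[:, mag.shape[1] // 2]           # one time frame as an example
frame = frame / (frame.max() + 1e-8)

# One ring per (subsampled) frequency bin: small radii = low frequencies.
fig, ax = plt.subplots(subplot_kw={"aspect": "equal"})
theta = np.linspace(0, 2 * np.pi, 200)
for i, amp in enumerate(frame[::8]):        # subsample bins for speed
    r = 0.1 + i * 0.05
    ax.plot(r * np.cos(theta), r * np.sin(theta),
            color=plt.cm.inferno(amp), linewidth=2)
ax.set_axis_off()
plt.show()
```

Animating the rings simply means repeating the drawing for every time frame of the spectrogram.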

While the spectrogram contains a lot of information about the frequencies of the sound, it is not intuitive to link what it shows to what we hear. The low frequencies corresponding to beats and percussions are easily distinguished; however, higher frequencies contain a mix of harmonics from different instruments such as piano and the human voice. While we can easily tell them apart by ear, it is fairly unintuitive to do so in the spectrogram visualization, where they are entwined.

To improve the readability of the visualization, more complex information can be extracted from the spectrogram, such as melody, pitch or tempo. These are intuitive properties used to describe music to human listeners and should therefore be used to guide an intuitive visualization. However, they are not always straightforward to compute from the sound wave data, and it gets even harder when the data is a mix of different instruments playing different melodies.

Research on these topics is ongoing, and tools are already available to extract these properties from sound data. For example, the Matlab MIR toolbox provides methods for various feature extractions, and other challenges like melody extraction can be tackled using more advanced algorithms. While these methods produce impressive results in some cases, they do not work with every input and still fail on challenging cases.
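As an example of such tools outside Matlab, the Python library librosa exposes similar feature extractors; a minimal tempo and beat estimate could look like this (the audio file name is a placeholder).

```python
import librosa

# Load the track (librosa resamples to its default rate).
y, sr = librosa.load("track.wav")

# Estimate the global tempo and the beat positions.
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

print("estimated tempo (BPM):", tempo)
print("first beats (s):", beat_times[:8])
```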

These tools are promising ways to improve the music analysis part of these experiments, though my priority for future experiments will probably be the visual generation part, trying to obtain interesting visuals even from simple, easy-to-compute music features.

Detection of Hedges in a Rural Landscape Using a Local Orientation Feature: From Linear Opening to Path Opening

My work on hedge detection in collaboration with Mathieu Fauvel was published in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. You can find the article here. This work was done at the University of Iceland in Reykjavik during my master's thesis.

Two examples of results are shown below for satellite images of rural areas.

The left column shows the Normalized Difference Vegetation Index (NDVI) used to detect vegetation in satellite images. In those images, vegetation appears in white, whereas non-vegetation areas (roads, buildings, water, etc.) are black. The right column shows in red the pixels detected as hedges by our method.
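As a simplified sketch of the detection idea (not the path-opening algorithm from the paper), one can compute the NDVI and then keep thin, elongated bright structures by taking the maximum of grey-level openings with straight line segments at several orientations; path openings generalize this to curved paths. The lengths, angles and helper names below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import grey_opening

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red bands."""
    return (nir - red) / (nir + red + 1e-8)

def line_footprint(length, angle_deg):
    """Binary footprint containing a discrete line segment."""
    fp = np.zeros((length, length), dtype=bool)
    c = length // 2
    t = np.linspace(-c, c, 2 * length)
    rows = np.round(c + t * np.sin(np.deg2rad(angle_deg))).astype(int)
    cols = np.round(c + t * np.cos(np.deg2rad(angle_deg))).astype(int)
    fp[rows.clip(0, length - 1), cols.clip(0, length - 1)] = True
    return fp

def directional_opening(image, length=15, n_angles=12):
    """Maximum of grey-level openings with line segments at several
    orientations: keeps thin elongated bright structures such as hedges."""
    results = [grey_opening(image, footprint=line_footprint(length, a))
               for a in np.linspace(0, 180, n_angles, endpoint=False)]
    return np.max(results, axis=0)
```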
