Archive for January, 2015

Simple time-lapse using ffmpeg

Sunday, January 25th, 2015

In my previous post, I described the little tool I put together to trigger my DSLR remotely and how I used it to create basic time-lapse videos.

The video quality of my first attempts was pretty.. miserable. So I HAD TO try and find a better approach. Here’s where I am right now and the next steps on my list to further improve the result quality.

Yesterday, I took two image sequences: one of my garden (again :p) and another one inside our living room. I used a 10 second delay between shots for both, but (somewhat) different settings.

I didn’t prepare much before taking these image sequences; my goal was only to have some raw material to use as input and to run some experiments with the tools currently at my disposal.

For the garden sequence, I used a small — though not small enough — aperture (f/8) and a short exposure time (1/125s).

For the living room sequence, I used a smaller aperture (f/11) and a longer exposure time.

Again, my goal wasn’t to produce great input images, so please don’t mind the image quality ;-)

I gathered 384 shots for the first sequence and 165 for the second (time ran out ^^).

Here are the final videos. As you can see below, those videos are already of MUCH better quality than my first attempts.

First sequence:

Second sequence (fast):

Second sequence (medium):

Second sequence (slow):

Without further ado, let me describe how I went from the RAW input files (DNG) to the resulting video.

First, I had to convert my RAW files to a format that ffmpeg can work with. I used XnConvert, but for CLI lovers, ImageMagick can probably do that too.

Next, I needed to rename all files to follow a very simple naming convention (you’ll see later why that’s useful). Here’s a simple one-liner to do that easily:

find . -name '*.png' | sort | awk 'BEGIN{ a=0 }{ printf "mv %s %04d.png\n", $0, a++ }' | bash

This command takes all .png files in the current folder (and sub-folders), sorts them so they stay in shooting order, and renames them to zero-padded 4-digit numbers. For example, ‘IMGP_20150124001.png’ becomes ‘0000.png’ and ‘IMGP_20150124002.png’ becomes ‘0001.png’ (the awk counter starts at 0). Note that this simple version assumes file names without spaces.
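Before piping anything to bash, you can dry-run the awk part to verify the generated commands. Here's a minimal sketch feeding it two sample names on stdin instead of the output of find:

```shell
# Dry run: feed two sample file names through the same awk program
# and print the generated mv commands instead of executing them
printf '%s\n' 'IMGP_20150124001.png' 'IMGP_20150124002.png' \
  | awk 'BEGIN{ a=0 }{ printf "mv %s %04d.png\n", $0, a++ }'
# → mv IMGP_20150124001.png 0000.png
#   mv IMGP_20150124002.png 0001.png
```

Once the output looks right, append `| bash` to actually perform the renames.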

The last step (yep, already) is to actually create the video. In the example below, I create the video and directly add a soundtrack to it:

ffmpeg -v info -y -f image2 -r 24 -i ./in/%4d.png -t 00:00:16 -i soundtrack.mp3 -c:a copy \
  -shortest -threads 8 -s:v 1920x1080 -vcodec libx264 -preset veryslow -qp 0 -map 0:0 -map 1:0 ./movie2.mkv

Here’s a breakdown of the command arguments. A few of the arguments listed below (-qscale:a, -b:v and -loop) weren’t used in the example above but can come in handy depending on what your goal is:

  • -v info
    • output information messages (default)
    • you can switch that to verbose, debug, … if you need to troubleshoot things
  • -y
    • overwrite existing files without asking
  • -r 24
    • fix the input frame rate to 24 frames per second (fps)
      • note that 30fps or more is nicer for the human eye, but I didn’t have enough images to sustain that
    • with this, ffmpeg generates timestamps assuming constant fps
    • notice that the ‘-r’ argument is placed BEFORE the -i (inputs)! This is mandatory, otherwise it’ll specify the output framerate rather than the input, which is not what you want
  • -f image2
    • the input files format; in this case images
  • -i …
    • specifies the input files
    • you can now see why I’ve renamed the input files first; here I used a number mask ‘%4d’ which will match all our correctly named input files
    • notice that the images are loaded from a “in” sub-folder; I used that idiom to separate the input and the output
  • -i soundtrack.mp3
    • in this example, I add another input, which is an audio file that will be added to the video
  • -c:a copy
    • instructs ffmpeg to copy the input audio file without re-encoding it
    • I can do this since the video container that I’ve chosen — mkv — can hold the mp3 file as is
  • -shortest
    • stop when the shortest input is finished (i.e., stop when we run out of images or when the audio input file ends)
  • -threads 8
    • self-explanatory :)
  • -qscale:a 0
    • audio quality scale (VBR); only relevant if the audio is re-encoded, so not needed with -c:a copy
  • -s:v 1920x1080
    • output video (:v) resolution, in this case full hd
  • -vcodec libx264
    • output video codec: H.264, encoded with x264
    • (since we all love x264, right??!)
  • -t 00:00:16
    • duration of the video (hh:mm:ss)
    • I had 384 input images that I wanted to display at 24 fps, thus 384/24 = 16 seconds
    • since I specified the ‘-shortest’ option, I don’t care if the total duration is a bit too long
  • -b:v 2M (or more :p)
    • video (:v) bitrate. Useful if you must limit the file size
    • in my case I didn’t care about the output file size given the low number of input material
  • -preset veryslow
    • since I didn’t care about output file size, I went for the best compression the encoder offers
  • -qp 0
    • constant quantizer of 0, i.e., x264 lossless mode
  • -map 0:0 -map 1:0
    • map the video and audio tracks (only one -map argument is needed if there is no input audio!)
  • -loop 1
    • you can use this option if you want to loop your images (i.e., when the encoder has gone through all input images, it starts over with the first image)
    • this can be useful if you want to loop until something else is done (e.g., until the end of the audio input track if you remove the ‘-shortest’ argument)
  • ./movie2.mkv
    • the output file
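The -t value above comes from simple arithmetic: 384 frames played back at 24 fps give 16 seconds of video. A quick shell check of that math:

```shell
frames=384  # number of input images
fps=24      # playback frame rate (-r 24)
echo $(( frames / fps ))   # → 16 seconds of video
```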

Note that the order of the arguments DOES matter!

ffmpeg is a very versatile & powerful command line utility, so this barely scratches the surface of what it can do (and I’m by no means a specialist). If you want to know more, check out the official docs.

There you have it! Quite simple heh :)

In the example above, I added the audio track directly using ffmpeg, but I don’t recommend this. You’ll be much better off adding the audio track afterwards, ideally using video editing software such as Pinnacle Studio, Adobe Premiere, Adobe After Effects and the like ;-)

With these, you’ll be able to make nice transitions, precisely mix & match the audio and video, etc.

So, to conclude: this is a much better approach for generating time-lapse videos than what I did at first.. but I realize that it’s still very amateurish.

I want to further improve the output video quality (not only encoding-wise), but to do so, I’ll need to:

  • capture more input pictures so that I can make longer videos at a higher frame rate (ideally I’d like to try and generate 60fps videos)
  • think a bit more about the relation between:
    • the total number of frames
    • the delay between each frame
    • the shutter speed
    • the aperture
  • post-process the input frames at least a bit to even out the exposure, etc
    • this should really boost the resulting video quality
  • post-process the video once generated
    • add the audio and sync it correctly
    • add effects such as fade-in, fade-out, ease-in, ease-out
    • add an introduction
    • add a watermark
    • add credits ;-)
  • improve my gear (see my previous post for my ideas about that)
  • give LRTimeLapse a try as it looks like a great software solution with (partly) automated workflows for creating time-lapse videos
    • it seems to be able to work directly with the RAW input files (DNG in my case)
    • it seems to be very well integrated with Lightroom, which is already part of my photographic toolkit
    • finally, it creates the output video file using ffmpeg which is, as you’ve seen above, perfectly fine ;-)

And last but not least, I’ll need to choose something AWESOME to take pictures of ;-)


Time-lapse using Arduino as DSLR remote trigger – v1

Saturday, January 17th, 2015

I’ve been wanting to create time-lapse animations for a while now.

Since I’ve finally started acquiring some electronics components (actually an awful lot thereof — according to my wife :p), I am finally able to actually build some things for myself.. :)

I’ve put together a simple circuit allowing me to trigger my DSLR camera every X units of time. For now the delay between shots is hardcoded, but I might add a potentiometer later on in order to be able to modify it without having to reprogram the board. I could save the delay in the EEPROM so that I don’t have to re-enter it each time it starts up.

Since I didn’t want to touch my DSLR at all, I’ve decided to build a remote trigger leveraging the infrared (IR) sensors present on my Pentax K20D.

As this is a simple prototype, I’ve just used an Arduino Uno (5v) with a small breadboard. In the future, this project might be a good candidate for my first PCB.. but we’ll see about that later :p

Current features

  • Trigger my DSLR remotely every X seconds (hardcoded delay) through infrared
  • Light up an LED before triggering the DSLR (just for fun)

Parts

  • 1 Arduino (e.g., Arduino Uno)
  • 1 high output IR LED (I bought this one, but see the issues list..)
  • 1 green LED
  • 1 27 ohm resistor (or more, or less depending on your test results :p)
  • 1 560 ohm resistor
  • 1 breadboard
  • jumper wires

Schematic:

As you can see, the circuit is veeeeery easy to pull together. The IR LED is basically just like any other LED.. with the distinction that you can’t directly see its output ;-)

Source code:

// Remote DSLR Trigger

// Libraries
#include <multiCameraIrControl.h> // Camera IR control

// Circuit-specifics
const int SENDING_LED_PIN = 13;

// IR Control
Pentax K20D(9); // IR Led on PIN 9

void setup(){
  pinMode(SENDING_LED_PIN, OUTPUT); // LED = output
}

void loop(){
  // Blink the indicator LED right before triggering the camera
  digitalWrite(SENDING_LED_PIN, HIGH);
  delay(250);
  digitalWrite(SENDING_LED_PIN, LOW);

  K20D.shutterNow(); // send the IR shutter command

  delay(60000); // 1 min delay between shots
}

As you can see, the code is also very straightforward thanks to Sebastien Setz’s Arduino IR control library.

The library takes care of the modulation necessary so that the DSLR gets the message clearly.. ;-)

Basically, for Pentax it sends a modulated IR pulse sequence (a header pulse followed by a train of shorter pulses) that the camera recognizes as the shutter command.

Note that the library supports other DSLRs such as Canon, Nikon, etc as well as other functions depending on the models.

Issues

  • One thing that sucks with my current build is the IR distance. It doesn’t work farther than 40-50cm, which, I guess, is due to the IR LED that I’m using. It might not be as powerful as it should be (though I ordered a ‘high-output’ IR LED). Some of the library’s users mentioned that removing the resistor helped. Others have used an NPN transistor to drive the LED with more current. Although I’ve tried the NPN transistor, it didn’t help (at all)..

Ideas for a future version: a potentiometer to tweak the delay without reprogramming the board, and storing that delay in the EEPROM.

The project sources, including the schematics, are available on GitHub.

Let’s go take some pictures now.. :)

Update #1:

Okay, I’ve made my first two tries.. Not great, but hey, you always need to start somewhere, right?

I used AviDemux to create the video, but the original was too fast. I didn’t want to bother finding a clean solution right now, so I hacked my way through by invoking ffmpeg to slow it down:

ffmpeg -i input.mkv -filter:v "setpts=2.0*PTS" output.mkv

This isn’t great because it lowers the video quality but I’ll make better videos when I get better pictures that are worth the hassle =)
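The setpts filter rescales each frame's presentation timestamp; multiplying by 2.0 doubles every timestamp, so the clip plays at half speed and lasts twice as long. A quick sketch of that arithmetic, assuming a hypothetical 30 second input clip:

```shell
factor=2          # the multiplier from setpts=2.0*PTS
in_duration=30    # hypothetical input duration, in seconds
echo $(( in_duration * factor ))   # → 60: duration of the slowed-down clip
```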

Here are the resulting videos:

Update #2 (2015-01-25):

I’ve spent a bit of time finding out how to generate higher quality time-lapse videos. Check out the next post for details :)


Starting a car whose battery is too weak (but not dead)

Sunday, January 4th, 2015

Yet another quick post that will serve as a reminder for later… :)

To any mechanics who might read this post: don’t hesitate to let me know if I’m talking nonsense, given that I’m far from an expert… =)

When a car’s battery is too weak to start the engine, it needs a little helping hand.

For that, the best thing is to have jumper cables at hand (with the crocodile clips):


The idea is to use a backup battery (either another car, or a charger, etc.). Let’s assume we’re going to start the car with the help of another car whose battery works well and is charged.

To connect the cables, follow this order (important):

  • connect the red cable (+) to the (+) terminal of the broken-down car’s battery
  • connect the red cable (+) to the (+) terminal of the backup battery
  • connect the black cable (-) to the (-) terminal of the backup battery (or to the chassis; there are usually spots designed for this)
  • connect the black cable (-) to the (-) terminal of the broken-down car’s battery (or to the chassis; there are usually spots designed for this)

Once the cables are connected, you can start the rescue car’s engine and wait a few minutes. If you disconnect the black cable (-) from the broken-down car, you should normally hear the rescue car’s engine change speed. When everything is properly connected, it slows the rescue car’s engine down a bit (at least that’s what I noticed on mine; I’m no mechanic :p).

After a few minutes, you can try to start the broken-down car’s engine. If it doesn’t start, it should at least seem a bit closer to starting..

If it still doesn’t start after 15-20 minutes, the battery may well be dead for good..

If the broken-down car starts, then its battery is still alive and will be recharged little by little by the running engine. To help it along a bit, you can then turn on the heating, the radio, etc.

At that point you can disconnect the cables (follow the reverse order when disconnecting).

It’s best to leave the (almost no longer) broken-down car running so that its battery recharges as much as possible..


Extracting audio from a video using ffmpeg.. and cutting a part of a video

Friday, January 2nd, 2015

This post will mainly serve as a reminder for the next time I ever need this. The goal is absolutely NOT to create a detailed guide.. ;-)

To extract the audio track out of a video (-vn drops the video stream, -sn drops subtitles, and -c:a copy keeps the audio as-is, without re-encoding):

ffmpeg -i input.mp4 -c:a copy -vn -sn output.m4a

 

To cut a part of a video (no re-encoding):

ffmpeg -i VID_20140214_171208.mp4 -vcodec copy -ss 00:10:00 -t 00:00:08 result.mp4

The command above seeks to 10:00 and takes 8 seconds of video. Note that with -vcodec copy the cut points are limited to keyframes, so the actual cut may be slightly off.
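As a sanity check on the -ss/-t pair: starting at 00:10:00 and keeping 8 seconds means the extracted clip ends at 00:10:08. The same arithmetic in seconds:

```shell
start=$(( 10 * 60 ))   # -ss 00:10:00 expressed in seconds
length=8               # -t 00:00:08
echo $(( start + length ))   # → 608 seconds, i.e., 00:10:08
```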