Simple time-lapse using ffmpeg

Sunday, January 25th, 2015

In my previous post, I described the little tool I put together to trigger my DSLR remotely and how I used it to create basic time-lapse videos.

The video quality of my first attempts was pretty... miserable. So I HAD to try and find a better approach. Here's where I am right now, plus the next steps on my list to further improve the quality of the results.

Yesterday, I took two image sequences: one of my garden (again :p) and another one inside our living room. I used a 10-second interval for both but (somewhat) different settings for the shots.

I didn’t prepare much before taking these image sequences; my goal was only to have some raw material to use as input and to run some experiments with the tools currently at my disposal.

For the garden sequence, I used a small (though not small enough) aperture (f/8) and a short exposure time (1/125s).

For the living room sequence, I used a smaller aperture (f/11) and a longer exposure time.

Again, my goal wasn't to have great images as input, so don't mind the image quality ;-)

I gathered 384 shots for the first sequence and 165 for the second (time ran out ^^).

Here are the final videos. As you can see below, they are already of MUCH better quality than my first attempts.

First sequence:

Second sequence (fast):

Second sequence (medium):

Second sequence (slow):

Without further ado, let me describe how I went from the RAW input files (DNG) to the resulting video.

First, I had to convert my RAW files to a format that ffmpeg can work with. I used XnConvert, but for CLI lovers, ImageMagick can probably do that too.
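
If you want to stay on the command line, something like the following should do the trick (a sketch assuming ImageMagick is installed with a RAW delegate such as dcraw or ufraw; I used XnConvert, so I haven't tested this myself):

# convert every DNG in the current folder to a PNG with the same base name
mogrify -format png *.dng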

Next, I needed to rename all the files to follow a very simple naming convention (you'll see later why that's useful). Here's a simple one-liner to do it:

find . -name '*.png' | sort | awk 'BEGIN{ a=1 }{ printf "mv %s %04d.png\n", $0, a++ }' | bash

This command takes all the .png files in the current folder (and sub-folders), sorts them, and renames them so that each name is a zero-padded 4-digit number. For example, 'IMGP_20150124001.png' becomes '0001.png' and 'IMGP_20150124002.png' becomes '0002.png'.
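
If your file names contain spaces (or piping generated commands into bash makes you nervous), here's an equivalent plain-bash loop; the glob expands in sorted order, so the sequence stays chronological as long as the camera numbers its files sequentially:

a=1
for f in ./*.png; do
  mv -- "$f" "$(printf '%04d.png' "$a")"
  a=$((a+1))
done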

The last step (yep, already) is to actually create the video. In the example below, I create the video and directly add a soundtrack to it:

ffmpeg -v info -y -f image2 -r 24 -i ./in/%04d.png -i soundtrack.mp3 -c:a copy \
  -t 00:00:16 -shortest -threads 8 -s:v 1920x1080 -vcodec libx264 -preset veryslow -qp 0 \
  -map 0:0 -map 1:0 ./movie2.mkv

Here's a breakdown of the command arguments. The ones marked '(not used above)' are arguments I didn't use in the example but that can come in handy depending on what your goal is:

  • -v info
    • output information messages (default)
    • you can switch that to verbose, debug, … if you need to troubleshoot things
  • -y
    • overwrite existing files without asking
  • -r 24
    • fixes the input frame rate to 24 frames per second (fps)
      • note that 30fps or more is nicer for the human eye, but I didn't have enough images to sustain that
    • with this, ffmpeg generates timestamps assuming a constant frame rate
    • notice that the '-r' argument is placed BEFORE the '-i' (input)! This is mandatory; otherwise it specifies the output frame rate rather than the input one, which is not what you want here (see the illustration after this list)
  • -f image2
    • the input files format; in this case images
  • -i …
    • specifies the input files
    • you can now see why I renamed the input files first; the number mask '%04d' matches all our correctly named, zero-padded input files
    • notice that the images are loaded from an "in" sub-folder; I use that convention to keep the input and the output separated
  • -i soundtrack.mp3
    • in this example, I add another input, which is an audio file that will be added to the video
  • -c:a copy
    • instructs ffmpeg to copy the input audio file without re-encoding it
    • I can do this since the video container I chose (MKV) can hold the MP3 stream as is
  • -shortest
    • stop when the shortest input is finished (i.e., stop when we run out of images or when the audio input file ends)
  • -threads 8
    • the number of encoding threads; self-explanatory :)
  • -qscale:a 0 (not used above)
    • sets the audio quality for VBR encoding; only relevant if you re-encode the audio instead of stream-copying it
  • -s:v 1920x1080
    • output video (:v) resolution, in this case Full HD
  • -vcodec libx264
    • output video codec: H.264, encoded with x264
    • (since we all love x264, right??!)
  • -t 00:00:16
    • duration of the video (hh:mm:ss)
    • I had 384 input images that I wanted to display at 24 fps, thus 384/24 = 16 seconds
    • since I specified the ‘-shortest’ option, I don’t care if the total duration is a bit too long
  • -b:v 2M (or more :p) (not used above)
    • video (:v) bitrate; useful if you must limit the file size
    • in my case I didn't care about the output file size given the small amount of input material
  • -preset veryslow
    • the slowest (and most efficient) x264 preset; since I didn't care about the output file size, I went for the highest quality possible
  • -qp 0
    • constant quantizer 0, i.e., lossless x264
  • -map 0:0 -map 1:0
    • map the video (0:0) and audio (1:0) streams (the -map arguments aren't needed at all if there is no audio input; see the minimal example after this list)
  • -loop 1 (not used above)
    • you can use this option if you want to loop your images (i.e., when the encoder has gone through all input images, it starts over with the first image)
    • this can be useful if you want to loop until something else is done (e.g., until the end of the audio input track if you remove the ‘-shortest’ argument)
  • ./movie2.mkv
    • the output file; self-explanatory
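
For reference, here's what a minimal variant without a soundtrack could look like (same options, a single input, and no -map arguments):

ffmpeg -y -f image2 -r 24 -i ./in/%04d.png -s:v 1920x1080 -vcodec libx264 -preset veryslow -qp 0 ./movie.mkv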

Note that the order of the arguments DOES matter!
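
To illustrate with the '-r' argument discussed above:

# -r before -i: the input images are interpreted at 24 fps (what we want)
ffmpeg -r 24 -i ./in/%04d.png ...
# -r after -i: the output is resampled to 24 fps instead
ffmpeg -i ./in/%04d.png -r 24 ...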

ffmpeg is a very versatile & powerful command line utility, so this barely scratches the surface of what it can do (and I’m by no means a specialist). If you want to know more, check out the official docs.

There you have it! Quite simple heh :)

In the example above, I added the audio track directly with ffmpeg, but I don't recommend this. You'll be much better off adding the audio track afterwards, ideally using video editing software such as Pinnacle Studio, Adobe Premiere, Adobe After Effects and the like ;-)

With these, you’ll be able to make nice transitions, precisely mix & match the audio and video, etc.

So to conclude: this is a much better approach for generating time-lapse videos than what I did at first... but I realize that it's still very amateurish.

I want to further improve the output video quality (not only encoding-wise), but to do so, I’ll need to:

  • capture more input pictures so that I can make longer videos at a higher frame rate (ideally I'd like to try to generate 60fps videos; see the quick calculation after this list)
  • think a bit more about the relation between:
    • the total number of frames
    • the delay between each frame
    • the shutter speed
    • the aperture
  • post-process the input frames at least a bit to even out the exposure, etc
    • this should really boost the resulting video quality
  • post-process the video once generated
    • add the audio and sync it correctly
    • add effects such as fade-in, fade-out, ease-in, ease-out
    • add an introduction
    • add a watermark
    • add credits ;-)
  • improve my gear (see my previous post for my ideas about that)
  • give LRTimelapse a try, as it looks like a great software solution with (partly) automated workflows for creating time-lapse videos
    • it seems to be able to work directly with the RAW input files (DNG in my case)
    • it seems to be very well integrated with Lightroom, which is already part of my photographic toolkit
    • finally, it creates the output video file using ffmpeg which is, as you’ve seen above, perfectly fine ;-)
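
As for the 60fps goal, here's the kind of quick calculation I'll need to do before shooting (hypothetical numbers: a 30-second clip at 60fps, with one shot every 10 seconds):

fps=60; clip_seconds=30; interval=10
frames=$((fps * clip_seconds))        # 1800 frames needed
shoot_seconds=$((frames * interval))  # 18000 seconds in front of the camera
echo "$frames frames, $((shoot_seconds / 3600)) hour(s) of shooting"
# => 1800 frames, 5 hour(s) of shooting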

And last but not least, I’ll need to choose something AWESOME to take pictures of ;-)