Archive for the ‘IT’ Category

Battling against the 4.7.0 CrashPlan Synology package update

Saturday, May 21st, 2016

If you’re using CrashPlan to back up data on your Synology NAS in headless mode, you’ve probably already had to go through this update nightmare. Unfortunately, this happens regularly: each time an update arrives for CrashPlan, the package gets broken in various ways.

Basically, clicking the “update” button always leads to a couple of hours wasted :(

Here’s how I fixed the issue this time, just in case it could help other people! Before you start, make sure you have a good hour ahead of you… ;-)
The commands are assumed to be executed as root… (a condensed recap of the command-line steps follows the list)

  • close your eyes and update the package
  • start the package; it’ll download the upgrade file, then crash and burn
  • copy cpio from the CrashPlan package to /bin/cpio: cp /var/packages/CrashPlan/target/bin/cpio /bin/cpio
  • extract the “upgrade” file: 7z e -o./ /var/packages/CrashPlan/target/upgrade.cpi
  • move the upgrade file outside the CrashPlan folder
  • uninstall the CrashPlan package
  • install the CrashPlan package again (don’t let it start)
  • move back the upgrade file and put it in the upgrade folder (/var/packages/CrashPlan/target/upgrade)
  • edit install.vars in the CrashPlan folder so that it points to the correct location of Java on your NAS. To find it, just use ‘which java’, then set the JAVACOMMON property to that path
  • (optional) rename the upgrade file to upgrade.jar (or whatever you like)
  • extract the upgrade file: 7z e -o/var/packages/CrashPlan/target/lib /var/packages/CrashPlan/target/upgrade/upgrade.jar
  • remove the upgrade file (not needed anymore)
  • remove the upgrade.cpi file
  • IF you have enough memory, then add the USR_MAX_HEAP property to /var/packages/CrashPlan/target/syno_package.vars
  • start the CrashPlan package; it should now stay up and running
  • install the latest CrashPlan client version on your machine
  • disable the CrashPlan service on your machine
  • get the new CrashPlan GUID on your NAS: cat /var/lib/crashplan/.ui_info; echo
  • copy the GUID (everything before “,”) into the ‘.ui_info’ file under C:\ProgramData\CrashPlan (assuming you’re on Windows). You must edit the file from a notepad executed as admin. Make sure to replace the IP ( with the one of your NAS
  • Start the CrashPlan client, enter your CrashPlan credentials and passphrase (you do have one, right? :p)
  • Now let CrashPlan sync all your files for a few days :o)
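For reference, here’s a condensed recap of the command-line steps above (same paths as in the list; the USR_MAX_HEAP value is only an example, size it to your NAS):

    cp /var/packages/CrashPlan/target/bin/cpio /bin/cpio
    7z e -o./ /var/packages/CrashPlan/target/upgrade.cpi
    # ...uninstall/reinstall the package via DSM and move the upgrade file back, then:
    7z e -o/var/packages/CrashPlan/target/lib /var/packages/CrashPlan/target/upgrade/upgrade.jar
    which java    # the path to set as JAVACOMMON in install.vars
    echo 'USR_MAX_HEAP=1024M' >> /var/packages/CrashPlan/target/syno_package.vars
    cat /var/lib/crashplan/.ui_info; echo    # GUID to copy into the client's .ui_info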

Hope this helps!

Enjoy :)

So you want to be safe(r) while accessing your online bank account?

Saturday, May 14th, 2016

Web browsers

One quick tip: if you want to access sensitive Websites safely (e.g., your online bank, your taxes, …), then:

  • do so in a different Web browser than the one you generally use.
  • make sure that the browser you use for sensitive sites is NOT your default browser (i.e., the one that opens when you click on links in e-mails for example)
  • make sure that your browser is up to date
  • make sure that you never use that browser for anything else
  • do NOT visit anything else (i.e., no other tabs) at the same time
  • quickly check that you don’t have weird extensions or plugins installed (you could very well have been p0wned by any application installed on your machine)
  • make sure that you configure very strict security rules on that browser (e.g., disable caching, passwords/form data storage, etc)

Why does this help? Well, if your machine isn’t part of a botnet or infected with a hundred pieces of malware yet, then the above can still protect you against commonly found vulnerabilities (e.g., cross-site request forgery), vulnerabilities exploited through a different tab in your browser, etc.

Personally I use Google Chrome as my default Web browser and Mozilla Firefox whenever I need to access sensitive sites.
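To illustrate the “very strict security rules” point above, here’s what part of that configuration could look like for a dedicated Firefox profile, through its user.js file. This is just a sketch: the profile path is made up and the pref names should be double-checked against your Firefox version.

    # append hardening prefs to the dedicated profile's user.js (path is an example)
    P=~/.mozilla/firefox/banking.profile/user.js
    echo 'user_pref("browser.cache.disk.enable", false);' >> "$P"  # no disk cache
    echo 'user_pref("signon.rememberSignons", false);'    >> "$P"  # never store passwords
    echo 'user_pref("browser.formfill.enable", false);'   >> "$P"  # no form-data storage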

Do NOT consider this bulletproof though; it’s nothing but ONE additional thing you can do to protect yourself. You’re still exposed to many security risks; the Web is a dangerous place ;-)

Don’t use JSON for configuration files

Monday, April 25th, 2016

For quite some time, I wondered about this: “why the hell are comments forbidden in json files?”.

The short answer is: Douglas Crockford cared about interoperability. He removed comments from the spec after seeing people use them to hold parsing directives, a practice that would have destroyed interoperability.

The problem is that nowadays, many CLI tools make use of json files to store their configuration. That’s nice because the syntax is pretty lightweight and really easy to parse, but that’s where it ends, because you know what? Comments are pretty darn useful in configuration files…

Unfortunately, as it stands, many of those tools (or at least the parsers they rely upon) choose not to accept comments. As Douglas states, nothing prevents us from sending json files through a minifier to get a comment-free version, but it’s a pain to have to do that before passing json files around; more so when you need the file available on disk for some tool, and even worse when that file needs to have a specific name (e.g., tsconfig.json).

Some tools do add support for comments, but then you realize that any surrounding tools must also accept them, which is often not the case or takes a while to happen. Add to that the IDEs, which will complain if you start adding comments to json files (and rightly so…).

All in all, my opinion on the matter is now that json is simply not the answer for configuration files. Since json does not support comments, don’t use json; use something else, and don’t try to hack your way around it.

What should we use instead? Who cares, as long as it supports comments and doesn’t force you into hacks just to be able to comment the things that need it!

YAML is one option, TOML is another, XML is yet another (though way too verbose) and I’m sure there are a gazillion other ones.

If you’re in the JS world, then why not simply use JS modules? There you get the benefit of directly supporting more advanced use cases (e.g., configuration composition, logic, etc).

Silence please

Tuesday, April 19th, 2016

As all music copyright holders will tell you, adding music you like (but do not own) to family video clips is copyright infringement. As such, you should remove the audio track entirely to avoid getting into a lawsuit… or worse, getting your video removed from YouTube :)

The command below will list all the streams that exist in your video file (ffmpeg exits with an error because no output file is specified, but it prints the stream information first).

$ ffmpeg -i yourfile.mp4

ffmpeg version N-60592-gfd982f2 Copyright (c) 2000-2014 the FFmpeg developers
  built on Feb 13 2014 22:05:50 with gcc 4.8.2 (GCC)
  configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
  libavutil      52. 63.101 / 52. 63.101
  libavcodec     55. 52.101 / 55. 52.101
  libavformat    55. 32.101 / 55. 32.101
  libavdevice    55.  9.100 / 55.  9.100
  libavfilter     4.  1.102 /  4.  1.102
  libswscale      2.  5.101 /  2.  5.101
  libswresample   0. 17.104 /  0. 17.104
  libpostproc    52.  3.100 / 52.  3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'yourfile.mp4':
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp42mp41
    creation_time   : 2015-12-22 23:09:46
  Duration: 00:05:27.04, start: 0.000000, bitrate: 5836 kb/s
    Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv), 1280x720 [SAR 1:1 DAR 16:9], 5579 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
      creation_time   : 2015-12-22 23:09:46
      handler_name    : Alias Data Handler
    Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 253 kb/s (default)
      creation_time   : 2015-12-22 23:09:46
      handler_name    : Alias Data Handler

As you can see in the example above, my file contains two streams: the video stream (h264) as 0:0 and a single audio stream (aac) as 0:1.

To get rid of the audio stream with ffmpeg, I simply needed to ask ffmpeg nicely to copy the file, keeping the 0:0 video stream, ignoring the audio stream and leaving the codecs alone (i.e., not trying to reencode anything):

ffmpeg -i yourfile.mp4 -map 0:0 -acodec copy -vcodec copy yourfile-silent.mp4

If you have multiple video streams or if you want to keep some audio streams, then just adapt the mappings accordingly.
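For instance, if the file also had a second audio stream at 0:2 that you wanted to keep (a hypothetical example), you would map both streams explicitly:

ffmpeg -i yourfile.mp4 -map 0:0 -map 0:2 -acodec copy -vcodec copy yourfile-one-track.mp4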

Docker for Windows (beta) and msysgit

Friday, April 15th, 2016

I’ve recently joined the beta program for Docker on Windows (now based on Hyper-V).

I wanted to keep my current config using msysGit, but got weird errors when executing Docker commands from msysGit: MSYS’s automatic path conversion was mangling my arguments.

I could fix the issue by installing a newer version of msysGit with support for the MSYS_NO_PATHCONV environment variable. With that installed, I then replaced my docker alias with a small function that toggles that variable:

    docker() {
        # disable MSYS path conversion so the arguments reach docker.exe untouched
        export MSYS_NO_PATHCONV=1
        ("$DOCKER_HOME/docker.exe" "$@")
        # reset the flag afterwards
        export MSYS_NO_PATHCONV=0
    }
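With that function in place, container-side paths are no longer rewritten into Windows paths, so path-heavy commands behave as expected; for example (a made-up invocation):

    docker run --rm -v "C:/Users/seb/projects":/data alpine ls /data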

Hope this helps!

Static sites? Let’s double that!

Monday, March 14th, 2016

Now that I’ve spent a good deal of time learning about what’s hot in the front-end area, I can go back to my initial goal: renew this Website.. or maybe I can fool around some more? :) In this post, I’ll describe the idea that I’ve got in mind.

One thing that’s been bothering me for a while is the dependency that I currently have on WordPress, PHP and a MySQL database. Of course there are pros and cons to consider, but currently I’m inclined to ditch WordPress, PHP and MySQL in favor of a static site.

Static site generators like Hugo (one of the most popular options at the moment) let you edit your content using flat files (e.g., using Markdown) with a specific folder structure. Once your content is ready for publication, you have to use a CLI/build tool that takes your content (e.g., posts, pages, …) and mixes it with a template.

Once the build is complete, you can upload the output to your Web host; no need for a database, no need for a server-side language, no need for anything more than a good old Apache Web server (or whatever Web server flavor you like). Neat!
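To make that flow concrete, here’s roughly what it looks like with Hugo (commands from Hugo’s CLI; you still need to drop a theme into the site before building):

    hugo new site mysite             # scaffold the folder structure
    cd mysite
    hugo new posts/first-post.md     # create a flat content file (Markdown + front matter)
    hugo                             # build: mix the content with the templates, output to ./public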

Now what I’m wondering is: can we go further? What if we could create doubly static static sites? :)

Here’s the gist of my idea:
First, we can edit/maintain the content in the same way as with Hugo: through a specific folder structure with flat files. Of course, we can add any feature we’d like around that: front matter, variables & interpolation, a content editor, … For all of that, a build/CLI tool would be useful… more on that later.

Note that the content could be hosted on GitHub or a similar platform to make the editing/publishing workflow simpler/nicer.

So, we’ve got static content, cool. What else? Well now what if we added a modern client-side Web application able to directly load those flat files and render them nicely?

If we have that then we could upload the static content to any Web host and have that modern Web app load the content directly from the client’s Web browser. The flow would thus be:

  • go to the site’s URL
  • receive the modern Web app files (HTML, CSS, JS)
  • the modern Web app initializes in my Web browser
  • the modern Web app fetches the static content (pages, posts, …)
  • the modern Web app renders the content

Ok, not bad but performance could be an issue! (let’s ignore security for a moment ok? :p).
To work around that, we could imagine loading multiple posts at once and caching them.
If we have a build/CLI, it could also pack everything together so that the Web app only needs to load a single file (let’s ignore the HTTP 1.1 vs HTTP 2.0 debate for now).

In addition, we could also apply the ‘offline-first’ idea: put pages/posts in local storage on first load; the benefit would be that the application could continue to serve the content offline (we could combine this with service workers).

The ideas above partially mitigate the performance issue, but the first render would still take a long time, and SEO would remain a major problem since search engines are not necessarily great with modern client-side Web apps (are they now?). To fix that, we could add server-side rendering (e.g., using Angular Universal).

Server-side rendering is indeed nice, but it requires a specific back-end (let’s assume node). Personally I consider this to be a step back from the initial vision above (i.e., need for a server-side language), but the user experience is more important. Note that since dedicated servers are still so pricey with OVH, it would be a good excuse to go for DigitalOcean.. :)

Another important issue to think about is that without a database, we don’t have any way to make queries for content (e.g., search a keyword in all posts, find the last n posts, …). Again, if we have a build/CLI, then it could help work around the issue; it could generate an index of the static content you throw at it.

The index could contain results for important queries, post order, … By loading/caching that index file, the client-side Web app could act more intelligently and provide advanced features such as those provided by WordPress and WordPress widgets (e.g., full text search, top n posts, last n posts, tag cloud, …).
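As a very naive sketch of what that index generation could look like (the folder layout and front-matter format are assumptions):

    # list every post with its title, newest first (assumes Markdown posts with
    # a "title:" line in their front matter; naive about spaces in file names)
    for f in $(ls -t content/posts/*.md); do
      title=$(grep -m1 '^title:' "$f" | cut -d: -f2- | sed 's/^ *//')
      printf '%s\t%s\n' "$f" "$title"
    done > index.tsv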

Note that for search though, one alternative might be Google Search (or Duck Duck Go, whatever), depending on how well it can handle client-side Web apps :)

In addition, the build/CLI could also generate content hashes. Content hashes could be used to quickly detect which bits of the content are out of date or new and need to be synchronized locally.
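Something as simple as the following would already let the client-side app diff its cache against the server (again a sketch, reusing the content folder assumed above):

    # hash every content file; the app compares these against its cached copies
    find content -type f -name '*.md' -exec sha1sum {} + > hashes.txt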

There you have it, the gist of my next OSS project :)

I’ll stop this post here as it describes the high level idea and I’ll publish some additional posts to go more in depth over some of the concepts presented above.


Thursday, February 18th, 2016

I’ve received a YubiKey Neo today, so I’m going to start experimenting with it. If you care about security but have never heard of YubiKey or Universal 2nd Factor (U2F), then you should probably take a look at how awesome that stuff is :)

Here’s a list of things I’m planning on using it as second authentication factor for:

  • Google tools & Google Chrome
  • Windows authentication
  • OpenVPN
  • KeePass
  • Android

I’ll also look at other ways I could leverage U2F… If you’ve got tips & tricks to share, don’t hesitate to tell me!

Modern Web Development – Part Two

Wednesday, February 17th, 2016

In the first part of this series, I’ve explained how I re-discovered the state of the Web platform and specifically of the JavaScript universe.

Around June, I changed my mind about AngularJS and thought that Angular 2 could arrive on time for our project (hint: it didn’t), so I decided to tag the current state of my personal project and tried to actually start developing my site using it.

I spent some time during my holidays to migrate my app to Angular 2. During that time, I banged my head against the wall so many times it still hurts; not because of Angular 2, but because of RxJS and Reactive Programming; those made me feel really stupid for a while :)

Also during that time, I spent time improving my build. The build story was starting to itch me real bad, so at some point I put my project aside and decided to extract the build into a separate project and concentrate on that for a while. That effort led to the creation of modernWebDevBuild (MWD for friends). MWD was my take on providing a reusable build for creating modern web applications. You could argue that that solution is not modern anymore, but hey, I can’t stop time ;-)

If you look at the feature list of modernWebDevBuild, you’ll see that it’s basically Web Starter Kit on steroids with support for TypeScript, tslint, karma, etc.

I’ve put some effort into making it flexible enough that it doesn’t put too many constraints on the client project’s structure, and I’m pretty sure that, with some help from the community, it could become much more malleable and be reused across many more projects, independently of whether those are based on Angular 1, Angular 2 or something else.

A while later, I also created a Yeoman generator called modernWebDevGenerator to make it easy to scaffold new projects using modernWebDevBuild. The generated projects include many personal choices (e.g., Angular 2, TypeScript, SystemJS, JSPM, sass, jshint and a rule set, jscs and a rule set, …) and style guidelines (e.g., a component approach for Angular and SASS code), but most if not all of them can be stripped away easily.

In my opinion, modernWebDevBuild was a good shot at providing a reusable build for front-end web development. I’ve used it for multiple projects and could update it easily without having to worry about the build or having to maintain a ton of build-related dependencies and whatnot. That was a relief: fixing an issue meant fixing it once in one place, much better!

For me, the idea of having a complete build as a dependency of a project is something I find immensely valuable.

Recently though, with the project at work (where we’ll use AngularJS for now), we’ve evaluated different solutions for bundling, module loading & build tasks in general, which led to the decision to use webpack. So far, it’s been a blast. I’m not going to explain in detail what webpack is, as there are already more than enough articles about it out there, but IMHO it’s the clear winner at the moment. The most important thing for me is that it has a very active/vibrant community around it, busy maintaining & developing tons of plugins. Those plugins add support for pretty much anything that you might need in your front-end build. You need transpilation? Check. You need autoprefixing? Check. You need cache busting? Check… well, you get the idea.

We’ve decided to fork the Angular 2 Webpack Starter Kit of AngularClass as it was the closest to what we needed to have.

With our project template, our goal is to integrate the whole stack that we’ve decided to use (e.g., Redux, RxJS, JSData, webpack for module bundling/loading, …) and use that template as basis for our next projects.

The thing is that I’d still like to extract the webpack build into a separate project (or at least a part of it). Again, I really believe that it should be possible to provide a reusable build configuration, as long as it is flexible enough to accommodate general use cases. Ultimately the discussion boils down to pragmatism versus the pleasure of reinventing your own wheel each time. Personally, I like round wheels, and if one goes flat I don’t want to have to fix all my cars. What about you?

In the next post, I’ll explain what my new goal is for my site; as I said, I took a different route for a while because I had lots to learn, but now it’s about time for me to go back to my initial goal :)

Modern Web Development – Part one

Wednesday, February 17th, 2016

Since April last year, I’ve been plunging back into the world of Web development… and what fun it has been! In this series of posts, I’m going to summarize the stuff I’ve done last year in order to hop back on the train, and I’ll describe what I’ve learned along the way.

At the time, I published two blog posts, which were my way of condensing my vision for an important project at work aiming to modernize the way we create Web applications by going towards a client-side architecture combined with RESTful Web Services on the back-end.

When I started looking back at how the Web platform had evolved during the 2012-2015 period, the main things I had on my mind were:

  • mobile first & responsive web design
  • client-side Web application architecture (which I personally consider to be part of Web 3.0 — seriously, why not?)
  • the new specs that had reached broad support in modern Web browsers and were gaining a lot of traction
  • the offline first idea that these specs made more realistic

I wanted to learn more about AngularJS, node.js, npm and sass, but that was about it. I remember that at first, I had no precise idea yet about the build tool and the build steps that I wanted/needed… I hadn’t even heard about ES6 yet!

Since then, I’ve learned a ton about ES2015, TypeScript, module systems, module loaders, JS frameworks & the tooling around, front-end state management solutions, front-end build systems, project boilerplates, css style guides, quality assurance for front-end apps, unit & e2e testing libraries, … and the integration of it all…

The funny thing is that… I failed to deliver.

Initially, my personal goal was to create a responsive client-side Web app exploiting the RESTful API of my WordPress installation to replace my current theme, but I changed my mind along the way… So far, my site hasn’t changed one bit. I did improve some things though, but that was more around security than anything else.

So what made me change my mind and where did I spend my time?

At first, I was concentrated on the task at hand and I looked at how the HTML5 boilerplate had evolved as I knew that it was one of the best starting points around for creating modern Web apps. My idea was simple: use HTML5 boilerplate or InitializR to get ModernizR… and add some good old script tags… :p

I started with HTML5 boilerplate, but shortly after, I stumbled upon Web Starter Kit which was fresh out of Google’s oven, was based on HTML5 boilerplate and had some really cool features.

It came out of the box with a nice build which included support for JSCS (JS code style), JSHint (JS code quality), autoprefixing, BrowserSync (if you don’t know that one, DO check it out!), sass and ES6 (that was still the name at that point) with the help of Babel, …

 I really liked their setup and decided to use it as basis for my project; and that’s where my trajectory deviated :)

Given that I’m quite curious, I spent a while deconstructing Web Starter Kit’s build so that I could really understand what made it tick. That made me discover npm, gulp and the whole ecosystem of gulp plugins.

I really enjoyed doing so as it has helped me better grasp the necessary build steps for modern Web apps:

  • transpile code (ts->js, sass->css, …)
  • check quality
  • check style
  • create a production build (bundle, minify, mangle, …)
  • execute unit tests
  • execute end to end tests

At that moment, I was happy with the build as it stood, so I continued to focus on developing my app. I took a good look at what ES6 was, what it meant for JavaScript and its ecosystem, and how Babel helped (was it still called 6to5 then?). Learning about ES6 features took me a long while and I’m still far from done, but it was well worth it. ES2015 is such a huuuuuuuuuuuge step forward for the language.

I also took a glance at Angular 2 which was still in alpha state. It looked interesting but I believed that it would never be ready in time for our project at work (and it wasn’t). Still, I did spend a few days toying around with the alpha just to get to understand the basic principles.. and I must say that I really really loved what I saw!

That quick research spike also made me discover TypeScript.

Having a strong Java & OO background, TypeScript (TS) directly got me excited. I’m a strong believer in strong (heh) typing, and the fact that TS already supported many ES6 features that weren’t natively supported by Web browsers yet was very appealing to me.

Moreover, having dozens of Java developers in our development teams at work, TypeScript seemed really ideal for us as it supports many features and idioms that our developers are very familiar with (classes, interfaces, generics, strong typing, decorators, …).

If you want to learn more about TypeScript, I definitely recommend the TypeScript Deep Dive.

At that point, tsconfig.json wasn’t there yet and the most evident choice to integrate the necessary build step was gulp, as advertised by Dan Wahlin’s excellent blog post. If I had read more about npm, I might have gone down a completely different path (i.e., used only npm scripts)… ^^

At that point, I had to deviate from what Web Starter Kit offered me in order to add build tasks for TypeScript, tslint, etc. Fiddling with the build made me realize that it was quite brittle, so I refactored it quite a lot and tried to improve things (e.g., separate the build tasks in different files, extract the configuration settings, ensure that it would not break the build on each error, etc). I remember that I wanted to contribute back to Web Starter Kit but realized too late that I had made too many changes at once for them to be able to integrate easily (silly me, bummer).

I went pretty far with it actually: at some point, I was using TypeScript to output ES6 code that I then sent through Babel, just so that I could use async/await and other things that TypeScript wasn’t yet able to transpile to ES5… :)
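From memory, that pipeline looked roughly like this (flags and paths are illustrative, not my exact setup of the time):

    tsc src/*.ts --target ES6 --outDir build/es6    # TypeScript emits ES6 code
    babel build/es6 --out-dir build/es5             # Babel takes that ES6 output down to ES5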

The exercise helped me see how “immature” and “fragile” the whole JavaScript ecosystem was. What I mean by that is that there seem to be only moving parts, and those parts don’t necessarily stay happy with each other. Not only do too few people really understand what semver actually means and respect it, but everything that shines bright gets replaced faster than the speed of light :)

As a technologist, I love the pace it imposes for the fun and innovation it brings to the table, but it’s also frustrating for many reasons and (should be) quite scary for enterprises (to some degree). People talk about JavaScript fatigue, which is quite a fun way to put it and I can certainly understand the idea now.

One example that I thought a lot about is the fact that each and every front-end project seems to have its own build chain and build configuration that lives within the project, in complete isolation and has to be maintained.

Of course each and every project has its specificities so there really can’t be ONE rigid & reusable solution to rule them all, but the idea of duplicating so much effort needlessly across a whole community of developers violates the DRY principle as much as anything ever could.

Just try and imagine how many people must have used some Yeoman generator to scaffold projects, which now all have separate builds with tasks that all do the same things but are all defined 20.000 times in a gazillion different ways using variable and unreliable dependency versions… :)

When you scaffold a project using a generator, you end up with a snapshot of the template and of the build provided by the generator at that point in time and then it’s up to you to keep your version up to date and to integrate all improvements and bug fixes, assuming you have time to follow that… you poor thing!

Being part of a core software development team at work, my focus is most often on finding reusable solutions to common problems, limiting effort duplication and whatnot; thus, the front-end universe’s situation seems quite sad in that regard.

Another point that struck me was how limited the main package management solution was. npm is nice and all, but not being able to define some parent/generic/reusable configuration (e.g., like parent pom files in Maven) is kind of surprising. Again, the DRY principle probably corresponds to DO Repeat Yourself in the front-end universe. I’m sure that front-end experts will tell me that you can work around all that in countless ways, but that’s exactly the issue: I shouldn’t have to invent my own solution to a general problem people should be concerned about.

To conclude on a positive note though, I do believe that all the tooling DOES bring added value because it makes it possible to manage dependencies correctly, define build steps which execute tests, generate coverage reports (e.g., using Istanbul), generate production builds etc.

This piece is getting a bit long, so I’ll continue my little story in part two!


OVH and HTTP headers

Friday, January 15th, 2016

If, one winter evening, you feel the urge to send HTTP headers to your back-end hosted at OVH (i.e., if you’re as crazy as I am), then my little story should interest you (at least its conclusion)!

Since I have a strong tendency to experiment, I set up a small token system based on JSON Web Tokens (JWT), so that I can generate tokens on the fly, check their validity, renew them, etc.

Since I still don’t have a dedicated server (donations welcome :p), I implemented it in PHP (see my previous post) and uploaded the whole thing to an OVH shared host.

Of course, I had developed/tested everything locally and was rather pleased with myself. But once deployed at OVH, my first attempt was a total failure. So broken, even, that I suspected a lunar eclipse.

After poking at the thing for a good half hour, I realized that OVH does not pass HTTP headers along without tickling them a bit on the way.

Indeed, my pretty “Authorization” header purely and simply disappears on arrival, while a more exotic variant such as “X-Authorization” becomes “X_Authorization”.
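An easy way to observe this is to send both headers and have the back-end echo what it receives (a sketch: dump.php is a hypothetical script that prints the incoming headers):

    curl -H 'Authorization: Bearer test' -H 'X-Authorization: Bearer test' https://your-site.example/dump.php
    # on arrival: Authorization is gone, and X-Authorization shows up as X_Authorization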

Now, I can well imagine that OVH does this for very good reasons (which I’m curious to hear), but I must admit that this time they managed to make me feel queasy :)

In short, you’ve been warned!