Quick NPM tip and a little rant about node-gyp

Wednesday, July 1st, 2015

Before I start explaining why I’m writing this, here’s my NPM tip of the day: if you encounter errors pertaining to node-gyp “rebuild” while trying to install an NPM package, then before wasting precious hours of your life, just try installing with the --no-optional flag; if you’re in luck, that’ll just work (as it did for me in most cases).
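In practice, that just means appending the flag to whatever install command was failing for you; for example (browserify is only used here as an illustration):

# skip optional dependencies, which are often the ones dragging in a native build
npm install browserify --no-optional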

Now what the heck is node-gyp? That’s a fair question to ask. As they put it in their readme it’s a “cross-platform command-line tool written in Node.js for compiling native addon modules for Node.js … and takes away the pain of dealing with various differences in build platforms”.

Well, the way I now see it, it might just do what they say… for people who need and care about that, but for the rest of the world, and especially people like me who just want to install an NPM package and get on with their life, it’s just trouble and a needless waste of time.

Sometimes when you try to install an NPM package, there will be some dependency in the tree that needs to be built specifically for your platform, and at that point node-gyp (which is one of the dependencies of NPM itself) comes into play. The issue is that to be able to do its job, node-gyp has some prereqs that vary from OS to OS, and those prereqs are not part of node/NPM (you’ll soon understand why :p). If you’re one of the good guys and use Linux (you should… and I should too but can’t) then you’ll be alright: python + make will make your day (you’ll also be fine on OSX).

Unfortunately, if you’re a sad panda working on a Windows box just like me, then tough luck!

Here’s a little overview of the ‘light/small’ requirements that node-gyp has on Windows 8+:

  • Python (2.7.x and NOT 3.x+): ok just with this one I already dislike node-gyp
  • Microsoft Visual Studio C++ 2013: ok, now I hate it. Do I really need 7GB just to get an npm dependency on my machine? Wtf (pre-compiled binaries FTW, if I wanted to compile everything myself on my machine, I’d still be using gentoo..)
  • and last but not least, for 64-bit builds… Windows SDK: are you kidding me?!!

Assuming that you’re motivated, you’ll go ahead and install these… try again and… still get the same error?! Gee… Well, the thing is that quite a few people have encountered this problem and have jumped through all kinds of hoops to finally get it to work. Some have had success by uninstalling all Visual C++ redistributable packages (any gamers around here?), reinstalling node-gyp’s dependencies in a specific order, adding environment variables and whatnot…

In my case I was pretty happy to discover that in all cases, the dependencies that needed node-gyp were optional (e.g., for babel, browserify and some others), so simply avoiding them was fine. If you really do need node-gyp to work then I pity you and your disk space ^^. Just take a look at some of these links and may the force be with you.

What also sucks is that npm install rolls back on error even for optional dependencies although it’s not supposed to..

HSTS enabled!

Friday, June 19th, 2015

Hey everyone!

As noted in my previous post, I’ve finally switched my domain to HTTPS. I was reluctant to enable HSTS (HTTP Strict Transport Security) at first but after looking at this talk, I’ve decided to just go with the flow and enable it on CloudFlare:

[Screenshot: the HSTS option enabled in the CloudFlare dashboard]

Basically it means that, as of right now, you’ll always use HTTPS when visiting my website, even if you try to visit the old HTTP URL. This will happen not only because my Apache server is configured to automatically redirect you to the HTTPS version, but also because your browser itself will go straight to the HTTPS URL. Why will it do that? Because my site is now sending the HSTS HTTP header:

strict-transport-security:max-age=15552000; includeSubDomains; preload

Basically that header tells your browser: this is an HTTPS-enabled website, so always use HTTPS if you come back here, for this domain and all its sub-domains, for the next six months (a max-age of 15552000 seconds is about 180 days).
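By the way, if you want to check whether a given site sends that header, a quick curl from any shell will do (example.com being a placeholder, of course):

# -s silences the progress output, -I sends a HEAD request and prints only the response headers
curl -sI https://www.example.com | grep -i strict-transport-security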

For now, as my site isn’t in the browsers’ HSTS preload list yet (I’ve just submitted it), you may still visit this site once using plain HTTP, but as soon as your browser sees the HSTS HTTP header it’ll remember to always switch to HTTPS.

Why does HSTS matter? Because it will protect YOU against man-in-the-middle attacks.. not that this Website is sensitive in any way, but as a good Web citizen I have to do what I can, right? ;-)

I was hesitant to enable this because I’ve just signed up with CloudFlare, and if they decide to drop their free subscription plan then I’ll be forced to either find another similar solution or buy a certificate that I can install on my web host; in my case OVH doesn’t allow importing third-party certificates and charges about 50€ per year for SSL (which is wayyyyyyyy too much for a personal website).

The bet that I’m making by enabling HSTS now is simply that CloudFlare’s free subscription plan will remain available for at least 2-3 years (hopefully much longer) and that in the meantime, given how major players such as Mozilla, Google and others are pushing for HTTPS everywhere, the overall accessibility/affordability of HTTPS for personal websites will have improved. If I’m wrong, well, then I’ll either pay if you show me enough love, or shut this thing down ;-)

HTTPS everywhere

Thursday, June 18th, 2015

TL;DR CloudFlare is awesome, but don’t underestimate the effort required to fully switch your site to HTTPS

About time… That’s what I keep telling myself; my site won’t be considered insecure by default :)

I’ve finally switched this site to HTTPS and I must say that CloudFlare has made this extremely easy, straightforward and fast.

Now I’ll be able to have fun with Service Workers and other modern Web goodies that require HTTPS.

Here’s what I had to do in order to get the holy green padlock.

First I had to create a (FREE) account on CloudFlare. Once my account was created I entered the domain that I wanted to add and CloudFlare went about finding all the DNS zone entries it could find. That took about a minute and the result was correct.

Next, I had to modify my domain’s DNS zone name servers to replace the OVH ones with those of CloudFlare. It didn’t take too long for the switch to actually take place, even though DNS replication ain’t the fastest of things.

And bam done.. or almost.


As I like tweaking stuff, I had to check out all the features provided by CloudFlare, and the least I can say is that the feature list included in the free tier is just plain impressive!

Here’s what I’ve enabled:

  • SSL w/ SPDY: SSL between clients and CloudFlare as well as between CloudFlare and OVH (although the certificate presented by OVH isn’t trusted it’s still better than nothing)
  • IP firewall: basic but nice given the price :p
  • Automatic minification of JS/CSS/HTML assets
  • Caching
  • Always online: awesome, they’ll continue to serve my static content even if the site goes down
  • A few other nice things

They also provide ways to purge their cached data and to enable a Dev mode that lets you access up-to-date resources, etc.

In the future, if I’m convinced that I can keep my site HTTPS-enabled for long, then I’ll also enable HSTS.

I might also give their Rocket Loader feature a try…


Enabling HTTPS for my site is only the first part of the story; there were other changes I needed to make in order to get the almighty green padlock (TM).

I first needed to make sure that my visitors (you guys) visited the site using HTTPS, so I’ve updated my .htaccess file accordingly:

...

RewriteEngine On

# 2015-06-18 - Automatic redirection to https now that CloudFlare is enabled
RewriteCond %{HTTPS} off
# rewrite to HTTPS
RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
# rewrite any request to the wrong domain to use www.
RewriteCond %{HTTP_HOST} !^www\.
RewriteRule .* https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

...

With this, I have an automatic http to https redirection. Of course that isn’t going to protect you from MITM attacks but I’m not ready to enable HSTS just yet.
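A quick way to check that the redirection behaves as expected is to request the plain HTTP URL and look at the status line and Location header (again, example.com is a placeholder for your own domain):

# expect a 301 with a Location header pointing to the https:// (and www.) version
curl -sI http://example.com/some-page | grep -iE '^(HTTP|Location)'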


Next I had to update my WordPress configuration to ensure that all generated links use HTTPS (WordPress address & Site address URL).

This fixed a few issues with mixed content, but not all of them. I had to go through all of my template files to ensure that I was using https everywhere; among other things, I had hardcoded the URL of my FeedBurner RSS feed.


I also noticed that I was still getting errors in the console about mixed content and indeed my site was retrieving some resources using plain HTTP from other domains.

In order to fix this, I had to

  • use my very rusted SQL-fu to replace http with https everywhere it made sense in my posts (e.g., links to Google Photos images, links to my own site, etc); there’s a sketch of that right after this list
  • modify one of my WordPress extensions to retrieve its scripts from Google’s CDN using HTTPS
  • get rid of an extension that was using iframes, swf objects and displayed warnings if Flash was missing (oh god..) =)
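For the SQL part, it essentially boiled down to a bulk search and replace on the posts table. Here’s a minimal sketch of that kind of query; the database, table and user names as well as the domain are assumptions, and you should back up your database before running anything like it:

# replace hard-coded http:// links in WordPress post content (hypothetical names; back up first!)
mysql -u wpuser -p wordpress_db -e \
  "UPDATE wp_posts SET post_content = REPLACE(post_content, 'http://www.example.com', 'https://www.example.com');"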

I also took the opportunity to configure CORS, also through my .htaccess:

...

RewriteEngine On
Header set Access-Control-Allow-Origin "*"
Header set Access-Control-Allow-Methods "GET,POST,OPTIONS,DELETE,PUT"
...

And now, just look at this beauty:

[Screenshot: the green padlock in the address bar]

Sublime Text plugins that I use

Monday, June 1st, 2015

TL;DR: I’ve started using Sublime Text as my default text editor, it is indeed awesome and I’ve compiled a list of the plugins that I find useful.

For a very long time, my text editor of choice has remained Notepad++ (NPP for friends). NPP is still great (it’s free and open source, it has tabs, extensions, syntax highlighting and some other goodies), but it’s not AWESOME.

I’ve been hearing about Sublime Text for a while now but never really took the time to try it out seriously. Moreover, the last time I checked, I noticed that it wasn’t free, so I didn’t look any further. Although I do understand the reasons why some developers choose the lucrative path, I’m in general more inclined to use free and preferably open source software (why pay when you can get something just as good for free?).

So Sublime Text’s website URL was hidden somewhere in some dark corner of my bookmarks and was set to remain in there forever, until a Web designer at work gave me a quick demo which led me to reconsider it :)

The first word that now comes to my mind when thinking about Sublime Text is “polished”: the UI is really beautiful and that alone makes it very pleasing to use. Sublime has really neat text selection/editing features (e.g., column and multi-selection editing, auto-completion, …), support for many languages (syntax highlighting), uber fast search and navigation, tabs, macros, etc. I’m not going to list it all here as I’m pretty sure many people have taken the time to do so already.

But even though the out-of-the-box feature-list is quite nice, it is far from enough to make me consider it worthy of replacing NPP which I’m very used to. Really getting to know an editor takes time and I only have that much available.

What really made me change my mind is the ecosystem around Sublime. Over time, as the community has grown, many developers have spent time developing a ton of extensions, themes and color schemes for it. The package manager for Sublime is called Package Control and contains almost 3K packages, hence at least 100 are probably worth a try :)

Suffice it to say, knowing this, I needed to go through the catalog and try out the most popular extensions. In doing so, I’ve realized that Sublime + extensions > NPP + extensions, which is why Sublime is now my default text editor. It’ll take me a few weeks/months to really take advantage of it, but I already enjoy it a lot.

I’m not going to explain here how to install the package manager or install packages; for that you should rather check out the following video.

Without further ado, here’s the list of extensions that I’m currently using along with a small description to give you an idea of why I consider each useful/relevant for productivity (assuming that you’re into software development that is ^^). I’ll create new posts whenever I discover new ones that are of interest.

General:

  • NPM: Easily interact with the Node Package Manager (NPM) from Sublime (e.g., install NPM packages, add project dependencies, …)
  • Gulp: Quickly execute Gulp commands directly from Sublime (I’ll talk about Gulp in a future post)
  • SublimeCodeIntel: Code intelligence and smart autocomplete engine. Supports many languages: JavaScript, Mason, XBL, XUL, RHTML, SCSS, Python, HTML, Ruby, Python3, XML, Sass, XSLT, Django, HTML5, Perl, CSS, Twig, Less, Smarty, Node.js, Tcl, TemplateToolkit, PHP (phew ^^)
  • BracketHighlighter: Highlight brackets in the gutter (bar left of the file contents); very useful to quickly see where any given code block ends
  • Git: Execute git commands directly from Sublime through easy-to-use contextual menus
  • Git Gutter: Show an icon in the gutter indicating whether a line has been inserted/modified or deleted (checked against HEAD by default)
  • SidebarGit: Add Git commands in the sidebar context menu
  • ApplySyntax: Detect file types and apply the correct syntax highlighting automatically
  • Alignment: Easily align multiple selections and multi-line selections
  • AutoFileName: Automatically complete filenames; very useful when referring to project files (e.g., src for an image tag, file name for a CSS import, …)
  • TrailingSpaces: Easily see/remove trailing whitespace (if you’re crazy like me about small details). Check out the options here
  • SublimeLinter: A plugin that provides a framework for linting code in Sublime. Basically this one is a pre-req for some neat plugins (see below). Check out the docs for more information
  • FileDiffs: Show diff between current file or selection(s) in the current file, and clipboard, another file or unsaved changes
  • SidebarEnhancements: Better sidebar context menu
  • ExpandTabsOnSave: Automatically convert tabs to spaces (or the other way around, depending on your indentation settings)
  • Open Folder: Add an ‘Open folder’ option to the sidebar context menu
  • Pretty JSON: Prettify JSON, validate JSON, etc
  • Indent XML: Fix XML and JSON files indentation
  • JSONLint: JSON linter; checks JSON files for errors and displays them in context
  • EditorConfig: Useful to respect the editorconfig file (.editorconfig in the project) which defines a common configuration for text editors
  • Dockerfile Syntax Highlighting: Add syntax highlighting for Dockerfiles

Web development

  • Emmet: Add zen-coding support to Sublime. (e.g., write div*2>span.cool*5 then hit TAB). Emmet is awesome (note that plugins exist for various editors, not only Sublime). Emmet allows me to quickly generate a ton of HTML code without wasting time
  • TypeScript: Add syntax highlighting and autocompletion for TypeScript code
  • JSCS: Check JS code style using node-jscs. To be able to use this you first need to install NodeJS, NPM then JSCS (npm install -g jscs). Check this link out for the complete list of rules that you can configure. Here’s an example from my latest project
  • JSCS-Formatter: Format JS code based on the JS code style that you’ve configured for your project (i.e., through the .jscsrc file) which is pretty neat
  • SublimeLinter-jshint: JSHint linter for SublimeLinter. Shows you what’s wrong with your JS code (requires SublimeLinter)
  • SublimeLinter-csslint: CSS linter for SublimeLinter. Shows you what’s wrong with your CSS code (requires SublimeLinter)
  • SublimeLinter-annotations: Make TODOs, FIXMEs, etc. stand out (requires SublimeLinter)
  • Sass: Sass support for Sublime. Adds syntax highlighting and tab/code completion for Sass and SCSS files. It also has Zen Coding shortcuts for many CSS properties
  • SCSS snippets: Additional SCSS snippets (use tab for autocompletion)
  • CSS3: Add CSS3 support. This plugin includes draft specs and provides autocompletion for each and every CSS3 property. It also highlights bad/old CSS
  • Color Highlighter: Highlight hexadecimal color codes with their real color. Here’s a small tip: in the plugin configuration (ColorHighlighter.sublime-settings), it’s possible to enable permanent color highlighting, which I find particularly convenient: { "ha_style": "filled" }
  • Color Picker: What the name says ;-)
  • Autoprefixer: Add CSS vendor prefixes. This plugin is useful for small prototypes but is otherwise better done through a build process (e.g., using Gulp)
  • HTML5: Snippets bundle for HTML5. Useful to add HTML5 tags/attributes (e.g., type <time then hit TAB)
  • JavaScript Snippets: JavaScript snippets: useful to quickly write JS code
  • AngularJS: AngularJS code completion, code navigation, snippets
  • jQuery: jQuery syntax highlighting and autocompletion (snippets)
  • DocBlockr: Add support for easily writing API docs

Visual candies

  • Seti_UI: Awesome theme with custom icons for file types
  • Schemr: Color scheme selector. Makes it easy to switch color schemes
  • Themr: UI theme selector. Makes it easy to switch themes
  • Dayle Rees colour schemes: A ton of color schemes (.. that I’ll probably never use now that I have Seti_UI :p)

As I’ve explained in previous posts, I’m now busy with the creation of a new version of this website using more modern technologies.

With my current set of Sublime Text plugins, I now almost have a full-featured Web-development-oriented IDE at my disposal. For my current/specific development needs, JetBrains’ WebStorm (a commercial IDE) is actually a better alternative (it supports much of what the plugins above bring and has its own plugin repository), but it’s overkill to use it as my all-around text editor and my wife probably wouldn’t appreciate the $50/year license cost (even though it’s very reasonable) :)

For casual text editing, quick prototyping etc, Sublime Text wins hands down given how fast it starts and how reactive it is overall.

Note that there is another interesting editor called Atom. Atom has been developed by GitHub and is free and open source. Its engine is based on Web technologies (I assume WebKit, Chromium or the like) which is great for hackability and it is gaining a lot of momentum (it has already >2K plugins). I think that it’s still a bit young so I’ll check back in a year or two.. but don’t take my word for it. Try it out and don’t hesitate to tell me if you think it’s actually better than Sublime (and why) =)

Recovering a RAID array in “[E]” state on a Synology NAS

Tuesday, May 19th, 2015

WARNING: If you encounter a similar issue, try to contact Synology first, they are ultra responsive and solved my issue in less than a business day (although I’m no enterprise customer). Commands that Synology provided me and that I mention below can wipe away all your data, so you’ve been warned :)

TL;DR: If you have a RAID array in [E] (DiskError) state (a Synology-specific error state), then the only option seems to be to re-create the array and run a file system check/repair afterwards (assuming that your disks are fine to begin with).

Recently I’ve learned that Synology introduced Docker support in their 5.2 firmware (yay!), but unfortunately for me, just when I was about to try it out, I noticed an ugly ORANGE led on my NAS where I always like to see GREEN ones..

The NAS didn’t respond at all so I had no choice but to power it off. I first tried gently but that didn’t help so I had to do it the hard way. Once restarted, another disk had an ORANGE led and at that point I understood that I was in for a bit of command-line fun :(

The Web interface was pretty clear: my Volume2 was Crashed (that didn’t look like good news :o) and couldn’t be repaired (through the UI, that is).

After fiddling around for a while through SSH, I discovered that my NAS created RAID 1 arrays for me (with one disk in each), which I wasn’t aware of; I actually never wanted to use RAID in my NAS!

I guess it makes sense for beginner users, as it allows them to easily expand capacity/availability without having to know anything about RAID, but I wasn’t concerned about availability, and since RAID is no backup solution (hope you know why!), I didn’t want that at all; I have proper backups (on & off-site).

Well in any case I did have a crashed RAID 1 single disk array so I had to deal with it anyway.. :)

Here’s the output of some commands I ran which helped me better understand what was going on.

The /var/log/messages showed that something was wrong with the filesystem:

May 17 14:59:26 SynoTnT kernel: [   49.817690] EXT4-fs warning (device dm-4): ext4_clear_journal_err:4877: Filesystem error recorded from previous mount: IO failure
May 17 14:59:26 SynoTnT kernel: [   49.829467] EXT4-fs warning (device dm-4): ext4_clear_journal_err:4878: Marking fs in need of filesystem check.
May 17 14:59:26 SynoTnT kernel: [   49.860638] EXT4-fs (dm-4): warning: mounting fs with errors, running e2fsck is recommended
...

Running e2fsck at that point didn’t help.

A check of the disk arrays gave me more information:

> cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [
md2 : active raid1 sda3[0]
      3902296256 blocks super 1.2 [1/1] [U]

md6 : active raid1 sdc3[0]
      3902296256 blocks super 1.2 [1/1] [U]

md5 : active raid1 sdf3[0]
      3902296256 blocks super 1.2 [1/1] [U]

md3 : active raid1 sde3[0](E)
      3902296256 blocks super 1.2 [1/1] [E]

md7 : active raid1 sdg3[0]
      3902296256 blocks super 1.2 [1/1] [U]

md4 : active raid1 sdb3[0]
      1948792256 blocks super 1.2 [1/1] [U]

md1 : active raid1 sda2[0] sdb2[2] sdc2[4] sde2[1] sdf2[3] sdg2[5]
      2097088 blocks [8/6] [UUUUUU__]

md0 : active raid1 sda1[0] sdb1[2] sdc1[4] sde1[1] sdf1[3] sdg1[5]
      2490176 blocks [8/6] [UUUUUU__]

unused devices: <none>

As you can see above, the md3 array was active but in a weird [E] state. After Googling a bit I discovered that the [E] state is specific to Synology, as that guy explains here. Synology doesn’t provide any documentation around this marker; they only state in their documentation that we should contact them if a volume is Crashed.

Continuing, I took a detailed look at the md3 array and the ‘partition’ attached to it, which looked okay; so purely from a classic RAID array point of view, everything was alright!

> mdadm --detail /dev/md3
/dev/md3:
        Version : 1.2
  Creation Time : Fri Jul  5 14:59:33 2013
     Raid Level : raid1
     Array Size : 3902296256 (3721.52 GiB 3995.95 GB)
  Used Dev Size : 3902296256 (3721.52 GiB 3995.95 GB)
   Raid Devices : 1
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Sun May 17 18:21:27 2015
          State : clean
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : SynoTnT:3  (local to host SynoTnT)
           UUID : 2143565c:345a0478:e33ac874:445e6e7b
         Events : 22

    Number   Major   Minor   RaidDevice State
       0       8       67        0      active sync   /dev/sde3


> mdadm --examine /dev/sde3
/dev/sde3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 2143565c:345a0478:e33ac874:445e6e7b
           Name : SynoTnT:3  (local to host SynoTnT)
  Creation Time : Fri Jul  5 14:59:33 2013
     Raid Level : raid1
   Raid Devices : 1

 Avail Dev Size : 7804592833 (3721.52 GiB 3995.95 GB)
     Array Size : 7804592512 (3721.52 GiB 3995.95 GB)
  Used Dev Size : 7804592512 (3721.52 GiB 3995.95 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : a2e64ee9:f4030905:52794fc2:0532688f

    Update Time : Sun May 17 18:46:55 2015
       Checksum : a05f59a0 - correct
         Events : 22


   Device Role : Active device 0
   Array State : A ('A' == active, '.' == missing)		

See above, all clean!

So at this point I realized that I only had a few options:

  • hope that Synology would help me fix it
  • try and fix it myself using arcane mdadm commands to recreate the array
  • get a spare disk and copy my data to it before formatting the disk, re-creating the shares and putting the data back (booooringgggggg)

To be on the safe side, I saved a copy of the output of each command so that I had at least a record of the initial state of the array. To be honest, at this point I didn’t dare go further, as I didn’t know what re-creating the RAID array could do to my data if I did something wrong (which I probably would have :p).
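Saving that state is just a matter of redirecting the output of the commands above to files kept somewhere safe (the paths below are only an illustration):

# keep a record of the array state before touching anything
cat /proc/mdstat > /volume1/homes/admin/md3-mdstat.txt
mdadm --detail /dev/md3 > /volume1/homes/admin/md3-detail.txt
mdadm --examine /dev/sde3 > /volume1/homes/admin/sde3-examine.txt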

Fortunately for me, my NAS is still supported and Synology fixed the issue for me (they connected remotely through SSH). I insisted on getting the commands they used and here’s what they gave me:

> mdadm -Cf /dev/md3 -e1.2 -n1 -l1 /dev/sde3 -u2143565c:345a0478:e33ac874:445e6e7b
> e2fsck -pvf -C0 /dev/md3

As you can see above, they’ve used mdadm to re-create the array, specifying the same options as those used to initially create it:

  • force creation: -Cf
  • the 1.2 RAID metadata (superblock) style: -e1.2
  • the number of devices (1): -n1
  • the RAID level (1): -l1
  • the device id: /dev/sde3
  • the UUID of the array to create (the same as the one that existed before!): -u2143565c….

The second command simply runs a file system check that repairs any errors automatically.
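For reference, here’s what the individual flags of that e2fsck invocation do (standard e2fsck behaviour, nothing Synology-specific):

# -p   automatically repair ("preen") the filesystem without asking questions
# -v   be verbose
# -f   force a check even if the filesystem seems clean
# -C0  report progress on file descriptor 0, which displays a progress bar
e2fsck -pvf -C0 /dev/md3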

And tadaaaa, problem solved. Thanks Synology! :)

As a sidenote, here are some useful commands:

# Stop all NAS services except SSH
> syno_poweroff_task -d

# Unmount a volume
> umount /volume2

# Get detailed information about a given volume
> udevadm info --query=all --name=/dev/mapper/vol2-origin
P: /devices/virtual/block/dm-4
N: dm-4
E: DEVNAME=/dev/dm-4
E: DEVPATH=/devices/virtual/block/dm-4
E: DEVTYPE=disk
E: ID_FS_LABEL=1.42.6-3211
E: ID_FS_LABEL_ENC=1.42.6-3211
E: ID_FS_TYPE=ext4
E: ID_FS_USAGE=filesystem
E: ID_FS_UUID=19ff9f2b-2811-4941-914b-ef8ea3699d33
E: ID_FS_UUID_ENC=19ff9f2b-2811-4941-914b-ef8ea3699d33
E: ID_FS_VERSION=1.0
E: MAJOR=253
E: MINOR=4
E: SUBSYSTEM=block
E: SYNO_DEV_DISKPORTTYPE=UNKNOWN
E: SYNO_KERNEL_VERSION=3.10
E: SYNO_PLATFORM=cedarview
E: USEC_INITIALIZED=395934

That’s it for today, time to play with Docker on my Synology NAS!

Portrait

Tuesday, May 12th, 2015

[Photo: 2015-04-05 - 16h16 - 033 - Claudine.jpg]

Let there be light

Tuesday, May 12th, 2015

[Photo: 2015-05-05 - 19h36 - 012.jpg]

Bon pied bon oeil

Friday, May 1st, 2015

[Photo: 2015-04-14 - Bernard.jpg]

Reveal.js me something

Sunday, April 26th, 2015

tl;dr: I’ve created a project for creating Reveal.JS presentations quickly using Markdown alone

About

I’ve been wanting to play around with Reveal.js for quite some time but never quite took the time necessary to read the docs.

Yesterday I did and realized that the only serious editor for Reveal.js is http://slides.com/ which is only free for public decks (which is nice BTW) and well, I’d also like to create my own slide decks without paying just to be able to do so.

Given that Reveal.js is free and open source (MIT license), you can also clone their git repository and create your decks by hand. I like HTML but found Reveal.JS’s syntax a bit too verbose. Luckily, there’s also a way to use Markdown to define the contents of a slide (and the markdown code is converted at runtime using a JS library provided with Reveal.js).

I’ve looked for a way to create Reveal.js presentations quickly based on Markdown alone but couldn’t find one that pleased me.. so I’ve created my very own.

dSebastien’s reveal.js presentations template

presentations-revealjs is a simple-to-use template for creating Reveal.js presentations using Markdown alone; it comes along with a useful build script.

Using it you can:

  • Create your slide deck using markdown alone
  • Edit your metadata in a single configuration file
  • Tweak Reveal.JS as you wish in the provided template
  • Use a few NPM commands to build your presentation and serve it to the world (see the sketch right after this list)
  • See the results live (thanks to BrowserSync)
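Just to give you an idea of the workflow, it boils down to something along these lines; note that the repository URL and the npm script names below are assumptions (the real ones live on the project page and in its package.json), so treat this as a sketch rather than gospel:

# grab the template (replace the URL with the one from the project page) and pull in the dependencies
git clone https://github.com/dsebastien/presentations-revealjs.git my-talk
cd my-talk
npm install
# build the deck and serve it locally with live reload (script names are assumptions)
npm run build
npm run serve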

Check out the project page for more details as well as usage guidelines =)

A bit more Windows Docker bash-fu

Wednesday, April 22nd, 2015

Feeling bashy enough yet? :)

In my last post, I’ve given you a few useful functions for making your life with Docker easier on Windows. In this post, I’ll give you some more, but before that let’s look a bit at what docker-machine does for us.

When you invoke docker-machine to provision a Docker engine using Virtualbox, it “simply” creates a new VM… Okay though pretty basic, this explanation is valid ^^.

What? Not enough for you? Okay okay, let’s dive a bit deeper =)

Besides the VM, behind the scenes, docker-machine generates multiple things for us:

  • a set of self-signed certificates: used to create a server certificate for the Docker engine in the VM and a client certificate for the Docker client (also used by docker-machine to interact with the engine in the VM)
  • an SSH key-pair (based on RSA): authorized by the SSH daemon and used to authenticate against the VM

Docker-machine uses those to configure the SSH daemon as well as the Docker engine in the VM, and stores them locally on your computer. If you run the following command (where docker-local is the name of the VM you’ve created), you’ll see where those files are stored (you’d normally wrap this command in eval "$(…)" to actually configure your shell):

> docker-machine env docker-local

export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="C:\\Users\username\.docker\\machine\\machines\\docker-local"
export DOCKER_HOST=tcp://192.168.99.108:2376

As you can see above, the files related to my “docker-local” VM are all placed under c:\Users\username\.docker\machine\machines\docker-local. Note that DOCKER_TLS_VERIFY is enabled (which is nice). Also note that the DOCKER_HOST (i.e., engine) IP is that of the VM (we’ll come back to this later on). Finally, the DOCKER_HOST port is 2376, which is Docker’s default.

Using docker-machine you can actually override just about any setting (including the location where the files are stored).
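For instance, at creation time you can tweak the VM’s resources through driver-specific flags, and you can relocate the generated files through the MACHINE_STORAGE_PATH environment variable. A quick sketch (the values are just examples):

# store the machine files somewhere other than the default ~/.docker/machine (set this before creating)
export MACHINE_STORAGE_PATH=/d/docker/machine
# create a VirtualBox-backed engine with 2GB of RAM and a 40GB disk
docker-machine create --driver virtualbox --virtualbox-memory 2048 --virtualbox-disk-size 40000 docker-local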

If you take a look at that location, you’ll see that docker-machine actually stores many interesting things in there:

  • a docker-local folder containing the VM metadata and log files
  • boot2docker.iso: the ISO used as basis for the VM (which you can update easily using docker-machine)
  • the CA, server and client certificates (ca.pem, cert.pem, server.pem, …)
  • config.json: more about this below
  • disk.vmdk: the VM’s disk (useful to take a backup of if you care; you shouldn’t :p)
  • the SSH key-pair that you can use to authenticate against the VM (id_rsa, id_rsa.pub)

As noted above, there’s also a ‘config.json’ file, which contains everything docker-machine needs to know about that Docker engine:

{
	"DriverName" : "virtualbox",
	"Driver" : {
		"CPU" : -1,
		"MachineName" : "docker-local",
		"SSHUser" : "docker",
		"SSHPort" : 51648,
		"Memory" : 1024,
		"DiskSize" : 20000,
		"Boot2DockerURL" : "",
		"CaCertPath" : "C:\\Users\\...\\.docker\\machine\\certs\\ca.pem",
		"PrivateKeyPath" : "C:\\Users\\...\\.docker\\machine\\certs\\ca-key.pem",
		"SwarmMaster" : false,
		"SwarmHost" : "tcp://0.0.0.0:3376",
		"SwarmDiscovery" : ""
	},
	"StorePath" : "C:\\Users\\...\\.docker\\machine\\machines\\docker-local",
	"HostOptions" : {
		"Driver" : "",
		"Memory" : 0,
		"Disk" : 0,
		"EngineOptions" : {
			"Dns" : null,
			"GraphDir" : "",
			"Ipv6" : false,
			"Labels" : null,
			"LogLevel" : "",
			"StorageDriver" : "",
			"SelinuxEnabled" : false,
			"TlsCaCert" : "",
			"TlsCert" : "",
			"TlsKey" : "",
			"TlsVerify" : false,
			"RegistryMirror" : null
		},
		"SwarmOptions" : {
			"IsSwarm" : false,
			"Address" : "",
			"Discovery" : "",
			"Master" : false,
			"Host" : "tcp://0.0.0.0:3376",
			"Strategy" : "",
			"Heartbeat" : 0,
			"Overcommit" : 0,
			"TlsCaCert" : "",
			"TlsCert" : "",
			"TlsKey" : "",
			"TlsVerify" : false
		},
		"AuthOptions" : {
			"StorePath" : "C:\\Users\\...\\.docker\\machine\\machines\\docker-local",
			"CaCertPath" : "C:\\Users\\...\\.docker\\machine\\certs\\ca.pem",
			"CaCertRemotePath" : "",
			"ServerCertPath" : "C:\\Users\\...\\.docker\\machine\\certs\\server.pem",
			"ServerKeyPath" : "C:\\Users\\...\\.docker\\machine\\certs\\server-key.pem",
			"ClientKeyPath" : "C:\\Users\\...\\.docker\\machine\\certs\\key.pem",
			"ServerCertRemotePath" : "",
			"ServerKeyRemotePath" : "",
			"PrivateKeyPath" : "C:\\Users\\...\\.docker\\machine\\certs\\ca-key.pem",
			"ClientCertPath" : "C:\\Users\\...\\.docker\\machine\\certs\\cert.pem"
		}
	},
	"SwarmHost" : "",
	"SwarmMaster" : false,
	"SwarmDiscovery" : "",
	"CaCertPath" : "",
	"PrivateKeyPath" : "",
	"ServerCertPath" : "",
	"ServerKeyPath" : "",
	"ClientCertPath" : "",
	"ClientKeyPath" : ""
}

One thing that I want to mention about that file, since I’m only drawing the picture of the current Windows integration of Docker, is the SSHPort. You can see that it’s ‘51648’. That port is the HOST port (i.e., the port I can use from Windows to connect to the SSH server of the Docker VM).

How does this work? Well unfortunately there’s no voodoo magic at work here.

The thing with Docker on Windows is that the Docker engine runs in a VM, which makes things a bit more complicated since the onion has one more layer: Windows > VM > Docker Engine > Containers. Accessing ports exposed to the outside world when running a container will not be as straightforward as it would be when running Docker natively on a Linux box.

When docker-machine provisions the VM, it creates two network interfaces on it: a first one in NAT mode to communicate with the outside world (i.e., the one we’re interested in) and a second one in host-only mode (which we won’t really care about here).

On the first interface, which I’ll further refer to as the “public” interface, docker-machine configures a single port redirection for SSH (port 51648 on the host towards port 22 on the guest). This port forwarding rule is what allows docker-machine and later the Docker client to interact with the Docker engine in the VM (I assume that the port is fixed though it might be selected randomly at creation time, I didn’t check this).

So all is nice and dandy, docker-machine provisions and configures many things for you and now that Microsoft has landed a Docker CLI for Windows, we can get up and running very quickly, interacting with the Docker engine in the VM through the Docker API, via SSH and using certificates for authentication. That’s a mouthful and it’s really NICE.. but.

Yeah indeed there’s always a but :(

Let’s say that you want to start a container hosting a simple Web server serving your pimped AngularJS+Polymer+CSS3+HTML5+whatever-cool-and-trendy-today application. Once started, you probably want to be able to access it in some way (let’s say using your browser or curl if you’re too cool).

Given our example, we can safely assume that the container will EXPOSE port 80 or the like to other containers (e.g., set in the Dockerfile). When you start that container, you’ll want to map that container port to a host port, let’s say.. 8080.

Okay curl http://localhost:8080 … 1..2..3, errr nothing :(

As you might have guessed by now, the annoying thing is that when you start a container in your Docker VM, the host that you’re mapping container ports to… is your VM.

I know it took a while for me to get there but hey, it might not be THAT obvious to everyone right? :)
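To make this concrete, here’s a minimal sketch using the stock nginx image as an example; the -p mapping binds port 8080 on the Docker host, which here means the VM, not Windows (reaching it through the VM’s IP is something we’ll come back to at the end of this post):

# run a throwaway web server, mapping container port 80 to port 8080 on the Docker host (i.e., the VM!)
docker run -d --name web-test -p 8080:80 nginx
# fails from Windows: nothing listens on localhost:8080 there
curl http://localhost:8080
# works: the port is bound on the VM, so target the VM's IP
curl http://$(docker-machine ip docker-local):8080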

I’ve mentioned earlier that docker-machine configures a port forwarding rule on the VM after creating it (for SSH, remember?). Can’t we do the same for other ports? Well the thing is that you totally can using VirtualBox’s CLI but it’ll make you understand that the current Windows integration of Docker is “nice” but clearly not all that great.

As stated, we’re going the BASH way. You can indeed achieve the same using your preferred language, whether it is PERL, Python, PowerShell or whatever.

So the first thing we’ll need to do is to make the VirtualBox CLI easily available in our little Bash world:

append_to_path /c/Program\ Files/Oracle/VirtualBox
alias virtualbox='VirtualBox.exe &'
alias vbox='virtualbox'
alias vboxmanage='VBoxManage.exe'
alias vboxmng='vboxmanage'

You’ll find the description of the append_to_path function in the previous post.

Next, we’ll add three interesting functions based on VirtualBox’s CLI; one to check whether the Docker VM is running or not and two other ones to easily add/remove a port redirection to our Docker VM:

is-docker-vm-running()
{
	echo "Checking if the local Docker VM ($DOCKER_LOCAL_VM_NAME) is running"
	vmStatusCheckResult=$(vboxmanage list runningvms)
	#echo $vmStatusCheckResult
	if [[ $vmStatusCheckResult == *"$DOCKER_LOCAL_VM_NAME"* ]]
	then
		echo "The local Docker VM is running!"
		return 0
	else
		echo "The local Docker VM is not running (or does not exist or runs using another account)"
		return 1
	fi
}


# redirect a port from the host to the local Docker VM
# call: docker-add-port-redirection rule_name host_port guest_port
docker-add-port-redirection()
{
	echo "Preparing to add a port redirection to the Docker VM"
	is-docker-vm-running
	if [ $? -eq 0 ]; then
		# vm is running: modifyvm can't touch a running (locked) VM, so use controlvm
		vboxmanage controlvm $DOCKER_LOCAL_VM_NAME natpf1 "$1,tcp,127.0.0.1,$2, ,$3"
	else
		# vm is not running
		vboxmanage modifyvm $DOCKER_LOCAL_VM_NAME --natpf1 "$1,tcp,127.0.0.1,$2, ,$3"
	fi
	echo "Port redirection added to the Docker VM"
}
alias dapr='docker-add-port-redirection'


# remove a port redirection by name
# call: docker-remove-port-redirection rule_name
docker-remove-port-redirection()
{
	echo "Preparing to remove a port redirection to the Docker VM"
	is-docker-vm-running
	if [ $? -eq 0 ]; then
		# vm is running: use controlvm on a running VM
		vboxmanage controlvm $DOCKER_LOCAL_VM_NAME natpf1 delete "$1"
	else
		# vm is not running
		vboxmanage modifyvm $DOCKER_LOCAL_VM_NAME --natpf1 delete "$1"
	fi
	echo "Port redirection removed from the Docker VM"
}
alias drpr='docker-remove-port-redirection'


docker-list-port-redirections()
{
    portRedirections=$(vboxmanage showvminfo $DOCKER_LOCAL_VM_NAME | grep -E 'NIC 1 Rule')
	for i in "${portRedirections[@]}"
	do
		printf "$i\n"
	done
}
alias dlrr='docker-list-port-redirections'
alias dlpr='docker-list-port-redirections'

Note that these functions will work whether the Docker VM is running or not. Since I’m an optimist, I don’t check whether the VM actually exists beforehand, nor whether the commands succeeded (i.e., use at your own risk). One caveat is that these functions will not work if you started the Docker VM manually through VirtualBox’s GUI (because it keeps a lock on the configuration). These functions handle tcp port redirections, but adapting the code for udp is a no-brainer.

The last function (docker-list-port-redirections) will allow you to quickly list the port redirections that you’ve already configured. You can do the same through VirtualBox’s UI but that’s only interesting if you like moving the mouse around and clicking on buttons; real ITers don’t do that no more (or do they? :p).

With these functions you can also easily create port redirections for port ranges using a simple loop:

for i in {49152..65534}; do
    dapr "rule$i" $i $i
done
Though I would recommend against that. You should rather add a few useful port redirections, such as for ports 80, 8080 and the like. These will only ‘bother’ you while the Docker VM is running and when you’re actually trying to use the redirected ports.
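For example, to add the two redirections I actually care about (plain HTTP and the classic 8080) using the functions defined above:

# forward host port 80 to VM port 80, and host port 8080 to VM port 8080
dapr http80 80 80
dapr http8080 8080 8080
# list what's currently configured
dlpr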

Another option would be to switch the “public” interface from NAT mode to bridge mode, though I’m not too fond of making my local Docker VM a ‘first’ class citizen of my LAN.

Okay, two more functions and I’m done for today :)

Port redirections are nice because they’ll allow you to expose your Docker containers to the outside world (i.e., not only to your own machine). There are situations, though, where you might not want that. In that case, it’s useful to just connect directly to the local Docker VM.

docker-get-local-vm-ip(){
	export DOCKER_LOCAL_VM_IP=$(docker-machine ip $DOCKER_LOCAL_VM_NAME)
	echo "Docker local VM ($DOCKER_LOCAL_VM_NAME) IP: $DOCKER_LOCAL_VM_IP"
}
alias dockerip='docker-get-local-vm-ip'
alias dip='docker-get-local-vm-ip'

docker-open(){
	docker-get-local-vm-ip
	( explorer "http://$DOCKER_LOCAL_VM_IP:$*" )&	
}
alias dop='docker-open'

The ‘docker-get-local-vm-ip’ function, or ‘dip’ for close friends, uses docker-machine to retrieve the IP it knows for the Docker VM. Its best friend, ‘docker-open’ or ‘dop’, will simply open a browser window (your default one) towards that IP using the port specified as argument; for example ‘docker-open 8080’ will quickly get you to your local Docker VM on port 8080.

With these functions, we can also improve the ‘docker-config-client’ function from my previous post to handle the case where the VM isn’t running:

docker-config-client()
{
	echo "Configuring the Docker client to point towards the local Docker VM ($DOCKER_LOCAL_VM_NAME)..."
	is-docker-vm-running
	if [ $? -eq 0 ]; then
		eval "$(docker-machine env $DOCKER_LOCAL_VM_NAME)"
		if [ $? -eq 0 ]; then
			docker-get-local-vm-ip
			echo "Docker client configured successfully! (IP: $DOCKER_LOCAL_VM_IP)"
		else
			echo "Failed to configure the Docker client!"
			return;
		fi
	else
		echo "The Docker client can't be configured because the local Docker VM isn't running. Please run 'docker-start' first."
	fi
}
alias dockerconfig='docker-config-client'
alias configdocker='docker-config-client'

Well that’s it for today. Hope this helps ;-)