Bon pied bon oeil

Friday, May 1st, 2015

[Photo: 2015-04-14 - Bernard.jpg]

Reveal.js me something

Sunday, April 26th, 2015

tl;dr: I’ve created a project for creating Reveal.js presentations quickly using Markdown alone

About

I’ve been wanting to play around with Reveal.js for quite some time but never quite took the time necessary to read the doc.

Yesterday I did, and realized that the only serious editor for Reveal.js is http://slides.com/, which is only free for public decks (which is nice BTW); well, I’d also like to create my own slide decks without paying just to be able to do so.

Given that Reveal.js is free and open source (MIT license), you can also clone their git repository and create your decks by hand. I like HTML but found Reveal.js’s syntax a bit too verbose. Luckily, there’s also a way to use Markdown to define the contents of a slide (the Markdown is converted at runtime by a JS library provided with Reveal.js).

I looked for a way to create Reveal.js presentations quickly based on Markdown alone but couldn’t find one that pleased me.. so I’ve created my very own.

dSebastien’s reveal.js presentations template

presentations-revealjs is a simple-to-use template for creating Reveal.js presentations using Markdown alone; it comes along with a useful build script.

Using it you can:

  • Create your slide deck using Markdown alone
  • Edit your metadata in a single configuration file
  • Tweak Reveal.js as you wish in the provided template
  • Use a few NPM commands to build your presentation and serve it to the world
  • See the results live (thanks to BrowserSync)

Check out the project page for more details as well as usage guidelines =)
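For the curious, a typical workflow probably looks like the sketch below; note that the repository URL and npm script names here are my assumptions, so check the project page for the actual commands.

# Hypothetical quick start -- verify the URL and script names on the project page
git clone https://github.com/dsebastien/presentations-revealjs my-presentation
cd my-presentation
npm install   # grab Reveal.js and the build dependencies
npm start     # build the deck, serve it locally and watch for changes (BrowserSync)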

A bit more Windows Docker bash-fu

Wednesday, April 22nd, 2015

Feeling bashy enough yet? :)

In my last post, I gave you a few useful functions for making your life with Docker easier on Windows. In this post I’ll give you some more, but before that, let’s look a bit at what docker-machine does for us.

When you invoke docker-machine to provision a Docker engine using VirtualBox, it “simply” creates a new VM… Okay, though pretty basic, this explanation is valid ^^.

What? Not enough for you? Okay okay, let’s dive a bit deeper =)

Besides the VM, behind the scenes, docker-machine generates multiple things for us:

  • a set of self-signed certificates: used to create a server certificate for the Docker engine in the VM and a client certificate for the Docker client (also used by docker-machine to interact with the engine in the VM)
  • an SSH key-pair (based on RSA): authorized by the SSH daemon and used to authenticate against the VM

Docker-machine uses those to configure the SSH daemon as well as the Docker engine in the VM, and stores these locally on your computer. If you run the following command (where docker-local is the name of the VM you’ve created), you’ll see where those files are stored:

command: docker-machine env docker-local

export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="C:\\Users\\username\\.docker\\machine\\machines\\docker-local"
export DOCKER_HOST=tcp://192.168.99.108:2376

As you can see above, the files related to my “docker-local” VM are all placed under c:\Users\username\.docker\machine\machines\docker-local. Note that DOCKER_TLS_VERIFY is enabled (which is nice). Also note that the DOCKER_HOST (i.e., engine) IP is the one of the VM (we’ll come back to this later on). Finally, the DOCKER_HOST port is 2376, which is Docker’s default TLS port.

Using docker-machine you can actually override just about any setting (including the location where the files are stored).
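For example, here’s a sketch of creating a VM with a few overridden defaults; the VirtualBox driver flags below are the documented ones, but double-check them against your docker-machine version (MACHINE_STORAGE_PATH is how the storage location can be relocated):

# Hypothetical creation with overridden defaults
export MACHINE_STORAGE_PATH=/d/docker-machine-store
docker-machine create --driver virtualbox \
    --virtualbox-memory 2048 \
    --virtualbox-disk-size 40000 \
    docker-local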

If you take a look at that location, you’ll see that docker-machine actually stores many interesting things in there:

  • a docker-local folder containing the VM metadata and log files
  • boot2docker.iso: the ISO used as basis for the VM (which you can update easily using docker-machine)
  • the CA, server and client certificates (ca.pem, cert.pem, server.pem, …)
  • config.json: more about this below
  • disk.vmdk: the VM’s disk (useful to back up if you care; you shouldn’t :p)
  • the SSH key-pair that you can use to authenticate against the VM (id_rsa, id_rsa.pub)

As noted above, there’s also a ‘config.json’ file, which contains everything docker-machine needs to know about that Docker engine:

{
	"DriverName" : "virtualbox",
	"Driver" : {
		"CPU" : -1,
		"MachineName" : "docker-local",
		"SSHUser" : "docker",
		"SSHPort" : 51648,
		"Memory" : 1024,
		"DiskSize" : 20000,
		"Boot2DockerURL" : "",
		"CaCertPath" : "C:\\Users\\...\\.docker\\machine\\certs\\ca.pem",
		"PrivateKeyPath" : "C:\\Users\\...\\.docker\\machine\\certs\\ca-key.pem",
		"SwarmMaster" : false,
		"SwarmHost" : "tcp://0.0.0.0:3376",
		"SwarmDiscovery" : ""
	},
	"StorePath" : "C:\\Users\\...\\.docker\\machine\\machines\\docker-local",
	"HostOptions" : {
		"Driver" : "",
		"Memory" : 0,
		"Disk" : 0,
		"EngineOptions" : {
			"Dns" : null,
			"GraphDir" : "",
			"Ipv6" : false,
			"Labels" : null,
			"LogLevel" : "",
			"StorageDriver" : "",
			"SelinuxEnabled" : false,
			"TlsCaCert" : "",
			"TlsCert" : "",
			"TlsKey" : "",
			"TlsVerify" : false,
			"RegistryMirror" : null
		},
		"SwarmOptions" : {
			"IsSwarm" : false,
			"Address" : "",
			"Discovery" : "",
			"Master" : false,
			"Host" : "tcp://0.0.0.0:3376",
			"Strategy" : "",
			"Heartbeat" : 0,
			"Overcommit" : 0,
			"TlsCaCert" : "",
			"TlsCert" : "",
			"TlsKey" : "",
			"TlsVerify" : false
		},
		"AuthOptions" : {
			"StorePath" : "C:\\Users\\...\\.docker\\machine\\machines\\docker-local",
			"CaCertPath" : "C:\\Users\\...\\.docker\\machine\\certs\\ca.pem",
			"CaCertRemotePath" : "",
			"ServerCertPath" : "C:\\Users\\...\\.docker\\machine\\certs\\server.pem",
			"ServerKeyPath" : "C:\\Users\\...\\.docker\\machine\\certs\\server-key.pem",
			"ClientKeyPath" : "C:\\Users\\...\\.docker\\machine\\certs\\key.pem",
			"ServerCertRemotePath" : "",
			"ServerKeyRemotePath" : "",
			"PrivateKeyPath" : "C:\\Users\\...\\.docker\\machine\\certs\\ca-key.pem",
			"ClientCertPath" : "C:\\Users\\...\\.docker\\machine\\certs\\cert.pem"
		}
	},
	"SwarmHost" : "",
	"SwarmMaster" : false,
	"SwarmDiscovery" : "",
	"CaCertPath" : "",
	"PrivateKeyPath" : "",
	"ServerCertPath" : "",
	"ServerKeyPath" : "",
	"ClientCertPath" : "",
	"ClientKeyPath" : ""
}

One thing that I want to mention about that file, since I’m only drawing the picture of the current Windows integration of Docker, is the SSHPort. You can see that it’s ‘51648’. That port is the HOST port (i.e., the port I can use from Windows to connect to the SSH server of the Docker VM).
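To make that concrete, here’s a sketch of two equivalent ways to get an SSH session into the VM (the port and paths are the ones from my config.json above, so yours will differ):

# via docker-machine, which reads config.json for you
docker-machine ssh docker-local

# or directly, using the generated key and the forwarded host port
ssh -i /c/Users/username/.docker/machine/machines/docker-local/id_rsa -p 51648 docker@127.0.0.1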

How does this work? Well, unfortunately there’s no voodoo magic at work here.

The thing with Docker on Windows is that the Docker engine runs in a VM, which makes things a bit more complicated since the onion has one more layer: Windows > VM > Docker Engine > Containers. Accessing ports exposed to the outside world when running a container will not be as straightforward as it would be when running Docker natively on a Linux box.

When docker-machine provisions the VM, it creates two network interfaces on it: a first one in NAT mode to communicate with the outside world (i.e., that’s the one we’re interested in) and a second one in host-only mode (which we won’t really care about here).

On the first interface, which I’ll further refer to as the “public” interface, docker-machine configures a single port redirection for SSH (port 51648 on the host towards port 22 on the guest). This port forwarding rule is what allows docker-machine and later the Docker client to interact with the Docker engine in the VM (I assume that the port is fixed, though it might be selected randomly at creation time; I didn’t check this).
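You can check the rule that docker-machine created by listing the NAT rules of the first interface; this assumes VBoxManage.exe is reachable from your shell (we’ll add helpers for that just below):

# list the NAT port-forwarding rules of the VM's first interface
VBoxManage.exe showvminfo docker-local | grep 'NIC 1 Rule'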

So all is nice and dandy: docker-machine provisions and configures many things for you, and now that Microsoft has landed a Docker CLI for Windows, we can get up and running very quickly, interacting with the Docker engine in the VM through the Docker API over TLS, using the certificates for authentication (while docker-machine manages the VM over SSH). That’s a mouthful and it’s really NICE.. but.

Yeah indeed there’s always a but :(

Let’s say that you want to start a container hosting a simple Web server serving your pimped AngularJS+Polymer+CSS3+HTML5+whatever-cool-and-trendy-today application. Once started, you probably want to be able to access it in some way (let’s say using your browser, or curl if you’re too cool).

Given our example, we can safely assume that the container will EXPOSE port 80 or the like to other containers (e.g., set in the Dockerfile). When you start that container, you’ll want to map that container port to a host port, let’s say.. 8080.
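A minimal attempt might look like this (nginx is just a stand-in image for the example):

# run a web server container, mapping container port 80 to host port 8080
docker run -d -p 8080:80 --name my-web-server nginx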

Okay curl http://localhost:8080 … 1..2..3, errr nothing :(

As you might have guessed by now, the annoying thing is that when you start a container in your Docker VM, the host that you’re mapping container ports to… is your VM.

I know it took a while for me to get there but hey, it might not be THAT obvious to everyone, right? :)

I’ve mentioned earlier that docker-machine configures a port forwarding rule on the VM after creating it (for SSH, remember?). Can’t we do the same for other ports? Well, you totally can using VirtualBox’s CLI, but it’ll make you realize that the current Windows integration of Docker is “nice” but clearly not all that great.

As stated, we’re going the Bash way. You can indeed achieve the same using your preferred language, whether it is Perl, Python, PowerShell or whatever.

So the first thing we’ll need to do is to make the VirtualBox CLI easily available in our little Bash world:

append_to_path /c/Program\ Files/Oracle/VirtualBox
alias virtualbox='VirtualBox.exe &'
alias vbox='virtualbox'
alias vboxmanage='VBoxManage.exe'
alias vboxmng='vboxmanage'

You’ll find the description of the append_to_path function in the previous post.

Next, we’ll add three interesting functions based on VirtualBox’s CLI: one to check whether the Docker VM is running or not, and two others to easily add/remove a port redirection to our Docker VM:

is-docker-vm-running()
{
	echo "Checking if the local Docker VM ($DOCKER_LOCAL_VM_NAME) is running"
	vmStatusCheckResult=$(vboxmanage list runningvms)
	#echo $vmStatusCheckResult
	if [[ $vmStatusCheckResult == *"$DOCKER_LOCAL_VM_NAME"* ]]
	then
		echo "The local Docker VM is running!"
		return 0
	else
		echo "The local Docker VM is not running (or does not exist or runs using another account)"
		return 1
	fi
}


# redirect a port from the host to the local Docker VM
# call: docker-add-port-redirection <rule name> <host port> <guest port>
docker-add-port-redirection()
{
	echo "Preparing to add a port redirection to the Docker VM"
	is-docker-vm-running
	if [ $? -eq 0 ]; then
		# vm is running: controlvm applies the rule to the live VM
		vboxmanage controlvm $DOCKER_LOCAL_VM_NAME natpf1 "$1,tcp,127.0.0.1,$2,,$3"
	else
		# vm is not running: modifyvm edits the stored configuration
		vboxmanage modifyvm $DOCKER_LOCAL_VM_NAME --natpf1 "$1,tcp,127.0.0.1,$2,,$3"
	fi
	echo "Port redirection added to the Docker VM"
}
alias dapr='docker-add-port-redirection'


# remove a port redirection by name
# call: docker-remove-port-redirection <rule name>
docker-remove-port-redirection()
{
	echo "Preparing to remove a port redirection from the Docker VM"
	is-docker-vm-running
	if [ $? -eq 0 ]; then
		# vm is running: controlvm deletes the rule on the live VM
		vboxmanage controlvm $DOCKER_LOCAL_VM_NAME natpf1 delete "$1"
	else
		# vm is not running: modifyvm edits the stored configuration
		vboxmanage modifyvm $DOCKER_LOCAL_VM_NAME --natpf1 delete "$1"
	fi
	echo "Port redirection removed from the Docker VM"
}
alias drpr='docker-remove-port-redirection'


docker-list-port-redirections()
{
	# showvminfo prints one 'NIC 1 Rule' line per forwarding rule
	vboxmanage showvminfo $DOCKER_LOCAL_VM_NAME | grep -E 'NIC 1 Rule' | while IFS= read -r rule
	do
		printf '%s\n' "$rule"
	done
}
alias dlrr='docker-list-port-redirections'

Note that these functions will work whether the Docker VM is running or not. Since I’m an optimist, I don’t check beforehand whether the VM actually exists, nor whether the commands succeeded (i.e., use at your own risk). One caveat is that these functions will not work if you started the Docker VM manually through VirtualBox’s GUI (because it keeps a lock on the configuration). These functions handle TCP port redirections, but adapting the code for UDP is a no-brainer.

The last function (docker-list-port-redirections) will allow you to quickly list the port redirections that you’ve already configured. You can do the same through VirtualBox’s UI, but that’s only interesting if you like moving the mouse around and clicking on buttons; real ITers don’t do that anymore (or do they? :p).

With these functions you can also easily create port redirections for port ranges using a simple loop:

for i in {49152..65534}; do
    dapr "rule$i" $i $i
done

Though I would recommend against that; you should rather add a few useful port redirections, such as for ports 8080, 80 and the like. These only get in the way while the Docker VM is running and you’re trying to use the redirected ports.

Another option would be to switch the “public” interface from NAT mode to bridged mode, though I’m not too fond of making my local Docker VM a first-class citizen of my LAN.

Okay, two more functions and I’m done for today :)

Port redirections are nice because they’ll allow you to expose your Docker containers to the outside world (i.e., not only your machine). However, there are situations where you might not want that. In those cases, it’s useful to just connect directly to the local Docker VM.

docker-get-local-vm-ip(){
	export DOCKER_LOCAL_VM_IP=$(docker-machine ip $DOCKER_LOCAL_VM_NAME)
	echo "Docker local VM ($DOCKER_LOCAL_VM_NAME) IP: $DOCKER_LOCAL_VM_IP"
}
alias dockerip='docker-get-local-vm-ip'
alias dip='docker-get-local-vm-ip'

docker-open(){
	docker-get-local-vm-ip
	( explorer "http://$DOCKER_LOCAL_VM_IP:$*" )&	
}
alias dop='docker-open'

The ‘docker-get-local-vm-ip’ function, or ‘dip’ for close friends, uses docker-machine to retrieve the IP it knows for the Docker VM. Its best friend, ‘docker-open’ or ‘dop’, will simply open a browser window (your default one) towards that IP using the port specified as argument; for example, ‘docker-open 8080’ will quickly get you to your local Docker VM on port 8080.

With these functions, we can also improve the ‘docker-config-client’ function from my previous post to handle the case where the VM isn’t running:

docker-config-client()
{
	echo "Configuring the Docker client to point towards the local Docker VM ($DOCKER_LOCAL_VM_NAME)..."
	is-docker-vm-running
	if [ $? -eq 0 ]; then
		eval "$(docker-machine env $DOCKER_LOCAL_VM_NAME)"
		if [ $? -eq 0 ]; then
			docker-get-local-vm-ip
			echo "Docker client configured successfully! (IP: $DOCKER_LOCAL_VM_IP)"
		else
			echo "Failed to configure the Docker client!"
			return;
		fi
	else
		echo "The Docker client can't be configured because the local Docker VM isn't running. Please run 'docker-start' first."
	fi
}
alias dockerconfig='docker-config-client'
alias configdocker='docker-config-client'

Well that’s it for today. Hope this helps ;-)

Web 3.0 is closer than you might think

Wednesday, April 22nd, 2015

Evolution of the Web towards Web 3.0

The real modern Web — I guess we’ll call that ‘Web 3.0’ — is really getting closer and closer, and guess what? It’s also up to you to make it real.

Since my last Web development project (2012), many things have changed and evolved on the Web; its evolution pace is simply astonishing. HTML5, CSS3 and many WHATWG specifications are now pretty well supported by major browsers. You could say that we currently live in the Web 2.5 era.

The real game changer now is that Microsoft has finally decided to retire IE and introduce Spartan, a legacy-free modern Web browser.

When you take a look at what Microsoft has announced for Spartan and their roadmap, you can only feel GOOD if you’re a Web enthusiast.

Nearly gone are the days where you had to work around IE’s quirks, battle against box model issues, resort to IE-specific conditional tags, use X-UA-Compatible and all that crap just to get a WORKING application/design across browsers. Nearly gone are the days where you had to manually add 36000 browser-specific prefixes to your CSS stylesheets just to get a gradient and whatnot, …

We’re getting closer to a state of the Web browser landscape where we’ll finally be able to think less about browser compatibility issues and concentrate our efforts on actually creating useful and/or beautiful things that simply work everywhere. OK, I’m getting ahead of myself, but still, there’s more hope today than there was back in 2012 ;-)

Just take a look at this list. Of course there could be more green stuff but it’s already pretty damn cool.

Who needs native applications?

Some people already think about the next step, the ‘Ambient Web Era’, an era where the Web will actually be everywhere. You could say that we’re already there, but I don’t agree. I think that we’ll only reach that state after the real boom of the Internet of Things, when the Web will really be on par with native applications, when I’ll easily be able to create a Web interface for managing my heating system using nothing but Web technologies (i.e., without all the current hoops that we need to get through to reach that point).

But before we reach that level, we should observe progressive evolutions. Over time, Web applications will be more and more on par with native applications, with means to ‘install’ them properly (e.g., using things such as manifest.webapp, manifest.json and the like), and native capabilities will end up exposed through JavaScript APIs.

Adapting to the mobile world

I hear and read more and more about such ideas as mobile first, responsive web design, client-side UIs, offline first, … The modern Web standards and browser vendors try to cater for all those things that we’ve missed for so long: means to actually create engaging user experiences across devices. For example, with standards such as IndexedDB, the File API and Local Storage, we’ll be able to save/load/cache data on the client side to allow our applications to work offline. WebGL and soon WebGL 2.0 allow us to take advantage of all the graphics chip horsepower, while the canvas element and associated API allow us to draw in 2D. WebRTC enables real-time audio/video communications, then there are also WebSockets, etc. These are but a few out of many specs that we can actually leverage TODAY across modern Web browsers!

As far as I’m concerned, mobile first is already a reality; it’s just a question of time for awareness and maturity to rise among the Web developer crowd. CSS3 media queries and tons of CSS frameworks make it much easier to adapt our Web UIs to different device sizes, and responsive Web design principles are now pretty clearly laid out.

But for me, mobile first is not enough; we need to care about and build applications for people who live in the mobile world but don’t necessarily have fast/consistent connectivity (or simply choose to stay offline in some circumstances). We need to consider offline first as well. For example, although we’re in 2015, I’m still disconnected every 5 minutes or so while I’m on the train for my daily commute (though I live in Western Europe).

We must ensure that our Web applications handle disconnections gracefully. One way to do this is, for example, to cache data once we’ve loaded it, or to batch load stuff in advance (e.g., blog posts from the last two months). The term offline first is pretty well chosen because, just like security, it’s difficult to add as an afterthought. When your application tries to interact with the server side (and those interactions should be well thought-out/limited/optimized), it needs to check the connectivity state first, maybe evaluate the connection speed, and adapt to the current network conditions. For example, you might choose to load a smaller/lighter version of some resource if the connection is slow.

Offline first ideas have been floating around for a while, but I think that browser vendors can help us much more than they currently do. The offline first approach is still very immature and it’ll take time for the Web development community to discover and describe best practices as well as relevant UX principles and design patterns. Keep an eye on https://github.com/offlinefirst.

Application Cache can help in some regards, but there are also many pitfalls to be aware of; I won’t delve into that, there’s already a great article about it over at A List Apart.

Service Workers will help us A LOT (background sync, push notifications, …). Unfortunately, even though there are some polyfills available, I’m not sure that they are production ready just yet.

There are also online/offline events, which are already better supported and can help you detect the current browser connectivity status.

The device’s battery status might also need to be considered, but the overall browser support for the Battery Status API isn’t all that great for now.

In my new project, I’ll certainly try and create an offline-first experience. I think that I’ll mostly rely on LocalStorage, but I’d also like to integrate visual indications/user control regarding what is online/offline.

Client-side UIs — Why not earlier?

One thing I’ve mentioned at the beginning of the post is the fact that client-side UIs are gaining more and more momentum, and rightfully so. I believe that UIs will progressively shift towards the client side for multiple reasons, but first let’s review a bit of IT history :)

Historically, Web application architectures have been focusing on the server side of things, considering the client side as nothing more than a dumb renderer. There were many important and obvious reasons for this.

Yesterday we were living in a world where JavaScript was considered a toy scripting language only useful for performing basic stuff such as displaying alert boxes, scrolling text in the status bar, etc. Next to that, computers weren’t nearly as powerful as they are today. Yesterday we were living in a world where tablets were made of stone and appeared only much later in Star Trek, which was still nothing more than sucky science fiction (sorry guys, I’m no trekkie :p).

Back in 200x (ew, that feels so close yet so distant), browser rendering engines and JavaScript engines were not nearly as fast as they are today. The rise of Web 2.0, XHR, JS libs and the death of Flash have pushed browser vendors to tackle JS performance issues, and they’ve done a terrific job.

Client-side UIs — Why now?

Today we are living in a world where mobile devices are everywhere. You don’t want to depend on the server for every interaction between the user and your application. What you want is for the application to run on the user’s device as independently as possible and only interact with the server if and when it’s really needed. Also, when interaction is needed, you only want useful data to be exchanged because mobile data plans cost $$$.

So why manage the UI and its state on the server? Well, in the past we had very valid reasons to do so. But not today, not anymore. Mobile devices of today are much more powerful than desktop computers of the past. We are living in a world where the JavaScript interpreters embedded in our browsers are lightning fast. ECMAScript has also evolved a lot over time and still is (look at all the cool ES6 stuff coming in fast).

Moreover, JavaScript has not only evolved as a language and from a performance standpoint. With Web 2.0, JavaScript has become increasingly used to enhance the UI and user experience of Web applications in general. Today, JavaScript is considered very differently by software developers compared to 10 years ago. Unfortunately, some people still think that JavaScript is Java on the Web, but hey, leave those guys aside and stick with me instead :)

Today we have package and dependency management tools for front-end code (e.g., NPM, Browserify, Webpack, …). We also have easy means to maintain a build for front-end code (e.g., Gulp, Grunt, etc.). We have JavaScript code quality checking tools (e.g., JSHint, JSLint, Sonar integration, …). We have test frameworks and test runners for JavaScript code (e.g., Mocha, Karma, Protractor, Testacular, …), etc.

We have a gazillion JS libraries, JS frameworks, better developer tools included in modern browsers; we have better IDE support and even dedicated IDEs (e.g., JetBrains’ WebStorm). And it all just keeps getting better and better.

In short, we have everything any professional developer needs to be able to consider developing full-blown applications using JavaScript. Again, over time, standards will evolve and the overall Web SDK (let’s call it that, ok?) will keep expanding and extending the Web’s capabilities.

Today we can even develop JavaScript applications on the server side thanks to NodeJS and its forks :) Some people don’t understand yet why that’s useful, but once they start to see more and more code shift towards the client side, they’ll probably see the light.

Okay, where was I headed? Ah, I remember: server-side vs client-side. I think that given the above, we can probably agree that the client-side development world is much more mature today than it was at the beginning of Web 2.0, and that client-side UIs make a hell of a lot more sense in today’s world.

My view of a modern Web application architecture is as follows; it might indeed not be applicable to all use cases, but in many cases it certainly is:

  • Client-side UI with HTML, CSS, JS, JS libs and a JS framework to keep the code base manageable/maintainable
  • Server side responsible for exposing RESTful Web Services, adapting the data representations to the specific clients
  • Server side responsible for enforcing the business rules and interacting with the rest of the infrastructure pieces

The benefits of this approach are multiple. Since the UI is fully managed on the client side:

  • only the necessary data needs to be exchanged between the client and the server (i.e., JSON vs full HTML pages)
  • the server side can (and should) become stateless
  • it can more easily adapt to the needs of the mobile-first & offline-first approaches
  • the UI can be much more responsive and thus more ‘native-like’
  • since the UI is completely separated from the back-end, it can more easily be replaced

If you care for the Web even just a bit then you know this is the model we need to go towards.

Of course, shifting the UI along with all the logic behind it to the client side clearly means that JavaScript code bases will become much larger and thus much more complex, which poses the question of code complexity and maintainability. As I’ve said, ECMAScript continues to evolve, and ES6 introduces many things that are lacking today, such as modularization, collections, more functional programming constructs and even proxies.

You could say that ES6 hasn’t landed yet, but what prevents you from using it already?

One thing that you cannot ignore is the required technology knowledge. There’s a huge difference between adding JS validation code to a form and developing a whole UI based on Web technologies and a JavaScript framework. With the former you can survive with a minimal knowledge of the language, while with the latter, you’d better have a deeper understanding.

If you work in a Java or .NET shop and don’t have actual Web developers at your disposal, then you might not be able to follow that path easily. It all depends on your organization’s culture and your people’s willingness/capability to learn new things and adapt.

I often like to compare the Web technology stack to the *NIX world: as an IT professional, what do you prefer? Learning stuff that’ll remain useful and beneficial to you for many years to come, or learning stuff that you know will only be true/valid for a 3–4 year period? Or, worse yet, ignoring market trends and market evolution? At some point you’ll have to adapt anyway, and that’ll cost you.

Technology will always evolve, but some technologies have much less interest for your professional career. If someone fresh out of school asked me today what to learn and what to focus on, I certainly would recommend learning as much as possible about the Web.

Here I’m not saying that the server side is doomed, far from it. You just can’t expose everything directly to the client side. You need to place security boundaries somewhere. If you’re comfortable writing server-side code in Java, C#, PHP or whatever, then continue to do so; nothing forces you to switch to Go, NodeJS or anything else on that side of the fence. What I’m saying is that if you create a Web application, then you need to consider shifting the UI towards the client, and if you’re serious about it, you must create a clear separation between both sides; to each its own responsibilities.

That being said, I think that JS on the server side also makes sense; we just need to give that a few more years to become mature. But picture this: you have a server-side application that interacts with your infrastructure, manipulates a JavaScript domain model and exposes Web services that accept/deliver JS objects. Next to that, you have a client-side application that uses the same JavaScript domain model and that interacts with those Web services.

What you gain there is the ability to share code between your server and client, whether that be the domain model, validation rules or whatever else. It makes a lot of sense from a design perspective, but the difficulty in taking advantage of that is two-fold: first, it needs maturity (it’s not for everyone just yet), and second, you need your organization/people to adapt. Java developers are not necessarily good JavaScript developers (since yes, those two simply are very different beasts ;-)).

Another technology that I think will help us transition UIs towards the client side is Web components. The more I read about these, the more I think that they’re going to represent the next big thing ™ in Web development. It’s unfortunate that we currently still need to resort to polyfills to be able to use these, but hopefully that won’t last too long. Browser vendors should finally agree upon a set of primitives/APIs to support that we can use (e.g., will HTML imports prevail?). I can’t expand much more on this because I haven’t played much with them yet, but they’re on my radar ^^.

Conclusion & my current plans

This post represents my current vision of the Web as it stands, its future, and why I believe that it’s time to seriously consider shifting the UI towards the client side (let’s hope that time will prove me right). This was a pretty long one, but I think that it sometimes makes for a good exercise to try and articulate a vision.

As I’ve mentioned in a previous post, I’ve started a new personal project to replace this now dated website with a more modern version.

Now that I’ve shared my vision for the future of the Web, I can expand a bit more on my plans for this new project. Rather than re-creating a full-blown WordPress theme, I intend to use WordPress as a simple CMS without a front-end of its own: I’ll keep using the administration space to manage the content, but I’ll extract the data (posts, comments, pages, tags, etc.) through the WP REST API (which will soon become part of WordPress’s core).
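In practice, fetching content then boils down to plain HTTP calls; here’s a sketch (the host is a placeholder and the route prefix depends on the WP REST API version in use, so treat the details as assumptions):

# hypothetical: retrieve the latest posts as JSON from the WP REST API
curl -s "https://example.org/wp-json/wp/v2/posts?per_page=5"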

My goal is to create a modern, responsive, mobile-first, offline-first and (hopefully) good-looking web front-end. Why? Because we can, today! :)

Also, I want to leverage multiple data sources.

On the technical side of things, I plan to use HTML5/CSS3 (haha), AngularJS and/or Meteor and/or Polymer (I haven’t chosen yet).

For the CSS part, I intend to use SASS, and I think that I’ll use Pure.CSS or Foundation. I might also take a peek at Foundation for Apps. Finally, I’ll try and play with ES6 (through 6to5).

For now I’ve only created a build based on Google’s Web Starter Kit to make my life a bit easier, using NPM and Gulp. I’ve also added basic Docker support (though there’s much room for improvement there).

Well, that’s it for today; please don’t hesitate to share your thoughts! ;-)

A bit of Windows Docker bash-fu

Monday, April 20th, 2015

In my last post I mentioned that Microsoft has helped Docker deliver a native Docker client for Windows (yay!).

I’ve also promised to share the little bits that I’ve added to my Windows bash profile to make my life easier. As I’ve said, I’m a huge fan of MSYS and msysGit, and I use my Git Bash shell all day long, so here comes a bit of Windows bash-fu.

For those wondering, I prefer Linux and I would use it as my main OS (I did so in the past) if I didn’t also like gaming. I can’t stand fiddling around with config files to get my games running (hey Wine), and I can’t stand losing n FPS just to stay on the free side. Finally, I am not too fond of putting multiple OSes on my main machine just for the sake of being able to play. The least painful solution for me is simply to use Windows and remain almost sane by using Bash.

One thing to note is that my bash profile as well as all the tools that I use are synchronized between my computers in order to allow me to have a consistent environment; I’m done raging because I’m on the train and some tool I’ve installed on my desktop isn’t available on my laptop.. I’ll describe that setup.. another day :)

So first things first: I’ve installed Docker v1.6.0 on my machine without adding it to the path or creating any shortcuts (since I’m not going to use that install at all); you can get it from https://github.com/boot2docker/windows-installer/releases/latest.

Once installed, I’ve copied the Docker client (docker.exe) to the folder I use to store my shared tools (in this case c:\CloudStation\programs\dev\docker). I have docker-machine in the same folder (downloaded from here).

append_to_path(){ # dumb append to path
    PATH=$1":"$PATH
}
...
# Docker
export DOCKER_HOME=$DEV_SOFT_HOME/Docker
append_to_path $DOCKER_HOME

alias docker='docker.exe'

alias docker-machine='docker-machine.exe'
alias dockermachine='docker-machine'
alias dm='docker-machine'

export DOCKER_LOCAL_VM_NAME='docker-local'

In the snippet above I simply ensure that the Docker client is on my path and that I can invoke it simply using ‘docker’. Same for docker-machine, along with a nice shortcut ‘dm’.

Note that I also set a name for the local Docker VM that I want to manage; you’ll see below why that’s useful.

docker-config-client()
{
	echo "Configuring the Docker client to point towards the local Docker VM ($DOCKER_LOCAL_VM_NAME)..."
	eval "$(docker-machine env $DOCKER_LOCAL_VM_NAME)"
	if [ $? -eq 0 ]; then
		echo "Docker client configured successfully!"
	else
		echo "Failed to configure the Docker client!"
		return;
	fi
}
alias dockerconfig='docker-config-client'
alias configdocker='docker-config-client'

The ‘docker-config-client’ function allows me to easily configure my Docker client to point towards my local Docker VM. I’ve added some aliases because I’ve got a pretty bad memory :)

This function assumes that the local Docker VM already exists and is up and running. This is not always the case, hence the additional functions below.

docker-check-local-vm() # check docker-machine status and clean up if necessary
{
	echo "Verifying the status of the local Docker VM ($DOCKER_LOCAL_VM_NAME)"
	dmCheckResult=$(docker-machine ls)
	#echo $dmCheckResult
	if [[ $dmCheckResult == *"error getting state for host $DOCKER_LOCAL_VM_NAME: machine does not exist"* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) is known by docker-machine but does not exist anymore."
		echo "Cleaning docker-machine."
		dmCleanupResult=$(docker-machine rm $DOCKER_LOCAL_VM_NAME)
		
		if [[ $dmCleanupResult == *"successfully removed"* ]]
		then
			echo "docker-machine cleanup successful! Run 'docker-init' to create the local Docker VM."
		fi
		return
	fi
	echo "No problem with the local Docker VM ($DOCKER_LOCAL_VM_NAME) and docker-machine. If the machine does not exist yet you can create it using 'docker-init'"
}
alias dockercheck='docker-check-local-vm'
alias checkdocker='docker-check-local-vm'

The ‘docker-check-local-vm’ function simply lists the Docker engines known by docker-machine in order to see if there’s a problem with the local Docker VM. Such a problem can occur when docker-machine knows about a given Docker engine and you delete it (e.g., if you remove the VirtualBox VM and then invoke ‘docker-machine ls’, you’ll get the error).

docker-start()
{
	echo "Trying to start the local Docker VM ($DOCKER_LOCAL_VM_NAME)"
	dmStartResult=$(docker-machine start $DOCKER_LOCAL_VM_NAME)
	#echo $dmStartResult
	if [[ $dmStartResult == *"machine does not exist"* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) does not seem to exist."
		docker-check-local-vm
		return
	fi
	
	if [[ $dmStartResult == *"VM not in restartable state"* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) is probably already running."
		docker-config-client
		return
	fi
	
	if [[ $dmStartResult == *"Waiting for VM to start..."* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) was successfully started!"
		docker-config-client
		return
	fi
	
	if [[ $dmStartResult == *"Host does not exist"* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) does not exist. Run 'docker-init' first!"
		return
	fi
}
alias dockerstart='docker-start'
alias startdocker='docker-start'

The ‘docker-start’ function above tries to start my local Docker VM. It first assumes that the machine does exist (because I’m an optimist after all).

Since the docker-machine executable doesn’t return useful values, I have to resort to string matching; I know that this sucks, but don’t forget we’re on Windows.. There’s probably a way to handle this better, but it’s enough for me for now.

If the VM does not exist, the docker-machine check function is called.

If the VM cannot be started, it might be that the machine is already running; in that case the Docker client gets configured (same if the start succeeds).

If the VM clearly doesn’t exist, then the function stops there and points towards ‘docker-init’, explained afterwards.

docker-stop()
{
	echo "Trying to stop the local Docker VM ($DOCKER_LOCAL_VM_NAME)"
	dmStopResult=$(docker-machine stop $DOCKER_LOCAL_VM_NAME)
	#echo $dmStopResult
	if [[ $dmStopResult == *"Host does not exist"* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) does not seem to exist."
		docker-check-local-vm
		return
	fi
	
	if [[ $dmStopResult == *"exit status 1"* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) is already stopped (or doesn't exist anymore)."
		docker-check-local-vm
		return
	fi
	
	echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) was stopped successfully."
}
alias dockerstop='docker-stop'
alias stopdocker='docker-stop'

The ‘docker-stop’ function stops the local Docker VM if it’s running (pretty obvious, eh ^^). In case of error, the docker-machine check function is called (docker-check-local-vm).

docker-init()
{
	echo "Trying to create a local Docker VM called $DOCKER_LOCAL_VM_NAME"
	dmCreateResult=$(docker-machine create --driver virtualbox $DOCKER_LOCAL_VM_NAME)
	#echo $dmCreateResult
	
	if [[ $dmCreateResult == *"has been created and is now the active machine."* ]]
	then
		echo "Local Docker VM ($DOCKER_LOCAL_VM_NAME) created successfully!"
		docker-config-client
		return
	fi
	
	if [[ $dmCreateResult == *"already exists"* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) already exists!"
		dockerstart
		return
	fi
}
alias dockerinit='docker-init'
alias initdocker='docker-init'

This last function, ‘docker-init’, helps me provision my local Docker VM and configure my Docker client to point towards it.

With these few commands, I’m able to quickly configure/start/use a local Docker VM in a way that works nicely on all my machines (remember that I share my bash profile & tools across all my computers).
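For illustration, a first session with these helpers could go something like this (a sketch; hello-world is just the usual sanity-check image):

docker-init               # provision the 'docker-local' VM and configure the client
docker run hello-world    # verify that the client can reach the engine in the VM
docker-stop               # shut the VM down when done
docker-start              # later on: boot it again and reconfigure the client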

Voilà! :)

Docker, Docker Machine, Windows and msysGit happy together

Sunday, April 19th, 2015

Hey there!

tl;dr The Docker client for Windows is here, and now it’s the real deal. Thanks MSFT! :)

If you’re one of the poor souls that have to suffer with the Windows terminal (willingly or not) on a daily basis but always dream about Bash, then you’re probably a fan of MSYS, just like me.

If you’re a developer too, then you’re probably a fan of msysGit.. just like me :)

Finally, if you follow the IT world trends, then you must have heard of Docker already.. unless you’re living in some sort of cave (without Internet access). If you enjoy playing with the bleeding edge… just like me, then chances are that you’ve already given it a try.

If you’ve done so before this month and survived the experience, then kudos, because the least I can say is that the Windows “integration” wasn’t all that great.

Since Docker leverages Linux kernel features so heavily, it should not come as a surprise that support on Windows requires a virtual machine to host the Docker engine. The only natural choice for that VM was of course Oracle’s VirtualBox, given that Hyper-V is only available in Windows Server or Windows 8+ Pro/Enterprise.

Boot2Docker was nice, especially the VM, but the Boot2Docker client made me feel like I was in jail (no pun intended). Who wants to start a specific shell just to be able to play with Docker? I understand where that came from, but my first reflex was to try and integrate Docker into my usual msysGit bash shell.

To do so, I had to jump through a few hoops, and even though I did succeed, a hugely annoying limitation remained: I wasn’t able to easily run the docker command using files elsewhere than under /Users/…

At the time, the docker client was actually never executed on Windows; it was executed within the VM, through SSH (which is what the Boot2Docker client did). Because of that, docker could only interact with files reachable from the VM (i.e., made available via a mount). All the mounting/sharing/SSH (and keys!) required quite a few workarounds.

At the end of the day it was still fun to work around the quirks, because I had to play with VirtualBox’s CLI (e.g., to configure port redirections), learn a bit more about Docker’s API, …

Well, fast forward to April 16th and there it comes: Microsoft has helped port the docker client to Windows.

With this release, combined with Docker Machine which is also available for Windows, there will be a lot less suffering involved in using Docker on Windows :)

In the next post I’ll go through some of the functions/aliases I’ve added to my bash profile to make my life easier.

Time for some Web dev

Tuesday, April 14th, 2015

Back in 2009, I wanted to hop onto the blogging train again and created this blog. At the time, I thought that it would be a shame to use an existing WordPress theme, so I decided to design and implement my own.

My main focus was on implementing a complete WordPress theme, thus understanding and leveraging the PHP WordPress API; I also had tons of fun fooling around with jQuery to add some fancy bits (tooltips, rounded corners, animations, effects on the images, form validation, …). My goal with the blog was also to create a nice place for exposing a few pictures of my own, so I spent some time integrating a Lightbox (which is kind of broken now).

For the design, I used the trendy CSS framework of those days: Blueprint — which seems to have been abandoned later that year :). Blueprint was like 960; it provided a nice grid system to make it easier to design the UI. Combined with a CSS reset stylesheet such as Eric Meyer’s, it allowed you to create nice designs with good browser compatibility. These CSS grid systems had fixed sizes and often came with PSD files to kickstart the design work in Photoshop, making it all pretty straightforward :)

I hadn’t really considered mobile devices during development (the big shift hadn’t occurred yet); moreover, CSS3 media queries weren’t really production ready at that point, and responsive Web design was yet to go mainstream.

In the end I was quite satisfied with the result, knowing that I’m no designer to start with.

I’m still pretty happy with the theme as it stands, but the fact that it lacks responsiveness is a huge pain point nowadays. The situation could be worse, but it’s still far from perfect on small and large devices. At the time I also didn’t consider accessibility at all.

For the curious among you, this theme, known as Midnight Light, is available on GitHub: https://github.com/dsebastien/midnightlight.

I only felt motivated again for the Web in 2012. At that point I was going through pretty tough times at work (lots of stuff to learn, not enough time to do so and a lot of pressure to deliver), and so when I came home I needed to relax. Diablo 3 was perfect for me; I played like crazy and ended up putting around 1500 hours into that damn game :)

At some point I felt the need for a tool to help me optimize my playing time, and that was a perfect excuse for me to get my hands dirty with the trendy stuff of those days: HTML5, CSS3, the new WHATWG JS APIs, etc. I thus created ‘D3 XP Farming’, a pretty basic single-page application built using HTML5, CSS3, LocalStorage, Modernizr and a few hundred lines of JavaScript/jQuery code to put the whole thing in motion.

Thus it’s been a long while since I last really developed for the Web. I’ve been thinking about creating a new theme for quite some time but kept the idea at the bottom of my todo list.

Last year, I started working again as a software developer (after 3 years on the dark side of IT Ops). I never really stopped reading about software development, programming languages and the evolution of the Web.

In recent months, I’ve been reading a lot about the latest W3C/WHATWG standards status and browser support, Web components, mobile-first & offline-first principles, client-side UIs, responsive design, NodeJS, NPM, and browser news, including stuff about some Spartan coming to finish off IE, etc.

This and related discussions at work have led me to reconsider the priority of creating a new theme for my website ;-)

Hence I hereby officially announce (haha) the creation of a new project of mine (open source as usual): Midnight Light v2.

For now the design exists only on paper but that won’t last long :)

In the upcoming posts I will talk a bit more about the current project status and my evil plans =)

Wood Panoramix

Tuesday, March 17th, 2015

[Photos: 2015-03-15 - Panoramique Bois 01.jpg, 2015-03-15 - Panoramique Bois 02.jpg]

Woody woods

Tuesday, March 17th, 2015

[Photos: 2015-03-08 - Allee bois.jpg, 2015-03-08 - Bois.jpg]

B&W Portraits

Tuesday, March 17th, 2015

[Photos: 2015-03-15 - Matthieu 02.jpg, 2015-03-15 - Matthieu 03.jpg]
