Archive for April, 2015

Reveal.js me something

Sunday, April 26th, 2015

tl;dr: I’ve created a project for quickly creating Reveal.js presentations using Markdown alone

About

I’ve been wanting to play around with Reveal.js for quite some time, but never quite took the time necessary to read the docs.

Yesterday I did, and realized that the only serious editor for Reveal.js is http://slides.com/, which is only free for public decks (which is nice BTW) and, well, I’d also like to create my own slide decks without paying just to be able to do so.

Given that Reveal.js is free and open source (MIT license), you can also clone their git repository and create your decks by hand. I like HTML but found Reveal.js’s syntax a bit too verbose. Luckily, there’s also a way to use Markdown to define the contents of a slide (the Markdown is converted at runtime by a JS library shipped with Reveal.js).

I’ve looked for a way to create Reveal.js presentations quickly based on Markdown alone but couldn’t find one that pleased me… so I’ve created my very own.

dSebastien’s reveal.js presentations template

presentations-revealjs is a simple-to-use template for creating Reveal.js presentations using Markdown alone; it comes along with a useful build script.

Using it you can:

  • Create your slide deck using Markdown alone
  • Edit your metadata in a single configuration file
  • Tweak Reveal.js as you wish in the provided template
  • Use a few NPM commands to build your presentation and serve it to the world
  • See the results live (thanks to BrowserSync)

Check out the project page for more details as well as usage guidelines =)


A bit more Windows Docker bash-fu

Wednesday, April 22nd, 2015

Feeling bashy enough yet? :)

In my last post, I gave you a few useful functions for making your life with Docker easier on Windows. In this post, I’ll give you some more, but before that let’s look a bit at what docker-machine does for us.

When you invoke docker-machine to provision a Docker engine using VirtualBox, it “simply” creates a new VM… Okay, though pretty basic, this explanation is valid ^^.

What? Not enough for you? Okay okay, let’s dive a bit deeper =)

Besides the VM, behind the scenes, docker-machine generates multiple things for us:

  • a set of self-signed certificates: used to create a server certificate for the Docker engine in the VM and a client certificate for the Docker client (also used by docker-machine to interact with the engine in the VM)
  • an SSH key-pair (based on RSA): authorized by the SSH daemon and used to authenticate against the VM

Docker-machine uses those to configure the SSH daemon as well as the Docker engine in the VM and stores these locally on your computer. If you run the following command (where docker-local is the name of the VM you’ve created), you’ll see where those files are stored:

command: eval "$(docker-machine env docker-local)"

export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="C:\\Users\username\.docker\\machine\\machines\\docker-local"
export DOCKER_HOST=tcp://192.168.99.108:2376

As you can see above, the files related to my “docker-local” VM are all placed under c:\Users\username\.docker\machine\machines\docker-local. Note that DOCKER_TLS_VERIFY is enabled (which is nice). Also note that the DOCKER_HOST (i.e., engine) IP is that of the VM (we’ll come back to this later on). Finally, the DOCKER_HOST port is 2376, which is Docker’s default TLS port.
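
Once those variables are evaluated, a quick way to check that the client can actually talk to the engine over TLS is:

docker version
docker info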

Using docker-machine you can actually override just about any setting (including the location where the files are stored).
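
For example, something along these lines should do the trick (the flag names below are those of the VirtualBox driver; double-check them with ‘docker-machine create –help’ for your version):

# create a beefier VM; the --virtualbox-* flags are specific to the virtualbox driver
docker-machine create --driver virtualbox --virtualbox-memory "2048" --virtualbox-disk-size "40000" docker-local-big
# the storage location itself can be moved with the global -s/--storage-path option
# (or the MACHINE_STORAGE_PATH environment variable)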

If you take a look at that location, you’ll see that docker-machine actually stores many interesting things in there:

  • a docker-local folder containing the VM metadata and log files
  • boot2docker.iso: the ISO used as basis for the VM (which you can update easily using docker-machine)
  • the CA, server and client certificates (ca.pem, cert.pem, server.pem, …)
  • config.json: more about this below
  • disk.vmdk: the VM’s disk (useful to back up if you care; you shouldn’t :p)
  • the SSH key-pair that you can use to authenticate against the VM (id_rsa, id_rsa.pub)

As noted above, there’s also a ‘config.json’ file, which contains everything docker-machine needs to know about that Docker engine:

{
	"DriverName" : "virtualbox",
	"Driver" : {
		"CPU" : -1,
		"MachineName" : "docker-local",
		"SSHUser" : "docker",
		"SSHPort" : 51648,
		"Memory" : 1024,
		"DiskSize" : 20000,
		"Boot2DockerURL" : "",
		"CaCertPath" : "C:\\Users\\...\\.docker\\machine\\certs\\ca.pem",
		"PrivateKeyPath" : "C:\\Users\\...\\.docker\\machine\\certs\\ca-key.pem",
		"SwarmMaster" : false,
		"SwarmHost" : "tcp://0.0.0.0:3376",
		"SwarmDiscovery" : ""
	},
	"StorePath" : "C:\\Users\\...\\.docker\\machine\\machines\\docker-local",
	"HostOptions" : {
		"Driver" : "",
		"Memory" : 0,
		"Disk" : 0,
		"EngineOptions" : {
			"Dns" : null,
			"GraphDir" : "",
			"Ipv6" : false,
			"Labels" : null,
			"LogLevel" : "",
			"StorageDriver" : "",
			"SelinuxEnabled" : false,
			"TlsCaCert" : "",
			"TlsCert" : "",
			"TlsKey" : "",
			"TlsVerify" : false,
			"RegistryMirror" : null
		},
		"SwarmOptions" : {
			"IsSwarm" : false,
			"Address" : "",
			"Discovery" : "",
			"Master" : false,
			"Host" : "tcp://0.0.0.0:3376",
			"Strategy" : "",
			"Heartbeat" : 0,
			"Overcommit" : 0,
			"TlsCaCert" : "",
			"TlsCert" : "",
			"TlsKey" : "",
			"TlsVerify" : false
		},
		"AuthOptions" : {
			"StorePath" : "C:\\Users\\...\\.docker\\machine\\machines\\docker-local",
			"CaCertPath" : "C:\\Users\\...\\.docker\\machine\\certs\\ca.pem",
			"CaCertRemotePath" : "",
			"ServerCertPath" : "C:\\Users\\...\\.docker\\machine\\certs\\server.pem",
			"ServerKeyPath" : "C:\\Users\\...\\.docker\\machine\\certs\\server-key.pem",
			"ClientKeyPath" : "C:\\Users\\...\\.docker\\machine\\certs\\key.pem",
			"ServerCertRemotePath" : "",
			"ServerKeyRemotePath" : "",
			"PrivateKeyPath" : "C:\\Users\\...\\.docker\\machine\\certs\\ca-key.pem",
			"ClientCertPath" : "C:\\Users\\...\\.docker\\machine\\certs\\cert.pem"
		}
	},
	"SwarmHost" : "",
	"SwarmMaster" : false,
	"SwarmDiscovery" : "",
	"CaCertPath" : "",
	"PrivateKeyPath" : "",
	"ServerCertPath" : "",
	"ServerKeyPath" : "",
	"ClientCertPath" : "",
	"ClientKeyPath" : ""
}

One thing that I want to mention about that file, since I’m only drawing the picture of the current Windows integration of Docker, is the SSHPort. You can see that it’s ‘51648’. That port is the HOST port (i.e., the port I can use from Windows to connect to the SSH server of the Docker VM).
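
In practice this means you can SSH into the VM straight from your Bash shell using that host port and the generated key, or just let docker-machine do the work; a quick sketch:

# connect to the VM's SSH daemon through the forwarded host port (51648 here)
ssh -i ~/.docker/machine/machines/docker-local/id_rsa -p 51648 docker@127.0.0.1
# or, much simpler
docker-machine ssh docker-local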

How does this work? Well unfortunately there’s no voodoo magic at work here.

The thing with Docker on Windows is that the Docker engine runs in a VM, which makes things a bit more complicated since the onion has one more layer: Windows > VM > Docker Engine > Containers. Accessing ports exposed to the outside world when running a container will not be as straightforward as it would be when running Docker natively on a Linux box.

When docker-machine provisions the VM, it creates two network interfaces on it: a first one in NAT mode to communicate with the outside world (i.e., that’s the one we’re interested in) and a second one in host-only mode (which we won’t really care about here).

On the first interface, which I’ll further refer to as the “public” interface, docker-machine configures a single port redirection for SSH (port 51648 on the host towards port 22 on the guest). This port forwarding rule is what allows docker-machine and later the Docker client to interact with the Docker engine in the VM (I assume that the port is fixed though it might be selected randomly at creation time, I didn’t check this).
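
You can actually see that rule with VirtualBox’s CLI (the output below is illustrative):

vboxmanage showvminfo docker-local | grep 'Rule'
# NIC 1 Rule(0):   name = ssh, protocol = tcp, host ip = 127.0.0.1, host port = 51648, guest ip = , guest port = 22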

So all is nice and dandy, docker-machine provisions and configures many things for you and now that Microsoft has landed a Docker CLI for Windows, we can get up and running very quickly, interacting with the Docker engine in the VM through the Docker API, via SSH and using certificates for authentication. That’s a mouthful and it’s really NICE.. but.

Yeah indeed there’s always a but :(

Let’s say that you want to start a container hosting a simple Web server serving your pimped AngularJS+Polymer+CSS3+HTML5+whatever-cool-and-trendy-today application. Once started, you probably want to be able to access it in some way (let’s say using your browser or curl if you’re too cool).

Given our example, we can safely assume that the container will EXPOSE port 80 or the like to other containers (e.g., set in the Dockerfile). When you start that container, you’ll want to map that container port to a host port, let’s say.. 8080.
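
For instance, with an off-the-shelf image (nginx is just an example here):

# the EXPOSEd port 80 of the container gets mapped to port 8080 on the 'host'
docker run -d --name webtest -p 8080:80 nginx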

Okay curl http://localhost:8080 … 1..2..3, errr nothing :(

As you might have guessed by now, the annoying thing is that when you start a container in your Docker VM, the host that you’re mapping container ports to… is your VM.

I know it took a while for me to get there but hey, it might not be THAT obvious to everyone right? :)

I’ve mentioned earlier that docker-machine configures a port forwarding rule on the VM after creating it (for SSH, remember?). Can’t we do the same for other ports? Well, the thing is that you totally can, using VirtualBox’s CLI, but it’ll make you realize that the current Windows integration of Docker is “nice” but clearly not all that great.

As stated, we’re going the BASH way. You can indeed achieve the same using your preferred language, whether it is Perl, Python, PowerShell or whatever.

So the first thing we’ll need to do is to make the VirtualBox CLI easily available in our little Bash world:

append_to_path /c/Program\ Files/Oracle/VirtualBox
alias virtualbox='VirtualBox.exe &'
alias vbox='virtualbox'
alias vboxmanage='VBoxManage.exe'
alias vboxmng='vboxmanage'

You’ll find the description of the append_to_path function in the previous post.

Next, we’ll add a few interesting functions based on VirtualBox’s CLI: one to check whether the Docker VM is running or not, two to easily add/remove a port redirection to/from our Docker VM, and one to list the existing redirections:

is-docker-vm-running()
{
	echo "Checking if the local Docker VM ($DOCKER_LOCAL_VM_NAME) is running"
	vmStatusCheckResult=$(vboxmanage list runningvms)
	#echo $vmStatusCheckResult
	if [[ $vmStatusCheckResult == *"$DOCKER_LOCAL_VM_NAME"* ]]
	then
		echo "The local Docker VM is running!"
		return 0
	else
		echo "The local Docker VM is not running (or does not exist or runs using another account)"
		return 1
	fi
}


# redirect a port from the host to the local Docker VM
# call: docker-add-port-redirection rule_name host_port guest_port
docker-add-port-redirection()
{
	echo "Preparing to add a port redirection to the Docker VM"
	is-docker-vm-running
	if [ $? -eq 0 ]; then
		# vm is running: its configuration is locked, so go through controlvm
		vboxmanage controlvm $DOCKER_LOCAL_VM_NAME natpf1 "$1,tcp,127.0.0.1,$2, ,$3"
	else
		# vm is not running: modify the stored VM configuration directly
		vboxmanage modifyvm $DOCKER_LOCAL_VM_NAME --natpf1 "$1,tcp,127.0.0.1,$2, ,$3"
	fi
	echo "Port redirection added to the Docker VM"
}
alias dapr='docker-add-port-redirection'


# remove a port redirection by name
# call: docker-remove-port-redirection rule_name
docker-remove-port-redirection()
{
	echo "Preparing to remove a port redirection to the Docker VM"
	is-docker-vm-running
	if [ $? -eq 0 ]; then
		# vm is running: use controlvm
		vboxmanage controlvm $DOCKER_LOCAL_VM_NAME natpf1 delete "$1"
	else
		# vm is not running: modifyvm works on the stored configuration
		vboxmanage modifyvm $DOCKER_LOCAL_VM_NAME --natpf1 delete "$1"
	fi
	echo "Port redirection removed from the Docker VM"
}
alias drpr='docker-remove-port-redirection'


docker-list-port-redirections()
{
	# list the NAT port forwarding rules configured on the VM's first (NAT) interface
	vboxmanage showvminfo $DOCKER_LOCAL_VM_NAME | grep -E 'NIC 1 Rule'
}
alias dlrr='docker-list-port-redirections'
alias dlpr='docker-list-port-redirections'

Note that these functions will work whether the Docker VM is running or not. Since I’m an optimist, I don’t check whether the VM actually exists or not beforehand or if the commands did succeed (i.e., use at your own risk). One caveat is that these functions will not work if you started the Docker VM manually through Virtualbox’s GUI (because it keeps a lock on the configuration). These functions handle tcp port redirections, but adapting the code for udp is a no brainer.

The last function (docker-list-port-redirections) will allow you to quickly list the port redirections that you’ve already configured. You can do the same through VirtualBox’s UI but that’s only interesting if you like moving the mouse around and clicking on buttons, real ITers don’t do that no more (or do they? :p).
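
A quick usage example:

dapr rule8080 8080 8080   # forward host port 8080 to port 8080 on the Docker VM
dlpr                      # list the configured redirections
drpr rule8080             # remove the rule again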

With these functions you can also easily create port redirections for port ranges using a simple loop:

for i in {49152..65534}; do
    dapr "rule$i" $i $i
done

Though I would recommend against that. You should rather add a few useful port redirections, such as for ports 8080, 80 and the like. These can only ‘bother’ you while the Docker VM is running and you’re trying to use the redirected ports.

Another option would be to switch the “public” interface from NAT mode to bridged mode, though I’m not too fond of making my local Docker VM a ‘first-class’ citizen of my LAN.
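
If you do want to go down that road, it boils down to something like this (sketch only; the VM must be powered off and the adapter name depends on your machine):

vboxmanage modifyvm $DOCKER_LOCAL_VM_NAME --nic1 bridged --bridgeadapter1 "<name of your physical adapter>"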

Okay, two more functions and I’m done for today :)

Port redirections are nice because they’ll allow you to expose your Docker containers to the outside world (i.e., not only your machine). There are situations where you might not want that, though. In that case, it’s useful to just connect directly to the local Docker VM.

docker-get-local-vm-ip(){
	export DOCKER_LOCAL_VM_IP=$(docker-machine ip $DOCKER_LOCAL_VM_NAME)
	echo "Docker local VM ($DOCKER_LOCAL_VM_NAME) IP: $DOCKER_LOCAL_VM_IP"
}
alias dockerip='docker-get-local-vm-ip'
alias dip='docker-get-local-vm-ip'

docker-open(){
	docker-get-local-vm-ip
	( explorer "http://$DOCKER_LOCAL_VM_IP:$*" )&	
}
alias dop='docker-open'

The ‘docker-get-local-vm-ip’ function, or ‘dip’ for close friends, uses docker-machine to retrieve the IP it knows for the Docker VM. Its best friend, ‘docker-open’ or ‘dop’, will simply open a browser window (your default one) towards that IP using the port specified as argument; for example ‘docker-open 8080’ will get you quickly towards your local Docker VM on port 8080.
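
In other words:

dip        # export and print the local Docker VM's IP
dop 8080   # open http://<vm-ip>:8080 in your default browser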

With these functions, we can also improve the ‘docker-config-client’ function from my previous post to handle the case where the VM isn’t running:

docker-config-client()
{
	echo "Configuring the Docker client to point towards the local Docker VM ($DOCKER_LOCAL_VM_NAME)..."
	is-docker-vm-running
	if [ $? -eq 0 ]; then
		eval "$(docker-machine env $DOCKER_LOCAL_VM_NAME)"
		if [ $? -eq 0 ]; then
			docker-get-local-vm-ip
			echo "Docker client configured successfully! (IP: $DOCKER_LOCAL_VM_IP)"
		else
			echo "Failed to configure the Docker client!"
			return;
		fi
	else
		echo "The Docker client can't be configured because the local Docker VM isn't running. Please run 'docker-start' first."
	fi
}
alias dockerconfig='docker-config-client'
alias configdocker='docker-config-client'

Well that’s it for today. Hope this helps ;-)


Web 3.0 is closer than you might think

Wednesday, April 22nd, 2015

Evolution of the Web towards Web 3.0

The real modern Web — I guess we’ll call that ‘Web 3.0’ — is really getting closer and closer and guess what? It’s also up to you to make it real.

Since my last Web development project (2012), many things have changed and evolved on the Web; its evolution pace is simply astonishing. HTML5, CSS3 and many WhatWG specifications are now pretty well supported by major browsers. You could say that we currently live in the Web 2.5 era.

The real game changer now is that Microsoft has finally decided to retire IE and introduce Spartan, a legacy-free modern Web browser.

When you take a look at what Microsoft has announced for Spartan and their roadmap, you can only feel GOOD if you’re a Web enthusiast.

Nearly gone are the days where you had to work around IE’s quirks, battle against box model issues, resort to IE-specific conditional tags, use X-UA-Compatible and all that crap just to get a WORKING application/design across browsers. Nearly gone are the days where you had to manually add 36000 browser-specific prefixes to your CSS stylesheets just to get a gradient and whatnot, …

We’re getting closer to a state of the Web browser landscape where we’ll finally be able to think less about browser compatibility issues and concentrate our efforts on actually creating useful and/or beautiful things that simply work everywhere. OK, I’m getting ahead of myself, but still there’s more hope today than there was back in 2012 ;-)

Just take a look at this list. Of course there could be more green stuff but it’s already pretty damn cool.

Who needs native applications?

Some people already think about the next step, the ‘Ambient Web Era’, an era where the Web will actually be everywhere. You could say that we’re already there, but I don’t agree. I think that we’ll only reach that state after the real boom of the Internet of Things, when the Web will really be on par with native applications, when I’ll be easily able to create a Web interface for managing my heating system using nothing but Web technologies (i.e., without all the current hoops that we need to get through to reach that point).

But before we reach that level, we should observe progressive evolutions. Over time, Web applications will be more and more on par with native applications with means to ‘install’ them properly (e.g., using things such as manifest.webapp, manifest.json and the like) and native capabilities will end up exposed through JavaScript APIs.

Adapting to the mobile world

I hear and read more and more about such ideas as mobile first, responsive web design, client-side UIs, offline first, … The modern Web standards and browser vendors try to cater for all those things that we’ve missed for so long: means to actually create engaging user experiences across devices. For example, with standards such as IndexedDB, the File API and Local Storage, we’ll be able to save/load/cache data on the client side to allow our applications to work offline. WebGL and soon WebGL 2.0 allow us to take advantage of all the graphics chip horsepower, while the canvas element and its associated API allow us to draw in 2D. WebRTC enables real-time audio/video communications, then there are also WebSockets, etc. These are but a few out of many specs that we can actually leverage TODAY across modern Web browsers!

As far as I’m concerned, mobile first is already a reality, it’s just a question of time for awareness and maturity to rise among the Web developers crowd. CSS3 media queries and tons of CSS frameworks make it much easier to adapt our Web UIs to different device sizes and responsive Web design principles are now pretty clearly laid out.

But for me, mobile first is not enough; we need to care about and build applications for people who live in the mobile world but don’t necessarily have fast/consistent connectivity (or simply choose to stay offline in some circumstances). We need to consider offline first as well. For example, although we’re in 2015, I’m still disconnected every 5 minutes or so while I’m on the train for my daily commute (even though I live in Western Europe).

We must ensure that our Web applications handle disconnections gracefully. One way to do this is for example to cache data once we’ve loaded it or batch load stuff in advance (e.g., blog posts from the last two months). The term offline first is pretty well chosen because, just like security, it’s difficult to add that as an afterthought. When your application tries to interact with the server side (and those interactions should be well thought/limited/optimized) it needs to check the connectivity state first, maybe evaluate the connection speed and adapt to the current network conditions. For example you might choose to load a smaller/lighter version of some resource if the connection is slow.

Offline first ideas have been floating around for a while but I think that browser vendors can help us much more than they currently do. The offline first approach is still very immature and it’ll take time for the Web development community to discover and describe best practices as well as relevant UX principles and design patterns. Keep an eye on https://github.com/offlinefirst.

Application Cache can help in some regards but there are also many pitfalls to be aware of; I won’t delve into that, there’s already a great article about it over at A List Apart.

Service Workers will help us A LOT (background sync, push notifications, …). Unfortunately, even though there are some polyfills available, I’m not sure that they are production ready just yet.

There are also online/offline events which are already better supported and can help you detect the current browser connectivity status.

The device’s battery status might also need to be considered, but the overall browser support for the Battery Status API isn’t all that great for now.

In my new project, I’ll certainly try and create an offline-first experience. I think that I’ll mostly rely on LocalStorage, but I’d also like to integrate visual indications/user control regarding what is online/offline.

Client-side UIs – Why not earlier?

One thing I’ve mentioned at the beginning of the post is the fact that client-side UIs are gaining more and more momentum, and rightfully so. I believe that UIs will progressively shift towards the client side for multiple reasons, but first let’s review a bit of IT history :)

Historically, Web application architectures have been focusing on the server side of things, considering the client-side as nothing more than a dumb renderer. There were many important and obvious reasons for this.

Yesterday we were living in a world where JavaScript was considered a toy scripting language, only useful to perform basic stuff such as displaying alert boxes, scrolling text in the status bar, etc. Next to that, computers weren’t nearly as powerful as they are today. Yesterday we were living in a world where tablets were made of stone and appeared only much later in Star Trek, which was still nothing more than sucky science fiction (sorry guys, I’m no trekkie :p).

Back in 200x (ew, that feels so close yet so distant), browser rendering engines and JavaScript engines were not nearly as fast as they are today. The rise of Web 2.0, XHR, JS libs and the death of Flash have pushed browser vendors to tackle JS performance issues and they’ve done a terrific job.

Client-side UIs – Why now?

Today we are living in a world where mobile devices are everywhere. You don’t want to depend on the server for every interaction between the user and your application. What you want is for the application to run on the user’s device as independently as possible and only interact with the server if and when it’s really needed. Also, when interaction is needed, you only want useful data to be exchanged because mobile data plans cost $$$.

So why manage the UI and its state on the server? Well in the past we had very valid reasons to do so. But not today, not anymore. Mobile devices of today are much more powerful than desktop computers of the past. We are living in a world where JavaScript interpreters embedded in our browsers are lightning fast. ECMAScript has also evolved a lot over time and still is (look at all the cool ES6 stuff coming in fast).

Moreover, JavaScript has not only evolved as a language and from a performance standpoint. With Web 2.0, JavaScript has become increasingly used to enhance the UI and user experience of Web applications in general. Today, JavaScript is considered very differently by software developers compared to 10 years ago. Unfortunately there are still people who think that JavaScript is Java on the Web but hey, leave those guys aside and stick with me instead :)

Today we have package and dependency management tools for front-end code (e.g., NPM, Browserify, Webpack, …). We also have easy means to maintain a build for front-end code (e.g., Gulp, Grunt, etc). We have JavaScript code quality checking tools (e.g., JSHint, JSLint, Sonar integration, …). We have test frameworks and test runners for JavaScript code (e.g., Mocha, Karma, Protractor, Testacular, …), etc.

We have a gazillion JS libraries, JS frameworks, better developer tools included in modern browsers, we have better IDE support and even dedicated IDEs (e.g., JetBrains’ WebStorm). And it all just keeps getting better and better.

In short, we have everything any professional developer needs to be able to consider developing full blown applications using JavaScript. Again, over time, standards will evolve and the overall Web SDK (let’s call it that ok?) will keep expanding and extending the Web’s capabilities.

Today we can even develop JavaScript applications on the server side thanks to NodeJS and all its forks :) Some people don’t understand yet why that’s useful, but once they start to see more and more code shift towards the client side, they’ll probably see the light.

Okay where was I headed? Ok I remember: server-side vs client-side. I think that given the above, we can probably agree that the client-side development world is much more mature today than it was at the beginning of the Web 2.0 and that client-side UIs make a hell of a lot more sense in today’s world.

My view of a modern Web application architecture is as follows; it might indeed not be applicable to all use cases but in many cases it certainly can:

  • Client-side UI with HTML, CSS, JS, JS libs and a JS framework to keep the code base manageable/maintainable
  • Server side responsible for exposing RESTful Web Services, adapting the data representations to the specific clients
  • Server-side responsible for enforcing the business rules and interacting with the rest of the infrastructure pieces

The benefits of this approach are multiple. Since the UI is fully managed on the client-side:

  • only the necessary data needs to be exchanged between the client and the server (i.e., JSON vs full HTML page)
  • the server-side can (and should) become stateless
  • it can more easily adapt to the needs of the mobile-first & offline-first approaches
  • the UI can be much more responsive and thus more ‘native-like’
  • since the UI is completely separated from the back-end, it can more easily be replaced

If you care for the Web even just a bit then you know this is the model we need to go towards.

Of course, shifting the UI along with all the logic behind it to the client-side clearly means that the JavaScript code bases will become much larger and thus much more complex, which poses the question of code complexity and maintainability. As I’ve said, ECMAScript continues to evolve and ES6 introduces many things that are lacking today such as modularization, collections, more functional programming constructs and even proxies.

You could say that ES6 hasn’t landed yet, but what prevents you from using it already?

One thing that you cannot ignore is the required knowledge of the technologies. There’s a huge difference between adding JS validation code to a form and developing a whole UI based on Web technologies and a JavaScript framework. With the former you can survive with a minimal knowledge of the language while with the latter, you’d better have a deeper understanding.

If you work in a Java or .NET shop and don’t have actual Web developers at your disposal, then you might not be able to follow that path easily. It all depends on your organization’s culture and your people’s willingness/capability to learn new things and adapt.

I often like to compare the Web technology stack to the *NIX world: as an IT professional, what do you prefer? Learning stuff that’ll remain useful and beneficial to you for many years to come, or learning stuff that you know will only be true/valid for a 3-4 year period? Or worse yet: ignoring market trends and market evolution? At some point you’ll have to adapt anyway and that’ll cost you.

Technology will always evolve but some technologies hold much less value for your professional career. If someone fresh out of school asked me today what to learn/what to focus on, I certainly would recommend learning as much as possible about the Web.

Here I’m not saying that the server side is doomed, far from it. You just can’t expose everything directly to the client side. You need to place security boundaries somewhere. If you’re comfortable writing server-side code in Java, C#, PHP or whatever, then continue to do so, nothing forces you to switch to Go, NodeJS or anything else on that side of the fence. What I’m saying is that if you create a Web application, then you need to consider shifting the UI towards the client and if you’re serious about it, you must create a clear separation between both sides; to each its own responsibilities.

That being said, I think that JS on the server side also makes sense, we just need to give that a few more years to become mature. But picture this: you have a server-side application that interacts with your infrastructure, manipulates a JavaScript domain model and exposes Web services that accept/deliver JS objects. Next to that, you have a client-side application that uses the same JavaScript domain model and that interacts with those Web services.

What you gain there is the ability to share code between your server and client, whether that be the domain model, validation rules or whatever else. It makes a lot of sense from a design perspective, but the difficulty of taking advantage of that is twofold: first it needs maturity (it’s not for everyone just yet) and second, you need your organization/people to adapt. Java developers are not necessarily good JavaScript developers (since yes, those two simply are very different beasts ;-)).

Another technology that I think will help us transition UIs towards the client side is Web Components. The more I read about these, the more I think that they’re going to represent the next big thing (TM) in Web development. It’s unfortunate that we currently still need to resort to polyfills to be able to use these, but hopefully that won’t last too long. Browser vendors should finally agree upon a set of primitives/APIs to support so that we can use them (e.g., will HTML imports prevail?). I can’t expand much more on this because I haven’t played much with them yet, but they’re on my radar ^^.

Conclusion & my current plans

This post represents my current vision of the Web as it stands, its future and why I believe that it’s time to seriously consider shifting the UI towards the client-side (let’s hope that time will prove me right). This was a pretty long one but I think that sometimes it makes for a good exercise to try and articulate a vision.

As I’ve mentioned in a previous post, I’ve started a new personal project to replace this now dated Website with a more modern version.

Now that I’ve shared my vision for the future of the Web, I can expand a bit more on my plans for this new project. Rather than re-creating a full blown WordPress theme, I intend to use WordPress as a simple CMS without a front-end of its own: I’ll keep using the administration space to manage the content but I’ll extract the data (posts, comments, pages, tags, etc.) through the WP REST API (which will soon become part of WordPress’s core).
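
To give you an idea, grabbing the posts should boil down to a simple HTTP call along these lines (the exact route depends on the version of the WP REST API plugin, and the URL is just a placeholder):

curl -s "https://example.com/wp-json/wp/v2/posts?per_page=10"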

My goal is to create a modern, responsive, mobile-first, offline-first and (hopefully) good looking web front-end. Why? Because we can, today! :)

Also, I want to leverage multiple data sources.

On the technical side of things, I plan to use HTML5/CSS3 (haha), AngularJS and/or Meteor and/or Polymer (I haven’t chosen yet).

For the CSS part I intend to use SASS and I think that I’ll use Pure.CSS or Foundation. I might also take a peek at Foundation for Apps… Finally, I’ll try and play with ES6 (through 6to5).

For now I’ve only created a build based on Google’s Web Starter Kit to make my life a bit easier, using NPM and Gulp. I’ve also added basic Docker support (though there’s much room for improvement there).

Well that’s it for today; please don’t hesitate to share your thoughts! ;-)


A bit of Windows Docker bash-fu

Monday, April 20th, 2015

In my last post I’ve mentioned that Microsoft has helped Docker deliver a native Docker client for Windows (yay!).

I’ve also promised to share the little bits that I’ve added to my Windows bash profile to make my life easier. As I’ve said, I’m a huge fan of MSYS and msysGit and I use my Git Bash shell all day long, so here comes a bit of Windows bash-fu.

For those wondering, I prefer Linux and I would use it as my main OS (I did so in the past) if I didn’t also like gaming. I can’t stand fiddling around with config files to get my games running (hey Wine) and I can’t stand losing n FPS just to stay on the free side. Finally, I am not too fond of putting multiple OSes on my main machine just for the sake of being able to play. The least painful solution for me is simply to use Windows and remain almost sane by using Bash.

One thing to note is that my bash profile as well as all the tools that I use are synchronized between my computers in order to allow me to have a consistent environment; I’m done raging because I’m on the train and some tool I’ve installed on my desktop isn’t available on my laptop… I’ll describe that setup… another day :)

So first things first, I’ve installed Docker v1.6.0 on my machine without adding it to the path or creating any shortcuts (since I’m not going to use that install at all); you can get it from https://github.com/boot2docker/windows-installer/releases/latest.

Once installed, I’ve copied the Docker client (docker.exe) to the folder I use to store my shared tools (in this case c:\CloudStation\programs\dev\docker). I have docker-machine in the same folder (downloaded from here).

append_to_path(){ # dumb helper to add a folder to the PATH
    PATH=$1":"$PATH
}
...
# Docker
export DOCKER_HOME=$DEV_SOFT_HOME/Docker
append_to_path $DOCKER_HOME

alias docker='docker.exe'

alias docker-machine='docker-machine.exe'
alias dockermachine='docker-machine'
alias dm='docker-machine'

export DOCKER_LOCAL_VM_NAME='docker-local'

In the snippet above I simply ensure that the docker client is on my path and that I can invoke it simply using ‘docker’. Same for docker-machine, along with a nice shortcut ‘dm’.

Note that I also set a name for the local Docker VM that I want to manage; you’ll see below why that’s useful.

docker-config-client()
{
	echo "Configuring the Docker client to point towards the local Docker VM ($DOCKER_LOCAL_VM_NAME)..."
	eval "$(docker-machine env $DOCKER_LOCAL_VM_NAME)"
	if [ $? -eq 0 ]; then
		echo "Docker client configured successfully!"
	else
		echo "Failed to configure the Docker client!"
		return;
	fi
}
alias dockerconfig='docker-config-client'
alias configdocker='docker-config-client'

The ‘docker-config-client’ function allows me to easily configure my Docker client to point towards my local Docker VM. I’ve added some aliases because I’ve got a pretty bad memory :)

This function assumes that the local Docker VM already exists and is up and running. This is not always the case, hence the additional functions below.

docker-check-local-vm() # check docker-machine status and clean up if necessary
{
	echo "Verifying the status of the local Docker VM ($DOCKER_LOCAL_VM_NAME)"
	dmCheckResult=$(docker-machine ls)
	#echo $dmCheckResult
	if [[ $dmCheckResult == *"error getting state for host $DOCKER_LOCAL_VM_NAME: machine does not exist"* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) is known by docker-machine but does not exist anymore."
		echo "Cleaning docker-machine."
		dmCleanupResult=$(docker-machine rm $DOCKER_LOCAL_VM_NAME)
		
		if [[ $dmCleanupResult == *"successfully removed"* ]]
		then
			echo "docker-machine cleanup successful! Run 'docker-init' to create the local Docker VM."
		fi
		return
	fi
	echo "No problem with the local Docker VM ($DOCKER_LOCAL_VM_NAME) and docker-machine. If the machine does not exist yet you can create it using 'docker-init'"
}
alias dockercheck='docker-check-local-vm'
alias checkdocker='docker-check-local-vm'

The ‘docker-check-local-vm’ function simply lists the Docker engines known by docker-machine in order to see if there’s a problem with the local Docker VM. Such a problem can occur when docker-machine knows about a given Docker engine and you delete it (e.g., if you remove the VirtualBox VM and then invoke ‘docker-machine ls’, you’ll get the error).

docker-start()
{
	echo "Trying to start the local Docker VM ($DOCKER_LOCAL_VM_NAME)"
	dmStartResult=$(docker-machine start $DOCKER_LOCAL_VM_NAME)
	#echo $dmStartResult
	if [[ $dmStartResult == *"machine does not exist"* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) does not seem to exist."
		docker-check-local-vm
		return
	fi
	
	if [[ $dmStartResult == *"VM not in restartable state"* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) is probably already running."
		docker-config-client
		return
	fi
	
	if [[ $dmStartResult == *"Waiting for VM to start..."* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) was successfully started!"
		docker-config-client
		return
	fi
	
	if [[ $dmStartResult == *"Host does not exist"* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) does not exist. Run 'docker-init' first!"
		return
	fi
}
alias dockerstart='docker-start'
alias startdocker='docker-start'

The ‘docker-start’ function above tries to start my local Docker VM. It first assumes that the machine does exist (because I’m an optimist after all).

Since the docker-machine executable doesn’t return useful exit codes, I have to resort to string matching; I know that this sucks but don’t forget we’re on Windows… There’s probably a way to handle this better, but it’s enough for me for now.

If the VM does not exist, the docker-machine check function is called.

If the VM cannot be started, it might be that the machine is already running; in that case the docker client gets configured (same if the start succeeds).

If the VM clearly doesn’t exist then the function stops there and points towards ‘docker-init’ explained afterwards.

docker-stop()
{
	echo "Trying to stop the local Docker VM ($DOCKER_LOCAL_VM_NAME)"
	dmStopResult=$(docker-machine stop $DOCKER_LOCAL_VM_NAME)
	#echo $dmStopResult
	if [[ $dmStopResult == *"Host does not exist"* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) does not seem to exist."
		docker-check-local-vm
		return
	fi
	
	if [[ $dmStopResult == *"exit status 1"* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) is already stopped (or doesn't exist anymore)."
		docker-check-local-vm
		return
	fi
	
	echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) was stopped successfully."
}
alias dockerstop='docker-stop'
alias stopdocker='docker-stop'

The ‘docker-stop’ function stops the local Docker VM if it’s running (pretty obvious eh ^^). In case of error, the docker-machine check function is called (docker-check-local-vm).

docker-init()
{
	echo "Trying to create a local Docker VM called $DOCKER_LOCAL_VM_NAME"
	dmCreateResult=$(docker-machine create --driver virtualbox $DOCKER_LOCAL_VM_NAME)
	#echo $dmCreateResult
	
	if [[ $dmCreateResult == *"has been created and is now the active machine."* ]]
	then
		echo "Local Docker VM ($DOCKER_LOCAL_VM_NAME) created successfully!"
		docker-config-client
		return
	fi
	
	if [[ $dmCreateResult == *"already exists"* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) already exists!"
		dockerstart
		return
	fi
}
alias dockerinit='docker-init'
alias initdocker='docker-init'

This last function, ‘docker-init’ helps me provision my local Docker VM and configure my Docker client to point towards it.

With these few commands, I’m able to quickly configure/start/use a local Docker VM in a way that works nicely on all my machines (remember that I share my bash profile & tools across all my computers).
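
A typical session thus looks like this:

dockerinit      # provision the local Docker VM (first time only)
dockerstart     # start it and configure the Docker client
docker run hello-world
dockerstop      # shut the VM down when I'm done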

Voilà! :)


Docker, Docker Machine, Windows and msysGit happy together

Sunday, April 19th, 2015

Hey there!

tl;dr The Docker client for Windows is here and now it’s the real deal. Thanks MSFT! :)

If you’re one of the poor souls that have to suffer with the Windows terminal (willingly or not) on a daily basis but always dream about Bash, then you’re probably a fan of MSYS just like me.

If you’re a developer too, then you’re probably a fan of msysGit.. just like me :)

Finally, if you follow the IT world trends then you must have heard of Docker already.. unless you’re living in some sort of cave (without Internet access). If you enjoy playing with the bleeding edge… just like me, then chances are that you’ve already given it a try.

If you’ve done so before this month and survived the experience, then kudos because the least I can say is that the Windows “integration” wasn’t all that great.

Since Docker leverages Linux kernel features so heavily, it should not come as a surprise that support on Windows requires a virtual machine to host the Docker engine. The only natural choice for that VM was of course Oracle’s Virtualbox given that Hyper-V is only available in Windows Server or Windows 8+ Pro/Enterprise.

Boot2Docker was nice, especially the VM, but the Boot2Docker client made me feel in jail (no pun intended). Who wants to start a specific shell just to be able to play with Docker? I understand where that came from, but my first reflex was to try and integrate Docker in my usual msysGit bash shell.

To do so, I had to jump through a few hoops and even though I did succeed, a hugely annoying limitation remained: I wasn’t able to easily run the docker command using files elsewhere than under /Users/…

At the time, the docker client was actually never executed on Windows, it was executed within the VM, through SSH (which is what the Boot2Docker client did). Because of that, docker could only interact with files reachable from the VM (i.e., made available via mount). All the mounting/sharing/SSH (and keys!) required quite a few workarounds.

At the end of the day it was still fun to workaround the quirks because I had to play with Virtualbox’s CLI (e.g., to configure port redirections), learn a bit more about Docker’s API, …

Well, fast forward to April 16th and there it is: Microsoft has helped port the docker client to Windows.

With this release, and combined with Docker Machine which is also available for Windows, there will be a lot less suffering involved in using Docker on Windows :)

In the next post I’ll go through some of the functions/aliases I’ve added to my bash profile to make my life easier.


Time for some Web dev

Tuesday, April 14th, 2015

Back in 2009 I wanted to hop onto the blogging train again and created this blog. At the time, I thought that it would be a shame to use an existing WordPress theme, so I decided to design and implement my own.

My main focus was on implementing a complete WordPress theme, thus understanding and leveraging the PHP WordPress API; I also had tons of fun fooling around with jQuery to add some fancy bits (tooltips, rounded corners, animations, effects on the images, form validation…). My goal with the blog was also to create a nice place for exposing a few pictures of my own, so I’ve spent some time integrating a Lightbox (which is kind of broken now).

For the design, I’ve used the trendy CSS framework of those days: Blueprint — which seems to have been abandoned later that year :). Blueprint was like 960: it provided a nice grid system to make it easier to design the UI. Combined with a CSS reset stylesheet such as Eric Meyer’s, it allowed you to create nice designs with good browser compatibility. These CSS grid systems had fixed sizes and often came with PSD files to kickstart the design work in Photoshop, making it all pretty straightforward :)

I hadn’t really considered mobile devices during development (the big shift hadn’t occurred yet); moreover, CSS 3 media queries weren’t really production ready at that point and responsive Web design was yet to go mainstream.

In the end I was quite satisfied with the result, knowing that I’m no designer to start with.

I’m still pretty happy with the theme as it stands, but the fact that it lacks responsiveness is a huge pain point nowadays. The situation could be worse, but it’s still far from perfect on small and large devices. At the time I also didn’t consider accessibility at all.

For the curious among you, this theme, known as Midnight Light, is available on GitHub: https://github.com/dsebastien/midnightlight.

I only felt motivated again for the Web in 2012. At that point I was going through pretty tough times at work (lots of stuff to learn, not enough time to do so and a lot of pressure to deliver), so when I was coming home I needed to relax. Diablo 3 was perfect for me; I played like crazy and ended up putting around 1500 hours into that damn game :).

At some point I felt the need for a tool to help me optimize my playing time and that was a perfect excuse for me to get my hands dirty with the trendy stuff of those days: HTML 5, CSS 3, the new WhatWG JS APIs, etc. I’ve thus created ‘D3 XP Farming‘, a pretty basic single page application created using HTML 5, CSS3, LocalStorage, Modernizr and a few hundred lines of JavaScript/jQuery code to put the whole thing in motion.

Thus it’s been a long while since I’ve last really developed for the Web. I’ve been thinking about creating a new theme for quite some time but kept the idea at the bottom of my todo list.

Last year, I started working again as a software developer (after 3 years on the dark side of IT Ops). I’ve never really stopped reading about software development, programming languages and the evolution of the Web.

In recent months, I’ve been reading a lot about the latest W3C/WhatWG standards status/browser support, Web components, mobile first & offline first principles, client-side UIs, responsive design, NodeJS, NPM, browser news including stuff about some Spartan coming to finish off IE, etc.

This and related discussions at work have led me to reconsider the priority of creating a new theme for my website ;-)

Hence I hereby officially announce (haha) the creation of a new project of mine (open source as usual): Midnight Light v2.

For now the design exists only on paper but that won’t last long :)

In the upcoming posts I will talk a bit more about the current project status and my evil plans =)