Posts Tagged ‘windows’

Docker for Windows (beta) and msysgit

Friday, April 15th, 2016

I’ve recently joined the beta program for Docker on Windows (now based on Hyper-V).

I wanted to keep my current config using msysGit but got weird errors when executing Docker commands from msysGit: https://forums.docker.com/t/weird-error-under-git-bash-msys-solved/9210

I was able to fix the issue by installing a newer version of msysGit with support for the MSYS_NO_PATHCONV environment variable. With that installed, I then replaced my docker alias with a wrapper function:

docker()
{
    # Disable MSYS path conversion while the Docker CLI runs
    export MSYS_NO_PATHCONV=1
    "$DOCKER_HOME/docker.exe" "$@"
    # Re-enable path conversion afterwards; the variable has to be unset,
    # since some Git/MSYS versions only check whether it is defined at all
    unset MSYS_NO_PATHCONV
}
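A variant I'd suggest (my own sketch, not from the post): scope the variable to the single invocation, so nothing leaks into the rest of the session even if the command fails:

```shell
# Prefixing a command with VAR=value sets that variable for the one
# command only; the shell's own environment is left untouched.
docker()
{
    MSYS_NO_PATHCONV=1 "$DOCKER_HOME/docker.exe" "$@"
}
```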

Hope this helps!


PHP composer and… Bash!

Sunday, December 20th, 2015

Bash bash bash!

It’s been a very long while since I’ve last played with PHP.
I’m not really willing to start a new career as a PHP integrator, but it’s still cool to see that the language and the tooling around it have evolved quite a lot.

Atwood‘s law states that any application that can be written in JavaScript will eventually be written in JavaScript. One could also say that any language will ultimately get its own package manager (hello npm, NuGet, Maven, …).

So here I am, needing multiple PHP libraries and willing to try a PHP package manager :).

Apparently, composer is the coolest kid around in PHP-land. As you know I still like BASH … on Windows, so here’s a quick guide to get PHP and composer available in your Windows bash universe.

First, you need to download the PHP binaries for Windows; you can get those here (always prefer the x64 version).
Once you have the archive, unzip it wherever you wish; then, in that folder, make a copy of “php.ini-development” and call it php.ini. That’s the configuration file that PHP will load each time it runs on the command line.

Edit php.ini and uncomment the following lines (for starters):

  • extension_dir = "ext"
  • extension=php_openssl.dll

With the above, you’ll have SSL support and PHP will know where to find its extensions.
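The copy-and-uncomment step can also be scripted from the same bash shell. The setup_php_ini helper below is my own sketch (not from the post), and it assumes the directives appear in the template exactly as shown above, prefixed with ‘;’ — check your copy of php.ini-development first:

```shell
# setup_php_ini <php-folder>: create php.ini from the development template
# and uncomment the two directives we need.
setup_php_ini()
{
    cd "$1" || return 1
    cp php.ini-development php.ini
    # The template ships with these lines commented out via a ';' prefix
    sed -i 's/^;extension_dir = "ext"/extension_dir = "ext"/' php.ini
    sed -i 's/^;extension=php_openssl.dll/extension=php_openssl.dll/' php.ini
}
```

For example: `setup_php_ini "$DEV_SOFT_HOME/php-7.0.1-Win32-VC14-x64"`.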

Now, create a folder in which you’ll place PHP extensions. In my case, I’ve created a “php_plugins” folder and placed it right next to the folder containing the PHP binaries (I like to keep things clean).

Next, open up your bash profile and add something along these lines:

alias php7='export PHP_HOME=$DEV_SOFT_HOME/php-7.0.1-Win32-VC14-x64;append_to_path ${PHP_HOME}; export PHP_PLUGINS_HOME=$DEV_SOFT_HOME/php_plugins;'
alias php='php.exe'
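The append_to_path helper used above isn’t shown in the post; a minimal version (my sketch, not the author’s actual function) that also avoids adding the same folder twice could look like this:

```shell
# Append a folder to PATH, but only if it isn't there already
# (prevents PATH from growing each time the alias runs).
append_to_path()
{
    case ":$PATH:" in
        *":$1:"*) ;;                      # already present: do nothing
        *) export PATH="$PATH:$1" ;;      # otherwise append it
    esac
}
```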

Make sure to call ‘php7’ at some point in your profile so that PHP is actually added to your path. Personally, I have a “defaults” alias listing everything I want loaded whenever my shell starts:

alias defaults='php7; ...'

# Initialization
defaults # Load default tools

Close and reopen your shell. At this point you should have php at your disposal anywhere you are (eeeewwwww scary :p).

Now you’re ready to get composer. Just run the following command to download it:

curl -sS https://getcomposer.org/installer | php

Once that is done, you should have a “composer.phar” file in the current folder; grab it and move it to your “php_plugins” folder.

Finally, edit your bash profile again and add the following alias:

alias composer='php $PHP_PLUGINS_HOME/composer.phar'

Close and reopen your shell. Tadaaaaa, you can type “composer” anywhere and get the job done.. :)


Use bash to open the Windows File Explorer at some location

Wednesday, August 26th, 2015

TL;DR: don’t bother clicking your way through the Windows File Explorer, use bash functions instead! :)

I’ve already blogged in quite some length about my current Windows dev environment and I’ve put enough emphasis on the fact that bash is at the center of my workflow, together with my bash profile & more recently with ConEMU.

I continually improve my bash profile as I discover new things I can do with it, and this post is in that vein.

I often find myself opening the Windows File Explorer (Win + e) to get to some location. For that purpose, I simply pin the most-used locations in the ‘Quick access’ list, although that means going the ‘click-click-click-click’ route and, as we know, one can be much more efficient using only the keyboard.

To quickly open the File Explorer at locations I often need to open (e.g., my downloads folder, my movies folder & whatnot), I’ve created the following utility function & aliases:

# Aliases to open the Windows File Explorer at the current location
alias explore='explorer .' # open file explorer here
alias e='explore'
alias E='explore'

# Open File Explorer at the given location
# The location can be a path or UNC (with / rather than \)
# Examples
# openFileExplorerAt //192.168.0.1/downloads
# openFileExplorerAt /c/downloads
# openFileExplorerAt c:/downloads
openFileExplorerAt(){
    # Quote the argument so paths with spaces work,
    # and silence pushd/popd's directory-stack output
    pushd "$1" > /dev/null
    explore
    popd > /dev/null
}

The ‘explore’ alias simply opens the Windows File Explorer at the current shell location, while the ‘openFileExplorerAt’ function goes to the path given as argument, opens the File Explorer, then returns to the previous shell location.

With the above, I’m able to define functions such as the one below that opens my downloads folder directly:

downloads(){
	openFileExplorerAt //nas.tnt.local/downloads
}
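If you accumulate many of these one-folder functions, a small generator saves the boilerplate. The defineOpener helper below is hypothetical (my sketch), building on the openFileExplorerAt function above:

```shell
# defineOpener <name> <path>: defines a function <name> that opens
# the Windows File Explorer at <path> via openFileExplorerAt.
defineOpener()
{
    eval "$1(){ openFileExplorerAt '$2'; }"
}

# One line per shortcut instead of a full function body each time:
defineOpener downloads //nas.tnt.local/downloads
defineOpener movies //nas.tnt.local/movies
```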

And since I’m THAT lazy, I just alias that to ‘dl’ ^^.

That’s it! :)


ConEmu is my new console replacement

Friday, August 7th, 2015

TL;DR: ConEmu is the BEST console for Windows power users!

Update 2015-08-24:

A recent update to ConEmu has added support for a feature I requested last month: the ability to automatically restore the ConEmu console on the currently active screen (i.e., where the mouse is located). This makes ConEmu even more awesome! :D


 

In a previous post about my Windows dev environment configuration, I’ve explained that I was using AutoHotKey in combination with Console2 to get a quake-like console on Windows. Since then, I discovered ConEmu… and I ain’t going back!

I’ve recently switched from Console2 to ConEmu and, because of this change, I no longer need AutoHotKey to show/hide the console, since ConEmu’s show/hide can be bound to a global hotkey (i.e., I get the same behavior). That said, I still use AutoHotkey to start ConEmu when pressing ‘²’, in case it isn’t running already.

ConEmu has a gazillion features, one of which being the holy grail for me: an actual Quake-like console with animated dropdown and support for image backgrounds :D. It’s not my goal to describe all it can do but do yourself a favor, just try it out.

Basically, the rest of my configuration is as explained in my earlier post, apart from the fact that I now use ConEmu rather than Console2. In the end, I’m still using Bash :)

Here’s a link to my ConEmu configuration file.

ConEmu configuration highlights:

  • Main
    • Font
      • Consolas
      • 16
      • Clear Type
  • Main > Size & Pos
    • Full screen
    • Centered (not important actually)
    • Long console output: 9999
    • Restore to active monitor (MUST HAVE if you use my configuration; see my update of 2015-08-24 above)
  • Main > Appearance
    • Always on top
    • Auto scrollbars (hidden after a small delay)
    • Quake style slide down
    • Auto-hide on focus lose
  • Main > Background
    • custom background image (dark.jpg)
  • Main > Tab bar
    • Always show
    • Font
      • Consolas
      • 14
    • Console (text)
      • <%c> %d
      • console id
      • current working directory
  • Main > Confirm
    • No confirmation for new consoles/tabs
    • No confirmation for tab closing
  • Main > Update
    • automatic check on startup
    • Release type: latest
  • Startup
    • {Bash::Git bash} (can’t live without my Bash shell :p)
  • Startup > Tasks
    • {Bash::Git bash}
      • set as default task for new console
      • set as default shell
  • Features
    • Sleep in background
    • Log console output (great!)
  • Features > Text cursor
    • Active console
      • Block
      • Color
      • Blinking
  • Features > Colors
    • Scheme: Solarized Git (I’d love to have a Seti_UI one here)
    • Fade when inactive
  • Features > Transparency
    • Active window transparency: ~90%
  • Features > Status bar
    • Shown
    • Font
      • Consolas
      • 14
    • Selected columns
      • Console title
      • Synchronize cur dir (not sure what this one does)
      • Caps Lock state
      • Num Lock state
      • Active console buffer
      • System time
  • Keys & Macro
    • ²: Minimize/Restore (Quake-style hotkey also)
    • F1: Create new console or new window
  • Keys & Macro > Controls
    • Send mouse events to console
    • Skip click on activation
    • Skip in background
    • Install keyboard hooks
  • Keys & Macro > Mark/Copy
    • Detect line ends
    • Bash margin
    • Trim trailing spaces
    • EOL: CR+LF
    • Text selection: Left Shift
    • Copy on Left Button release
    • Block (rectangular) selection: Left Alt
    • Copying format: Copy plain text only
  • Keys & Macro > Paste
    • All lines: Confirm
    • First line: Confirm pasting more than 200 chars

Here’s the new version of my AutoHotKey script. Now it:

  • starts ConEmu if not running already
  • lets the ‘²’ key press pass through if ConEmu is running (so as to let ConEmu show/hide the console window)
; ConEmu script (start it if it ain't running)
; ConEmu class: VirtualConsoleClass (reference: https://github.com/koppor/autohotkey-scripts/blob/master/ConEmu.ahk)
; Change your hotkey here
;SC029 == ²
SC029::

DetectHiddenWindows, on
IfWinNotExist, ahk_class VirtualConsoleClass
{
	Run "C:/CloudStation/Programs/tools/ConEmu/ConEmu64.exe"
	WinWait ahk_class VirtualConsoleClass
}
else{
	; let the key pass through if ConEmu is active
	; reference: http://www.autohotkey.com/board/topic/2121-hotkey-pass-through/
	Suspend, On
	Send,{SC029}
	Suspend, Off
	return
}
DetectHiddenWindows, off
return

Bonus: here’s the link to the background images that I use (I don’t claim any rights on these ^^).


My development environment on Windows

Thursday, July 30th, 2015

TL;DR Use Bash on Windows like me and you’ll be in heaven too, with penguins and ice creams :)

Update 2015-08-07: I now use ConEmu rather than Console2; apart from this, my configuration is still as described below


In this post I’ll describe my current Windows configuration and development environment. I’ve already covered how I’ve customized my Windows 10 install, but here I’m going to explain what I do to have an efficient workflow and a ‘portable’ configuration.

For those wondering, yes I use Windows as my main operating system (don’t throw the tomatoes just yet). As I’ve said in the past, I do prefer Linux, but I also enjoy gaming and dual boot is just not for me anymore. Moreover having one OS across all the machines I use at home and at work (apart from tablets) is useful.

Before going in the nitty gritty details, here’s a brief overview of my setup:

  • CloudStation (Synology NAS application): synchronizes files between my desktop, NAS, laptop & tablets; you can substitute this with Dropbox or whatever else you like
  • Git (i.e., msysGit) & git bash: because I love git and MSYS
  • Console2: a great Windows console enhancement (supports multiple tabs, different shells, different fonts, easy text selection, shortcuts, …)
  • AutoHotkey: create macros & scripts. I use it to show/hide my bash console with the ‘²’ key
  • bash profile: if you know *nix, you know this but I’ll cover the basics below
  • a ton of portable apps (or non-portable ones adapted)
  • GitHub for Windows: great Git client

Here are a few examples of things I can do with my setup (on all my Windows machines):

  • hit ‘²’ and start typing commands
  • use common *nix commands such as ls, cat, less, sed, …
  • type ‘e’ and have the File Explorer opened in the current folder
  • type ‘s’ and have Sublime Text opened
  • type ‘s cool’ and have Sublime Text opened with the file ‘cool’ opened in it
  • type ‘npp’ and have Notepad++ opened
  • type ‘n’ and have my notes opened
  • type ‘m’ and have my GMail mailbox opened
  • type ‘g cool’ and have a new browser tab open with the Google search results for ‘cool’
  • type ‘imdb shawshank redemption’ and see the IMDb info about the best movie of all times
  • type ‘wiki’ and have my wiki opened
  • type ‘f’ and get facebook opened
  • type ‘nlfr echt waar’ and have google translate opened with the translation of ‘echt waar’ from dutch to french
  • same with frnl fren …
  • type ‘img mario’ and see pictures of Mario all over
  • type ‘mkcd test’ and have the test folder created and cd into it
  • type ‘ws’ and have WebStorm started
  • type ‘idea’ and have IntelliJ started
  • type ‘write 001’ and have my 001 project opened in Scrivener
  • type ..3 and be 3 levels higher in the file system tree
  • type ‘p’ and have my bash profile opened for edition in Sublime Text
  • type ‘mindmap’ and have my Mindmap opened in FreeMind
  • well you get the idea … :)
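Most of the shortcuts above are tiny functions or aliases. As an illustration (my sketch, not the exact contents of the profile), here’s what ‘mkcd’ and the ‘..3’-style jumps can look like:

```shell
# mkcd: create a folder (parents included) and cd into it
mkcd()
{
    mkdir -p "$1" && cd "$1"
}

# ..2 / ..3: climb two or three levels up the directory tree
alias ..2='cd ../..'
alias ..3='cd ../../..'
```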

The goal of my setup is to strictly limit the number of steps to get my development environment up and running (e.g., after I get a new device or need to reinstall one) and to synchronize my configuration(s) between all devices I work on.

At the heart of my configuration, there is CloudStation, the synchronization app provided by Synology NASes (best NAS devices you can find on the market). I use CloudStation to synchronize the following between my devices:

  • Configuration files (I’ll cover these later)
  • Programs
  • Books I’m currently reading (or plan to read soon)
  • Comic books (only thing that I synchronize w/ my tablet)
  • Pictures (e.g., wallpapers & pictures of my face — if I need to upload one somewhere)
  • Podcasts
  • Book drafts (stuff I’m writing from time to time)
  • My notes.txt file

The most important parts are the config files and programs because that’s the core of my setup.

My CloudStation folder is organized as follows:

  • _FOR_HOME: stuff to bring back home
  • _FOR_WORK: stuff to bring to work
  • _NOW: stuff I’m currently busy with
  • Books
    • Reading
    • Later
  • Configuration
    • Bash: contains my bash profile
    • Dev: contains the configuration for all my dev tools
      • Eclipse: my Eclipse preference files (.epf), code style rules, etc
      • IntelliJ: my portable IntelliJ config (config & plugins)
      • WebStorm: my portable WebStorm config (config & plugins)
      • Git
      • Templates: project templates
    • Home: my *NIX home folder (.gitconfig, .npmrc, .ssh folder, etc are in there)
    • Scrivener
    • XBMC: my portable XBMC config (worth another post in itself)
  • Electronics: my current electronics projects
  • Guitar: that thing with strings that I learn when I find free time (i.e., not often enough)
  • Lightroom: my LR catalog (worth another post in itself)
  • Music: things that I listen again and again
  • Pictures
  • Podcasts
  • Programs
    • dev: JDK, maven, docker-machine, groovy, intellij, mongodb, nodejs, npm, python, eclipse, svn, webstorm, …
    • electronics: arduino IDE, atanua, circuit, fritzing, …
    • emulation: zsnes, project64, …
    • games: minecraft and other small games ;-)
    • readers: e-book readers & comic book readers (e.g., ComicRack)
    • seb: my own tools
    • tools: a huge ton of (portable) apps
    • writing: apps like Scrivener, WriteMonkey, …

To give you an idea of the tools I have in CloudStation, here’s a part of what I use:

  • SublimeText: my current preferred text editor (no it’s not VI, I’m more of a nano guy)
  • Notepad++: my previous preferred text editor
  • SysInternals suite: greatest Windows toolkit ever
  • 7-zip: it does it all
  • KeePass: one passphrase to rule them all
  • KiTTY: portable PuTTY replacement
  • ADExplorer
  • AntMovieCatalog (again worth another post)
  • AutoHotKey (more on it below)
  • borderless window tool: useful for games that don’t have a fullscreen windowed mode
  • calibre: manage my e-books
  • Console2: awesome Windows console replacement (more on it below)
  • desktops: obsolete with Windows 10 :)
  • ffmpeg: holy grail (or so I thought)
  • exiftool: dump exif
  • ext2explore: let me see EXT partitions
  • fat32format: format stuff
  • folder2iso: sudo make me an iso
  • freemind: can’t live without mindmaps
  • guiformat
  • HexChat: coz IRC is still there in 2015 (yeah I’m tired looking for the links ^^)
  • HFSExplorer
  • HxD
  • ImageMagick: do me some magic with images
  • JDownloader: download all the things
  • jude: sometimes helpful for quick UML drawings
  • libmp3lame: encoding stuffz
  • mplayer: who can live decently without mplayer around?
  • MySQL Workbench
  • netcat: netcat for Windows, weee
  • PortQry: check UDP ports
  • Privoxy: local proxy
  • ProxyGet: dumps info about the currently configured proxy (useful in locked-down environments)
  • PS3Splitter: split large files
  • restoration: restore deleted files (family helper)
  • SolEol: download subtitles easily
  • SQLite
  • SteamMover: move steam folders around
  • SubtitleEdit: fix me thy subtitles
  • twt: CLI for Twitter
  • USBDiskEjector
  • uTorrent
  • wakeMeOnLan: wake up LAN devices
  • wget
  • win32diskimager: create img files
  • WinDirStat: where’s my free space gone??!
  • WinMerge: diff me up
  • WinSplitRevolution: can’t live without this to re-arrange/resize windows around
  • winscp
  • YNAB: yes you do need a budget

Okay, all of that currently sums up to about 30GB, so the initial sync takes quite a while (there’s a huge number of very small files to transfer), but once synchronized, you get an idea of everything I have available at my fingertips.

There are apps that I do actually install on my OS for two reasons: either the app is way too large to be copied around or it integrates too deeply with the operating system. Here are some applications that aren’t in my CloudStation folder:

  • Lightroom
  • Photoshop
  • VLC: much easier to install & have the file type associations
  • Winamp: same idea
  • Git
  • Spotify
  • Google Chrome
  • Steam
  • Battle.net
  • Daemon Tools
  • CrashPlan client
  • Dropbox
  • GitHub client
  • VirtualBox

Okay, so far you have an idea of the stuff I carry around with me, but you don’t know yet how I use it all. Let’s assume for a moment that my PC goes up in flames and that I need to set up a brand new one.

Here’s what I need to do in order to get back up and running (to the point of being able to work):

  • install the OS (haha)
  • install drivers (hoho)
  • install CloudStation and let it sync all my files
  • install msysGit
  • create a .profile (bash profile) file in my home folder with the following contents
    • source /c/CloudStation/Configuration/Bash/bashProfile.txt
    • this loads my actual bash profile which is part of my CloudStation synchronized files
  • add AutoHotkey to the startup list (i.e., put a shortcut under ‘shell:Startup’)
    • C:\CloudStation\Programs\tools\AutoHotkey111502_x64\AutoHotkey.exe
  • copy my AutoHotkey script to the Documents folder
  • at this point I can already hit ‘²’ and my console opens up, with all my tools available
  • install the few other apps I like to have
  • done!
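The .profile from the list above is a one-liner; here it is, with a guard I’d add (my addition, not in the original) so the shell still starts cleanly on a machine where CloudStation hasn’t finished syncing yet:

```shell
# ~/.profile — the real profile lives in the synced CloudStation folder
if [ -f /c/CloudStation/Configuration/Bash/bashProfile.txt ]; then
    source /c/CloudStation/Configuration/Bash/bashProfile.txt
fi
```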

Okay so let’s see how my console is set up.

So when the OS boots, it now starts AutoHotkey. I’ve written a small script that opens up Console2 when I hit ‘²’ and hides it when I hit ‘²’ again, a bit like Quake’s console or Yakuake under Linux (although I don’t have the nice animation ;-)).

Here’s the script

; QuakeConsole, used in combination with Console2
; Change your hotkey here
;SC029 == ²
SC029::

DetectHiddenWindows, on
IfWinExist ahk_class Console_2_Main
{
	IfWinActive ahk_class Console_2_Main
	{
		WinHide ahk_class Console_2_Main
		WinActivate ahk_class Shell_TrayWnd
	}
	else
	{
		; put the console at top left
		WinMove, 0, 0
		; show the console
		WinShow ahk_class Console_2_Main
		WinActivate ahk_class Console_2_Main
	}
}
else
	Run "C:/CloudStation/Programs/tools/Console-2.00b148-Beta_64bit/Console2/Console.exe"
DetectHiddenWindows, off
return

The script is quite simple: if Console2 is running, pressing the configured key shows or hides the window (putting it at the top left of the screen when shown); if not, it starts the program.

As you can see Console2 is also in my CloudStation folder, just like AutoHotkey is.

So now, here are the relevant parts of my Console2 configuration (the console.xml file placed in the Console2 program folder in my CloudStation tools):

(The console.xml snippet was lost during extraction; the only surviving fragment is the window settings: caption="1" resizable="1" taskbar_button="1" border="1" inside_border="5" tray_icon="1".)

With this Console2 configuration:

  • I automatically get in a Git Bash shell in my workspace (i.e., where I have my project folders)
  • I can hit Ctrl + F1 to open a new tab, also with the same Git Bash shell
  • I can use ctrl + c / ctrl + v to copy/paste
  • I can use the middle mouse click to paste
  • I can use shift + click to select & copy text

Okay, so far you can see how I use my bash shell on Windows: I just hit ‘²’ and I can enter commands. That’s no harder than hitting the Windows key to open the start menu and searching for stuff, except that in my shell I’ve got all my commands & aliases available.

The final (and most important) part of my configuration is my bash profile. It’s where I configure my environment, define functions and aliases, configure programs in my path, etc. 

Given how long it is, I’ve created a Gist of it here (note that it’s just a subset of my whole config): https://gist.github.com/dsebastien/47d24a5d6c1b8005f434

I don’t claim any rights on the functions in my bash profile, as it is mostly based on stuff I’ve gathered over time from various sources. Then again, there can’t be many people crazy enough to do this kind of thing on Windows ;-)

In the file, you’ll see that the basic principles are quite simple & straightforward: folders added to the path, functions and aliases with shorter names, etc.

I have tons of ideas to improve my bash profile but I just don’t have time now. There are just tons of worth-looking examples all over the place if you’re interested. And if you have ideas/links, just post them in the comments!

What I like with my current configuration is that it is portable in the sense that if I add a new tool or change the configuration of an existing one, all my machines are directly updated.

Of course it is far from perfect, most apps aren’t up to date and the more I add into my system, the harder it gets to update stuff. Package managers are indeed the solution and *NIX has had them since the dawn of ages, but there’s hope on Windows too.

In the future, Windows package managers like OneGet (now part of W10) and Chocolatey should simplify things, but I don’t feel like it’s usable for my goals right now (correct me if I’m wrong).

The most evident and easiest solution would simply be to use Linux, but for as long as I’ll be playing games from time to time, I won’t go back. 

 


Windows 10 configuration tips

Thursday, July 30th, 2015

Update 2015-08-26:

I’ve posted a new article with some additional configuration steps/tweaks.

Update 2015-08-05:

Removed some additional tracking services & bloatware using: 

I’ve also removed OneDrive from autorun, removed the app, etc. Thanks Microsoft, but no, I’m not interested, and if I ever am, I’ll let you know. It’s not because I’m using Windows that I want all the software you’ve ever produced. Offer me an opt-in if you want, but don’t force additional products on me!

Tip:

If you want a list of the other currently installed apps, just use Get-AppxPackage -User <username>. If one of them bothers you, you can then invoke Remove-AppxPackage <package name>.

In the previous post, I’ve mentioned that almost all of my applications and settings were kept during the upgrade from Windows 8.1 to Windows 10. Almost all, but not all.

And anyway, each time I switch to a new OS release, I can’t help but spend some time going through all the options and policy settings just to configure it the way I like.

With Windows 10, it’s the very first time that I’m done in less than two hours, which is nice :)

Now let me list all the things that I’ve done after upgrading, in no specific order:

  • Activate Windows (first things first right? :p)
  • Installed the latest NVidia drivers (these didn’t survive the upgrade)
  • Put the resolution back to 1920*1080
  • Configured the File Explorer to show “This PC” rather than “Quick Access”, because I don’t care about frequent folders & recent files. I know where I need to go and how my files are organized
  • Reinstalled Virtualbox as I’ve noticed that it crashed when started
  • fired up gpedit.msc (which you will only have with the Professional & above editions..)
    • disabled thumbs.db files generation: because I can’t stand trying to move/delete things to discover that the damn thumbnails file prevents me from doing what I want…
      • User > Administrative Templates > Windows Components > File Explorer > Turn off the caching of thumbnails in hidden thumbs.db files
    • disabled things that send data to Microsoft: Sorry MSFT, but I never like having my machine send data around (just a general principle that I stick by)
      • Computer > Windows Components > Windows Error Reporting > Disable Windows Error Reporting
      • Computer > Windows Components > Windows Error Reporting > Do not send additional data
      • Computer > Windows Components > Data Collection and Preview Builds > Allow Telemetry
    • made sure that the shutdown button on the logon screen was disabled: If you have young children you’ll understand why
      • Computer > Windows Settings > Local Policies > Security Options > Shutdown: Allow system to be shut down without having to log on
    • enabled always sending Do Not Track (DNT) header: because if there are still non-evil people on the Web, I need them to know that I somehow value privacy
      • Computer > Windows Components > Internet Explorer > Internet Control Panel > Advanced Page > Always send Do Not Track header
    • disabled Windows SmartScreen: because I don’t need Microsoft to tell me what is safe and what isn’t
      • Computer > Administrative Templates > Windows Components > File Explorer > Configure Windows SmartScreen
    • enabled confirmation for file deletion: because I can’t trust myself that much ;-)
      • Recycle Bin > Properties > Display delete confirmation dialog
    • disabled documents history: who cares about history (don’t repeat that to my son ^^)
      • User > Administrative Templates > Start Menu and Taskbar
        • Clear history of recently opened documents on exit
        • Do not keep history of recently opened documents
    • disabled searching for files/documents/internet in start menu: because I care about apps when I use the start menu, nothing else (personal choice indeed)
      • User > Administrative Templates > Start Menu and Taskbar
        • Do not search communications
        • Do not search for files
        • Do not search Internet
    • forced listing desktop apps first (rather than metro apps..)
      • User > Administrative Templates > Start Menu and Taskbar 
        • List desktop apps first in the Apps view
    • disabled MS Edge app usage tracking: I love MS Edge but I just don’t like tracking
      • User > Administrative Templates > Windows Components > Edge UI
        • Turn off tracking of app usage
    • customized the File Explorer
      • User > Administrative Templates > Windows Components > File Explorer
        • Remove the Search the Internet “Search again” link
        • Start File Explorer with ribbon minimized
        • Turn off display of recent search entries in the File Explorer search box
        • Turn off caching of thumbnail pictures
  • forced numlock at boot (logon screen also!): this setting was apparently lost during the upgrade
    • run “regedit”
    • go to \HKEY_USERS\.DEFAULT\Control Panel\Keyboard
    • change value “InitialKeyboardIndicators” from “2147483648” to “80000002”
    • restart, and Num Lock will always be on at Windows startup
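The same tweak, written as a .reg file you can double-click instead of editing the registry by hand (my transcription of the steps above):

```
Windows Registry Editor Version 5.00

[HKEY_USERS\.DEFAULT\Control Panel\Keyboard]
"InitialKeyboardIndicators"="80000002"
```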

After this I already felt a bit more at ease, although that was only the first part.

The next part was to go through all the Settings and trying out the new features..

  • created a new virtual desktop: Hey MSFT, great that you’ve finally added virtual desktops but why so late? :)
  • fixed the default apps: this is one of the things I disliked. MSFT, you’ve managed to keep so many things and just decided to replace my default apps by all of yours? That really sucks!
    • Switched default browser back to Google Chrome
    • Switched default music player back to Winamp (because it really… :p)
    • Switched default video player back to VLC
  • modified the folders that appear by default in the Start Menu
    • File Explorer
    • Settings
    • Downloads
    • Personal Folder
  • modified privacy settings
    • Settings > Privacy
      • General
        • Send Microsoft info about how I write…
          • OFF
      • Location
        • Disabled
      • Removed various rights from apps…
      • Feedback & diagnostics
        • Windows should ask for my feedback: Never
        • Send your device data to Microsoft: Basic
        • Background apps: Remove
  • removed default Windows 10 apps: MSFT I get why it is all there, but I just couldn’t care less
    • Finance
    • News
    • MSN Food & Drinks
    • Health & Fitness
    • Travel
    • Get Skype
    • Get Office
    • Get Bored
    • Get Whatever :o
  • Windows Store
    • signed in with my Windows Live account: ONLY for apps
  • Cortana & Search settings
    • disabled web search results

Done!

I’ll probably edit this post over time to reflect config changes, but for now I think it’s already in a pretty good shape :)

PS: For those wondering, no, I’m not hardening my Windows box in any specific way. I just have a local firewall (Net Limiter) set to ask me to Allow/Deny whenever there are inbound/outbound connections (when I don’t already have rules covering those), so as long as apps cannot bypass that firewall, I know what tries to go in/out and I’m in control. That, combined with the antivirus, is all I need. I wouldn’t configure a Windows box just like that at work, but at home it’s more than enough :)


Upgrading from Windows 8 to Windows 10

Thursday, July 30th, 2015

TL;DR: Huge kudos to Microsoft for making the upgrade from W7 & 8 to Windows 10 a breeze!

This post is a summary of my experience upgrading from Windows 8.1 to Windows 10; I’m not going to talk about the new features as there are already a huge amount of articles about that.. :)

Yesterday, the binaries for Windows 10 became available on MSDN, so I wanted to finally give W10 a try. I’ve never been keen on installing technical previews on my main machine, and I just don’t have time to test that kind of thing anymore.

So first things first, I downloaded the ISO & claimed my key. Once downloaded, I mounted the ISO and let the magic happen.

One HUGE step forward with the Windows 10 installer is that it is now able to perform the upgrade while keeping most applications and settings.

In my case, although I have a “pretty complicated” configuration, I was back up and running directly after the upgrade, which is just awesome :)

Here’s what makes it surprising for me:

  • all of my applications are still there, intact (i.e., still configured just as I’ve left them)
  • my registry settings were kept (for the most part)
  • my services are still there after the upgrade (I’ve got a local MySQL instance, a Confluence wiki and a bunch of other stuff)
  • all my drivers are still there
  • my custom Firewall (Net Limiter 2) is still there after the upgrade (impressive given how deeply it must be integrated with the OS: filter drivers et al.)
  • Daemon tools is still installed and my virtual devices are still there
  • (most of) my startup applications are still in the autorun list
  • my Windows defender settings & folder exclusions were still there
  • my custom power plan was still there & active
  • my favorites in File Explorer were still there (ok that’s no magic but hey ^^)
  • my desktop icons are still there
  • my regional settings & other are still there

I think that the gap between XP & 7 was MUCH bigger, and given how “close” W10 is to W8, I can’t say that any of the above is really surprising, but it’s still very nice.

Hopefully the next upgrade from W10 to W.Next will not even require a reboot anymore.. ;-)

In a follow up post I’ll describe the things that I’ve configured after the upgrade.

Huge kudos to Microsoft for making the upgrade from W7/8 to Windows 10 a breeze!


A bit more Windows Docker bash-fu

Wednesday, April 22nd, 2015

Feeling bashy enough yet? :)

In my last post, I’ve given you a few useful functions for making your life with Docker easier on Windows. In this post, I’ll give you some more, but before that let’s look a bit at what docker-machine does for us.

When you invoke docker-machine to provision a Docker engine using Virtualbox, it “simply” creates a new VM… Okay, though pretty basic, that explanation is valid ^^.

What? Not enough for you? Okay okay, let’s dive a bit deeper =)

Besides the VM, behind the scenes, docker-machine generates multiple things for us:

  • a set of self-signed certificates: used to create a server certificate for the Docker engine in the VM and a client certificate for the Docker client (also used by docker-machine to interact with the engine in the VM)
  • an SSH key-pair (based on RSA): authorized by the SSH daemon and used to authenticate against the VM

Docker-machine uses those to configure the SSH daemon as well as the Docker engine in the VM and stores these locally on your computer. If you run the following command (where docker-local is the name of the VM you’ve created), you’ll see where those files are stored:

command: eval "$(docker-machine env docker-local)"

export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="C:\\Users\username\.docker\\machine\\machines\\docker-local"
export DOCKER_HOST=tcp://192.168.99.108:2376

As you can see above, the files related to my “docker-local” VM are all placed under c:\Users\username\.docker\machine\machines\docker-local. Note that DOCKER_TLS_VERIFY is enabled (which is nice). Also note that the DOCKER_HOST (i.e., engine) IP is the VM’s (we’ll come back to this later on). Finally, the DOCKER_HOST port is 2376, which is Docker’s default TLS port.
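As a side note, if you ever need just the VM’s IP (or port) in a script, plain bash parameter expansion can pull it out of DOCKER_HOST; a little sketch of mine, using the example value shown above:

```shell
# Extract host and port from a DOCKER_HOST value like tcp://192.168.99.108:2376
DOCKER_HOST="tcp://192.168.99.108:2376"
hostport=${DOCKER_HOST#tcp://}   # strip the scheme -> 192.168.99.108:2376
vm_ip=${hostport%:*}             # strip the port   -> 192.168.99.108
vm_port=${hostport##*:}          # keep only the port -> 2376
echo "$vm_ip $vm_port"
```

No external tools needed, which is nice in an msysGit world where you can’t always count on everything being installed.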

Using docker-machine you can actually override just about any setting (including the location where the files are stored).

If you take a look at that location, you’ll see that docker-machine actually stores many interesting things in there:

  • a docker-local folder containing the VM metadata and log files
  • boot2docker.iso: the ISO used as basis for the VM (which you can update easily using docker-machine)
  • the CA, server and client certificates (ca.pem, cert.pem, server.pem, …)
  • config.json: more about this below
  • disk.vmdk: the VM’s disk (useful to back up if you care about it (you shouldn’t :p))
  • the SSH key-pair that you can use to authenticate against the VM (id_rsa, id_rsa.pub)

As noted above, there’s also a ‘config.json’ file, which contains everything docker-machine needs to know about that Docker engine:

{
	"DriverName" : "virtualbox",
	"Driver" : {
		"CPU" : -1,
		"MachineName" : "docker-local",
		"SSHUser" : "docker",
		"SSHPort" : 51648,
		"Memory" : 1024,
		"DiskSize" : 20000,
		"Boot2DockerURL" : "",
		"CaCertPath" : "C:\\Users\\...\\.docker\\machine\\certs\\ca.pem",
		"PrivateKeyPath" : "C:\\Users\\...\\.docker\\machine\\certs\\ca-key.pem",
		"SwarmMaster" : false,
		"SwarmHost" : "tcp://0.0.0.0:3376",
		"SwarmDiscovery" : ""
	},
	"StorePath" : "C:\\Users\\...\\.docker\\machine\\machines\\docker-local",
	"HostOptions" : {
		"Driver" : "",
		"Memory" : 0,
		"Disk" : 0,
		"EngineOptions" : {
			"Dns" : null,
			"GraphDir" : "",
			"Ipv6" : false,
			"Labels" : null,
			"LogLevel" : "",
			"StorageDriver" : "",
			"SelinuxEnabled" : false,
			"TlsCaCert" : "",
			"TlsCert" : "",
			"TlsKey" : "",
			"TlsVerify" : false,
			"RegistryMirror" : null
		},
		"SwarmOptions" : {
			"IsSwarm" : false,
			"Address" : "",
			"Discovery" : "",
			"Master" : false,
			"Host" : "tcp://0.0.0.0:3376",
			"Strategy" : "",
			"Heartbeat" : 0,
			"Overcommit" : 0,
			"TlsCaCert" : "",
			"TlsCert" : "",
			"TlsKey" : "",
			"TlsVerify" : false
		},
		"AuthOptions" : {
			"StorePath" : "C:\\Users\\...\\.docker\\machine\\machines\\docker-local",
			"CaCertPath" : "C:\\Users\\...\\.docker\\machine\\certs\\ca.pem",
			"CaCertRemotePath" : "",
			"ServerCertPath" : "C:\\Users\\...\\.docker\\machine\\certs\\server.pem",
			"ServerKeyPath" : "C:\\Users\\...\\.docker\\machine\\certs\\server-key.pem",
			"ClientKeyPath" : "C:\\Users\\...\\.docker\\machine\\certs\\key.pem",
			"ServerCertRemotePath" : "",
			"ServerKeyRemotePath" : "",
			"PrivateKeyPath" : "C:\\Users\\...\\.docker\\machine\\certs\\ca-key.pem",
			"ClientCertPath" : "C:\\Users\\...\\.docker\\machine\\certs\\cert.pem"
		}
	},
	"SwarmHost" : "",
	"SwarmMaster" : false,
	"SwarmDiscovery" : "",
	"CaCertPath" : "",
	"PrivateKeyPath" : "",
	"ServerCertPath" : "",
	"ServerKeyPath" : "",
	"ClientCertPath" : "",
	"ClientKeyPath" : ""
}

One thing that I want to mention about that file, since I’m only drawing the picture of the current Windows integration of Docker, is the SSHPort. You can see that it’s ‘51648’. That port is the HOST port (i.e., the port I can use from Windows to connect to the SSH server of the Docker VM).

How does this work? Well unfortunately there’s no voodoo magic at work here.

The thing with Docker on Windows is that the Docker engine runs in a VM, which makes things a bit more complicated since the onion has one more layer: Windows > VM > Docker Engine > Containers. Accessing ports exposed to the outside world when running a container will not be as straightforward as it would be when running Docker natively on a Linux box.

When docker-machine provisions the VM, it creates two network interfaces on it: a first one in NAT mode to communicate with the outside world (i.e., the one we’re interested in) and a second one in host-only mode (which we won’t really care about here).

On the first interface, which I’ll further refer to as the “public” interface, docker-machine configures a single port redirection for SSH (port 51648 on the host towards port 22 on the guest). This port forwarding rule is what allows docker-machine to SSH into the VM and configure it (I assume that the port is fixed, though it might be selected randomly at creation time; I didn’t check this).
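The format of such a VirtualBox NAT rule is worth knowing since we’ll generate more of them below: it’s a simple comma-separated string of name, protocol, host IP, host port, guest IP and guest port. Here’s a tiny helper of mine (a sketch; the hard-coded 127.0.0.1 and the rule names are my assumptions) that builds one:

```shell
# Build a VBoxManage natpf1 rule string:
# <name>,<proto>,<host ip>,<host port>,<guest ip>,<guest port>
# (an empty guest IP means "the VM's own address")
make_natpf1_rule() {
    local name=$1 host_port=$2 guest_port=$3
    echo "$name,tcp,127.0.0.1,$host_port,,$guest_port"
}

make_natpf1_rule ssh 51648 22   # the kind of rule docker-machine creates for SSH
```

You can compare the output against what ‘vboxmanage showvminfo’ reports for the VM to see the rule docker-machine actually installed.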

So all is nice and dandy: docker-machine provisions and configures many things for you, and now that Microsoft has landed a Docker CLI for Windows, we can get up and running very quickly, interacting with the Docker engine in the VM through the Docker API, over TLS and using certificates for authentication. That’s a mouthful and it’s really NICE.. but.

Yeah indeed there’s always a but :(

Let’s say that you want to start a container hosting a simple Web server serving your pimped AngularJS+Polymer+CSS3+HTML5+whatever-cool-and-trendy-today application. Once started, you probably want to be able to access it in some way (let’s say using your browser or curl if you’re too cool).

Given our example, we can safely assume that the container will EXPOSE port 80 or the like to other containers (e.g., set in the Dockerfile). When you start that container, you’ll want to map that container port to a host port, let’s say.. 8080.

Okay curl http://localhost:8080 … 1..2..3, errr nothing :(

As you might have guessed by now, the annoying thing is that when you start a container in your Docker VM, the host that you’re mapping container ports to… is your VM.

I know it took a while for me to get there but hey, it might not be THAT obvious to everyone right? :)
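In the meantime, the quick way out is to target the VM’s IP instead of localhost. A trivial helper illustrates the idea (my sketch; 192.168.99.108 is just the example IP from earlier, in practice you’d feed it the output of ‘docker-machine ip’):

```shell
# Build the URL to reach a container port published on the Docker VM
docker_vm_url() {
    local vm_ip=$1 port=$2
    echo "http://$vm_ip:$port"
}

# e.g. curl "$(docker_vm_url 192.168.99.108 8080)" instead of curl http://localhost:8080
docker_vm_url 192.168.99.108 8080
```

That works, but it doesn’t expose the port to the rest of your machine’s world (other devices on your LAN, tools that insist on localhost, …), which is where port redirections come in.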

I’ve mentioned earlier that docker-machine configures a port forwarding rule on the VM after creating it (for SSH, remember?). Can’t we do the same for other ports? Well, you totally can using Virtualbox’s CLI, but doing so will make you realize that the current Windows integration of Docker is “nice” but clearly not all that great.

As stated, we’re going the BASH way. You can indeed achieve the same using your preferred language, whether it is Perl, Python, PowerShell or whatever.

So the first thing we’ll need to do is to make the VirtualBox CLI easily available in our little Bash world:

append_to_path /c/Program\ Files/Oracle/VirtualBox
alias virtualbox='VirtualBox.exe &'
alias vbox='virtualbox'
alias vboxmanage='VBoxManage.exe'
alias vboxmng='vboxmanage'

You’ll find the description of the append_to_path function in the previous post.

Next, we’ll add three interesting functions based on VirtualBox’s CLI; one to check whether the Docker VM is running or not and two other ones to easily add/remove a port redirection to our Docker VM:

is-docker-vm-running()
{
	echo "Checking if the local Docker VM ($DOCKER_LOCAL_VM_NAME) is running"
	vmStatusCheckResult=$(vboxmanage list runningvms)
	#echo $vmStatusCheckResult
	if [[ $vmStatusCheckResult == *"$DOCKER_LOCAL_VM_NAME"* ]]
	then
		echo "The local Docker VM is running!"
		return 0
	else
		echo "The local Docker VM is not running (or does not exist or runs using another account)"
		return 1
	fi
}


# redirect a port from the host to the local Docker VM
# call: docker-add-port-redirection rule_name host_port guest_port
docker-add-port-redirection()
{
	echo "Preparing to add a port redirection to the Docker VM"
	is-docker-vm-running
	if [ $? -eq 0 ]; then
		# vm is running: modifyvm would fail on a locked VM, use controlvm
		vboxmanage controlvm $DOCKER_LOCAL_VM_NAME natpf1 "$1,tcp,127.0.0.1,$2,,$3"
	else
		# vm is not running
		vboxmanage modifyvm $DOCKER_LOCAL_VM_NAME --natpf1 "$1,tcp,127.0.0.1,$2,,$3"
	fi
	echo "Port redirection added to the Docker VM"
}
alias dapr='docker-add-port-redirection'


# remove a port redirection by name
# call: docker-remove-port-redirection rule_name
docker-remove-port-redirection()
{
	echo "Preparing to remove a port redirection from the Docker VM"
	is-docker-vm-running
	if [ $? -eq 0 ]; then
		# vm is running: modifyvm would fail on a locked VM, use controlvm
		vboxmanage controlvm $DOCKER_LOCAL_VM_NAME natpf1 delete "$1"
	else
		# vm is not running
		vboxmanage modifyvm $DOCKER_LOCAL_VM_NAME --natpf1 delete "$1"
	fi
	echo "Port redirection removed from the Docker VM"
}
alias drpr='docker-remove-port-redirection'


docker-list-port-redirections()
{
	vboxmanage showvminfo $DOCKER_LOCAL_VM_NAME | grep -E 'NIC 1 Rule'
}
alias dlrr='docker-list-port-redirections'
alias dlpr='docker-list-port-redirections'

Note that these functions will work whether the Docker VM is running or not. Since I’m an optimist, I don’t check beforehand whether the VM actually exists, nor whether the commands succeeded (i.e., use at your own risk). One caveat: these functions will not work if you started the Docker VM manually through Virtualbox’s GUI (because it keeps a lock on the configuration). They handle tcp port redirections, but adapting the code for udp is a no-brainer.

The last function (docker-list-port-redirections) will allow you to quickly list the port redirections that you’ve already configured. You can do the same through Virtualbox’s UI but that’s only interesting if you like moving the mouse around and clicking on buttons, real ITers don’t do that no more (or do they? :p).

With these functions you can also easily create port redirections for port ranges using a simple loop:

for i in {49152..65534}; do
    dapr "rule$i" $i $i
done

Though I would recommend against that; you should rather add a few useful port redirections, such as ports 8080, 80 and the like. These can only ‘bother’ you while the Docker VM is running and you’re trying to use the redirected ports.

Another option would be to switch the “public” interface from NAT mode to bridged mode, though I’m not too fond of making my local Docker VM a first-class citizen of my LAN.

Okay, two more functions and I’m done for today :)

Port redirections are nice because they’ll allow you to expose your Docker containers to the outside world (i.e., not only your machine). Although there are situations where you might not want that. In that case, it’s useful to just connect directly to the local Docker VM.

docker-get-local-vm-ip(){
	export DOCKER_LOCAL_VM_IP=$(docker-machine ip $DOCKER_LOCAL_VM_NAME)
	echo "Docker local VM ($DOCKER_LOCAL_VM_NAME) IP: $DOCKER_LOCAL_VM_IP"
}
alias dockerip='docker-get-local-vm-ip'
alias dip='docker-get-local-vm-ip'

docker-open(){
	docker-get-local-vm-ip
	( explorer "http://$DOCKER_LOCAL_VM_IP:$*" )&	
}
alias dop='docker-open'

The ‘docker-get-local-vm-ip’ function, or ‘dip’ for close friends, uses docker-machine to retrieve the IP it knows for the Docker VM. Its best friend, ‘docker-open’ or ‘dop’, will simply open a browser window (your default one) towards that IP using the port specified as argument; for example, ‘docker-open 8080’ quickly gets you to your local Docker VM on port 8080.

With these functions, we can also improve the ‘docker-config-client’ function from my previous post to handle the case where the VM isn’t running:

docker-config-client()
{
	echo "Configuring the Docker client to point towards the local Docker VM ($DOCKER_LOCAL_VM_NAME)..."
	is-docker-vm-running
	if [ $? -eq 0 ]; then
		eval "$(docker-machine env $DOCKER_LOCAL_VM_NAME)"
		if [ $? -eq 0 ]; then
			docker-get-local-vm-ip
			echo "Docker client configured successfully! (IP: $DOCKER_LOCAL_VM_IP)"
		else
			echo "Failed to configure the Docker client!"
			return;
		fi
	else
		echo "The Docker client can't be configured because the local Docker VM isn't running. Please run 'docker-start' first."
	fi
}
alias dockerconfig='docker-config-client'
alias configdocker='docker-config-client'

Well that’s it for today. Hope this helps ;-)


A bit of Windows Docker bash-fu

Monday, April 20th, 2015

In my last post I’ve mentioned that Microsoft has helped Docker deliver a native Docker client for Windows (yay!).

I’ve also promised to share the little bits that I’ve added to my Windows bash profile to make my life easier. As I’ve said, I’m a huge fan of MSYS and msysGit and I use my Git Bash shell all day long, so here comes a bit of Windows bash-fu.

For those wondering: I prefer Linux and would use it as my main OS (I did so in the past) if I didn’t also like gaming. I can’t stand fiddling with config files to get my games running (hey Wine), and I can’t stand losing n FPS just to stay on the free side. Finally, I’m not too fond of putting multiple OSes on my main machine just for the sake of being able to play. The least painful solution for me is simply to use Windows and remain almost sane by using Bash.

One thing to note is that my bash profile, as well as all the tools that I use, is synchronized between my computers so that I have a consistent environment everywhere; I’m done raging because I’m on the train and some tool I’ve installed on my desktop isn’t available on my laptop. I’ll describe that setup.. another day :)

So first things first, I’ve installed Docker v1.6.0 on my machine without adding it to the path or creating any shortcuts (since I’m not going to use that install at all); you can get it from https://github.com/boot2docker/windows-installer/releases/latest.

Once installed, I’ve copied the Docker client (docker.exe) to the folder I use to store my shared tools (in this case c:\CloudStation\programs\dev\docker). I keep the docker-machine executable in the same folder (downloaded from the docker-machine releases page).

append_to_path(){ # dumb prepend to PATH
    PATH=$1":"$PATH
}
...
# Docker
export DOCKER_HOME=$DEV_SOFT_HOME/Docker
append_to_path $DOCKER_HOME

alias docker='docker.exe'

alias docker-machine='docker-machine.exe'
alias dockermachine='docker-machine'
alias dm='docker-machine'

export DOCKER_LOCAL_VM_NAME='docker-local'

In the snippet above I simply ensure that the docker client is on my path and that I can invoke it simply using ‘docker’. Same for docker-machine, along with a nice shortcut ‘dm’.
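By the way, the dumb append_to_path above happily prepends the same directory every time the profile is re-sourced. A duplicate-guarded variant (a sketch of mine, not what my profile currently uses) keeps the PATH from growing endlessly:

```shell
# Prepend a directory to PATH only if it's not already on it
append_to_path() {
    case ":$PATH:" in
        *":$1:"*) ;;             # already present: do nothing
        *) PATH="$1:$PATH" ;;
    esac
}

# quick demo with a fake PATH (restored afterwards)
OLD_PATH=$PATH
PATH="/usr/bin"
append_to_path /opt/tools
append_to_path /opt/tools        # second call is a no-op
DEMO_PATH=$PATH
PATH=$OLD_PATH
echo "$DEMO_PATH"                # /opt/tools:/usr/bin
```

The ‘case’ trick with the colon padding avoids false matches on partial directory names (e.g., /opt/tool vs /opt/tools).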

Note that I also set a name for the local Docker VM that I want to manage; you’ll see below why that’s useful.

docker-config-client()
{
	echo "Configuring the Docker client to point towards the local Docker VM ($DOCKER_LOCAL_VM_NAME)..."
	eval "$(docker-machine env $DOCKER_LOCAL_VM_NAME)"
	if [ $? -eq 0 ]; then
		echo "Docker client configured successfully!"
	else
		echo "Failed to configure the Docker client!"
		return;
	fi
}
alias dockerconfig='docker-config-client'
alias configdocker='docker-config-client'

The ‘docker-config-client’ function allows me to easily configure my Docker client to point towards my local Docker VM. I’ve added some aliases because I’ve got a pretty bad memory :)

This function assumes that the local Docker VM already exists and is up and running. That is not always the case, hence the additional functions below.

docker-check-local-vm() # check docker-machine status and clean up if necessary
{
	echo "Verifying the status of the local Docker VM ($DOCKER_LOCAL_VM_NAME)"
	dmCheckResult=$(docker-machine ls)
	#echo $dmCheckResult
	if [[ $dmCheckResult == *"error getting state for host $DOCKER_LOCAL_VM_NAME: machine does not exist"* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) is known by docker-machine but does not exist anymore."
		echo "Cleaning docker-machine."
		dmCleanupResult=$(docker-machine rm $DOCKER_LOCAL_VM_NAME)
		
		if [[ $dmCleanupResult == *"successfully removed"* ]]
		then
			echo "docker-machine cleanup successful! Run 'docker-init' to create the local Docker VM."
		fi
		return
	fi
	echo "No problem with the local Docker VM ($DOCKER_LOCAL_VM_NAME) and docker-machine. If the machine does not exist yet you can create it using 'docker-init'"
}
alias dockercheck='docker-check-local-vm'
alias checkdocker='docker-check-local-vm'

The ‘docker-check-local-vm’ function simply lists the Docker engines known by docker-machine in order to see if there’s a problem with the local Docker VM. Such a problem can occur when docker-machine knows about a given Docker engine that you’ve deleted behind its back (e.g., if you remove the Virtualbox VM and then invoke ‘docker-machine ls’, you’ll get that error).

docker-start()
{
	echo "Trying to start the local Docker VM ($DOCKER_LOCAL_VM_NAME)"
	dmStartResult=$(docker-machine start $DOCKER_LOCAL_VM_NAME)
	#echo $dmStartResult
	if [[ $dmStartResult == *"machine does not exist"* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) does not seem to exist."
		docker-check-local-vm
		return
	fi
	
	if [[ $dmStartResult == *"VM not in restartable state"* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) is probably already running."
		docker-config-client
		return
	fi
	
	if [[ $dmStartResult == *"Waiting for VM to start..."* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) was successfully started!"
		docker-config-client
		return
	fi
	
	if [[ $dmStartResult == *"Host does not exist"* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) does not exist. Run 'docker-init' first!"
		return
	fi
}
alias dockerstart='docker-start'
alias startdocker='docker-start'

The ‘docker-start’ function above tries to start my local Docker VM. It first assumes that the machine does exist (because I’m an optimist after all).

Since the docker-machine executable doesn’t return useful exit codes, I have to resort to string matching; I know this sucks, but don’t forget we’re on Windows.. There’s probably a better way to handle this, but it’s enough for me for now.

If the VM does not exist, the docker-machine check function is called.

If the VM cannot be started, it might be that the machine is already running; in that case the docker client gets configured (same if the start succeeds).

If the VM clearly doesn’t exist then the function stops there and points towards ‘docker-init’ explained afterwards.

docker-stop()
{
	echo "Trying to stop the local Docker VM ($DOCKER_LOCAL_VM_NAME)"
	dmStopResult=$(docker-machine stop $DOCKER_LOCAL_VM_NAME)
	#echo $dmStopResult
	if [[ $dmStopResult == *"Host does not exist"* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) does not seem to exist."
		docker-check-local-vm
		return
	fi
	
	if [[ $dmStopResult == *"exit status 1"* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) is already stopped (or doesn't exist anymore)."
		docker-check-local-vm
		return
	fi
	
	echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) was stopped successfully."
}
alias dockerstop='docker-stop'
alias stopdocker='docker-stop'

The ‘docker-stop’ function stops the local Docker VM if it’s running (pretty obvious eh ^^). In case of error, the docker-machine check function is called (docker-check-local-vm).

docker-init()
{
	echo "Trying to create a local Docker VM called $DOCKER_LOCAL_VM_NAME"
	dmCreateResult=$(docker-machine create --driver virtualbox $DOCKER_LOCAL_VM_NAME)
	#echo $dmCreateResult
	
	if [[ $dmCreateResult == *"has been created and is now the active machine."* ]]
	then
		echo "Local Docker VM ($DOCKER_LOCAL_VM_NAME) created successfully!"
		docker-config-client
		return
	fi
	
	if [[ $dmCreateResult == *"already exists"* ]]
	then
		echo "The local Docker VM ($DOCKER_LOCAL_VM_NAME) already exists!"
		dockerstart
		return
	fi
}
alias dockerinit='docker-init'
alias initdocker='docker-init'

This last function, ‘docker-init’ helps me provision my local Docker VM and configure my Docker client to point towards it.

With these few commands, I’m able to quickly configure/start/use a local Docker VM in a way that works nicely on all my machines (remember that I share my bash profile & tools across all my computers).

Voilà! :)


Docker, Docker Machine, Windows and msysGit happy together

Sunday, April 19th, 2015

Hey there!

tl;dr The Docker client for Windows is here and now it’s the real deal. Thanks MSFT! :)

If you’re one of the poor souls that have to suffer with the Windows terminal (willingly or not) on a daily basis but always dream about Bash, then you’re probably a fan of MSYS just like me.

If you’re a developer too, then you’re probably a fan of msysGit.. just like me :)

Finally, if you follow the IT world trends then you must have heard of Docker already.. unless you’re living in some sort of cave (without Internet access). If you enjoy playing with the bleeding edge… just like me, then chances are that you’ve already given it a try.

If you’ve done so before this month and survived the experience, then kudos because the least I can say is that the Windows “integration” wasn’t all that great.

Since Docker leverages Linux kernel features so heavily, it should not come as a surprise that support on Windows requires a virtual machine to host the Docker engine. The only natural choice for that VM was of course Oracle’s Virtualbox given that Hyper-V is only available in Windows Server or Windows 8+ Pro/Enterprise.

Boot2Docker was nice, especially the VM, but the Boot2Docker client made me feel in jail (no pun intended). Who wants to start a specific shell just to be able to play with Docker? I understand where that came from, but my first reflex was to try and integrate Docker in my usual msysGit bash shell.

To do so, I had to jump through a few hoops and even though I did succeed, a hugely annoying limitation remained: I wasn’t able to easily run the docker command using files elsewhere than under /Users/…

At the time, the docker client was actually never executed on Windows, it was executed within the VM, through SSH (which is what the Boot2Docker client did). Because of that, docker could only interact with files reachable from the VM (i.e., made available via mount). All the mounting/sharing/SSH (and keys!) required quite a few workarounds.

At the end of the day it was still fun to workaround the quirks because I had to play with Virtualbox’s CLI (e.g., to configure port redirections), learn a bit more about Docker’s API, …

Well, fast forward to April 16th and there it comes: Microsoft has helped port the Docker client to Windows.

With this release, combined with Docker Machine which is also available for Windows, there will be a lot less suffering involved in using Docker on Windows :)

In the next post I’ll go through some of the functions/aliases I’ve added to my bash profile to make my life easier.