Windows 10 configuration tips

Thursday, July 30th, 2015

In the previous post, I mentioned that almost all of my applications and settings were kept during the upgrade from Windows 8.1 to Windows 10. Almost all, but not all.

And anyway, each time I switch to a new OS release, I can’t help but spend some time going through all the options and policy settings just to configure it the way I like.

With Windows 10, it’s the very first time that I’ve been done in less than two hours, which is nice :)

Now let me list all the things that I’ve done after upgrading, in no specific order:

  • Activated Windows (first things first, right? :p)
  • Installed the latest NVidia drivers (these didn’t survive the upgrade)
  • Put the resolution back to 1920*1080
  • Configured the File Explorer to show “This PC” rather than “Quick Access”, because I don’t care about frequent folders & recent files. I know where I need to go and how my files are organized
  • Reinstalled VirtualBox, as I noticed that it crashed when started
  • Fired up gpedit.msc (which you only get with the Professional edition and above)
    • disabled thumbs.db file generation: because I can’t stand trying to move/delete things only to discover that the damn thumbnails file prevents me from doing what I want…
      • User > Administrative Templates > Windows Components > File Explorer > Turn off the caching of thumbnails in hidden thumbs.db files
    • disabled things that send data to Microsoft: Sorry MSFT, but I never like having my machine send data around (just a general principle that I stick by)
      • Computer > Windows Components > Windows Error Reporting > Disable Windows Error Reporting
      • Computer > Windows Components > Windows Error Reporting > Do not send additional data
      • Computer > Windows Components > Data Collection and Preview Builds > Allow Telemetry
    • made sure that the shutdown button on the logon screen was disabled: If you have young children you’ll understand why
      • Computer > Windows Settings > Local Policies > Security Options > Shutdown: Allow system to be shut down without having to log on
    • enabled always sending Do Not Track (DNT) header: because if there are still non-evil people on the Web, I need them to know that I somehow value privacy
      • Computer > Windows Components > Internet Explorer > Internet Control Panel > Advanced Page > Always send Do Not Track header
    • disabled Windows SmartScreen: because I don’t need Microsoft to tell me what is safe and what isn’t
      • Computer > Administrative Templates > Windows Components > File Explorer > Configure Windows SmartScreen
    • enabled confirmation for file deletion: because I can’t trust myself that much ;-)
      • Recycle Bin > Properties > Display delete confirmation dialog
    • disabled documents history: who cares about history (don’t repeat that to my son ^^)
      • User > Administrative Templates > Start Menu and Taskbar
        • Clear history of recently opened documents on exit
        • Do not keep history of recently opened documents
    • disabled searching for files/documents/internet in start menu: because I care about apps when I use the start menu, nothing else (personal choice indeed)
      • User > Administrative Templates > Start Menu and Taskbar
        • Do not search communications
        • Do not search for files
        • Do not search Internet
    • forced listing desktop apps first (rather than metro apps..)
      • User > Administrative Templates > Start Menu and Taskbar 
        • List desktop apps first in the Apps view
    • disabled MS Edge app usage tracking: I love MS Edge but I just don’t like tracking
      • User > Administrative Templates > Windows Components > Edge UI
        • Turn off tracking of app usage
    • customized the File Explorer
      • User > Administrative Templates > Windows Components > File Explorer
        • Remove the Search the Internet “Search again” link
        • Start File Explorer with ribbon minimized
        • Turn off display of recent search entries in the File Explorer search box
        • Turn off caching of thumbnail pictures
  • forced numlock at boot (logon screen also!): this setting was apparently lost during the upgrade
    • run “regedit”
    • go to HKEY_USERS\.DEFAULT\Control Panel\Keyboard
    • change the value “InitialKeyboardIndicators” from “2147483648” to “80000002”
    • restart and Num Lock will always be on at Windows startup
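
For the command-line inclined, the same tweak can be applied from an elevated command prompt instead of clicking through regedit (just a sketch; it sets the exact same value as above):

reg add "HKU\.DEFAULT\Control Panel\Keyboard" /v InitialKeyboardIndicators /t REG_SZ /d 80000002 /f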

After this I already felt a bit more at ease, although that was only the first part.

The next part was to go through all the Settings and try out the new features..

  • created a new virtual desktop: Hey MSFT, great that you’ve finally added virtual desktops but why so late? :)
  • fixed the default apps: this is one of the things I disliked. MSFT, you’ve managed to keep so many things and just decided to replace my default apps with all of yours? That really sucks!
    • Switched default browser back to Google Chrome
    • Switched default music player back to Winamp (because it really… :p)
    • Switched default video player back to VLC
  • modified the folders that appear by default in the Start Menu
    • File Explorer
    • Settings
    • Downloads
    • Personal Folder
  • modified privacy settings
    • Settings > Privacy
      • General
        • Send Microsoft info about how I write…
          • OFF
      • Location
        • Disabled
      • Removed various rights from apps…
      • Feedback & diagnostics
        • Windows should ask for my feedback: Never
        • Send your device data to Microsoft: Basic
        • Background apps: Remove
  • removed default Windows 10 apps: MSFT I get why it is all there, but I just couldn’t care less
    • Finance
    • News
    • MSN Food & Drinks
    • Health & Fitness
    • Travel
    • Get Skype
    • Get Office
    • Get Bored
    • Get Whatever :o
  • Windows Store
    • signed in with my Windows Live account: ONLY for apps
  • Cortana & Search settings
    • disabled web search results

Done!

I’ll probably edit this post over time to reflect config changes, but for now I think it’s already in a pretty good shape :)

PS: For those wondering, no I’m not hardening my Windows box in any specific way, I just have a local firewall (Net Limiter) set to ask me to allow/Deny anytime there are inbound/outbound connections (when I don’t already have rules covering those), so as long as apps cannot bypass that Firewall, I know what tries to go in/out and I’m in control. That combined with the Antivirus is all I need. I wouldn’t configure a Windows box just like that at work, but at home that’s just more than enough :)

Upgrading from Windows 8 to Windows 10

Thursday, July 30th, 2015

TL;DR: Huge kudos to Microsoft for making the upgrade from W7 & 8 to Windows 10 a breeze!

This post is a summary of my experience upgrading from Windows 8.1 to Windows 10; I’m not going to talk about the new features as there are already plenty of articles about that.. :)

Yesterday, the binaries for Windows 10 became available on MSDN so I wanted to finally give W10 a try. I’ve never been keen on installing technical previews on my main machine, and I just don’t have time to test that kind of thing anymore.

So first things first, I downloaded the ISO & claimed my key. Once downloaded, I mounted the ISO and let the magic happen.

One HUGE step forward with the Windows 10 installer is that it is now able to perform the upgrade while keeping most applications and settings.

In my case, although I have a “pretty complicated” configuration, I was back up and running directly after the upgrade, which is just awesome :)

Here’s what makes it surprising for me:

  • all of my applications are still there, intact (i.e., still configured just as I’ve left them)
  • my registry settings were kept (for the most part)
  • my services are still there after the upgrade (I’ve got a local MySQL instance, a Confluence wiki and a bunch of other stuff)
  • all my drivers are still there
  • my custom firewall (Net Limiter 2) is still there after the upgrade (impressive given how deeply it must be integrated with the OS: filter drivers et al.)
  • Daemon tools is still installed and my virtual devices are still there
  • (most of) my startup applications are still in the autorun list
  • my Windows defender settings & folder exclusions were still there
  • my custom power plan was still there & active
  • my favorites in File Explorer were still there (ok that’s no magic but hey ^^)
  • my desktop icons are still there
  • my regional settings & others are still there

I think that the difference between XP & 7 was MUCH bigger, and given how “close” W10 is to W8, I can’t say that any of the above is really surprising, but it’s still very nice.

Hopefully the next upgrade from W10 to W.Next will not even require a reboot anymore.. ;-)

In a follow up post I’ll describe the things that I’ve configured after the upgrade.

Huge kudos to Microsoft for making the upgrade from W7/8 to Windows 10 a breeze!

My current global npm packages

Sunday, July 26th, 2015

If you’re familiar with nodejs & npm you already know this (just skip this part), but newcomers should realize that npm packages can not only be installed locally in a project’s folder, but also globally. Packages that are installed globally are.. globally accessible, which is really cool because using npm you can install many CLI tools to streamline your workflow and boost your productivity.
 
To install a package globally, you simply need to use the --global (-g) flag. For example:

npm install --global gulp

Note that you can customize where npm stores globally installed packages by creating a .npmrc file in your home folder and adding the following to it:

prefix="/path/to/your/global/npm/packages"

You then simply have to add that same path to your system’s PATH to get all the tools available at your fingertips.
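
If you prefer the command line, the same prefix can also be set and verified via npm itself, and you can list what’s currently installed globally (a quick sketch; the path is just an example):

npm config set prefix "/path/to/your/global/npm/packages"
npm config get prefix
npm ls --global --depth=0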
 
Here’s a small list of npm packages that I currently install globally:
  • gulp: Streaming build system for the web!
  • babel: JS transpiler. Because we all want ES2015/2016/20xy right now!
  • jspm: JavaScript package manager: one package manager to rule them all. Let’s just forget about npm vs bower vs git vs whatever, just use jspm and be done with it
  • typescript: Ahhh TypeScript, worth explaining in its own post because strongly typed JS is the future and the future is now
  • tsd: TypeScript type definitions downloader. Because TypeScript without type definitions isn’t very useful
  • browser-sync: Easy-to-use web server that will make your developer life easier: automatically refreshes/syncs across all connected devices
  • http-server: Minimalist web server (zero-configuration). Using this you can easily serve local content
  • sass: CSS preprocessor (also deserves its own post). Until I free up some time to learn more about PostCSS, I’ll continue to use this
  • node-sass: SASS without Ruby, weeee
  • yo: CLI to run Yeoman generators
  • slush: Another scaffolding CLI (based on Gulp)
  • caniuse-cmd: CLI to easily check browser compatibility of certain features using data from caniuse.com
  • reveal-md: Quickly generate a reveal.js presentation from markdown content
  • superstatic: Nice web server for SPAs
  • bower: Package manager for the web. I install it for older projects
  • grunt: Task runner for the web. Same as above
  • node-inspector: Blink-based debugger for NodeJS apps
  • node-debug: Wrapper for node-inspector

Quick NPM tip and a little rant about node-gyp

Wednesday, July 1st, 2015

Before I start explaining why I’m writing this, here’s my NPM tip of the day: if you encounter errors pertaining to node-gyp “rebuild” while trying to install an NPM package, then before wasting precious hours of your life, just try installing with the --no-optional flag; if you’re in luck, that’ll just work (as it did for me in most cases).
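
For example, with browserify (one of the packages mentioned further down for which the node-gyp dependency turned out to be optional):

npm install --no-optional browserify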

Now what the heck is node-gyp? That’s a fair question to ask. As they put it in their readme it’s a “cross-platform command-line tool written in Node.js for compiling native addon modules for Node.js … and takes away the pain of dealing with various differences in build platforms”.

Well, the way I now see it, it might just do what they say.. for people who need/care about that, but for the rest of the world, and especially people like me who just want to install an npm package and get on with their life.. it’s just trouble and a needless waste of time.

Sometimes when you try to install an NPM package, there will be some dependency in the tree that needs to be built specifically for your platform, and at that point node-gyp (which is one of the dependencies of NPM itself) might come into play. The issue is that to be able to do its job, node-gyp has some prereqs that vary from OS to OS, and those prereqs are not part of node/NPM (you’ll soon understand why :p). If you’re one of the good guys and use Linux (you should… and I should too but can’t) then you’ll be alright: python + make will make your day (you’ll also be fine with OSX).

Unfortunately, if you’re a sad panda working on a Windows box just like me, then tough luck!

Here’s a little overview of the ‘light/small’ requirements that node-gyp has on Windows 8+:

  • Python (2.7.x and NOT 3.x+): ok just with this one I already dislike node-gyp
  • Microsoft Visual Studio C++ 2013: ok, now I hate it. Do I really need 7GB just to get an npm dependency on my machine? Wtf (pre-compiled binaries FTW, if I wanted to compile everything myself on my machine, I’d still be using gentoo..)
  • and last but not least, for 64-bit builds… Windows SDK: are you kidding me?!!

Assuming that you’re motivated, you’ll go ahead and install these.. try again and… still get the same error?! Gee… Well the thing is that quite a few people have encountered this problem and have hopped through all kinds of hoops to finally get it to work. Some have had success by uninstalling all Visual C++ redistributable packages (any gamers around here?), reinstalling node-gyp’s dependencies in a specific order, adding environment variables and whatnot..

In my case I was pretty happy to discover that in all cases, the dependencies that needed node-gyp were optional (e.g., for babel, browserify and some others), so simply avoiding them was fine. If you really do need node-gyp to work then I pity you and your disk space ^^. Just take a look at some of these links and may the force be with you.

What also sucks is that npm install rolls back on error even for optional dependencies although it’s not supposed to..

HSTS enabled!

Friday, June 19th, 2015

Hey everyone!

As noted in my previous post, I’ve finally switched my domain to HTTPS. I was reluctant to enable HSTS (HTTP Strict Transport Security) at first but after looking at this talk, I’ve decided to just go with the flow and enable it on CloudFlare:

(screenshot: the HSTS setting enabled in the CloudFlare dashboard)

Basically it means that, as of right now, you’ll always use HTTPS when visiting my website, even if you try and visit the old HTTP URL. This will happen not only because my Apache server is configured to automatically redirect you to the HTTPS version, but also because your browser will automatically go to the HTTPS URL. Why will it do that? Because my site is now sending the HSTS HTTP header:

strict-transport-security:max-age=15552000; includeSubDomains; preload

Basically, that header tells your browser: this is an HTTPS-enabled website, always use HTTPS if you come back here. Please do this for this domain and all sub-domains for the next six months (the max-age of 15552000 seconds is 180 days).
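
In my case CloudFlare adds that header for me, but if you’d rather send it straight from Apache, a minimal .htaccess sketch (assuming mod_headers is available) would look like this; only do it once the whole site actually works over HTTPS:

# Send HSTS on every response
Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains; preload"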

For now, as my site isn’t in the browsers’ HSTS preload list yet (I’ve just submitted it), you may visit this site once more using plain HTTP, but as soon as your browser sees the HSTS HTTP header, it’ll remember to always switch to HTTPS.

Why does HSTS matter? Because it will protect YOU against man-in-the-middle attacks.. not that this Website is sensitive in any way, but as a good Web citizen I have to do what I can, right? ;-)

I was hesitant to enable this because I’ve just signed up with CloudFlare and if they decide to drop their free subscription plan then it means that I’ll be forced to find either another similar solution or buy a certificate that I can install on my web host; in my case OVH doesn’t allow importing third party certificates and they charge about 50€ per year for that (which is wayyyyyyyy too much for a personal website).

The bet that I’m making by enabling HSTS now is simply that the free subscription model of CloudFlare will remain available for at least 2-3 years (hopefully much longer) and that in the meantime, given how Mozilla, Google and other major players are pushing for HTTPS everywhere, the overall accessibility/affordability of HTTPS for personal websites will have improved. If I’m wrong, well then I’ll either pay if you show me enough love or shut this thing down ;-)

HTTPS everywhere

Thursday, June 18th, 2015

TL;DR CloudFlare is awesome, but don’t underestimate the effort required to fully switch your site to HTTPS

About time… That’s what I keep telling myself; my site won’t be considered insecure by default :)

I’ve finally switched this site to HTTPS and I must say that CloudFlare has made this extremely easy, straightforward and fast.

Now I’ll be able to have fun with Service Workers and other modern Web goodies that require HTTPS.

Here’s what I had to do in order to get the holy green padlock.

First I had to create a (FREE) account on CloudFlare. Once my account was created, I entered the domain that I wanted to add and CloudFlare went about detecting all the DNS zone entries it could. That took about a minute and the result was correct.

Next, I had to modify my domain’s name servers to replace the OVH ones with those of CloudFlare. It didn’t take long for the switch to actually take place. DNS replication ain’t the fastest of things.
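
If you want to check when the switch has actually propagated, a quick DNS query does the trick (assuming you have dig around; the domain is a placeholder):

dig NS example.com +short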

And bam done.. or almost.


As I like tweaking stuff, I had to check out all the features provided by CloudFlare, and the least I can say is that the feature list included in the free tier is just plain impressive!

Here’s what I’ve enabled:

  • SSL w/ SPDY: SSL between clients and CloudFlare as well as between CloudFlare and OVH (although the certificate presented by OVH isn’t trusted, it’s still better than nothing)
  • IP firewall: basic but nice given the price :p
  • Automatic minification of JS/CSS/HTML assets
  • Caching
  • Always online: awesome, they’ll continue to serve my static content even if the site goes down
  • A few other nice things

They also provide ways to purge their cached data and to enable a Dev mode that gives access to up-to-date resources, etc.

In the future, if I’m convinced that I can keep my site HTTPS-enabled for long, then I’ll also enable HSTS.

I might also give their Rocket Loader feature a try…


Enabling HTTPS for my site is only the first part of the story; there were other changes I needed to make in order to get the almighty green padlock (TM).

I first needed to make sure that my visitors (you guys) visited the site using HTTPS, so I’ve updated my .htaccess file accordingly:

...

RewriteEngine On

# 2015-06-18 - Automatic redirection to https now that CloudFlare is enabled
RewriteCond %{HTTPS} off
# rewrite to HTTPS
RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
# rewrite any request to the wrong domain to use www.
RewriteCond %{HTTP_HOST} !^www\.
RewriteRule .* https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

...

With this, I have an automatic http to https redirection. Of course that isn’t going to protect you from MITM attacks but I’m not ready to enable HSTS just yet.
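
A quick way to verify the redirection is to request the plain HTTP URL with curl (the domain below is just a placeholder); you should get a 301 response whose Location header points to the HTTPS URL:

curl -I http://www.example.com/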


Next I had to update my WordPress configuration to ensure that all generated links use HTTPS (WordPress address & Site address URL).

This fixed a few issues with mixed content but not all of them. I had to go through all of my template’s files to ensure that I was using HTTPS everywhere; for example, I had hardcoded the URL of my FeedBurner RSS feed.


I also noticed that I was still getting errors in the console about mixed content and indeed my site was retrieving some resources using plain HTTP from other domains.

In order to fix this, I had to:

  • use my very rusty SQL-fu to replace http with https everywhere it made sense in my posts (e.g., links to Google Photo images, links to my own site, etc); see the sketch right after this list
  • modify one of my WordPress extensions to retrieve its scripts from Google’s CDN using HTTPS
  • get rid of an extension that was using iframes, swf objects and displayed warnings if Flash was missing (oh god..) =)
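
For the SQL part, here’s roughly what such a replacement looks like against the standard wp_posts table (a sketch only: the database name, user and domain below are placeholders, and you obviously want a database backup before running anything like this):

mysql -u wp_user -p wordpress_db -e "UPDATE wp_posts SET post_content = REPLACE(post_content, 'http://www.example.com', 'https://www.example.com');"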

I also took the opportunity to configure CORS, also through my .htaccess:

...

RewriteEngine On
Header set Access-Control-Allow-Origin "*"
Header set Access-Control-Allow-Methods "GET,POST,OPTIONS,DELETE,PUT"
...

...

And now, just look at this beauty:

(screenshot: the green padlock in the address bar)

Sublime Text plugins that I use

Monday, June 1st, 2015

TL;DR: I’ve started using Sublime Text as my default text editor, it is indeed awesome and I’ve compiled a list of the plugins that I find useful.

For a very long time, my text editor of choice has remained Notepad++ (NPP for friends). NPP is still great (it’s free and open source, it has tabs, extensions, syntax highlighting and some other goodies), but it’s not AWESOME.

I’ve been hearing about Sublime Text for a while now but never really took the time to try it out seriously. Moreover, the last time I checked, I noticed that it wasn’t free, so I didn’t get any further. Although I do understand the reasons why some developers choose the lucrative path, I’m in general more inclined to use free and preferably open source software (why pay when you can get something just as good for free?).

So Sublime Text’s website URL was hidden somewhere in some dark corner of my bookmarks and was set to remain in there forever, until a Web designer at work gave me a quick demo which led me to reconsider it :)

The first word that now comes to my mind when thinking about Sublime Text is “polished”: the UI is really beautiful and that alone makes it very pleasing to use. Sublime has really neat text selection/editing features (e.g., column and multi-selection editing, auto-completion, …), support for many languages (syntax highlighting), uber fast search and navigation, tabs, macros, etc. I’m not going to list it all here as I’m pretty sure many people took the time to do so already.

But even though the out-of-the-box feature-list is quite nice, it is far from enough to make me consider it worthy of replacing NPP which I’m very used to. Really getting to know an editor takes time and I only have that much available.

What really made me change my mind is the ecosystem around Sublime. Over time, as the community has grown, many developers have spent time developing a ton of extensions, themes and color schemes for it. The package manager for Sublime is called Package Control and contains almost 3K packages, hence at least 100 are probably worth a try :)

Suffice it to say that, knowing this, I needed to go through the catalog and try out the most popular extensions. In doing so, I’ve realized that Sublime + extensions > NPP + extensions, which is why Sublime is now my default text editor. It’ll take me a few weeks/months to really take advantage of it, but I already enjoy it a lot.

I’m not going to explain here how to install the package manager or install packages; for that you should rather check out the following video.

Without further ado, here’s the list of extensions that I’m currently using along with a small description to give you an idea of why I consider each useful/relevant for productivity (assuming that you’re into software development that is ^^). I’ll create new posts whenever I discover new ones that are of interest.

General:

  • NPM: Easily interact with the Node Package Manager (NPM) from Sublime (e.g., install NPM packages, add project dependencies, …)
  • Gulp: Quickly execute Gulp commands directly from Sublime (I’ll talk about Gulp in a future post)
  • SublimeCodeIntel: Code intelligence and smart autocomplete engine. Supports many languages: JavaScript, Mason, XBL, XUL, RHTML, SCSS, Python, HTML, Ruby, Python3, XML, Sass, XSLT, Django, HTML5, Perl, CSS, Twig, Less, Smarty, Node.js, Tcl, TemplateToolkit, PHP (phew ^^)
  • BracketHighlighter: Highlight brackets in the gutter (bar left of the file contents); very useful to quickly see where any given code block ends
  • Git: Execute git commands directly from Sublime through easy-to-use contextual menus
  • Git Gutter: Show an icon in the gutter indicating whether a line has been inserted/modified or deleted (checked against HEAD by default)
  • SidebarGit: Add Git commands in the sidebar context menu
  • ApplySyntax: Detect file types and apply the correct syntax highlighting automatically
  • Alignment: Easily align multiple selections and multi-line selections
  • AutoFileName: Automatically complete filenames; very useful when referring to project files (e.g., src for an image tag, file name for a CSS import, …)
  • TrailingSpaces: Easily see/remove trailing whitespace (if you’re crazy like me about small details). Check out the options here
  • SublimeLinter: A plugin that provides a framework for linting code in Sublime. Basically this one is a pre-req for some neat plugins (see below). Check out the docs for more information
  • FileDiffs: Show diff between current file or selection(s) in the current file, and clipboard, another file or unsaved changes
  • SidebarEnhancements: Better sidebar context menu
  • ExpandTabsOnSave: Automatically convert tabs to spaces (or the other way around, depending on your indentation settings)
  • Open Folder: Add an ‘Open folder’ option to the sidebar context menu
  • Pretty JSON: Prettify JSON, validate JSON, etc
  • Indent XML: Fix XML and JSON files indentation
  • JSONLint: JSON linter; checks JSON files for errors and display them in context
  • EditorConfig: Useful to respect the editorconfig file (.editorconfig in the project) which defines a common configuration for text editors
  • Dockerfile Syntax Highlighting: Add syntax highlighting for Dockerfiles

Web development

  • Emmet: Add zen-coding support to Sublime. (e.g., write div*2>span.cool*5 then hit TAB). Emmet is awesome (note that plugins exist for various editors, not only Sublime). Emmet allows me to quickly generate a ton of HTML code without wasting time
  • TypeScript: Add syntax highlighting and autocompletion for TypeScript code
  • JSCS: Check JS code style using node-jscs. To be able to use this you first need to install NodeJS, NPM then JSCS (npm install -g jscs). Check this link out for the complete list of rules that you can configure. Here’s an example from my latest project
  • JSCS-Formatter: Format JS code based on the JS code style that you’ve configured for your project (i.e., through the .jscsrc file) which is pretty neat
  • SublimeLinter-jshint: JSHint linter for SublimeLinter. Shows you what’s wrong with your JS code (requires SublimeLinter)
  • SublimeLinter-csslint: CSS linter for SublimeLinter. Shows you what’s wrong with your CSS code (requires SublimeLinter)
  • SublimeLinter-annotations: Make TODOs, FIXMEs, etc. stand out (requires SublimeLinter)
  • Sass: Sass support for Sublime. Adds syntax highlighting and tab/code completion for Sass and SCSS files. It also has Zen Coding shortcuts for many CSS properties
  • SCSS snippets: Additional SCSS snippets (use tab for autocompletion)
  • CSS3: Add CSS3 support. This plugin includes draft specs and provides autocompletion for each and every CSS3 property. It also highlights bad/old CSS
  • Color Highlighter: Highlight hexadecimal color codes with their actual color. Here’s a small tip: in the plugin configuration (ColorHighlighter.sublime-settings), it’s possible to enable permanent color highlighting, which I find particularly convenient: { "ha_style": "filled" }
  • Color Picker: What the name says ;-)
  • Autoprefixer: Add CSS vendor prefixes. This plugin is useful for small prototypes but is otherwise better done through a build process (e.g., using Gulp)
  • HTML5: Snippets bundle for HTML5. Useful to add HTML5 tags/attributes (e.g., type <time then hit TAB)
  • JavaScript Snippets: JavaScript snippets: useful to quickly write JS code
  • AngularJS: AngularJS code completion, code navigation, snippets
  • jQuery: jQuery syntax highlighting and autocompletion (snippets)
  • DocBlockr: Add support for easily writing API docs

Visual candies

  • Seti_UI: Awesome theme with custom icons for file types
  • Schemr: Color scheme selector. Makes it easy to switch color schemes
  • Themr: UI theme selector. Makes it easy to switch themes
  • Dayle Rees colour schemes: A ton of color schemes (.. that I’ll probably never use now that I have Seti_UI :p)

As I’ve explained in previous posts, I’m now busy with the creation of a new version of this website using more modern technologies.

With my current set of Sublime Text plugins, I now almost have a full-featured Web-development-oriented IDE at my disposal. For my current/specific development needs, JetBrains’ WebStorm (a commercial IDE) is actually a better alternative (it supports much of what the plugins above bring and has its own plugin repository), but it’s overkill to use it as my all-around text editor, and my wife probably won’t appreciate the $50/year license cost (even though it’s very reasonable) :)

For casual text editing, quick prototyping etc, Sublime Text wins hands down given how fast it starts and how responsive it is overall.

Note that there is another interesting editor called Atom. Atom has been developed by GitHub and is free and open source. Its engine is based on Web technologies (I assume WebKit, Chromium or the like), which is great for hackability, and it is gaining a lot of momentum (it already has >2K plugins). I think that it’s still a bit young so I’ll check back in a year or two.. but don’t take my word for it. Try it out and don’t hesitate to tell me if you think it’s actually better than Sublime (and why) =)

Recovering a raid array in “[E]” state on a Synology nas

Tuesday, May 19th, 2015

WARNING: If you encounter a similar issue, try to contact Synology first, they are ultra responsive and solved my issue in less than a business day (although I’m no enterprise customer). Commands that Synology provided me and that I mention below can wipe away all your data, so you’ve been warned :)

TL;DR: If you have a RAID array in [E] (DiskError) state (a Synology-specific error state), then the only option seems to be to re-create the array and run a file system check/repair afterwards (assuming that your disks are fine to begin with).

Recently I learned that Synology introduced Docker support in their 5.2 firmware (yay!), but unfortunately for me, just when I was about to try it out, I noticed an ugly ORANGE LED on my NAS where I always like to see GREEN ones..

The NAS didn’t respond at all so I had no choice but to power it off. I first tried gently, but that didn’t help, so I had to do it the hard way. Once restarted, another disk had an ORANGE LED and at that point I understood that I was in for a bit of command-line fun :(

The Web interface was pretty clear: my Volume2 was Crashed (that didn’t look like good news :o) and couldn’t be repaired (through the UI, that is).

After fiddling around for a while through SSH, I discovered that my NAS created RAID 1 arrays for me (with one disk in each), which I wasn’t aware of; I actually never wanted to use RAID in my NAS!

I guess it makes sense for beginner users as it allows them to easily expand capacity/availability without having to know anything about RAID, but in my case I wasn’t concerned about availability, and since RAID is no backup solution (hope you know why!), I didn’t want that at all; I have proper backups (on & off-site).

Well in any case I did have a crashed RAID 1 single disk array so I had to deal with it anyway.. :)

Here’s the output of some commands I ran which helped me better understand what was going on.

The /var/log/messages showed that something was wrong with the filesystem:

May 17 14:59:26 SynoTnT kernel: [   49.817690] EXT4-fs warning (device dm-4): ext4_clear_journal_err:4877: Filesystem error recorded from previous mount: IO failure
May 17 14:59:26 SynoTnT kernel: [   49.829467] EXT4-fs warning (device dm-4): ext4_clear_journal_err:4878: Marking fs in need of filesystem check.
May 17 14:59:26 SynoTnT kernel: [   49.860638] EXT4-fs (dm-4): warning: mounting fs with errors, running e2fsck is recommended
...

Running e2fsck at that point didn’t help.

A check of the disk arrays gave me more information:

> cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [
md2 : active raid1 sda3[0]
      3902296256 blocks super 1.2 [1/1] [U]

md6 : active raid1 sdc3[0]
      3902296256 blocks super 1.2 [1/1] [U]

md5 : active raid1 sdf3[0]
      3902296256 blocks super 1.2 [1/1] [U]

md3 : active raid1 sde3[0](E)
      3902296256 blocks super 1.2 [1/1] [E]

md7 : active raid1 sdg3[0]
      3902296256 blocks super 1.2 [1/1] [U]

md4 : active raid1 sdb3[0]
      1948792256 blocks super 1.2 [1/1] [U]

md1 : active raid1 sda2[0] sdb2[2] sdc2[4] sde2[1] sdf2[3] sdg2[5]
      2097088 blocks [8/6] [UUUUUU__]

md0 : active raid1 sda1[0] sdb1[2] sdc1[4] sde1[1] sdf1[3] sdg1[5]
      2490176 blocks [8/6] [UUUUUU__]

unused devices: <none>

As you can see above, the md3 array was active but in a weird [E] state. After Googling a bit I discovered that the [E] state is specific to Synology, as that guy explains here. Synology doesn’t provide any documentation around this marker; they only state in their documentation that we should contact them if a volume is Crashed.

Continuing, I took a detailed look at the md3 array and the ‘partition’ attached to it, which looked okay; so purely from a classic RAID array point of view, everything was alright!

> mdadm --detail /dev/md3
/dev/md3:
        Version : 1.2
  Creation Time : Fri Jul  5 14:59:33 2013
     Raid Level : raid1
     Array Size : 3902296256 (3721.52 GiB 3995.95 GB)
  Used Dev Size : 3902296256 (3721.52 GiB 3995.95 GB)
   Raid Devices : 1
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Sun May 17 18:21:27 2015
          State : clean
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : SynoTnT:3  (local to host SynoTnT)
           UUID : 2143565c:345a0478:e33ac874:445e6e7b
         Events : 22

    Number   Major   Minor   RaidDevice State
       0       8       67        0      active sync   /dev/sde3


> mdadm --examine /dev/sde3
/dev/sde3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 2143565c:345a0478:e33ac874:445e6e7b
           Name : SynoTnT:3  (local to host SynoTnT)
  Creation Time : Fri Jul  5 14:59:33 2013
     Raid Level : raid1
   Raid Devices : 1

 Avail Dev Size : 7804592833 (3721.52 GiB 3995.95 GB)
     Array Size : 7804592512 (3721.52 GiB 3995.95 GB)
  Used Dev Size : 7804592512 (3721.52 GiB 3995.95 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : a2e64ee9:f4030905:52794fc2:0532688f

    Update Time : Sun May 17 18:46:55 2015
       Checksum : a05f59a0 - correct
         Events : 22


   Device Role : Active device 0
   Array State : A ('A' == active, '.' == missing)		

See above, all clean!

So at this point I realized that I only had a few options:

  • hope that Synology would help me fix it
  • try and fix it myself using arcane mdadm commands to recreate the array
  • get a spare disk and copy my data to it before formatting the disk, re-creating the shares and putting the data back (booooringgggggg)

To be on the safe side, I saved a copy of the output for each command so that I had at least the initial state of the array. To be honest at this point I didn’t dare go further as I didn’t know what re-creating the raid array could do to my data if I did something wrong (which I probably would have :p).

Fortunately for me, my NAS is still supported and Synology fixed the issue for me (they connected remotely through SSH). I insisted on getting the commands they used, and here’s what they gave me:

> mdadm -Cf /dev/md3 -e1.2 -n1 -l1 /dev/sde3 -u2143565c:345a0478:e33ac874:445e6e7b
> e2fsck -pvf -C0 /dev/md3

As you can see above, they’ve used mdadm to re-create the array, specifying the same options as those used to initially create it:

  • force creation: -Cf
  • the 1.2 RAID metadata (superblock) style: -e1.2
  • the number of devices (1): -n1
  • the RAID level (1): -l1
  • the device id: /dev/sde3
  • the UUID of the array to create (the same as the one that existed before!): -u2143565c….

The second command simply runs a forced file system check (-f) that automatically repairs any errors (-p), with verbose output (-v) and a progress indicator (-C0).
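
If you want to double-check the result, the same commands used earlier should now show the array in a healthy state again (i.e., [U] instead of [E] in /proc/mdstat):

cat /proc/mdstat
mdadm --detail /dev/md3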

And tadaaaa, problem solved. Thanks Synology! :)

As a sidenote, here are some useful commands:

# Stop all NAS services except from SSH
> syno_poweroff_task -d

# Unmount a volume
> umount /volume2

# Get detailed information about a given volume
> udevadm info --query=all --name=/dev/mapper/vol2-origin
P: /devices/virtual/block/dm-4
N: dm-4
E: DEVNAME=/dev/dm-4
E: DEVPATH=/devices/virtual/block/dm-4
E: DEVTYPE=disk
E: ID_FS_LABEL=1.42.6-3211
E: ID_FS_LABEL_ENC=1.42.6-3211
E: ID_FS_TYPE=ext4
E: ID_FS_USAGE=filesystem
E: ID_FS_UUID=19ff9f2b-2811-4941-914b-ef8ea3699d33
E: ID_FS_UUID_ENC=19ff9f2b-2811-4941-914b-ef8ea3699d33
E: ID_FS_VERSION=1.0
E: MAJOR=253
E: MINOR=4
E: SUBSYSTEM=block
E: SYNO_DEV_DISKPORTTYPE=UNKNOWN
E: SYNO_KERNEL_VERSION=3.10
E: SYNO_PLATFORM=cedarview
E: USEC_INITIALIZED=395934

That’s it for today, time to play with Docker on my Synology NAS!

Portrait

Tuesday, May 12th, 2015

2015-04-05 - 16h16 - 033 - Claudine.jpg

Let there be light

Tuesday, May 12th, 2015

2015-05-05 - 19h36 - 012.jpg