Author Archive

Modern Web Development – Part Two

Wednesday, February 17th, 2016

In the first part of this series, I explained how I re-discovered the state of the Web platform and, specifically, of the JavaScript universe.

Around June, I changed my mind about AngularJS and thought that Angular 2 could arrive on time for our project (hint: it didn’t), so I decided to tag the current state of my personal project and tried to actually start developing my site using it.

I spent some time during my holidays migrating my app to Angular 2. During that time, I banged my head against the wall so many times it still hurts; not because of Angular 2, but because of RxJS and Reactive Programming; those made me feel really stupid for a while :)

I also spent time improving my build. The build story was starting to itch me real bad, so at some point I put my project aside and decided to extract the build to a separate project and concentrate on that for a while. That effort led to the creation of modernWebDevBuild (MWD for friends). MWD was my take on providing a reusable build for creating modern web applications. You could argue that that solution is not modern anymore but hey, I can’t stop time ;-)

If you look at the feature list of modernWebDevBuild, you’ll see that it’s basically Web Starter Kit on steroids with support for TypeScript, tslint, karma, etc.

I’ve put some effort into making it flexible enough so that it doesn’t put too many constraints on the client project structure, and I’m pretty sure that, with some help from the community, it could become much more malleable and could be reused across many more projects, independently of whether those are based on Angular 1, Angular 2 or something else.

A while later, I also created a Yeoman generator called modernWebDevGenerator to make it easy to scaffold new projects using modernWebDevBuild. The generated projects include many personal choices (e.g., Angular 2, TypeScript, SystemJS, JSPM, sass, jshint and a rule set, jscs and a rule set, …) and style guidelines (e.g., a component approach for Angular and SASS code), but most if not all can be stripped away easily.

In my opinion, modernWebDevBuild was a good shot at providing a reusable build for front-end web development. I’ve used it for multiple projects and could update it easily without having to worry about the build or having to maintain a ton of build-related dependencies and whatnot. That was a relief: fixing an issue meant fixing it once in one place, much better!

The idea of having a complete build as a dependency of a project is something I find immensely valuable.

Recently though, for the project at work (where we’ll use AngularJS for now), we evaluated different solutions for bundling, module loading & build tasks in general, which led to the decision to use webpack. So far, it’s been a blast. I’m not going to explain in detail what webpack is, as there are already more than enough articles about it out there, but IMHO it’s the clear winner at the moment. The most important thing for me is that it has a very active/vibrant community around it, busy maintaining & developing tons of plugins. Those plugins add support for pretty much anything that you might need in your front-end build. You need transpilation? Check. You need autoprefixing? Check. You need cache busting? Check… well, you get the idea.
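To make that concrete, here’s the kind of thing that ends up in a project’s devDependencies; the package names below are common examples from the webpack ecosystem, not a prescribed setup:

npm install --save-dev webpack                       # the bundler/module loader itself
npm install --save-dev ts-loader                     # TypeScript transpilation
npm install --save-dev postcss-loader autoprefixer   # autoprefixing
npm install --save-dev html-webpack-plugin           # references the (hashed) bundles from the HTML, handy for cache busting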

We decided to fork the Angular 2 Webpack Starter Kit from AngularClass, as it was the closest to what we needed.

With our project template, our goal is to integrate the whole stack that we’ve decided to use (e.g., Redux, RxJS, JSData, webpack for module bundling/loading, …) and use that template as the basis for our next projects.

The thing is that I’d still like to extract the webpack build to a separate project (or at least a part of it). Again, I really believe that it should be possible to provide a reusable build configuration, as long as it is flexible enough to accommodate general use cases. Ultimately, the discussion boils down to pragmatism versus the pleasure of reinventing your own wheel each time. Personally, I like round wheels, and if one goes flat I don’t want to have to fix all my cars. What about you?

In the next post, I’ll explain what my new goal is for my site; as I said, I took a different route for a while because I had lots to learn, but now it’s about time for me to go back to my initial goal :)


Modern Web Development – Part One

Wednesday, February 17th, 2016

Since April last year, I’ve been plunging back into the world of Web development… and what fun it has been! In this series of posts, I’m going to summarize the stuff I did last year in order to hop back on the train, and I’ll describe what I’ve learned along the way.

At the time, I published two blog posts, which were my way of condensing my vision for an important project at work aiming to modernize the way we create Web applications by going towards a client-side architecture combined with RESTful Web Services on the back-end.

When I started looking back at how the Web platform had evolved during the 2012-2015 period, the main things I had on my mind were:

  • mobile first & responsive web design
  • client-side Web application architecture (which I personally consider to be part of Web 3.0 — seriously, why not?)
  • the new specs that had reached broad support in modern Web browsers and were gaining a lot of traction
  • the offline first idea that these specs made more realistic

I wanted to learn more about AngularJS, node.js, npm and sass, but that was about it. I remember that at first, I had no precise idea yet about the build tool and the build steps that I wanted/needed… I hadn’t even heard about ES6 yet!

Since then, I’ve learned a ton about ES2015, TypeScript, module systems, module loaders, JS frameworks & the tooling around, front-end state management solutions, front-end build systems, project boilerplates, css style guides, quality assurance for front-end apps, unit & e2e testing libraries, … and the integration of it all…

The funny thing is that… I failed to deliver.

Initially, my personal goal was to create a responsive client-side Web app exploiting the RESTful API of my WordPress installation to replace my current theme, but I changed my mind along the way… So far, my site hasn’t changed one bit. I did improve some things though, but that was more around security than anything else.

So what made me change my mind and where did I spend my time?

At first, I concentrated on the task at hand and looked at how the HTML5 boilerplate had evolved, as I knew that it was one of the best starting points around for creating modern Web apps. My idea was simple: use HTML5 boilerplate or Initializr to get Modernizr… and add some good old script tags… :p

I started with HTML5 boilerplate, but shortly after, I stumbled upon Web Starter Kit which was fresh out of Google’s oven, was based on HTML5 boilerplate and had some really cool features.

It came out of the box with a nice build which included support for JSCS (JS code style), JSHint (JS code quality), autoprefixing, BrowserSync (if you don’t know that one, DO check it out!), sass and ES6 (that was still the name at that point) with the help of Babel, …

I really liked their setup and decided to use it as the basis for my project; and that’s where my trajectory deviated :)

Given that I’m quite curious, I spent a while deconstructing Web Starter Kit’s build so that I could really understand what made it tick. That made me discover npm, gulp and the whole ecosystem of gulp plugins.

I really enjoyed doing so, as it helped me better grasp the necessary build steps for modern Web apps (a rough command-line sketch follows the list):

  • transpile code (ts->js, sass->css, …)
  • check quality
  • check style
  • create a production build (bundle, minify, mangle, …)
  • execute unit tests
  • execute end to end tests

At that moment, I was happy with the build as it stood, so I continued to focus on developing my app. I took a good look at what ES6 was, what it meant for JavaScript and its ecosystem, and how Babel helped (was it still called 6to5 then?). Learning about ES6 features took me a long while and I’m still far from done, but it was well worth it. ES2015 is such a huuuuuuuuuuuge step forward for the language.

I also took a glance at Angular 2, which was still in alpha state. It looked interesting, but I believed that it would never be ready in time for our project at work (and it wasn’t). Still, I did spend a few days toying around with the alpha just to understand the basic principles… and I must say that I really, really loved what I saw!

That quick research spike also made me discover TypeScript.

Having a strong Java & OO background, I got excited about TypeScript (TS) right away. I’m a strong believer in strong (heh) typing, and the fact that TS already supported many ES6 features that Web browsers didn’t yet support natively was very appealing to me.

Moreover, having dozens of Java developers in our development teams at work, TypeScript seemed really ideal for us as it supports many features and idioms that our developers are very familiar with (classes, interfaces, generics, strong typing, decorators, …).

If you want to learn more about TypeScript, I definitely recommend TypeScript Deep Dive.

At that point, tsconfig.json wasn’t there yet and the most obvious choice for integrating the necessary build step was gulp, as advertised in Dan Walhin’s excellent blog post. If I had read more about npm, I might have gone down a completely different path (i.e., used npm scripts only)… ^^

At that point, I had to deviate from what Web Starter Kit offered in order to add build tasks for TypeScript, tslint, etc. Fiddling with the build made me realize that it was quite brittle, so I refactored it quite a lot and tried to improve things (e.g., separate the build tasks into different files, extract the configuration settings, ensure that the build would not break on each error, etc.). I remember that I wanted to contribute back to Web Starter Kit, but realized too late that I had made too many changes at once for them to integrate easily (silly me, bummer).

I went pretty far with it actually; at some point, I was using TypeScript to output ES6 code that I then sent through Babel, just so that I could use async/await and other things that TypeScript wasn’t yet able to transpile to ES5… :)
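
In command-line terms, that pipeline looked more or less like this (flags from memory, only meant to illustrate the idea):

tsc --target ES6 --outDir build/es6 src/main.ts   # TypeScript emits ES6
babel build/es6 --out-dir build/es5               # Babel takes that ES6 output and produces ES5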

The exercise helped me see how “immature” and “fragile” the whole JavaScript ecosystem was. What I mean by that is that there seem to be only moving parts, and those parts don’t necessarily play nicely with each other. Not only do too few people really understand what semver actually means and respect it, but everything that shines bright gets replaced faster than the speed of light :)

As a technologist, I love the pace it imposes for the fun and innovation it brings to the table, but it’s also frustrating for many reasons and (should be) quite scary for enterprises (to some degree). People talk about JavaScript fatigue, which is quite a fun way to put it and I can certainly understand the idea now.

One example that I thought a lot about is the fact that each and every front-end project seems to have its own build chain and build configuration that lives within the project, in complete isolation and has to be maintained.

Of course each and every project has its specificities so there really can’t be ONE rigid & reusable solution to rule them all, but the idea of duplicating so much effort needlessly across a whole community of developers violates the DRY principle as much as anything ever could.

Just try and imagine how many people must have used some Yeoman generator to scaffold projects, which now all have separate builds with tasks that all do the same things but are all defined 20.000 times in a gazillion different ways using variable and unreliable dependency versions… :)

When you scaffold a project using a generator, you end up with a snapshot of the template and of the build provided by the generator at that point in time and then it’s up to you to keep your version up to date and to integrate all improvements and bug fixes, assuming you have time to follow that… you poor thing!

Being part of a core software development team at work, I most often focus on finding reusable solutions to common problems, limiting effort duplication and whatnot; the front-end universe’s situation seems quite sad in that regard.

Another point that struck me was how limited the main package management solution was. npm is nice and all, but not being able to define some parent/generic/reusable configuration (e.g., like parent POM files in Maven) is kind of surprising. Again, the DRY principle probably corresponds to DO Repeat Yourself in the front-end universe. I’m sure that front-end experts will tell me that you can work around all that in countless ways, but that’s exactly the issue: I shouldn’t have to invent my own solution to a general issue people should be concerned about.

To conclude on a positive note though, I do believe that all the tooling DOES bring added value because it makes it possible to manage dependencies correctly, define build steps which execute tests, generate coverage reports (e.g., using Istanbul), generate production builds etc.

This piece is getting a bit long, so I’ll continue my little story in part two!

 


OVH and HTTP headers

Friday, January 15th, 2016

If, on some winter evening, you feel the urge to send HTTP headers to your back-end hosted at OVH (i.e., if you’re as crazy as I am), then my little story should interest you (at least its conclusion)!

Since I have a strong tendency to experiment, I set up a small token system based on JSON Web Tokens (JWT), so that I can generate them on the fly, check their validity, renew them, etc.

Since I still don’t have a dedicated server (donations welcome :p), I implemented this in PHP (cf. my previous post) and uploaded it to my OVH shared hosting.

Of course I had developed and tested everything locally and was rather pleased with myself. But once deployed on OVH, my first attempt was a total failure. So botched, in fact, that I suspected a lunar eclipse.

After turning the thing over for a good half hour, I realized that OVH does not pass HTTP headers along without tickling them a bit on the way.

Indeed, my nice “Authorization” header purely and simply disappears on arrival, while a more exotic variant such as “X-Authorization” gets turned into “X_Authorization”.
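
If you want to see the behaviour for yourself, something like this is enough (the dump-headers.php endpoint is hypothetical; any script that prints the headers it receives will do):

curl -s \
  -H "Authorization: Bearer <my-jwt>" \
  -H "X-Authorization: Bearer <my-jwt>" \
  https://example.org/dump-headers.php
# On my OVH hosting, the first header never made it through,
# while the second one arrived with its dash turned into an underscore.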

Now, I can well imagine that OVH does this for very good reasons (which I’m curious to hear about), but I have to admit that this time they managed to make me queasy :)

In short, you’ve been warned!


PHP composer and… Bash!

Sunday, December 20th, 2015

Bash bash bash!

It’s been a very long while since I last played with PHP.
I’m not really willing to start a new career as a PHP integrator, but it’s still cool to see that the language and the tooling around it have evolved quite a lot.

Atwood‘s law states that any application that can be written in JavaScript will eventually be written in JavaScript. One could also say that any language will ultimately get its own package manager (hello npm, NuGet, Maven, …).

So here I am, needing multiple PHP libraries and willing to try a PHP package manager :).

Apparently, composer is the coolest kid around in PHP-land. As you know I still like BASH … on Windows, so here’s a quick guide to get PHP and composer available in your Windows bash universe.

First, you need to download the PHP binaries for Windows; you can get those here (always prefer the x64 version).
Once you have the archive, unzip it wherever you wish; then, in that folder, make a copy of “php.ini-development” and call it “php.ini”. That’s the configuration file that PHP will load each time it runs on the command line.

Edit php.ini and uncomment the following (for starters):

  • extension_dir = “ext”
  • extension=php_openssl.dll

With the above, you’ll have SSL support and PHP will know where to find its extensions.
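
To quickly check that everything is picked up, you can ask PHP which configuration file it loaded and which modules are active (just a sanity check):

php --ini                  # shows which php.ini was loaded
php -m | grep -i openssl   # should list "openssl" if the extension was enabled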

Now, create a folder in which you’ll place PHP extensions. In my case, I’ve created a “php_plugins” folder and placed it right next to the folder containing the PHP binaries (I like to keep things clean).

Next, open up your bash profile and add something along those lines:

alias php7='export PHP_HOME=$DEV_SOFT_HOME/php-7.0.1-Win32-VC14-x64;append_to_path ${PHP_HOME}; export PHP_PLUGINS_HOME=$DEV_SOFT_HOME/php_plugins;'
alias php='php.exe'

Make sure to call ‘php7’ at some point in your profile so that PHP is actually added to your path. Personally, I have a “defaults” alias in which I list all the things that I want to be loaded whenever my shell is loaded:

alias defaults='php7; ...'

# Initialization
defaults # Load default tools

Close and reopen your shell. At this point you should have php at your disposal anywhere you are (eeeewwwww scary :p).

Now you’re ready to get composer. Just run the following command to download it:

curl -sS https://getcomposer.org/installer | php

Once that is done, you should have a “composer.phar” file in the current folder; grab it and move it to your “php_plugins” folder.

Finally, edit your bash profile again and add the following alias:

alias composer='php $PHP_PLUGINS_HOME/composer.phar'

Close and reopen your shell. Tadaaaaa, you can type “composer” anywhere and get the job done.. :)
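
From there, it’s the usual composer workflow; for example (the package name is just an illustration):

cd my-php-project
composer init                      # interactively create a composer.json
composer require firebase/php-jwt  # add a dependency (e.g., a JWT library)
composer install                   # install everything listed in composer.json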


Security HTTP Headers FTW

Saturday, December 19th, 2015

In the last couple of months, I’ve tried to improve the overall security of this site. I started by putting my server behind Cloudflare to get HTTPS (along with other nice availability/performance improvements). Then I closed my eyes and enabled HSTS. I even dared adding this site to the HSTS preload list (i.e., the list of HSTS-enabled websites loaded in all modern browsers).

Today I’m taking this a step further with the addition of some security-related HTTP headers. You might say that this was the very first thing I should’ve done, and you’d be right, but here goes :)

From now on, if you take a look at the initial response, you’ll see that the following headers (among others) are being sent to you:

...
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Content-Security-Policy: default-src 'self'; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; img-src * data:; script-src 'self' 'unsafe-inline' https://ajax.googleapis.com https://apis.google.com https://*.linkedin.com https://platform.twitter.com https://connect.facebook.net; child-src 'self' https://accounts.google.com https://apis.google.com https://platform.twitter.com https://*.facebook.com; font-src 'self' https://fonts.gstatic.com data:; frame-ancestors 'none'; report-uri https://www.dsebastien.net/csp_report.php; connect-src 'self'; form-action 'self'; upgrade-insecure-requests; reflected-xss block; base-uri https://www.dsebastien.net; object-src 'none'

The X-* headers give additional protection against clickjacking and cross-site scripting (XSS), and prevent some user agents from doing MIME type sniffing. Those are nice, but the main one is the Content Security Policy (CSP). There are tons of articles about what a CSP is and how to configure one, so I won’t go into the details of that.

Any security expert will quickly notice that this isn’t the strictest CSP (far from it) because it allows ‘unsafe-inline’ for scripts & styles. The thing is that adding hashes or nonces to all scripts and styles is not an easy thing to do; even less so when you inherit that from many WordPress plugins… Also, some minified code (e.g., jQuery plugins) uses eval (evil?). For now, I’ve decided to lower my security goal. I’ll surely revisit this later though (probably with the new version of the site).

Notice that the CSP makes some older HTTP headers redundant (e.g., X-Frame-Options), but I’m still keeping the older variants for the sake of wider support. These will go away over time.

Here are some tips if you want to go about creating a CSP for your site/domain:

  • start with the report-only mode. It’ll only log errors in the console and will not actually block anything; this is a great starting point:
    content-security-policy-report-only: default-src 'none';
  • use tools such as the CSP extension for Fiddler or an online CSP generator
  • once you’ve got rid of all console errors, remove ‘report-only’ to make your CSP effective
  • configure a ‘report-uri’ to be aware of CSP-related issues. Just be careful with this, as attackers might take advantage of it (i.e., do not mail yourself all violations :p)
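
To verify which headers your server actually sends, a quick check from the command line does the trick:

curl -sI https://www.dsebastien.net | grep -iE 'frame-options|xss-protection|content-type-options|content-security-policy'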

More generally, you can use online tools such as this one to review your site’s security headers. If you look at my site there, you’ll see that I could add HTTP Public Key Pinning (HPKP) headers to improve security a bit more. I won’t do it though as I don’t want my site to break whenever CloudFlare decides to present a new certificate in front of my site…

As a side note, if you’re using Apache, you can configure security headers through .htaccess files and the headers module (mod_headers). Here’s an example:


Header always set X-Frame-Options "SAMEORIGIN"
...

I’m sure that this site still has many vulnerabilities, but there aren’t enough hours in the day for me to fix everything at once. I have other improvements in mind, but that’ll be for later! :)


Installing node and npm on Ubuntu 15+

Friday, December 18th, 2015

In case you want to use one of my recent projects (e.g., ModernWebDevGenerator or ModernWebDevBuild) on Ubuntu (or any other OS btw), you’ll need nodejs and npm.

If you’re using Ubuntu and go the usual way (i.e., sudo apt-get install…), then you’re in for a bad surprise; you’ll get node 0.1x.y and also a very old npm release.

Actually, the best way to get nodejs and npm on Ubuntu is to use the node version manager (nvm).

nvm can be used to install and keep multiple versions of node in parallel, which is very useful, especially when you have to test your node-based project on multiple versions.

The installation is very straightforward:

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.29.0/install.sh | bash

After that, close and reopen your terminal. You now have ‘nvm’ at your disposal.

nvm install 4.0
nvm install 5.0
nvm use 5.0

Just with the above, you get two versions of node (along with npm) installed. As you can see, you can use ‘nvm use’ to change the active version easily.
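
A few handy extras (nothing mandatory): you can list what’s installed, pick the default version for new shells and double-check what’s active:

nvm ls                  # list installed versions and show the active one
nvm alias default 5.0   # make node 5.0 the default in new shells
node -v && npm -v       # confirm the versions currently in use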

That’s it!


Use bash to decompile Java class files recursively

Tuesday, December 8th, 2015

Here’s a quick one. As you *might* know, I like Bash (even though I’m a Win* user..), so here’s an alias I’ve added recently:

export JAD_HOME=...
append_to_path $JAD_HOME
alias jad='(jad.exe)&'
jadr() { ("jad.exe" "-d" "." "-s" "java" "-r" "**/*.class")& }

With the above, ‘jad’ will launch the decompiler and ‘jadr’ will recursively decompile all Java class files in the current folder and its subfolders.
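
Typical usage, assuming you’re standing in a folder that contains compiled classes (the path is just an example):

cd build/classes
jadr   # decompiled .java files end up under the current folder, mirroring the package structure (-d . -s java -r)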


So fond of fonts

Tuesday, October 6th, 2015

I can’t say that I’m in love with typography, but I do enjoy writing (code or otherwise) using a good editor and… a good looking font.

I’ve recently stumbled upon the Hack font, which has its roots in the open source world and derives from Bitstream Vera & DejaVu. I immediately liked it; it feels good to change stuff once in a while… :)  I might still choose to switch back to Consolas, but for now I’m very pleased with Hack and it gave me a reason to mess around with Bash yet again ^^.

Of course, this alone is not a sufficient justification for a blog post! As I’ve described in an earlier post, I always try to maximize the ‘portability’ of my development environment and overall configuration and changing fonts should be no exception ;-)

I do not install fonts manually in the OS; I prefer to put my fonts in a central folder of my CloudStation share (i.e., along with the rest of my configuration & tools) so that it gets replicated on all my devices (I also do the same with tons of other stuff including wallpapers).

A major issue with this is that customizing fonts can be done in plenty of applications, but each has its own specificities: some give you a lot of control, while with others you have to jump through hoops to achieve what you want. More specifically, many applications will only allow you to select fonts that are available through the OS’s font system (i.e., that are registered), while others will require additional flags or, even worse, will want you to copy the font files around.

Under Windows, installing new fonts requires administrator privileges due to the security risks (laugh all you want :p). Thus even if I was to register the fonts, I couldn’t do so at work which is a bummer.

Fortunately, there are programmatic ways to register fonts in a user’s session without administrator privileges. I’ve found two programs that can do this from the command line: RegisterFont and regfont.

I’ve found regfont to be better, as it is a bit more *nix friendly, comes with a 64-bit executable and is less verbose than RegisterFont (it could use a --silent switch though).

Using regfont, you can easily register a new font in your user session using the following:

regfont.exe --add cool.ttf

You can also add a complete folder in one go using a wildcard. As you might already know, I’m a bit of a bash fan, so indeed I added a few more aliases to my profile to automate the registration of my custom fonts whenever my bash profile is loaded. It adds a bit to the overall startup time but it’s still quite reasonable.

First things first, since I wanted to keep a clean organization in my fonts folder, I couldn’t use the wildcard flag of regfont as it doesn’t look for font files recursively. For this reason, I needed to find the files myself (using the find command) and execute regfont once for each file.

Since the find command returns *NIX paths, I needed to convert those to WIN* paths; this was easy enough with the help of StackOverflow (as always ^^):

winpath() {
	if [ ${#} -eq 0 ]; then
		: skip
	elif [ -f "$1" ]; then
		local dirname=$(dirname "$1")
		local basename=$(basename "$1")
		echo "$(cd "$dirname" && pwd -W)/$basename" \
		| sed \
		  -e 's|/|\\|g';
	elif [ -d "$1" ]; then
		echo "$(cd "$1" && pwd -W)" \
		| sed \
		  -e 's|/|\\|g';
	else
		echo "$1" \
		| sed \
		  -e 's|^/\(.\)/|\1:\\|g' \
		  -e 's|/|\\|g'
	fi
}

Later in my profile, I’ve added the following for registering the fonts:

export MY_FONTS_FOLDER=$CLOUDSTATION_HOME/Configuration/Dev/Fonts
...
export REGISTER_FONT_HOME=$TOOLS_HOME/RegisterFont
append_to_path $REGISTER_FONT_HOME

register_font(){ ("$REGISTER_FONT_HOME/regfont" "--add" "$1")& } # alternative "RegisterFont.exe" "add"
alias registerfont='register_font'
...
# Register all my fonts for the current user session
# Works also if the user is not local administrator
# Reference: http://www.dailygyan.com/2008/05/how-to-install-fonts-in-windows-without.html
register_fonts(){
	SAVEIFS=$IFS # save the internal field separator (IFS) (reference: http://bash.cyberciti.biz/guide/$IFS)
	IFS=$(echo -en "\n\b") # change it to newline
	fontsToRegister=`find $MY_FONTS_FOLDER -type f -name "*.ttf"` # recursively find all files matching the original extension

	for fontToRegister in $fontsToRegister; do
		fontToRegisterWinPath=`winpath "$fontToRegister"`
		#echo $fontToRegisterWinPath
		register_font "$fontToRegisterWinPath" # pass the converted WIN* path to regfont
	done
	unset fontToRegisterWinPath
	unset fontToRegister
	unset fontsToRegister
	IFS=$SAVEIFS # restore the internal field separator (IFS)
}

I then simply invoke the register_fonts function near the end of my profile, just before I call clear.

With this in place, whenever my profile is loaded, I know that my fonts are registered and usable in most applications.

Just as a side note, here’s how you can manually install a custom font for use with Java-based applications such as IntelliJ, WebStorm, Netbeans, etc: you need to copy the font files to the jre/jdk lib/fonts folder.
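
In bash terms, that boils down to something along these lines (the paths and the Hack subfolder are just examples; adapt them to your JDK/JRE layout):

cp "$MY_FONTS_FOLDER"/Hack/*.ttf "$JAVA_HOME/jre/lib/fonts/"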

As a second side note, ConEmu will load the first ttf file it encounters in its folder and make that one available for use.

As a third and last side note, I couldn’t find a way to load a custom font with Sublime Text 3; it only seems to be able to list system-registered ones…

So.. which font are you most fond of?

 


A few tips for your online shopping

Friday, August 28th, 2015

I don’t blog in French very often, but just this once won’t hurt :)

For a few years now, like quite a few people, I’ve been buying more and more things on the Web. Not that I have anything against local shops, but simply because the price difference is often very significant.

I mostly buy from Amazon.fr because that’s often where I find the best prices for what I need. If you plan to buy online, there are a few good tips worth knowing.

For example, if you like Amazon, you should know that they have several sites in Europe such as Amazon.de, Amazon.es, Amazon.it, Amazon.co.uk, … and item prices often differ between them (sometimes substantially)! So my first tip is to check, before buying, that the product isn’t cheaper on one of the other Amazon sites. There are no extra fees when ordering from them.

Note that if the language barrier keeps you from using one of Amazon’s foreign sites, Google Chrome can translate the pages automatically for you (it’s approximate, but more than enough to find your way around).

Another thing to watch out for on Amazon is that it’s an online marketplace: the Amazon company lets other companies sell their products on its site (a bit like eBay). As a consequence, prices can vary a lot from one seller to another for one and the same product. When Amazon itself sells & ships, it’s generally the cheapest. On a product page, you can see the different offers for a given product by clicking on the “xx new” link:

[Screenshot: the “xx new” offers link on a product page]

That brings you to the following page, where you can see the various offers and add the one you’re interested in to your cart:

[Screenshot: the list of offers, with the option to add one to the cart]

Sometimes, the offer displayed by default on Amazon is from a third party because Amazon no longer has the item in stock; in those cases it’s often better to wait until Amazon sells the product again, to get a better price and avoid delivery costs: in general, when Amazon sells & ships there are no shipping fees, which is rarely the case with other sellers; but then again, it all depends on the price :)

It would be too easy to say that Amazon is always the cheapest; it often is, but not always, and sometimes there are even significant price differences for certain products at certain times.

Moreover, prices for a given product sometimes vary enormously over time (over short periods). There’s a very handy extension for Google Chrome & Mozilla Firefox called the Camelizer, which shows you a chart of the price history for a given product; it’s very useful to see whether the current price is a good deal or not :)

In any case, don’t hesitate to shop around the online stores to find the best price; that goes without saying, but the trick is still knowing the right addresses…

Another must-have extension for efficient online shopping is Shoptimate, which can do the legwork for you: if you’re on a product page of a site supported by the extension, it will automatically look up the price of that item on the other supported sites that sell it. On top of that, it will tell you right away whether there’s a better offer elsewhere:

[Screenshots: the Shoptimate price-comparison banner]

In the example above, the same product is currently €100 cheaper on Amazon.de compared to Amazon.fr, which is rather… huge ;-)

Still in this example, the designere_fr site seems to be even cheaper, but since I don’t know that site I preferred to stick with Amazon’s offer. I imagine the site is trustworthy since Shoptimate lists it, but as they say, better safe than sorry ^^.

Speaking of safety, I advise you to avoid little-known sites when ordering online. While some sellers on eBay do sell new products, they’re not necessarily all reliable; the same goes for some e-commerce sites… Also be wary of Google search results when looking for a product: they’re full of sites to avoid.

Also, when I buy from a site other than Amazon, I generally try to use PayPal if possible; it saves me from sending my credit card details all over the place. With PayPal you register the card only once and its details are never revealed to the site where you’re shopping. On top of that, it’s even possible to shop online without a credit card thanks to PayPal; the only downside is that not all e-commerce sites support PayPal.

I could write a whole series of articles about IT security, but that will be for another time ^^.

Personally, my list of online shops is fairly short:

  • Amazon: a bit of everything & often the best prices
  • bol.com: a bit of everything & sometimes very, very low prices on certain products (e.g., a €500 difference on the price of my speakers!!)
  • Philibert: board games (best prices)
  • LDLC: computer hardware, smartphones & hi-fi (very often more expensive)
  • Rue du Commerce: computer hardware, smartphones & hi-fi (often more expensive)
  • Rue Montgallet: ditto
  • Photo Erhardt: photography gear (Germany)
  • Sarenza: clothing & shoes
  • ZooPlus: pet food
  • eBay: electronic components only, or things you can’t find new anywhere else
  • Seeed Studio: electronic components
  • f-mobile: smartphones & co (sometimes cheaper)

If you know other sites or have tips to share, don’t hesitate =)


Use bash to open the Windows File Explorer at some location

Wednesday, August 26th, 2015

TL;DR: don’t bother clicking your way through the Windows File Explorer, use bash functions instead! :)

I’ve already blogged at quite some length about my current Windows dev environment, and I’ve put enough emphasis on the fact that bash is at the center of my workflow, together with my bash profile & more recently with ConEmu.

I continually improve my bash profile as I discover new things I can do with it, and this post is in that vein.

I often find myself opening the Windows File Explorer (Win + e) to get to some location; for that purpose, I simply pin the often-used locations in the ‘Quick access’ list, although that means I have to go the ‘click-click-click-click’ route and, as we know, one can be much more efficient using only the keyboard.

To quickly open the File Explorer at locations I often need to open (e.g., my downloads folder, my movies folder & whatnot), I’ve created the following utility function & aliases:

# Aliases to open the Windows File Explorer at the current location
alias explore='explorer .' # open file explorer here
alias e='explore'
alias E='explore'

# Open File Explorer at the given location
# The location can be a path or UNC (with / rather than \)
# Examples
# openFileExplorerAt //192.168.0.1/downloads
# openFileExplorerAt /c/downloads
# openFileExplorerAt c:/downloads
openFileExplorerAt(){
 pushd "$1"
 explore
 popd
}

The ‘explore’ alias simply opens the Windows File Explorer at the current shell location while the ‘openFileExplorerAt’ function goes to the path given in argument and opens the File Explorer before going back to the previous shell location.

With the above, I’m able to define functions such as the one below that opens my downloads folder directly:

downloads(){
	openFileExplorerAt //nas.tnt.local/downloads
}

And since I’m THAT lazy, I just alias that to ‘dl’ ^^.
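
Which is nothing more than:

alias dl='downloads'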

That’s it! :)