Archive for the ‘Uncategorized’ Category

RIP /volume3

Thursday, December 27th, 2018

Yesterday, after about 40K hours of uptime, the HDD behind /volume3 on my NAS died.

It didn’t go “poof”, but its health got bad enough for my NAS to warn me. The advice was plain and simple: back up everything and get rid of the crashed volume.

Fortunately, this was one of the volumes containing less valuable data so I didn’t lose anything important. I’ve also got local and remote backups of the more important things.

Still, losing a disk is never fun and leads to a lot of wasted time. After a few hours, I was able to recover most of the data on the disk, apart from a few files sitting on bad sectors.

Then, just out of curiosity, I wanted to check the disk and try to repair the volume.

First, I shut down every service apart from the SSH daemon:

# Synology-specific command: stops every service (SSH stays up)
syno_poweroff_task -d

Then, I identified the faulty disk/RAID array using the commands I shared in a previous post:

cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md3 : active raid1 sde3[0]
      3902296256 blocks super 1.2 [1/1] [U]

md6 : active raid1 sdc3[0]
      3902296256 blocks super 1.2 [1/1] [U]

md5 : active raid1 sdf3[0]
      3902296256 blocks super 1.2 [1/1] [U]

md7 : active raid1 sdg3[0]
      3902296256 blocks super 1.2 [1/1] [U]

md2 : active raid1 sda3[0]
      3902296256 blocks super 1.2 [1/1] [U]

md9 : active raid1 sdh3[0]
      7809204416 blocks super 1.2 [1/1] [U]

md8 : active raid1 sdd3[0]
      3902196416 blocks super 1.2 [1/1] [U]

md4 : active raid1 sdb3[0](E)
      1948792256 blocks super 1.2 [1/1] [E]

md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3] sde2[4] sdf2[5] sdg2[6] sdh2[7]
      2097088 blocks [8/8] [UUUUUUUU]

md0 : active raid1 sda1[0] sdb1[2] sdc1[4] sdd1[6] sde1[1] sdf1[3] sdg1[5] sdh1[7]
      2490176 blocks [8/8] [UUUUUUUU]

unused devices: <none>

As you can see above, the array in error was md4, backed by the sdb3 partition; the (E)/[E] flag marking the error state appears to be a Synology addition (you won’t see it with vanilla mdadm).

NOTE: I only have single-drive RAID “arrays”.
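
To see how bad things actually were, you can also dump the disk’s SMART data; as far as I can tell, smartctl ships with DSM (depending on the controller, you might need to add -d sat):

smartctl -H /dev/sdb   # quick overall health verdict
smartctl -a /dev/sdb   # full report; check Reallocated_Sector_Ct and Current_Pending_Sector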

Then I took a look at the md4 array:

mdadm --detail /dev/md4
/dev/md4:
        Version : 1.2
  Creation Time : Sun Sep  8 10:16:10 2013
     Raid Level : raid1
     Array Size : 1948792256 (1858.51 GiB 1995.56 GB)
  Used Dev Size : 1948792256 (1858.51 GiB 1995.56 GB)
   Raid Devices : 1
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Wed Dec 26 21:40:05 2018
          State : clean
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : NAS:4  (local to host NAS)
           UUID : 096b0ec0:3aec6ef5:5f685a2b:5ff95e38
         Events : 7

    Number   Major   Minor   RaidDevice State
       0       8       19        0      active sync   /dev/sdb3

And at the disk:

mdadm --examine /dev/sdb3
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 096b0ec0:3aec6ef5:5f685a2b:5ff95e38
           Name : NAS:4  (local to host NAS)
  Creation Time : Sun Sep  8 10:16:10 2013
     Raid Level : raid1
   Raid Devices : 1

 Avail Dev Size : 3897584512 (1858.51 GiB 1995.56 GB)
     Array Size : 3897584512 (1858.51 GiB 1995.56 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 46ed084c:686ee160:5fa3a986:574d1182

    Update Time : Wed Dec 26 21:40:05 2018
       Checksum : 9ffab586 - correct
         Events : 7


   Device Role : Active device 0
   Array State : A ('A' == active, '.' == missing)

I also pulled up the udev information for the partition:

udevadm info --query=all --name=/dev/sdb3
P: /devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sdb/sdb3
N: sdb3
E: DEVNAME=/dev/sdb3
E: DEVPATH=/devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sdb/sdb3
E: DEVTYPE=partition
E: ID_FS_LABEL=NAS:4
E: ID_FS_LABEL_ENC=NAS:4
E: ID_FS_TYPE=linux_raid_member
E: ID_FS_USAGE=raid
E: ID_FS_UUID=096b0ec0-3aec-6ef5-5f68-5a2b5ff95e38
E: ID_FS_UUID_ENC=096b0ec0-3aec-6ef5-5f68-5a2b5ff95e38
E: ID_FS_UUID_SUB=46ed084c-686e-e160-5fa3-a986574d1182
E: ID_FS_UUID_SUB_ENC=46ed084c-686e-e160-5fa3-a986574d1182
E: ID_FS_VERSION=1.2
E: ID_PART_ENTRY_DISK=8:16
E: ID_PART_ENTRY_NUMBER=3
E: ID_PART_ENTRY_OFFSET=9437184
E: ID_PART_ENTRY_SCHEME=dos
E: ID_PART_ENTRY_SIZE=3897586881
E: ID_PART_ENTRY_TYPE=0xfd
E: ID_PART_ENTRY_UUID=00003837-03
E: MAJOR=8
E: MINOR=19
E: PHYSDEVBUS=scsi
E: PHYSDEVDRIVER=sd
E: PHYSDEVPATH=/devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0
E: SUBSYSTEM=block
E: SYNO_DEV_DISKPORTTYPE=SATA
E: SYNO_INFO_PLATFORM_NAME=cedarview
E: SYNO_KERNEL_VERSION=3.10
E: USEC_INITIALIZED=112532

Then I unmounted the faulty volume and stopped the corresponding RAID array:

umount /volume3
mdadm --stop /dev/md4

After that, I re-created the array in place, reusing the original metadata version and UUID so the data on the partition stays put:

# -C = create, -f = force, -e1.2 = metadata version 1.2, -n1 = one device,
# -l1 = RAID1, -u<...> = reuse the original array UUID
mdadm -Cf /dev/md4 -e1.2 -n1 -l1 /dev/sdb3 -u096b0ec0:3aec6ef5:5f685a2b:5ff95e38

Finally, I ran a file system check:

# -v = verbose, -f = force a check even if the FS seems clean, -y = auto-answer yes to repairs
fsck.ext4 -v -f -y /dev/mapper/vol3-origin

Here, /dev/mapper/vol3-origin was simply an easy-to-use device-mapper alias for the volume’s block device.
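
If you’re not sure which mapper device backs which volume, these should point you in the right direction:

ls -l /dev/mapper   # list the device-mapper nodes
dmsetup table       # see how each node maps onto the md arrays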

From the NAS’s point of view, everything is now fine (haha), but of course I can’t trust that disk anymore. Now I just have to wait a few days to get a replacement and set it up.

On the bright side, I’ll take the opportunity to upgrade to a 10-12TB disk (assuming those are compatible with my Synology NAS..) ^_^. That way I’ll prepare a new disaster for today + 40K hours.. ;-)


The story behind my upcoming book: Learn TypeScript by Building Web Applications — part 1

Sunday, October 7th, 2018

I’ve published a new post on Medium:

https://medium.com/@dSebastien/the-story-behind-my-upcoming-book-learn-typescript-by-building-web-applications-part-1-26926bd1756d


Boats

Tuesday, August 1st, 2017

[Photo gallery: four pictures taken between July 6th and July 8th, 2017]


Vannes

Tuesday, August 1st, 2017
[Photo: a panorama taken on July 9th, 2017]


Static sites? Let’s double that!

Monday, March 14th, 2016

Now that I’ve spent a good deal of time learning about what’s hot in the front-end area, I can go back to my initial goal: renew this Website.. or maybe I can fool around some more? :) In this post, I’ll describe the idea that I’ve got in mind.

One thing that’s been bothering me for a while is the dependency that I currently have on WordPress, PHP and a MySQL database. Of course there are pros and cons to consider, but currently I’m inclined to ditch WordPress, PHP and MySQL in favor of a static site.

Static site generators like Hugo (one of the most popular options at the moment) let you edit your content using flat files (e.g., using Markdown) with a specific folder structure. Once your content is ready for publication, you have to use a CLI/build tool that takes your content (e.g., posts, pages, …) and mixes it with a template.

Once the build is completed, you can upload the output to your Web host; no need for a database, no need for a server-side language, no need for anything more than a good old Apache Web server (or any Web server flavor you like). Neat!

Now what I’m wondering is: can we go further? What if we could create doubly static static sites? :)

Here’s the gist of my idea:
First, we can edit/maintain the content in the same way as with Hugo: through a specific folder structure with flat files. Of course we can add any feature we’d like around that: front matter, variables & interpolation, content editor, … For all of that, a build/CLI tool would be useful.. more on that later.

Note that the content could be hosted on GitHub or a similar platform to make the editing/publishing workflow simpler/nicer.

So, we’ve got static content, cool. What else? Well now what if we added a modern client-side Web application able to directly load those flat files and render them nicely?

If we have that then we could upload the static content to any Web host and have that modern Web app load the content directly from the client’s Web browser. The flow would thus be:

  • go to https://www.dsebastien.net
  • receive the modern Web app files (HTML, CSS, JS)
  • the modern Web app initializes in my Web browser
  • the modern Web app fetches the static content (pages, posts, …)
  • the modern Web app renders the content

Ok, not bad but performance could be an issue! (let’s ignore security for a moment ok? :p).
To work around that, we could imagine loading multiple posts at once and caching them.
If we have a build/CLI, it could also pack everything together so that the Web app only needs to load a single file (let’s ignore the HTTP/1.1 vs HTTP/2 debate for now).

In addition, we could also apply the ‘offline-first’ idea: put pages/posts in local storage on first load; the benefit would be that the application could continue to serve the content offline (we could combine this with service workers).

The ideas above partially mitigate the performance issue, but first render would still take long and SEO would remain a major problem since search engines are not necessarily great with modern client-side Web apps (are they now?). To fix that, we could add server-side rendering (e.g., using Angular Universal).

Server-side rendering is indeed nice, but it requires a specific back-end (let’s assume node). Personally I consider this to be a step back from the initial vision above (i.e., need for a server-side language), but the user experience is more important. Note that since dedicated servers are still so pricey with OVH, it would be a good excuse to go for DigitalOcean.. :)

Another important issue to think about is that without a database, we don’t have any way to make queries for content (e.g., search a keyword in all posts, find the last n posts, …). Again, if we have a build/CLI, then it could help work around the issue; it could generate an index of the static content you throw at it.

The index could contain results for important queries, post order, … By loading/caching that index file, the client-side Web app could act more intelligently and provide advanced features such as those provided by WordPress and WordPress widgets (e.g., full text search, top n posts, last n posts, tag cloud, …).
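
Just to make the idea concrete, a first cut of that index generator could be as dumb as the sketch below (hypothetical layout: one Markdown file per post with title: and date: lines in the front matter; a real CLI would parse YAML properly and escape quotes):

#!/bin/bash
# Naive index generator sketch: emits index.json listing every post.
echo '[' > index.json
first=true
for f in content/posts/*.md; do
  title=$(grep -m1 '^title:' "$f" | sed 's/^title: *//')
  date=$(grep -m1 '^date:' "$f" | sed 's/^date: *//')
  $first || echo ',' >> index.json
  first=false
  printf '  {"file": "%s", "title": "%s", "date": "%s"}' "$f" "$title" "$date" >> index.json
done
printf '\n]\n' >> index.json

The client-side app would then fetch index.json once, cache it, and use it to answer queries like “last n posts” without touching the individual files.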

Note that for search though, one alternative might be Google Search (or Duck Duck Go, whatever), depending on how well it can handle client-side Web apps :)

In addition, the build/CLI could also generate content hashes. Content hashes could be used to quickly detect which bits of the content are out of date or new and need to be synchronized locally.
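
Again, purely as an illustration (same hypothetical content/ folder as above), the simplest possible take on content hashes could be:

# Hash every content file; the client compares this manifest against
# its cached copy to figure out what needs to be re-fetched.
find content -type f -name '*.md' -exec sha256sum {} + > content-hashes.txt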

There you have it, the gist of my next OSS project :)

I’ll stop this post here as it describes the high level idea and I’ll publish some additional posts to go more in depth over some of the concepts presented above.


Modern Web Development – Part one

Wednesday, February 17th, 2016

Since April last year, I’ve been plunging back into the world of Web development.. and what fun it has been! In this series of posts, I’m going to summarize the stuff I’ve done last year in order to hop back on the train, and I’ll describe what I’ve learned along the way.

At the time, I published two blog posts condensing my vision for an important project at work, which aimed to modernize the way we create Web applications by moving towards a client-side architecture combined with RESTful Web services on the back-end.

When I started looking back at how the Web platform had evolved during the 2012-2015 period, the main things I had on my mind were:

  • mobile first & responsive web design
  • client-side Web application architecture (which I personally consider to be part of Web 3.0 — seriously, why not?)
  • the new specs that had reached broad support in modern Web browsers and were gaining a lot of traction
  • the offline first idea that these specs made more realistic

I wanted to learn more about AngularJS, node.js, npm and Sass, but that was about it. I remember that at first, I had no precise idea yet about the build tool and the build steps that I wanted/needed… I hadn’t even heard about ES6 yet!

Since then, I’ve learned a ton about ES2015, TypeScript, module systems, module loaders, JS frameworks & the tooling around, front-end state management solutions, front-end build systems, project boilerplates, css style guides, quality assurance for front-end apps, unit & e2e testing libraries, … and the integration of it all…

The funny thing is that… I failed to deliver.

Initially, my personal goal was to create a responsive client-side Web app exploiting the RESTful API of my WordPress installation to replace my current theme, but I changed my mind along the way… So far, my site hasn’t changed one bit. I did improve some things though, but that was more around security than anything else.

So what made me change my mind and where did I spend my time?

At first, I concentrated on the task at hand and looked at how the HTML5 boilerplate had evolved, as I knew it was one of the best starting points around for creating modern Web apps. My idea was simple: use HTML5 boilerplate or InitializR to get ModernizR… and add some good old script tags… :p

I started with HTML5 boilerplate, but shortly after, I stumbled upon Web Starter Kit which was fresh out of Google’s oven, was based on HTML5 boilerplate and had some really cool features.

It came out of the box with a nice build which included support for JSCS (JS code style), JSHint (JS code quality), autoprefixing, BrowserSync (if you don’t know that one, DO check it out!), sass and ES6 (that was still the name at that point) with the help of Babel, …

I really liked their setup and decided to use it as the basis for my project; and that’s where my trajectory deviated :)

Given that I’m quite curious, I spent a while deconstructing Web Starter Kit’s build so that I could really understand what made it tick. That made me discover npm, gulp and the whole ecosystem of gulp plugins.

I really enjoyed doing so as it has helped me better grasp the necessary build steps for modern Web apps:

  • transpile code (ts->js, sass->css, …)
  • check quality
  • check style
  • create a production build (bundle, minify, mangle, …)
  • execute unit tests
  • execute end to end tests

At that moment, I was happy with the build as it stood, so I continued to focus on developing my app. I took a good look at what ES6 was, what it meant for JavaScript and its ecosystem, and how Babel helped (was it still called 6to5 then?). Learning about ES6 features took me a long while and I’m still far from done, but it was well worth it. ES2015 is such a huuuuuuuuuuuge step forward for the language.

I also took a glance at Angular 2 which was still in alpha state. It looked interesting but I believed that it would never be ready in time for our project at work (and it wasn’t). Still, I did spend a few days toying around with the alpha just to get to understand the basic principles.. and I must say that I really really loved what I saw!

That quick research spike also made me discover TypeScript.

Having a strong Java & OO background, TypeScript (TS) directly got me excited. I’m a strong believer in strong (heh) typing, and the fact that TS already supported many ES6 features that weren’t natively supported by Web browsers yet was very appealing to me.

Moreover, having dozens of Java developers in our development teams at work, TypeScript seemed really ideal for us as it supports many features and idioms that our developers are very familiar with (classes, interfaces, generics, strong typing, decorators, …).

If you want to learn more about TypeScript, I definitely recommend the Typescript Deep Dive.

At that point, tsconfig.json wasn’t there yet and the most obvious choice for integrating the necessary build step was gulp, as advertised by Dan Wahlin’s excellent blog post. If I had read more about npm, I might have gone down a completely different path (i.e., used npm scripts only).. ^^.

At that point, I had to deviate from what Web Starter Kit offered me in order to add build tasks for TypeScript, tslint, etc. Fiddling with the build made me realize that it was quite brittle, so I refactored it quite a lot and tried to improve things (e.g., separate the build tasks in different files, extract the configuration settings, ensure that it would not break the build on each error, etc). I remember that I wanted to contribute back to Web Starter Kit but realized too late that I had made too many changes at once for them to be able to integrate easily (silly me, bummer).

I went pretty far with it actually: at some point, I was using TypeScript to output ES6 code that I then sent through Babel, just so that I could use async/await and other things that TypeScript wasn’t yet able to transpile to ES5… :)

The exercise helped me see how “immature” and “fragile” the whole JavaScript ecosystem was. What I mean by that is that there seem to be only moving parts, and those parts don’t necessarily stay happy with each other. Not only do too few people really understand what semver actually means and respect it, but everything that shines bright gets replaced faster than the speed of light :)

As a technologist, I love the pace it imposes for the fun and innovation it brings to the table, but it’s also frustrating for many reasons and (should be) quite scary for enterprises (to some degree). People talk about JavaScript fatigue, which is quite a fun way to put it and I can certainly understand the idea now.

One example that I thought a lot about is the fact that each and every front-end project seems to have its own build chain and build configuration that lives within the project, in complete isolation and has to be maintained.

Of course each and every project has its specificities so there really can’t be ONE rigid & reusable solution to rule them all, but the idea of duplicating so much effort needlessly across a whole community of developers violates the DRY principle as much as anything ever could.

Just try and imagine how many people must have used some Yeoman generator to scaffold projects, which now all have separate builds with tasks that all do the same things but are all defined 20.000 times in a gazillion different ways using variable and unreliable dependency versions… :)

When you scaffold a project using a generator, you end up with a snapshot of the template and of the build provided by the generator at that point in time and then it’s up to you to keep your version up to date and to integrate all improvements and bug fixes, assuming you have time to follow that… you poor thing!

Being part of a core software development team at work, my focus is most often on finding reusable solutions to common problems and limiting effort duplication; the front-end universe’s situation seems quite sad in that regard.

Another point that struck me was how limited the main package management solution was. npm is nice and all, but not being able to define some parent/generic/reusable configuration (e.g., like parent POM files in Maven) is kind of surprising. Again, the DRY principle probably corresponds to DO Repeat Yourself in the front-end universe. I’m sure that front-end experts will tell me that you can work around all that in countless ways, but that’s exactly the issue: I shouldn’t have to invent my own solution to a general issue everyone should be concerned about.

To conclude on a positive note though, I do believe that all the tooling DOES bring added value because it makes it possible to manage dependencies correctly, define build steps which execute tests, generate coverage reports (e.g., using Istanbul), generate production builds etc.

This piece is getting a bit long, so I’ll continue my little story in part two!



Installing node and npm on Ubuntu 15+

Friday, December 18th, 2015

In case you would want to use one of my recent projects (e.g., ModernWebDevGenerator or ModernWebDevBuild) on Ubuntu (or any other OS btw), you’ll need nodejs and npm.

If you’re using Ubuntu and go the usual way (i.e., sudo apt-get install…), then you’re in for a bad surprise: you’ll get node 0.1x.y and a very old npm release.

Actually, the best way to get nodejs and npm on Ubuntu is to use the node version manager (nvm).

nvm can be used to install and keep multiple versions of node in parallel, which is very useful, especially when you have to test your node-based project on multiple versions.

The installation is very straightforward:

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.29.0/install.sh | bash

After that, close and reopen your terminal. You now have ‘nvm’ at your disposal.

nvm install 4.0
nvm install 5.0
nvm use 5.0

Just with the above, you get two versions of node (along with npm) installed. As you can see, you can use ‘nvm use’ to change the active version easily.
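
If you want a given version to stick for new shells, you can also set it as the default:

nvm alias default 5.0   # new shells will pick up 5.0 automatically
nvm ls                  # list installed versions and aliases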

That’s it!


A few tips for your online shopping

Friday, August 28th, 2015

I don’t often blog in French, but just this once :)

For a few years now, like quite a few people, I’ve been buying more and more things on the Web. It’s not that I’m against local shops; it’s simply that the price difference is often very significant.

I mostly buy from Amazon.fr, because that’s often where I find the best prices for what I need. If you plan to buy online, there are a few good tips worth knowing.

For example, if you like Amazon, you should know that they have several sites in Europe, such as Amazon.de, Amazon.es, Amazon.it, Amazon.co.uk, … and item prices often differ (sometimes substantially) between them! So my first tip: before buying, check that the product isn’t cheaper on one of Amazon’s other sites. There are no extra fees for ordering there.

Note that if the language barrier keeps you from using one of Amazon’s foreign sites, Google Chrome can translate the pages automatically for you (it’s approximate, but more than enough to find your way around).

Another thing to watch out for on Amazon is that it’s an online marketplace: the Amazon company lets other companies sell their products on its site (a bit like eBay). As a consequence, prices can vary a lot from one seller to another for one and the same product. When Amazon itself sells & ships, it’s generally the cheapest. On a product page, you can see the various offers for a given product by clicking the “xx new” link:

[Screenshot: the “xx new” link on an Amazon product page]

That takes you to the following page, where you can see the different offers and add the one you’re interested in to your cart:

[Screenshot: the list of offers for the product]

Sometimes the offer shown first on Amazon comes from a third party because Amazon no longer has the item in stock; in those cases it’s often better to wait until Amazon sells the product again, both to get a better price and to avoid delivery fees. In general, when Amazon sells & ships, there are no shipping costs, which is rarely the case with other sellers; but then again, it all depends on the price :)

It would be too easy to say that Amazon is always the cheapest; it often is, but not always, and sometimes there are even significant price differences for certain products at certain times.

Moreover, prices for a given product sometimes vary enormously over time (and over short periods). There’s a very handy extension for Google Chrome & Mozilla Firefox called Camelizer, which shows a graph of a product’s price history; it’s very useful to see whether the current price is a good one or not :)

In any case, don’t hesitate to shop around online stores to find the best price; that goes without saying, but the trick is knowing the good addresses…

Another must-have extension for efficient online shopping is Shoptimate, which can do the legwork for you: if you’re on a product page of a site it supports, it automatically looks up the price of that item on the other supported sites that sell it, and tells you right away if there’s a better offer elsewhere:

[Screenshots: Shoptimate comparing prices across sites and flagging a cheaper offer]

In the example above, the same product is currently €100 cheaper on Amazon.de compared to Amazon.fr, which is rather.. enormous ;-)

Still in that example, the site designere_fr seems to be even cheaper, but since I don’t know that site, I preferred to stick with Amazon’s offer. I imagine it’s trustworthy since Shoptimate lists it, but as they say, better safe than sorry ^^.

Speaking of safety, I advise you to avoid little-known sites when ordering online. While some sellers on eBay do sell new products, they’re not necessarily all reliable, and the same goes for some e-commerce sites… Also be wary of Google search results when looking for a product; they’re full of sites to avoid.

Also, when I buy from a site other than Amazon, I generally try to use PayPal if possible; it saves me from sending my credit card details all over the place. With PayPal, you register the card once and its details are never revealed to the site where you shop. On top of that, PayPal even makes it possible to shop online without a credit card; the only downside is that not all e-commerce sites support PayPal.

I could write a whole series of articles about IT security, but that will be for another time ^^.

Personally, my list of online shops is fairly short:

  • Amazon: a bit of everything & often the best prices
  • bol.com: a bit of everything & sometimes very, very low prices on certain products (e.g., a €500 difference on the price of my speakers!!)
  • Philibert: board games (best prices)
  • LDLC: computer hardware & smartphones & hi-fi (very often more expensive)
  • Rue du Commerce: computer hardware, smartphones & hi-fi (often more expensive)
  • Rue Montgallet: same
  • Photo Erhardt: photo gear (Germany)
  • Sarenza: clothes & shoes
  • ZooPlus: pet food
  • eBay: electronic components only, or things impossible to find new
  • Seeed Studio: electronic components
  • f-mobile: smartphones & co (sometimes cheaper)

If you know other sites or have tips to share, don’t hesitate =)


Additional Windows 10 Configuration Tips

Wednesday, August 26th, 2015

I’ve recently blogged about my Windows 10 configuration. In this post I’ll list some additional things that I could disable/tweak/configure using a new application called W10Privacy.

If you haven’t read the first part, then I recommend doing so first, as it has some interesting tips in store for you :)

First, you need to download the W10Privacy application. Once downloaded, uncompress it and run it with administrator privileges. To get access to the list of System applications, you can also download PsExec and place the executable in the folder where W10Privacy is located.

Here’s what I’ve configured using that tool (knowing that my configuration already covers many of the settings it provides):

  • Privacy
    • Turn off SmartScreen Filter to check web content (URLs) that Windows Store apps use
    • Disable sending of information on writing behavior
    • Disable location for this device
    • Disable asking for Feedback
    • Disable the AutoLogger
    • Block Microsoft server, to which telemetry data will be sent (in the hope that this setting has additional domain names to block)
  • Search
    • Do not search online and do not include web results
    • Disable the retrieve of Bing search suggestions and web results (applies only to the actual user)
  • Network
    • Do not connect to proposed public hotspots
    • Do not connect to wireless networks shared by my contacts
    • Do not share my networks with my Outlook.com contacts
    • Do not share my networks with my Skype contacts (w t f)
    • Do not share my networks with my Facebook contacts (w t f)
  •  Explorer
    • Remove search option on the taskbar (searching by Windows key + Q is still possible)
    • File Explorer opens at “This PC” instead of “Quick Access”
    • Show a desktop icon for “Computer”
    • Show extensions for known file types in File Explorer
    • Show hidden files, folders or drives in File Explorer
    • Show protected operating system files in File Explorer
    • Turn off Windows SmartScreen
    • Remove “- Shortcut” suffix from future shortcut file names (w00t!)
  • Services
    • Disable Windows Diagnostics Tracking Service – reboot required!
  • Edge
    • Send “Do Not Track” requests
    • Do not help me protect me from malicious sites and downloads with SmartScreen Filter
  • OneDrive
    • Do not start OneDrive automatically when I sign in to Windows
    • Remove OneDrive from the File Explorer sidebar in Windows 10
  • Tasks
    • Disable the task “Microsoft Compatibility Appraiser”
    • Disable the task “ProgramDataUpdater”
    • Disable the task “Proxy”
    • Disable the task “Consolidator”
    • Disable the task “KernelCeipTask”
    • Disable the task “UsbCeip”
    • Disable the task “Microsoft-Windows-DiskDiagnosticDataCollector”
    • Disable the task “DmClient”
    • Disable the task “FamilySafetyMonitor”
    • Disable the task “FamilySafetyRefresh”
    • Disable the task “SmartScreenSpecific”
  • Tweaks
    • Disable automatic restart, the user is instead asked to plan a restart
    • Disable updates for other Microsoft products on Windows Update (e.g., office, etc)
    • Updates and apps will no longer be distributed to other clients (disables the lower switch) (i.e., my bandwidth is my own)
    • Distribute updates and apps only on the local network (disables upper switch)
  • Background-Apps
    • Disable background functionality for … (ALL THE DAMN APPS!)
  • User-Apps
    • Uninstall the following:
      • Money
      • News
      • Sports
      • Weather
      • First Steps
      • Get Office
      • OneNote
      • Skype download
      • Groove-Musik
      • Movies and TV shows
      • Maps
      • Phone Companion

As you can see, W10Privacy has quite a lot of nice features. I know that disabling the privacy related features will not protect my privacy much more than it currently is (i.e., it ain’t), but it can’t do harm either and at worst it’ll just save me some CPU cycles.. ;-)


Google Translate bash function (Windows)

Friday, August 7th, 2015

I’ve noticed that since I switched to Windows 10, my Google Translate bash functions were broken. I suppose that something has changed in the way that explorer.exe interprets URLs (?). Anyway, here’s a fixed version, simply using a different way to construct the URL ;-)

I use the function below to translate from English to French:

enfr(){ (explorer "https://translate.google.com/?sl=en&tl=fr&text=$*" )& }

The only things to know to understand the above:

  • sl = source language
  • tl = translation language
  • text = what to translate :)
  • $* = arguments passed to the function (i.e., what you want translated)
  • calling this function will open a new tab in your default Web browser

I know that it could be improved because it needs proper escaping (e.g., running frnl c’est sympa will break it because of the ‘), but it’s just enough for what I need.

One could create a more intelligent function supporting multiple languages (please do :p) but I don’t need one =)
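
That said, if anyone wants a starting point, here’s a rough sketch of what a more generic version could look like, assuming jq is installed (its @uri filter takes care of the URL-encoding, which also fixes the escaping issue):

# Hypothetical generic version: gt <source-lang> <target-lang> <text...>
gt(){
  local sl=$1 tl=$2; shift 2
  local text=$(printf '%s' "$*" | jq -sRr @uri)
  ( explorer "https://translate.google.com/?sl=${sl}&tl=${tl}&text=${text}" )&
}

# e.g.: gt en fr "this is nice", or gt fr nl "c’est sympa"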