Archive for the ‘IT’ Category

Using JUnit 5 with Spring Boot 2, Kotlin and Mockito

Tuesday, December 19th, 2017

I’ve just published a new article on Medium.com.

I’m lazy today, so I’ll just give you a link to it:
https://medium.com/@dSebastien/using-junit-5-with-spring-boot-2-kotlin-and-mockito-d5aea5b0c668

Enjoy!


My GPG Config

Tuesday, November 28th, 2017

About

Some notes about my current setup for GPG/PGP.

I’m currently using GnuPG: https://www.gnupg.org/ and in particular Gpg4win.

Portable mode

As usual, I like portable installs and GPG is no exception. I’ve uncompressed it in my tools folder (synchronized across my machines). The tool itself is portable; Kleopatra may not be, but I don’t care too much about that.

By default, Gpg4win installs in two locations:

  • Gpg4win: C:\Program Files (x86)\Gpg4win
  • GnuPG: C:\Program Files (x86)\GnuPG

Bash profile

Here’s how my bash profile is configured to have GPG tools available:

# GPG/PGP
# where the tool is installed
export GPG4WIN_HOME=$TOOLS_HOME/Gpg4Win_3.0.1
export GPG_HOME=$GPG4WIN_HOME/GnuPG
export KLEOPATRA_HOME=$GPG4WIN_HOME/Gpg4win

append_to_path $GPG_HOME
append_to_path $GPG_HOME/bin
append_to_path $KLEOPATRA_HOME/bin_64
append_to_path $KLEOPATRA_HOME/bin

# where it puts its files and looks for its configuration
export GNUPGHOME=$HOME/.gnupg

# create it, otherwise gpg complains
mkdir -p "$GNUPGHOME"
alias gpg='gpg.exe'
alias pgp='gpg' # who cares ;-)
alias kleopatra='kleopatra.exe'
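
For reference, append_to_path is a small helper from my bash profile. Here’s a minimal sketch of what it could look like (this exact implementation is an assumption, not necessarily the profile’s real code):

# hypothetical helper: append a directory to the PATH if it exists
append_to_path()
{
    if [ -d "$1" ]; then
        export PATH="$PATH:$1"
    fi
}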

GPG configuration

Here’s my current GPG configuration (~/.gnupg/gpg.conf). For clarity, I’ve removed the comments for stuff I don’t use (although I like to keep those in my actual configs):

# get rid of the copyright notice
no-greeting

# key server
keyserver hkp://keys.gnupg.net

# Ensure that stronger secure hash algorithms are used by default
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAMELLIA256 CAMELLIA192 CAMELLIA128 TWOFISH CAST5 ZLIB BZIP2 ZIP Uncompressed
personal-digest-preferences SHA512
cert-digest-algo SHA512
 
# Enable the passphrase agent
use-agent
 
# Avoid locking files
lock-never

# Armor when exporting
armor
 
# Keyserver options
keyserver-options auto-key-retrieve include-subkeys honor-keyserver-url honor-pka-record
 
# Import/export options
import-options import-clean
export-options export-clean

# Don't use key ids as those are unsafe (both short and long!)
keyid-format none

With this configuration, I’ve forced the usage of stronger secure hash algorithms by default and also disabled key ids (short & long) since those are insecure. There’s nothing much to it.
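
If you want to double-check which algorithms your gpg build actually supports before setting preferences like these, gpg --version lists them:

gpg --version
# look for the "Supported algorithms" section (Pubkey, Cipher, Hash, Compression)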

How I generated my keys

First of all, I didn’t reinvent the wheel, I’ve mostly applied what Alex Cabal has described here, so thanks to him!

You might ask: “Why not a single key that does it all?”. Because, in general, mixing signing and encryption keys is not a good idea, both management- and security-wise. Firstly, different key types have different lifecycles. Secondly, it might simply not be safe to do so.

Also, without this setup, if the keys I use on a daily basis were to be compromised, I would have no choice but to re-create everything from scratch (i.e., a new identity!). With the configuration below, I can simply revoke a specific sub-key and create a new one, while keeping my identity.

Here’s the whole shebang.

Create the keypair

First of all, create the key:

gpg --gen-key

Settings to use:

  • Kind of key: (1) RSA and RSA
  • Key size: 4096 (longer = safer?)
  • Valid for: 0 (never expires)
  • mail: [email protected]

When selecting the passphrase, use a tool like KeePass to generate it; don’t choose the passphrase yourself, you’re not smart enough ;-).
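
Note that on recent GnuPG versions (2.1+), plain gpg --gen-key only asks a few quick questions; to get all the prompts listed above (key kind, size, expiration), you may need the full variant:

gpg --full-gen-key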

Set strong hash preferences on the keypair

Just to make sure:

gpg --edit-key [email protected]
...
gpg> setpref SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAMELLIA256 CAMELLIA192 CAMELLIA128 TWOFISH CAST5 ZLIB BZIP2 ZIP Uncompressed
gpg> save
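
You can double-check the result with showpref, which prints the cipher, digest and compression preferences attached to the key:

gpg --edit-key [email protected]
gpg> showpref
gpg> quit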

Add a signing sub-key

Next, create a signing sub-key for code signing:

gpg --edit-key [email protected]
...
gpg> addkey
...
gpg> save

Settings:

  • Key type: (4) RSA (sign only)
  • Key size: 4096 (longer = safer?)
  • Valid for: 0 (never expires)
  • mail: [email protected]

Add an authentication sub-key

Next, create an authentication sub-key for SSH authentication:

gpg --expert --edit-key [email protected]
gpg> addkey
...
gpg> save

Settings:

  • (8) RSA (set your own capabilities)
  • S: disable sign
  • E: disable encrypt
  • A: enable authenticate
  • -> you should now see “Currently allowed actions: Authenticate”
  • Q: finished
  • Key size: 4096
  • Expires: today + 365 days

Create a revocation certificate

Generating a revocation certificate will allow me to later revoke this keypair if it gets compromised. It must be kept somewhere safe, because anyone holding it can render my keys useless ;-)

gpg --output ./[email protected] --gen-revoke [email protected]

Export the keypair/subkeys to a safe location and make the key safe to use

First export the private key:

gpg --export-secret-keys --armor [email protected] > [email protected]

Then export the public key:

gpg --export --armor [email protected] > [email protected]

Finally, you can export the sub-keys alone:

gpg --export-secret-subkeys [email protected] > /tmp/gpg/subkeys

We’ll see why afterwards.

Ideally, you should export your private key to a temporary in-memory file system. Alex proposed the following:

mkdir /tmp/gpg # create a temp folder
sudo mount -t tmpfs -o size=1M tmpfs /tmp/gpg

Once that’s mounted, you can safely write there and remove the folder once you’re done.
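
When you’re done, unmounting the tmpfs discards its contents, since they only ever lived in memory; a minimal cleanup sketch (assuming the mount point above):

sudo umount /tmp/gpg # the in-memory contents vanish with the mount
rmdir /tmp/gpg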

Once exported, back those keys up in a safe location (e.g., KeePass).

Once you’re 100% sure it’s backed up, delete the secret key from the gpg keyring:

gpg --delete-secret-key [email protected]

Now re-import the sub-keys. With this you’ll only have the sub-keys at your disposal (and you don’t need more than that on a daily basis):

gpg --import /tmp/gpg/subkeys

So, in short, the steps are:

  • create/mount the temporary in-memory file system
  • export your private key
  • back it up in a safe location
  • remove the temporary file system
  • bonus: burn the machine you’ve done this upon ;-)

To verify that you didn’t mess up, go ahead and try to add a new sub-key; you shouldn’t be able to:

gpg --edit-key [email protected]
gpg> addkey
Secret parts of primary key are not available.
gpg: Key generation failed: No secret key
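
Another quick sanity check (a sketch): list your secret keys and look at the marker next to “sec”:

gpg --list-secret-keys
# a 'sec#' marker (note the '#') means the secret part of the primary
# key is not present in the keyring, which is exactly what we want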

That’s it!

How I can revoke a sub-key

Using Google! Err I mean like this: https://wiki.debian.org/Subkeys.

First, re-import my whole key (i.e., master + sub-keys):

gpg --allow-secret-key-import --import 

Second, edit the key and revoke the sub-key that I don’t want anymore:

gpg --edit-key [email protected]
gpg> list # list the keys
gpg> key N # select the unwanted sub-key (key 1, key 2, ...)
gpg> revkey # generate a revocation certificate
gpg> save

Once done, I can export/back-up the result and finally make sure to send the updated key to the key servers.
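
For example (a sketch, reusing my fingerprint from the next section; adapt it to yours):

gpg --send-keys 9AEC75952F0F8E5265A843646448ABB4AEAD81A2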

Where I’ve published my key

Once my key was ready, I’ve published it at various locations.

For starters, I needed the full fingerprint (the 40-character beauty):

gpg --fingerprint

In my case: 9AEC 7595 2F0F 8E52 65A8 4364 6448 ABB4 AEAD 81A2.

Just to be clear: if you need to share your key, always use the full fingerprint; never the short version (8 hex chars), nor the “long” one (16 hex chars), since those are really unsafe.

First I sent the public key to the MIT key server using gpg:

gpg --send-keys [email protected]

Then I exported my public key to a file (ASCII-armored):

gpg --export --armor > dsebastien-pgp-key.asc

I then uploaded that file to my FTP, updated my about page to add the full fingerprint and a link to my public key. Then I added a blog post with the same information.

I’ve also tweeted the same information. After that, I updated my Twitter bio to link to that tweet (optimizing character count :p).

Next up, I’ve uploaded the public key manually to Ubuntu’s key server.

Finally, I’ve updated my GitHub profile to add my PGP key.

Git client configuration

I’ve also updated my git client configuration in order to make my life easier.

  • git config --global user.signingkey 9AEC75952F0F8E5265A843646448ABB4AEAD81A2

This tells git which key to use. BTW, don’t enable automatic commit signing. Sign tags instead.

Verifying signatures is a breeze with git.
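
For example, signing and then verifying a tag could look like this (a sketch; the tag name and message are made up):

git tag -s v1.0.0 -m "Signed release tag"
git verify-tag v1.0.0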

Later

In a later post, I’ll explain how I use my PGP keys with SSH, git and my Yubikey.

That’s all folks!


New PGP key

Monday, November 27th, 2017

I’ve got a new PGP key.

My PGP key fingerprint is: 9AEC 7595 2F0F 8E52 65A8  4364 6448 ABB4 AEAD 81A2

You can find my public PGP key here: https://dsebastien.net/pgp/dsebastien-pgp-key.asc


Battling against the 4.7.0 CrashPlan Synology package update

Saturday, May 21st, 2016

If you’re using CrashPlan to back up data on your Synology NAS in headless mode, you’ve probably already had to go through this update nightmare. Unfortunately, this is pretty regular; each time an update arrives for CrashPlan, the package gets broken in various ways.

Basically, clicking the “update” button always leads to a couple of hours wasted :(

Here’s how I fixed the issue this time, just in case it could help other people! Before you start, make sure you have a good hour in front of you… ;-)
The commands are assumed to be executed as root…

  • close your eyes and update the package
  • start the package; it’ll download the update file, then crash and burn
  • copy cpio from the CrashPlan package to /bin/cpio: cp /var/packages/CrashPlan/target/bin/cpio /bin/cpio
  • extract the “upgrade” file: 7z e -o./ /var/packages/CrashPlan/target/upgrade.cpi
  • move the upgrade file outside the Crashplan folder
  • uninstall the CrashPlan package
  • install the CrashPlan package again (don’t let it start)
  • move back the upgrade file and put it in the upgrade folder (/var/packages/CrashPlan/target/upgrade)
  • edit install.vars in the CrashPlan folder to point to the correct location of Java on your NAS. To find it, just use ‘which java’. Then put the correct path in the JAVACOMMON property
  • (optional) rename the upgrade file to upgrade.jar (or whatever you like)
  • extract the upgrade file: 7z e -o/var/packages/CrashPlan/target/lib /var/packages/CrashPlan/target/upgrade/upgrade.jar
  • remove the upgrade file (not needed anymore)
  • remove the upgrade.cpi file
  • IF you have enough memory, then add the USR_MAX_HEAP property to /var/packages/CrashPlan/target/syno_package.vars (see the example after this list)
  • start the CrashPlan package; it should now stay up and running
  • install the latest CrashPlan client version on your machine
  • disable the Crashplan service on your machine
  • get the new Crashplan GUID on your NAS: cat /var/lib/crashplan/.ui_info; echo
  • copy the guid (everything before “,0.0.0.0”) into the ‘.ui_info’ file under C:\ProgramData\CrashPlan (assuming you’re on Windows). You must edit the file with an editor run as administrator. Make sure to replace the IP (127.0.0.1) with your NAS’s IP
  • Start the CrashPlan client, enter your CrashPlan credentials and passphrase (you do have one, right? :p)
  • Now let CrashPlan sync all your files for a few days :o)
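
For the heap setting mentioned above, here’s a minimal sketch (the 1024M value is an assumption; size it according to your NAS’s RAM):

echo "USR_MAX_HEAP=1024M" >> /var/packages/CrashPlan/target/syno_package.vars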

Hope this helps!

Enjoy :)


So you want to be safe(r) while accessing your online bank account?

Saturday, May 14th, 2016

Web browsers

One quick tip: if you want to access sensitive Websites safely (e.g., your online bank, your taxes, …), then:

  • do so in a different Web browser than the one you generally use.
  • make sure that the browser you use for sensitive sites is NOT your default browser (i.e., the one that opens when you click on links in e-mails for example)
  • make sure that your browser is up to date
  • make sure that you never use that browser for anything else
  • do NOT visit anything else (i.e., no other tabs) at the same time
  • quickly check that you don’t have weird extensions or plugins installed (you could very well have been p0wned by any application installed on your machine)
  • make sure that you configure very strict security rules on that browser (e.g., disable caching, passwords/form data storage, etc)

Why does this help? Well, if your machine isn’t already part of a botnet or infected with malware, then the above could still protect you against commonly found vulnerabilities (e.g., cross-site request forgery), vulnerabilities exploited through a different tab in your browser, etc.

Personally I use Google Chrome as my default Web browser and Mozilla Firefox whenever I need to access sensitive sites.

Do NOT consider this bulletproof though; it’s nothing but ONE additional thing you can do to protect yourself. You’re still exposed to many security risks; the Web is a dangerous place ;-)


Don’t use JSON for configuration files

Monday, April 25th, 2016

For quite some time, I wondered about this: “why the hell are comments forbidden in JSON files?”.

The short answer is: Douglas Crockford cared about interoperability (https://plus.google.com/+DouglasCrockfordEsq/posts/RK8qyGVaGSr).

The problem is that nowadays, many CLI tools make use of JSON files to store their configuration. It’s nice because the syntax is pretty lightweight and really easy to parse, but that’s where it ends, because you know what? Comments are pretty darn useful in configuration files…

Unfortunately, as it stands, many of those tools (or at least the parsers they rely upon) choose not to accept comments. As Douglas states, nothing prevents us from sending JSON files through a minifier to get a comments-free version, but it’s just a pain to have to do that before passing JSON files around; worse when you need the file available on disk for some tool, and even worse when that file needs to have a specific name (e.g., tsconfig.json).

Some tools do add support for comments, but then you realize that any surrounding tools must also accept them, which is often not the case or takes a while to happen. Add to that IDEs, which will complain if you start adding comments to JSON files (and rightly so…).

All in all, my opinion on the matter is now that JSON is just not the answer for configuration files. Since JSON does not support comments, don’t use it; use something else rather than hacking your way around the limitation.

What should we use instead? Who cares, as long as it supports comments and doesn’t force you into hacks just to be able to comment the things that need it!

YAML is one option, TOML is another, XML is yet another (though way too verbose) and I’m sure there are a gazillion other ones.

If you’re in the JS world, then why not simply use JS modules? There you get the benefit of directly supporting more advanced use cases (e.g., configuration composition, logic, etc.).


Silence please

Tuesday, April 19th, 2016

As all music copyright holders will tell you, adding music you like (but do not own) to family video clips is copyright infringement. As such, you should remove the audio track entirely to avoid getting into a lawsuit… or worse, getting your video removed from YouTube :)

The command below will list all the streams that exist in your video file:

$ ffmpeg -i yourfile.mp4

ffmpeg version N-60592-gfd982f2 Copyright (c) 2000-2014 the FFmpeg developers
  built on Feb 13 2014 22:05:50 with gcc 4.8.2 (GCC)
  configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
  libavutil      52. 63.101 / 52. 63.101
  libavcodec     55. 52.101 / 55. 52.101
  libavformat    55. 32.101 / 55. 32.101
  libavdevice    55.  9.100 / 55.  9.100
  libavfilter     4.  1.102 /  4.  1.102
  libswscale      2.  5.101 /  2.  5.101
  libswresample   0. 17.104 /  0. 17.104
  libpostproc    52.  3.100 / 52.  3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'yourfile.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp42mp41
    creation_time   : 2015-12-22 23:09:46
  Duration: 00:05:27.04, start: 0.000000, bitrate: 5836 kb/s
    Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv), 1280x720 [SAR 1:1 DAR 16:9], 5579 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
    Metadata:
      creation_time   : 2015-12-22 23:09:46
      handler_name    : Alias Data Handler
    Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 253 kb/s (default)
    Metadata:
      creation_time   : 2015-12-22 23:09:46
      handler_name    : Alias Data Handler

As you can see in the example above, my file contains two streams: the video stream (h264) as 0:0 and a single audio stream (aac) as 0:1.

To get rid of the audio stream, I simply needed to ask ffmpeg nicely to copy the file, keeping the 0:0 video stream, ignoring the audio stream and leaving the codecs alone (i.e., not trying to re-encode anything):

ffmpeg -i yourfile.mp4 -map 0:0 -acodec copy -vcodec copy yourfile-silent.mp4

If you have multiple video streams or if you want to keep some audio streams, then just adapt the mappings accordingly.
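
Alternatively, if all you want is to drop every audio stream while keeping the rest untouched, ffmpeg’s -an flag combined with stream copy should do the trick (a sketch):

ffmpeg -i yourfile.mp4 -c copy -an yourfile-silent.mp4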


Docker for Windows (beta) and msysgit

Friday, April 15th, 2016

I’ve recently joined the beta program for Docker on Windows (now based on Hyper-V).

I wanted to keep my current config using msysGit but got weird errors when executing Docker commands from msysGit: https://forums.docker.com/t/weird-error-under-git-bash-msys-solved/9210

I could fix the issue by installing a newer version of msysGit with support for the MSYS_NO_PATHCONV environment variable. With that installed, I then replaced my docker alias with a better approach, a shell function:

docker()
{
    # disable MSYS path conversion while docker.exe runs
    export MSYS_NO_PATHCONV=1
    ("$DOCKER_HOME/docker.exe" "$@")
    export MSYS_NO_PATHCONV=0
}
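
Note that you could also scope the variable to the single invocation, which avoids having to reset it afterwards:

docker()
{
    # the variable only applies to this one command
    MSYS_NO_PATHCONV=1 "$DOCKER_HOME/docker.exe" "$@"
}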

Hope this helps!


Static sites? Let’s double that!

Monday, March 14th, 2016

Now that I’ve spent a good deal of time learning about what’s hot in the front-end area, I can go back to my initial goal: renew this Website… or maybe I can fool around some more? :) In this post, I’ll describe the idea that I’ve got in mind.

One thing that’s been bothering me for a while is the dependency that I currently have on WordPress, PHP and a MySQL database. Of course there are pros and cons to consider, but currently I’m inclined to ditch WordPress, PHP and MySQL in favor of a static site.

Static site generators like Hugo (one of the most popular options at the moment) let you edit your content using flat files (e.g., using Markdown) with a specific folder structure. Once your content is ready for publication, you have to use a CLI/build tool that takes your content (e.g., posts, pages, …) and mixes it with a template.

Once the build is completed, you can upload the output on your Web host; no need for a database, no need for a server-side language, no need for anything more than a good old Apache Web server (or any Web server flavor you like). Neat!

Now what I’m wondering is: can we go further? What if we could create doubly static static sites? :)

Here’s the gist of my idea:
First, we can edit/maintain the content in the same way as with Hugo: through a specific folder structure with flat files. Of course, we can add any feature we’d like around that: front matter, variables & interpolation, a content editor, … For all of that, a build/CLI tool would be useful… more on that later.

Note that the content could be hosted on GitHub or a similar platform to make the editing/publishing workflow simpler/nicer.

So, we’ve got static content, cool. What else? Well now what if we added a modern client-side Web application able to directly load those flat files and render them nicely?

If we have that then we could upload the static content to any Web host and have that modern Web app load the content directly from the client’s Web browser. The flow would thus be:

  • go to https://www.dsebastien.net
  • receive the modern Web app files (HTML, CSS, JS)
  • the modern Web app initializes in my Web browser
  • the modern Web app fetches the static content (pages, posts, …)
  • the modern Web app renders the content

Ok, not bad, but performance could be an issue! (let’s ignore security for a moment, ok? :p).
To work around that, we could imagine loading multiple posts at once and caching them.
If we have a build/CLI, it could also pack everything together so that the Web app only needs to load a single file (let’s ignore the HTTP 1.1 vs HTTP 2.0 debate for now).

In addition, we could also apply the ‘offline-first’ idea: put pages/posts in local storage on first load; the benefit would be that the application could continue to serve the content offline (we could combine this with service workers).

The ideas above partially mitigate the performance issue, but first render would still take long and SEO would remain a major problem since search engines are not necessarily great with modern client-side Web apps (are they now?). To fix that, we could add server-side rendering (e.g., using Angular Universal).

Server-side rendering is indeed nice, but it requires a specific back-end (let’s assume node). Personally I consider this to be a step back from the initial vision above (i.e., need for a server-side language), but the user experience is more important. Note that since dedicated servers are still so pricey with OVH, it would be a good excuse to go for DigitalOcean… :)

Another important issue to think about is that without a database, we don’t have any way to query the content (e.g., search for a keyword in all posts, find the last n posts, …). Again, if we have a build/CLI, then it could help work around the issue; it could generate an index of the static content you throw at it.

The index could contain results for important queries, post order, … By loading/caching that index file, the client-side Web app could act more intelligently and provide advanced features such as those provided by WordPress and WordPress widgets (e.g., full text search, top n posts, last n posts, tag cloud, …).

Note that for search though, one alternative might be Google Search (or Duck Duck Go, whatever), depending on how well it can handle client-side Web apps :)

In addition, the build/CLI could also generate content hashes. Content hashes could be used to quickly detect which bits of the content are out of date or new and need to be synchronized locally.

There you have it, the gist of my next OSS project :)

I’ll stop this post here as it describes the high level idea and I’ll publish some additional posts to go more in depth over some of the concepts presented above.


YubiKey

Thursday, February 18th, 2016

I’ve received a YubiKey Neo today and thus I’m going to start experimenting with it. If you care about security but have never heard of YubiKey or Universal 2nd Factor (U2F), then you should probably take a look at how awesome that stuff is :)

Here’s a list of things I’m planning to use it as a second authentication factor for:

  • Google tools & Google Chrome
  • Windows authentication
  • OpenVPN
  • KeePass
  • Android

I’ll also look at other ways I could leverage U2F… If you’ve got tips & tricks to share, don’t hesitate to tell me!