Posts Tagged ‘web’

Modern Web Development – Part Two

Wednesday, February 17th, 2016

In the first part of this series, I explained how I rediscovered the state of the Web platform, and specifically of the JavaScript universe.

Around June, I changed my mind about AngularJS and thought that Angular 2 could arrive in time for our project (hint: it didn’t), so I tagged the then-current state of my personal project and started developing my site with Angular 2 instead.

I spent some of my holidays migrating my app to Angular 2. During that time, I banged my head against the wall so many times it still hurts; not because of Angular 2, but because of RxJS and Reactive Programming; those made me feel really stupid for a while :)

During that period, I also kept improving my build. The build story was starting to itch me real bad, so at some point I put my project aside and decided to extract the build to a separate project and concentrate on that for a while. That effort led to the creation of modernWebDevBuild (MWD for friends). MWD was my take on providing a reusable build for creating modern web applications. You could argue that that solution is not modern anymore but hey, I can’t stop time ;-)

If you look at the feature list of modernWebDevBuild, you’ll see that it’s basically Web Starter Kit on steroids with support for TypeScript, tslint, karma, etc.

I’ve put some effort into making it flexible enough that it doesn’t impose too many constraints on the client project’s structure. I’m pretty sure that, with some help from the community, it could become much more malleable and be reused across many more projects, whether those are based on Angular 1, Angular 2 or something else.

A while later, I also created a Yeoman generator called modernWebDevGenerator to make it easy to scaffold new projects using modernWebDevBuild. The generated projects include many personal choices (e.g., Angular 2, TypeScript, SystemJS, JSPM, sass, jshint and a rule set, jscs and a rule set, …) and style guidelines (e.g., a component approach for Angular and SASS code), but most if not all of it can be stripped away easily.

In my opinion, modernWebDevBuild was a good shot at providing a reusable build for front-end web development. I’ve used it for multiple projects and could update it easily without having to worry about the build or having to maintain a ton of build-related dependencies and whatnot. That was a relief: fixing an issue meant fixing it once in one place, much better!

The idea of having a complete build as a dependency of a project is something I find immensely valuable.
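
To make that concrete, here is a minimal sketch of what consuming such a build could look like from a client project’s gulpfile. The registerTasks entry point and its options are made up for illustration; they are not necessarily modernWebDevBuild’s actual API.

```ts
// gulpfile.ts (sketch): a client project delegating its whole build to a shared package.
// The package entry point, the registerTasks function and its options are hypothetical,
// shown only to illustrate the "build as a dependency" idea.
import * as gulp from 'gulp';
import { registerTasks } from 'modern-web-dev-build'; // hypothetical import

registerTasks(gulp, {
  srcFolder: 'app',    // where this particular project keeps its sources
  distFolder: 'dist',  // where production builds should land
  typescript: true     // opt in/out of specific build features
});

// The project now gets 'serve', 'build', 'test', ... tasks without owning their code:
// fixing or improving the build means bumping one dependency, not patching N gulpfiles.
```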

Recently though, for the project at work (where we’ll use AngularJS for now), we evaluated different solutions for bundling, module loading and build tasks in general, which led to the decision to use webpack. So far, it’s been a blast. I’m not going to explain in detail what webpack is as there are already more than enough articles about it out there, but IMHO it’s the clear winner at the moment. The most important thing for me is that it has a very active and vibrant community busy maintaining and developing tons of plugins. Those plugins add support for pretty much anything you might need in your front-end build. You need transpilation? Check. You need autoprefixing? Check. You need cache busting? Check… well, you get the idea.
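
As an illustration of why it won us over, here is a rough sketch (using a more recent webpack configuration shape than what we had at the time) of how those concerns map to loaders and plugins; the exact loader choices are illustrative, not a recommendation.

```ts
// webpack.config.ts (sketch): transpilation, autoprefixing and cache busting
// all handled by the loader/plugin ecosystem rather than hand-written tasks.
import * as path from 'path';
import * as HtmlWebpackPlugin from 'html-webpack-plugin';

export default {
  entry: './src/main.ts',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.[hash].js' // cache busting through content-dependent file names
  },
  resolve: { extensions: ['.ts', '.js'] },
  module: {
    rules: [
      { test: /\.ts$/, use: 'ts-loader' }, // transpilation
      { test: /\.scss$/, use: ['style-loader', 'css-loader', 'postcss-loader', 'sass-loader'] } // autoprefixing via PostCSS
    ]
  },
  plugins: [
    new HtmlWebpackPlugin({ template: './src/index.html' }) // injects the hashed bundle into index.html
  ]
};
```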

We’ve decided to fork AngularClass’s Angular 2 Webpack Starter, as it was the closest to what we needed.

With our project template, our goal is to integrate the whole stack we’ve decided to use (e.g., Redux, RxJS, JSData, webpack for module bundling/loading, …) and use that template as the basis for our next projects.

The thing is that I’d still like to extract the webpack build (or at least part of it) to a separate project. Again, I really believe that it should be possible to provide a reusable build configuration, as long as it is flexible enough to accommodate general use cases. Ultimately the discussion boils down to pragmatism versus the pleasure of reinventing your own wheel each time. Personally, I like round wheels, and if one goes flat I don’t want to have to fix all my cars. What about you?

In the next post, I’ll explain what my new goal is for my site; as I said, I took a different route for a while because I had lots to learn, but now it’s about time for me to go back to my initial goal :)


Modern Web Development – Part One

Wednesday, February 17th, 2016

Since April last year, I’ve been plunging back into the world of Web development… and what fun it has been! In this series of posts, I’m going to summarize the stuff I did over the last year to hop back on the train, and describe what I’ve learned along the way.

At the time, I published two blog posts, which were my way of condensing my vision for an important project at work aimed at modernizing the way we create Web applications by moving towards a client-side architecture combined with RESTful Web Services on the back end.

When I started looking back at how the Web platform had evolved during the 2012-2015 period, the main things I had on my mind were:

  • mobile first & responsive web design
  • client-side Web application architecture (which I personally consider to be part of Web 3.0 – seriously, why not?)
  • the new specs that had reached broad support in modern Web browsers and were gaining a lot of traction
  • the offline first idea that these specs made more realistic

I wanted to learn more about AngularJS, node.js, npm and sass, but that was about it. I remember that at first, I had no precise idea yet about the build tool and the build steps that I wanted/needed… I hadn’t even heard about ES6 yet!

Since then, I’ve learned a ton about ES2015, TypeScript, module systems, module loaders, JS frameworks & the tooling around, front-end state management solutions, front-end build systems, project boilerplates, css style guides, quality assurance for front-end apps, unit & e2e testing libraries, … and the integration of it all…

The funny thing is that… I failed to deliver.

Initially, my personal goal was to create a responsive client-side Web app exploiting the RESTful API of my WordPress installation to replace my current theme, but I changed my mind along the way… So far, my site hasn’t changed one bit. I did improve some things though, but that was more around security than anything else.

So what made me change my mind and where did I spend my time?

At first, I concentrated on the task at hand and looked at how the HTML5 boilerplate had evolved, as I knew it was one of the best starting points around for creating modern Web apps. My idea was simple: use HTML5 boilerplate or Initializr to get Modernizr… and add some good old script tags… :p

I started with HTML5 boilerplate, but shortly after, I stumbled upon Web Starter Kit which was fresh out of Google’s oven, was based on HTML5 boilerplate and had some really cool features.

It came out of the box with a nice build which included support for JSCS (JS code style), JSHint (JS code quality), autoprefixing, BrowserSync (if you don’t know that one, DO check it out!), sass and ES6 (that was still the name at that point) with the help of Babel, …

I really liked their setup and decided to use it as the basis for my project; and that’s where my trajectory deviated :)

Given that I’m quite curious, I spent a while deconstructing Web Starter Kit’s build so that I could really understand what made it tick. That made me discover npm, gulp and the whole ecosystem of gulp plugins.

I really enjoyed doing so, as it helped me better grasp the build steps that modern Web apps need (a small sketch follows the list):

  • transpile code (ts->js, sass->css, …)
  • check quality
  • check style
  • create a production build (bundle, minify, mangle, …)
  • execute unit tests
  • execute end to end tests
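
To give a feel for what those steps looked like in practice, here is a stripped-down gulp sketch (gulp 3 style, as used back then); the plugin names are typical choices of the time, not necessarily the exact ones in my build, and the options are trimmed down for illustration.

```ts
// gulpfile.ts (sketch): one task per build step listed above.
import * as gulp from 'gulp';
import * as ts from 'gulp-typescript';
import * as sass from 'gulp-sass';
import * as tslint from 'gulp-tslint';
import * as uglify from 'gulp-uglify';

gulp.task('scripts', () =>            // transpile ts -> js
  gulp.src('src/**/*.ts').pipe(ts({ target: 'es5' })).pipe(gulp.dest('dist')));

gulp.task('styles', () =>             // transpile sass -> css
  gulp.src('src/**/*.scss').pipe(sass()).pipe(gulp.dest('dist')));

gulp.task('lint', () =>               // check quality & style
  gulp.src('src/**/*.ts').pipe(tslint()).pipe(tslint.report()));

gulp.task('dist', ['scripts', 'styles'], () => // production build: minify/mangle
  gulp.src('dist/**/*.js').pipe(uglify()).pipe(gulp.dest('dist')));

// Unit and e2e tests would typically be wired through separate Karma and Protractor tasks.
```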

At that moment, I was happy with the build as it stood, so I went back to focusing on my app. I took a good look at what ES6 was, what it meant for JavaScript and its ecosystem, and how Babel helped (was it still called 6to5 then?). Learning about ES6 features took me a long while and I’m still far from done, but it was well worth it. ES2015 is such a huuuuuuuuuuuge step forward for the language.
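
Just to illustrate the kind of features I mean (nothing project-specific here, and the snippet is equally valid TypeScript):

```ts
// A few ES2015 features in one place: const, arrow functions, template literals,
// destructuring with a default value and the rest operator.
const tags = ['web', 'javascript', 'es2015'];

const describe = (title: string, [first, ...rest]: string[] = []) =>
  `${title}, tagged "${first}" plus ${rest.length} more`;

console.log(describe('Modern Web Development', tags));
// -> Modern Web Development, tagged "web" plus 2 more
```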

I also took a glance at Angular 2, which was still in alpha at the time. It looked interesting, but I believed that it would never be ready in time for our project at work (and it wasn’t). Still, I did spend a few days toying around with the alpha just to understand the basic principles… and I must say that I really, really loved what I saw!

That quick research spike also made me discover TypeScript.

Having a strong Java and OO background, TypeScript (TS) immediately got me excited. I’m a strong believer in strong (heh) typing, and the fact that TS already supported many ES6 features that Web browsers didn’t natively support yet was very appealing to me.

Moreover, having dozens of Java developers in our development teams at work, TypeScript seemed really ideal for us as it supports many features and idioms that our developers are very familiar with (classes, interfaces, generics, strong typing, decorators, …).
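
As a small, self-contained illustration of that familiarity (the types below are invented for the example; I’m leaving decorators out to keep it short):

```ts
// Interfaces, generics and classes: very close to what a Java developer writes daily,
// with the compiler catching type errors before the code ever reaches a browser.
interface Identifiable {
  id: number;
}

interface Repository<T extends Identifiable> {
  findById(id: number): T | undefined;
  save(entity: T): void;
}

class InMemoryRepository<T extends Identifiable> implements Repository<T> {
  private entities = new Map<number, T>();

  findById(id: number): T | undefined {
    return this.entities.get(id);
  }

  save(entity: T): void {
    this.entities.set(entity.id, entity);
  }
}

interface Post extends Identifiable { title: string; }

const repo = new InMemoryRepository<Post>();
repo.save({ id: 1, title: 'Modern Web Development' });
```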

If you want to learn more about TypeScript, I definitely recommend TypeScript Deep Dive.

At that point, tsconfig.json didn’t exist yet and the most obvious choice for integrating the necessary build step was gulp, as advertised by Dan Wahlin’s excellent blog post. If I had read more about npm, I might have gone down a completely different path (i.e., used npm scripts only)… ^^

At that point, I had to deviate from what Web Starter Kit offered in order to add build tasks for TypeScript, tslint, etc. Fiddling with the build made me realize how brittle it was, so I refactored it quite a lot and tried to improve things (e.g., separating the build tasks into different files, extracting the configuration settings, making sure a single error wouldn’t break the whole build, etc). I remember that I wanted to contribute back to Web Starter Kit but realized too late that I had made too many changes at once for them to integrate easily (silly me, bummer).

I went pretty far with it actually: at some point, I was using TypeScript to output ES6 code that I then sent through Babel, just so that I could use async/await and other things that TypeScript couldn’t yet transpile to ES5… :)
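
For the curious, the relevant part of that double-transpilation setup looked roughly like this (a sketch in gulp 3 style; the options are trimmed and may not match my exact configuration at the time):

```ts
// Sketch of the TypeScript -> ES6 -> Babel -> ES5 pipeline: gulp-typescript emits ES6
// so async/await survives the first step, then Babel brings the result down to ES5.
import * as gulp from 'gulp';
import * as ts from 'gulp-typescript';
import * as babel from 'gulp-babel';

gulp.task('scripts', () =>
  gulp.src('src/**/*.ts')
    .pipe(ts({ target: 'es6' }))  // TypeScript keeps async/await, classes, ... as-is
    .pipe(babel({ presets: ['es2015'], plugins: ['transform-async-to-generator'] }))
    .pipe(gulp.dest('dist')));    // ES5 output for the browsers of the time
```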

The exercise helped me see how “immature” and “fragile” the whole JavaScript ecosystem was. What I mean is that there seem to be only moving parts, and those parts don’t necessarily stay happy with each other. Not only do too few people really understand what semver actually means and respect it, but everything that shines bright gets replaced faster than the speed of light :)

As a technologist, I love the pace it imposes for the fun and innovation it brings to the table, but it’s also frustrating for many reasons and (should be) quite scary for enterprises (to some degree). People talk about JavaScript fatigue, which is quite a fun way to put it and I can certainly understand the idea now.

One example that I thought a lot about is the fact that each and every front-end project seems to have its own build chain and build configuration that lives within the project, in complete isolation and has to be maintained.

Of course each and every project has its specificities so there really can’t be ONE rigid & reusable solution to rule them all, but the idea of duplicating so much effort needlessly across a whole community of developers violates the DRY principle as much as anything ever could.

Just try and imagine how many people must have used some Yeoman generator to scaffold projects, which now all have separate builds with tasks that all do the same things but are all defined 20.000 times in a gazillion different ways using variable and unreliable dependency versions… :)

When you scaffold a project using a generator, you end up with a snapshot of the template and of the build provided by the generator at that point in time and then it’s up to you to keep your version up to date and to integrate all improvements and bug fixes, assuming you have time to follow that… you poor thing!

Being part of a core software development team at work, my focus is most often on finding reusable solutions to common problems and limiting effort duplication; in that regard, the front-end universe’s situation seems quite sad.

Another point that struck me was how limited the main package management solution was. npm is nice and all, but not being able to define some parent/generic/reusable configuration (e.g., like parent POM files in Maven) is kind of surprising. Again, the DRY principle probably corresponds to DO Repeat Yourself in the front-end universe. I’m sure that front-end experts will tell me that you can work around all that in countless ways, but that’s exactly the issue: I shouldn’t have to invent my own solution for a general problem everyone should be concerned about.

To conclude on a positive note though, I do believe that all the tooling DOES bring added value because it makes it possible to manage dependencies correctly, define build steps which execute tests, generate coverage reports (e.g., using Istanbul), generate production builds etc.

This piece is getting a bit long, so I’ll continue my little story in part two!

 


Web 3.0 is closer than you might think

Wednesday, April 22nd, 2015

Evolution of the Web towards Web 3.0

The real modern Web — I guess we’ll call that ‘Web 3.0’ — is really getting closer and closer and guess what? It’s also up to you to make it real.

Since my last Web development project (2012), many things have changed and evolved on the Web; its pace of evolution is simply astonishing. HTML5, CSS3 and many WHATWG specifications are now pretty well supported by major browsers. You could say that we currently live in the Web 2.5 era.

The real game changer now is that Microsoft has finally decided to retire IE and introduce Spartan, a legacy-free modern Web browser.

When you take a look at what Microsoft has announced for Spartan and their roadmap, you can only feel GOOD if you’re a Web enthusiast.

Nearly gone are the days when you had to work around IE’s quirks, battle against box model issues, resort to IE-specific conditional tags, use X-UA-Compatible and all that crap just to get a WORKING application/design across browsers. Nearly gone are the days when you had to manually add 36000 browser-specific prefixes to your CSS stylesheets just to get a gradient and whatnot, …

We’re getting closer to a state of the Web browser landscape where we’ll finally be able to think less about browser compatibility issues and concentrate our efforts on actually creating useful and/or beautiful things that simply work everywhere. OK, I’m getting ahead of myself, but still, there’s more hope today than there was back in 2012 ;-)

Just take a look at this list. Of course there could be more green stuff but it’s already pretty damn cool.

Who needs native applications?

Some people already think about the next step, the ‘Ambient Web Era’, an era where the Web will actually be everywhere. You could say that we’re already there, but I don’t agree. I think that we’ll only reach that state after the real boom of the Internet of Things, when the Web is truly on par with native applications, when I’ll easily be able to create a Web interface for managing my heating system using nothing but Web technologies (i.e., without all the hoops we currently need to jump through to reach that point).

But before we reach that level, we should observe progressive evolutions. Over time, Web applications will be more and more on par with native applications with means to ‘install’ them properly (e.g., using things such as manifest.webapp, manifest.json and the like) and native capabilities will end up exposed through JavaScript APIs.

Adapting to the mobile world

I hear and read more and more about ideas such as mobile first, responsive web design, client-side UIs, offline first, … Modern Web standards and browser vendors are trying to cater for all those things that we’ve missed for so long: means to actually create engaging user experiences across devices. For example, with standards such as IndexedDB, the File API and Local Storage, we can save/load/cache data on the client side to allow our applications to work offline. WebGL and soon WebGL 2.0 let us take advantage of all the graphics chip horsepower, while the canvas element and its associated API allow us to draw in 2D. WebRTC enables real-time audio/video communications, then there are also WebSockets, etc. These are but a few of the many specs that we can actually leverage TODAY across modern Web browsers!

As far as I’m concerned, mobile first is already a reality; it’s just a question of time for awareness and maturity to rise among the Web developer crowd. CSS3 media queries and tons of CSS frameworks make it much easier to adapt our Web UIs to different device sizes, and responsive Web design principles are now pretty clearly laid out.

But for me, mobile first is not enough; we need to care about and build applications for people who live in the mobile world but don’t necessarily have fast/consistent connectivity (or simply choose to stay offline in some circumstances). We need to consider offline first as well. For example, although we’re in 2015, I’m still disconnected every 5 minutes or so while I’m on the train for my daily commute (even though I live in Western Europe).

We must ensure that our Web applications handle disconnections gracefully. One way to do this is, for example, to cache data once we’ve loaded it, or to batch-load stuff in advance (e.g., blog posts from the last two months). The term offline first is well chosen because, just like security, it’s difficult to add as an afterthought. When your application tries to interact with the server side (and those interactions should be well thought out, limited and optimized), it needs to check the connectivity state first, maybe evaluate the connection speed, and adapt to the current network conditions. For example, you might choose to load a smaller/lighter version of some resource if the connection is slow.
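
To illustrate that “check connectivity before talking to the server” idea, here is a small sketch combining a localStorage cache with the online/offline events mentioned a bit further below; the endpoint and cache key are made up for the example.

```ts
// Minimal offline-friendly data access: serve from a localStorage cache when the
// browser reports being offline, refresh the cache whenever we manage to go online.
const CACHE_KEY = 'posts-cache';

async function loadPosts(): Promise<any[]> {
  if (!navigator.onLine) {
    // Offline: fall back to whatever was cached last time (possibly nothing).
    const cached = localStorage.getItem(CACHE_KEY);
    return cached ? JSON.parse(cached) : [];
  }

  const response = await fetch('/api/posts'); // illustrative endpoint
  const posts = await response.json();
  localStorage.setItem(CACHE_KEY, JSON.stringify(posts));
  return posts;
}

// React to connectivity changes (the online/offline events discussed below).
window.addEventListener('offline', () => console.log('Connection lost, serving cached data'));
window.addEventListener('online', () => { loadPosts(); });
```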

Offline first ideas have been floating around for a while but I think that browser vendors can help us much more than they currently do. The offline first approach is still very immature and it’ll take time for the Web development community to discover and describe best practices as well as relevant UX principles and design patterns. Keep an eye on https://github.com/offlinefirst.

Application Cache can help in some regards but there are also many pitfalls to be aware of; I won’t delve into that, there’s already a great article about it over at A List Apart.

Service Workers will help us A LOT (background sync, push notifications, …). Unfortunately, even though there are some polyfills available, I’m not sure that they are production ready just yet.

There are also online/offline events which are already better supported and can help you detect the current browser connectivity status.

The device’s battery status might also need to be considered, but the overall browser support for the Battery Status API isn’t all that great for now.

In my new project, I’ll certainly try and create an offline-first experience. I think that I’ll mostly rely on LocalStorage, but I’d also like to integrate visual indications/user control regarding what is online/offline.

Client-side UIs – Why not earlier?

One thing I mentioned at the beginning of the post is the fact that client-side UIs are gaining more and more momentum, and rightfully so. I believe that UIs will progressively shift towards the client side for multiple reasons, but first let’s review a bit of IT history :)

Historically, Web application architectures have been focusing on the server side of things, considering the client-side as nothing more than a dumb renderer. There were many important and obvious reasons for this.

Yesterday we were living in a world where JavaScript was considered a toy scripting language only useful for basic stuff such as displaying alert boxes, scrolling text in the status bar, etc. Next to that, computers weren’t nearly as powerful as they are today. Yesterday we were living in a world where tablets were made of stone and only appeared much later in Star Trek, which was still nothing more than sucky science fiction (sorry guys, I’m no trekkie :p).

Back in 200x (ew, that feels so close yet so distant), browser rendering engines and JavaScript engines were not nearly as fast as they are today. The rise of Web 2.0, XHR and JS libraries, and the death of Flash, have pushed browser vendors to tackle JS performance issues, and they’ve done a terrific job.

Client-side UIs – Why now?

Today we are living in a world where mobile devices are everywhere. You don’t want to depend on the server for every interaction between the user and your application. What you want is for the application to run on the user’s device as independently as possible and only interact with the server if and when it’s really needed. Also, when interaction is needed, you only want useful data to be exchanged because mobile data plans cost $$$.

So why manage the UI and its state on the server? Well in the past we had very valid reasons to do so. But not today, not anymore. Mobile devices of today are much more powerful than desktop computers of the past. We are living in a world where JavaScript interpreters embedded in our browsers are lightning fast. ECMAScript has also evolved a lot over time and still is (look at all the cool ES6 stuff coming in fast).

Moreover, JavaScript has not only evolved as a language and from a performance standpoint. With Web 2.0, JavaScript became increasingly used to enhance the UI and user experience of Web applications in general. Today, JavaScript is considered very differently by software developers compared to 10 years ago. Unfortunately there are still people who think that JavaScript is Java on the Web, but hey, leave those guys aside and stick with me instead :)

Today we have package and dependency management tools as well as module bundlers for front-end code (e.g., npm, Browserify, webpack, …). We also have easy means to maintain a build for front-end code (e.g., Gulp, Grunt, etc). We have JavaScript code quality checking tools (e.g., JSHint, JSLint, Sonar integration, …). We have test frameworks and test runners for JavaScript code (e.g., Mocha, Karma, Protractor, Testacular, …), etc.

We have a gazillion JS libraries, JS frameworks, better developer tools included in modern browsers, better IDE support and even dedicated IDEs (e.g., JetBrains’ WebStorm). And it all just keeps getting better and better.

In short, we have everything any professional developer needs to be able to consider developing full blown applications using JavaScript. Again, over time, standards will evolve and the overall Web SDK (let’s call it that ok?) will keep expanding and extending the Web’s capabilities.

Today we can even develop JavaScript applications on the server side thanks to NodeJS and its entire forks suite :) Some people don’t understand yet why that’s useful, but once they start to see more and more code shift towards the client side, they’ll probably see the light.

Okay where was I headed? Ok I remember: server-side vs client-side. I think that given the above, we can probably agree that the client-side development world is much more mature today than it was at the beginning of the Web 2.0 and that client-side UIs make a hell of a lot more sense in today’s world.

My view of a modern Web application architecture is as follows; it might indeed not be applicable to all use cases but in many cases it certainly can:

  • Client-side UI with HTML, CSS, JS, JS libs and a JS framework to keep the code base manageable/maintainable
  • Server-side responsible for exposing RESTful Web Services, adapting the data representations to the specific clients
  • Server-side responsible for enforcing the business rules and interacting with the rest of the infrastructure pieces

The benefits of this approach are multiple. Since the UI is fully managed on the client-side:

  • only the necessary data needs to be exchanged between the client and the server (i.e., JSON vs full HTML page)
  • the server-side can (and should) become stateless
  • it can more easily adapt to the needs of the mobile-first & offline-first approaches
  • the UI can be much more responsive and thus more ‘native-like’
  • since the UI is completely separated from the back-end, it can more easily be replaced

If you care for the Web even just a bit then you know this is the model we need to go towards.

Of course, shifting the UI along with all the logic behind it to the client-side clearly means that the JavaScript code bases will become much larger and thus much more complex, which poses the question of code complexity and maintainability. As I’ve said, ECMAScript continues to evolve and ES6 introduces many things that are lacking today such as modularization, collections, more functional programming constructs and even proxies.

You could say that ES6 hasn’t landed yet, but what prevents you from using it already?

One thing that you cannot ignore is the knowledge the technologies require. There’s a huge difference between adding JS validation code to a form and developing a whole UI based on Web technologies and a JavaScript framework. With the former you can survive with a minimal knowledge of the language, while with the latter you’d better have a deeper understanding.

If you work in a Java or .NET shop and don’t have actual Web developers at your disposal, then you might not be able to follow that path easily. It all depends on your organization’s culture and on your people’s willingness/capability to learn new things and adapt.

I often like to compare the Web technology stack to the *NIX world: as an IT professional, what do you prefer? Learning stuff that will remain useful and beneficial to you for many years to come, or learning stuff that you know will only be true/valid for a 3-4 year period? Or, worse yet, ignoring market trends and market evolution? At some point you’ll have to adapt anyway, and that will cost you.

Technology will always evolve but some technologies have much less interest for your professional career. If someone fresh out of school asked me today what to learn/what to focus on, I certainly would recommend learning as much as possible about the Web.

Here I’m not saying that the server side is doomed, far from it. You just can’t expose everything directly to the client side. You need to place security boundaries somewhere. If you’re comfortable writing server-side code in Java, C#, PHP or whatever, then continue to do so; nothing forces you to switch to Go, NodeJS or anything else on that side of the fence. What I’m saying is that if you create a Web application, then you need to consider shifting the UI towards the client, and if you’re serious about it, you must create a clear separation between both sides; to each its own responsibilities.

That being said, I think that JS on the server side also makes sense, we just need to give that a few more years to become mature. But picture this: you have a server-side application that interacts with your infrastructure, manipulates a JavaScript domain model and exposes Web services that accept/deliver JS objects. Next to that, you have a client-side application that uses the same JavaScript domain model and that interacts with those Web services.

What you gain there is the ability to share code between your server and client, whether that be the domain model, validation rules or whatever else. It makes a lot of sense from a design perspective, but the difficulty in taking advantage of it is twofold: first, it needs maturity (it’s not for everyone just yet) and second, you need your organization/people to adapt. Java developers are not necessarily good JavaScript developers (since yes, those two simply are very different beasts ;-)).
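
A trivial sketch of what that sharing could look like: one module holding the domain model and its validation rules, imported by both the node.js back end and the browser front end (the module and field names are invented for the example).

```ts
// shared/post.ts: a domain model and validation rules shared by client and server.
export interface Post {
  title: string;
  body: string;
  tags: string[];
}

export function validatePost(post: Post): string[] {
  const errors: string[] = [];
  if (!post.title || post.title.trim().length === 0) {
    errors.push('title is required');
  }
  if (post.body.length > 10000) {
    errors.push('body is too long');
  }
  return errors;
}

// The client imports validatePost() to give instant feedback before calling the Web
// service; the server runs the exact same function again before persisting, because
// the client can never be the only gatekeeper.
```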

Another technology that I think will help us transition UIs towards the client side is Web Components. The more I read about them, the more I think they’re going to represent the next big thing (TM) in Web development. It’s unfortunate that we currently still need to resort to polyfills to be able to use them, but hopefully that won’t last too long. Browser vendors should finally agree upon a set of primitives/APIs to support, that we can then build upon (e.g., will HTML imports prevail?). I can’t expand much more on this because I haven’t played much with them yet, but they’re on my radar ^^

Conclusion & my current plans

This post represents my current vision of the Web as it stands, its future and why I believe that it’s time to seriously consider shifting the UI towards the client-side (let’s hope that time will prove me right). This was a pretty long one but I think that sometimes it makes for a good exercise to try and articulate a vision.

As I’ve mentioned in a previous post, I’ve started a new personal project to replace this now dated Website with a more modern version.

Now that I’ve shared my vision for the future of the Web, I can expand a bit more on my plans for this new project. Rather than re-creating a full-blown WordPress theme, I intend to use WordPress as a simple CMS without a front-end of its own: I’ll keep using the administration space to manage the content, but I’ll extract the data (posts, comments, pages, tags, etc) through the WP REST API (which will soon become part of WordPress’s core).

My goal is to create a modern, responsive, mobile-first, offline-first and (hopefully) good looking web front-end. Why? Because we can, today! :)

Also, I want to leverage multiple data sources.

On the technical side of things, I plan to use HTML5/CSS3 (haha), AngularJS and/or Meteor and/or Polymer (I haven’t chosen yet).

For the CSS part, I intend to use SASS and I think that I’ll use Pure.CSS or Foundation. I might also take a peek at Foundation for Apps… Finally, I’ll try and play with ES6 (through 6to5).

For now, I’ve only created a build based on Google’s Web Starter Kit to make my life a bit easier, using npm and Gulp. I’ve also added basic Docker support (though there’s much room for improvement there).

Well that’s it for today; please don’t hesitate to share your thoughts! ;-)