Saturday, October 20, 2012

The Holy war that is HTML5 vs. Native

What follows below is an email I wrote today to a smart but non-technical friend in response to a question about HTML5 vs. native app development. Yes, there are small technical inaccuracies. However, before you start a flame war, ask yourself if the errors really hinder the original goal of informing a friend about the differences between HTML5 and native apps.

=================================================

Let me preface this by saying that the HTML5 vs. Native debate is a religious one. There are zealots on both sides. Being the cool-headed and scrutinizing moderate that I fancy myself, I can see the merits of both.

The difference boils down to accessibility vs. capability.

HTML5 is much more accessible. Because it's accessed through the browser, you can get to an HTML5 app from a myriad of devices (I know I'm holding your hand a bit here, but bear with me). I'll give you an example. I was selling Waiter d', our restaurant software app, the other day, and I realized that the restaurant's POS was running Windows. Without writing a single line of code, I downloaded Chrome (IE was already on there, but it's a piece of [redacted]) and the app was fully functional on their POS--a piece of hardware they already own. That was a pretty cool moment.

Developing for multiple platforms is incredibly resource intensive. If you want to be legit, you have to have some combination of an iOS team, an Android team, a Windows team, a Mac OS X team, a BlackBerry team (not for long, but for a long time this was essential for enterprise apps), and possibly another team or two depending on your target market. Each of these platforms requires its own experts. This means that the most senior member of the iOS team wouldn't even be able to get hired on the Windows team. There just isn't as much overlap as you would think.

This is a digression, but I want to tell you a little story. When the Internet was invented, it couldn't do [redacted]. It could serve static webpages, and the browsers could sometimes render them correctly. The Internet is only as powerful as its end clients (in this case the browser). Any type of information can travel over the wire, but if the browser can't interpret it, then you're S.O.L. I'll give you a very technical example and hope that I don't lose you. Cascading Style Sheets, or CSS, is what's used to make a web page appear the way that it does. Designers can only style webpages in the ways that CSS allows them to. A very specific example of this is rounded corners on page elements. Look at any webpage from 2001. There will be zero rounded corners. Look at every single attractive webpage today. Rounded corners everywhere. This is because a few years ago, rounded corners were added to CSS. In practical terms, this means that the people programming browsers (the people making Chrome, say, not the people writing webpages) taught them that when the stylesheet contains "div { border-radius: 4px; }", all of the "div" elements should have their corners rounded off at a 4-pixel radius. Before that point, rounded corners simply weren't an option. Sorry for the injection of code there; I couldn't help myself.
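
Just to make that concrete, here's a rough sketch of the same idea done from script instead of a stylesheet. I'll use TypeScript for every snippet in this email; the element and its text are made up purely for illustration.

    // A hypothetical <div> whose corners get rounded at a 4-pixel radius,
    // the same effect as the stylesheet rule "div { border-radius: 4px; }".
    const box: HTMLDivElement = document.createElement("div");
    box.style.borderRadius = "4px"; // only works because browsers were taught this property
    box.textContent = "Look, rounded corners";
    document.body.appendChild(box);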

Moving on. In 2004 some people got together (their work was eventually taken up by a group called the W3C, or World Wide Web Consortium), and they said, "Hey, everyone is doing all of these crazy things to try and stretch HTML to make web 'apps' as opposed to web 'pages.'" It was a bit of an identity crisis for the web. People wanted to do more with it--think Facebook, Pandora, Google Docs, Asana--than it was really designed to do. The way to fix this is to say, "We're going to write down a bunch of very specific rules. Then we're going to give these rules to Mozilla, Google, Microsoft, and others so that all of their browsers behave the same way when they encounter the same HTML." If this weren't the case, then application developers would be writing a Chrome version, a Safari version, and an Internet Explorer version. All of a sudden we're right back in the same multi-platform quagmire we started in. Anyway, the W3C is a "standards body" that makes sure that all of the browsers work reasonably similarly. (There are still "cross-browser inconsistencies" in some of the rarer cases, and they are the bane of a web developer's existence.) To be clear, these private companies are free to develop whatever browser they would like. The only authority the W3C has is that it represents a quorum of browser makers, so if you depart from its standard, you will be the only one, and your browser will be incompatible with the web pages/apps people are writing for all of the other browsers.

Getting there. The W3C has been around since the dawn of time. However, the specific action this splinter group took in 2004 was to start work on an "HTML5 standard". These are the aforementioned rules that the browser makers are supposed to follow. The important thing about HTML5 was that it was supposed to bring web app capabilities to HTML. These capabilities included things like geolocation, local storage, and WebSockets. Geolocation allows web developers to access the physical location of the computer (only if the end user allows it). Local storage allows web developers to store data and application files directly on the user's computer. Before HTML5, they either had to store the data on their servers or they lost it as soon as the user quit the browser. For certain use cases, this capability is essential. Finally, WebSockets allow web developers to "push" information to the browser. Previously, the web developer had no way to reach the user once the page was downloaded. Think web chat. Not practical IN A BROWSER without WebSockets. All the information about what is to be displayed simply isn't available at the time the user downloads the webpage. HTML5 includes a ton of other changes to HTML, but they are similar in that they attempt to give the browser application-like abilities.
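
If you're curious what those three capabilities actually look like to a web developer, here's a minimal sketch using the standard browser APIs (the storage key, the data, and the chat URL are invented for illustration):

    // Geolocation: ask the browser where the device is (only works if the user says yes).
    navigator.geolocation.getCurrentPosition(
      (position) => console.log(position.coords.latitude, position.coords.longitude),
      (error) => console.log("No location for you:", error.message)
    );

    // Local storage: keep data on the user's machine so it survives quitting the browser.
    localStorage.setItem("draftOrder", JSON.stringify({ table: 12, items: ["soup"] }));
    const saved = JSON.parse(localStorage.getItem("draftOrder") ?? "{}");

    // WebSockets: an open connection the server can "push" messages down at any time.
    const socket = new WebSocket("wss://chat.example.com/room/42"); // made-up URL
    socket.onmessage = (event) => console.log("New chat message:", event.data);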

This was all a grand plan, and in my mind it largely succeeded. HTML5 is awesome to work with because you "write once and run anywhere". However, critics will tell you that HTML5 fell short of its goal--they are also right. For instance, you can't access a phone's camera from the browser. Instagram is impossible using HTML5 as it stands today. My thinking is that HTML5, which is still evolving, will slowly gain more and more of the capabilities of native apps. Access to hardware like the camera is slowly being worked in, and geolocation is already there: if your device has GPS, HTML5 can use it.
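
One practical consequence of this slow rollout: web developers usually check whether a capability exists in the browser they're running in before depending on it. A quick sketch of that habit, again in TypeScript:

    // Feature detection: use geolocation if this browser supports it, degrade gracefully if not.
    if ("geolocation" in navigator) {
      navigator.geolocation.getCurrentPosition((pos) =>
        console.log("Device is at", pos.coords.latitude, pos.coords.longitude)
      );
    } else {
      console.log("No geolocation here; fall back to asking the user where they are.");
    }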

The difference goes deeper than just features, though. Native apps also have a decided performance advantage, because they run much closer to the hardware. (I don't know if I'm going to lose you here.) This means that there are fewer layers of software sitting between the user and the CPU, so everything runs faster. Think of it as a game of telephone, where each person in the chain is a layer of software. With a native app, users talk to the app, the app talks to the operating system, and the operating system talks to the hardware. With an HTML5 app, users talk to the app, the app talks to the browser, the browser talks to the operating system, and the operating system talks to the hardware. On top of that, web apps often use a framework like Ruby on Rails or Django, which adds another pseudo-layer between the user and the hardware. Every layer both slows things down and makes things less fluid.

The indisputable fact is that native apps have a performance advantage. However, this needs to be weighed against the resources it takes to develop for disparate platforms. Like I said--accessibility vs. capability.
