Arnout Kazemier - Pushing the real-time web forward
Fronteers 2014 | Amsterdam, October 9, 2014
WebSockets, Server Sent Events or should I maybe just use Short Polling?
Choosing the right tool for the job is always harder than it looks and often requires deep understanding of the various techniques and hacks. Join this journey and learn about the beauty and horrors of building the real-time web.
Give a grand old welcome to Nodejitsu's Arnout Kazemier.
So yeah, I'm going to be talking about pushing the web forward.
I work for a company called Nodejitsu, and we basically host node.js
applications in the cloud.
In addition to hosting node applications, we also host private repositories for the NPM registry, which is really cool.
So you can have all your code in one single location.
In addition to that, I started my own startup which is called Observe.it,
and we do real-time monitoring of your users interacting with your site, and we stream all that information back to our site and you can see everybody clicking on your site, figure out why people are not submitting your checkout forms, maybe they are lost on some input fields, or stuff like that.
It's really cool technology, and again, it's real-time, and it's amazing.
So yeah, it's really cool, and we need speakers, so if you ever want to go up north in the Netherlands, visit Groningen and come speak at one of our meet-ups.
I'm on Twitter, I'm on GitHub, 3rd-Eden on GitHub with a dash for -Eden because Twitter doesn't like dashes in usernames.
So we're going to talk a bit about the previous real-time web, and what we were using back in the good old days.
And it might not surprise you, but it was plugins.
Plugins were amazing back then.
Everybody had these really cool Java applets running on their sites, and doing real-time stuff with it, and they wanted to add video functionality, voice functionality, and the web just didn't have that yet.
And there were also some downsides for using the regular web.
For example, you could not do cross-domain requests.
So if you were building a real-time site, it had to be hosted on that same domain, and there could be some issues.
In addition to that, plugins don't have any connection limits.
Back in the day, browsers only allowed two connections to your server, and if you wanted to have a real-time connection open all the time, that already accounted for one, or even two, of those connections, so you couldn't load any CSS or images.
That was not optimal.
But these are plugins, and they are really bloated, they take a lot of memory in your browser, and we moved away from them because everybody basically started hating them all of a sudden.
So we all moved to polling.
Polling is really cool, and it's a great way to basically abuse HTML and resources in your browser, to get some sort of real-time interaction with your server.
There are a couple of different ways to implement polling.
A really bad way to do polling is called short polling, which is basically sending a request to the server, and the server answers immediately.
If it has data, you will get some data, if it has nothing, you will just send another poll request after a small interval.
And the problem with this is, during that interval, you are basically not sending any data to the server, because you don't want to explode your server, because you are sending so many requests, because you really want to have all the data.
So this solution is not really real-time, because you add this pointless delay to your data.
And the last thing you want to do in a real-time application is introduce pointless latency, because it's no longer real-time.
So there's a different way to do polling, and this is called long polling.
You basically send one request to the server, and the server holds that request until it finally has some data, and it answers the request with the data.
And right after you receive the data, you send another one, and the server will do it again, and it goes on and on and on.
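A minimal sketch of that long polling loop might look like this; the `/poll` endpoint is a hypothetical one that holds the request open until it has JSON to return:

```javascript
// Minimal long polling loop: the server holds each request open until
// it has data, then we immediately issue the next request.
// The /poll endpoint and JSON payload are assumptions for illustration.
function longPoll(url, onData) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.onload = function () {
    if (xhr.status === 200 && xhr.responseText) {
      onData(JSON.parse(xhr.responseText));
    }
    longPoll(url, onData); // immediately start the next poll
  };
  xhr.onerror = function () {
    // naive retry; a real implementation would back off
    setTimeout(function () { longPoll(url, onData); }, 1000);
  };
  xhr.send();
}

// usage: longPoll('/poll', function (data) { console.log(data); });
```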
There are some problems with this approach, and that is kind of related to the duration that a server can hold a connection.
Browsers think they are smart and they think, hey this request takes longer than 25 seconds, so your server must be broken, so we're just going to cancel it.
So you can only send long polling requests for no longer than 25 seconds, or your browser will just abort it.
And it's really hard to figure out whether the server actually closed the connection or the browser did, and whether you need to start a new request, or the server was just down.
But there are different ways to implement this polling, and one of the common ways is to use JSONP polling, which is basically creating a simple script tag and appending it to the DOM.
In newer browsers we can set the async flag, but most of these requests will already be async, because we are using the createElement function of the DOM instead of document.write, which blocks the rendering of your page, or even wipes out your whole page if it's done after the page has loaded.
It's really simple to implement, but this is only a way to retrieve data.
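A rough sketch of what that script-tag approach could look like; the callback query parameter is an assumption about the server's JSONP interface:

```javascript
// JSONP polling: append a <script> tag whose response calls a global
// callback function. The ?callback= parameter name is an assumption.
function jsonpPoll(url, onData) {
  var script;
  var callbackName = '_jsonp' + Date.now();

  window[callbackName] = function (data) {
    delete window[callbackName];
    script.parentNode.removeChild(script); // clean up after ourselves
    onData(data);
  };

  script = document.createElement('script');
  script.async = true; // createElement is non-blocking anyway, unlike document.write
  script.src = url + '?callback=' + callbackName;
  document.head.appendChild(script);
}
```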
And of course there are some minor security implications, because it basically just evals all the data that you receive from the server.
So some caution might be needed when you want to implement real-time using JSONP polling.
Especially after the recent attack vector called Rosetta Flash, where they basically abused Flash to steal data off your computer by using JSONP requests.
As I said before, this is only a way to get data.
To post data when you're doing something like JSONP, it's a bit different.
You need to create a form, and you want to submit it.
This is the only way to get a decent POST request to the server.
But when you submit a form, you usually refresh the page that the form is hosted in.
So the way around this is to create an iframe where your form can be posted to.
It's really cool.
A really simple technique, and it just works as intended.
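A sketch of that form-plus-iframe trick; the form targets a hidden iframe so the response loads there instead of replacing the page:

```javascript
// Posting data without a page refresh: point the form's target at a
// hidden iframe, so submitting it doesn't navigate the page.
function postViaIframe(url, fields) {
  var iframe = document.createElement('iframe');
  iframe.name = 'post-target';
  iframe.style.display = 'none';
  document.body.appendChild(iframe);

  var form = document.createElement('form');
  form.method = 'POST';
  form.action = url;
  form.target = iframe.name; // response goes to the iframe, not the page

  Object.keys(fields).forEach(function (key) {
    var input = document.createElement('input');
    input.type = 'hidden';
    input.name = key;
    input.value = fields[key];
    form.appendChild(input);
  });

  document.body.appendChild(form);
  form.submit();
  document.body.removeChild(form);
}
```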
But there's a minor problem with polling, and some polling in specific.
And that is, when you're building a real-time application, the last thing that you want is those really annoying loading indicators in your status bar, or those annoying spinners around your cursor.
Much of the time the user will just wait until those loading indicators are gone, because that's when they know they can finally interact with your page, because everything is ready.
So you want to avoid this with real-time.
And a neat way to actually kill loading indicators for JSONP polling is to create an iframe, append it, and remove it again.
I mean, who on earth actually thought, hey, I'm going to kill a loading indicator by adding and removing an iframe?
Some people must have been really insane or desperate to figure out these kinds of techniques.
They work, but they don't work in every browser.
That's also why there are different ways to do polling.
JSONP polling is one of them, and AJAX polling is a better alternative in most cases, but AJAX is not available in every browser.
So you still need to have this fallback to an older transport, or older real-time technology, in order to have really good, solid browser support.
So AJAX polling is great.
It's asynchronous, you don't get any loading indicators, hopefully.
But there are some small conditions for that.
There's this thing called cross domain.
And IE decided to solve it by creating a really odd XDomainRequest object, which kind of does cross domain, but they didn't really understand it when they built this thing, because it only accepts connections that are made over the same protocol.
So if you are hosting a page on HTTP, and you want to connect to an HTTPS site, you can't use this.
You can't allow any custom headers to be sent from the client to the server.
You cannot use this if you don't send any plain text back from the server.
So you can't send any HTML or JSON.
It's really annoying, and one of the biggest annoyances is that it doesn't allow you to send any authorization information, like cookies, or even basic authorization.
So you basically have no idea where these requests are coming from, and you can't really use it in an efficient way.
That's why this technique is mostly frowned upon by most real-time frameworks.
But they still want to give people that are using Internet Explorer some sort of real-time notion, and this is basically the only way, except for newer browsers.
But as I said before, loading indicators, it's a big problem.
And these can also still occur in AJAX polling based real-time techniques.
And the problem with that is that you start connecting to your server while the page is still loading, and your browser will think, hey, this request is part of the loading process, so I must wait.
I must wait until it's finished.
For short polling it's not that much of a problem, because it will end quickly anyway.
But if you're doing long polling, and you get a loading indicator for 25 seconds, that's not much of a solution.
So the way around it is to add a load listener, but this again introduces unwanted latency, because you have to wait until your complete page is loaded.
It's even worse than this, because on mobile devices you even have to add a set timeout in your connect method to even wait 100 milliseconds longer in order to connect, or you will still get that really annoying spinning indicator at the top of your screen.
And it's really annoying.
But while we are developing this, there are things that you might not notice until you start navigating around on your page.
And one of those things is caching.
The most worrying part is that these problems only occur when navigating back and forward between pages.
It's called the back-forward cache, and it's a special cache that browsers implement to keep the web as fast as possible.
Basically they save the state of the previous page, so when you go to the next page and go back again, you can just finish or start again at the previous state and just do whatever you were before you left the page.
But for real-time, you are already gone from the server, and you've missed a lot of data, and the browser will still think, hey, you are making poll requests to that same API endpoint, and it will cache all the data.
And this can lead to problems because you are just missing data, and that is one of the last things that you want to do in real-time applications.
So a way around this is just to time stamp the URLs with a date.
But even appending the date isn't enough; you also need to append a unique variable to the end of your date string, because it's still possible to start two requests in the same millisecond, which can cause these really odd caching behaviors.
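The cache busting could be sketched like this, with a counter as the unique variable so two requests in the same millisecond still get distinct URLs:

```javascript
// Cache busting for poll URLs: a timestamp plus a unique counter,
// since two requests can fire within the same millisecond.
var pollCounter = 0;

function cacheBust(url) {
  var separator = url.indexOf('?') === -1 ? '?' : '&';
  return url + separator + '_=' + Date.now() + '-' + (pollCounter++);
}

// usage: xhr.open('GET', cacheBust('/poll'), true);
```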
So that was a bit about the past, and now we are living in this glorious age of the HTML5 web, and real-time is everywhere.
We finally have decent, or decent enough, technologies to create this real-time experience that we've always been wanting to build using plugins like Flash.
And one of these techniques is the glorious WebSocket.
And this came from HTML5, and it's been extracted from HTML5 again and released as a separate specification and a separate protocol, but the browser support for it is actually quite decent.
It really depends on how far back you want to build your real-time system, because IE only started supporting it in IE 10, and it's still a decent enough browser, but there might be people here that need to support IE 6, 7, 8, 9, whatever, or even older mobile devices.
So it's still possible to do this, because this chart only shows the browsers that support the most recent version of the specification.
If we look at Chrome, for example, they started supporting the RFC WebSockets specification since Chrome 20, but if you want to support older versions of WebSockets and support all the protocols, you can actually get support from Chrome 4.
Same with Firefox: they started supporting the RFC around version 12, and they've been shipping a draft version of WebSockets since Firefox 4, prefixed with moz.
Opera, same case.
They shipped RFC versions since version 12, but there's a version of the old specification behind configuration flags in Opera 11.
That's still good enough browser support to get this new real-time functionality working.
With one small note, Chrome is flagged here as partial, because the data comes from the Can I Use website.
But they have some bad data in there, and the problem is that older versions of Chrome don't really fully support the RFC specification, because they lack support for basic authorization.
So just don't trust the data that's always posted on the internet, do your own research, in some cases, because you might hit this bug, and it's really annoying if you want to connect to something that requires basic auth.
For example, your internal intranet, or something like that.
The specification of WebSockets is quite complex.
But it's a really optimized protocol.
It has protection against cache poisoning, and it has support for binary data and subprotocols.
It's a really lengthy spec, and if I had to talk about it here today, I would be talking for one hour and 15 minutes just to tell you about the specification, and I'm not going to do that to you.
But the one cool part of the WebSockets is that it has a handshaking phase.
And during the handshaking phase, you can accept or deny WebSocket requests to your server.
It's a really simple way to just add some decent authorization to your WebSocket endpoints.
And only after this handshaking has been done, the real upgraded connection will be made to your server.
So as for the API, it's a really simple, understandable API.
The thing that I'm doing here is catching it with a try-catch statement, because there are a few bugs in browsers where you cannot connect from an HTTPS page to an HTTP server, and it will just throw an error instead of simply not connecting.
So you have to be careful with that.
And when we're listening to messages, when using WebSockets, you must check the event.data property,
because that's where all the data is held.
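Put together, that could look something like this sketch, where the fallback behavior on a throw is left up to you:

```javascript
// Connecting inside a try-catch, since some browsers throw on mixed
// (https page -> ws:// server) connections instead of failing cleanly.
function connect(url, onMessage) {
  var ws;
  try {
    ws = new WebSocket(url);
  } catch (e) {
    return null; // signal the caller to fall back to polling
  }

  ws.onmessage = function (event) {
    onMessage(event.data); // the payload always lives on event.data
  };

  return ws;
}

// usage: var ws = connect('wss://example.com/stream', handleData);
```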
It's a really simple API, it supports basically everything that you want.
It supports UTF-8 by default, but if you want more, if you want to send binary data because that's really cool and awesome on the web, it's possible.
And you can just change like one really simple binary type flag, and set it to arraybuffer, and you can just stream your canvas data or anything else back to your server, and make really cool streaming things, like streaming images, movies, emulators, anything you want would be possible using these WebSockets.
But I've been calling it WebSockets for a while now, and it should basically be renamed to websuckets, because the real pain point of WebSockets is not just that it can throw when you initialize it; it's a bit more than that.
On Macs we have this network setting, where we can turn auto proxy discovery on and off.
And when you use this option, it can actually cause a full browser crash when you initialize a WebSocket.
It's a Mac-only bug, it only happened in older versions of Mac browsers, so Safari and Chrome, but there's nothing we can do about it.
You can wrap it in a try catch statement, you can cross your fingers that it doesn't crash, but it will happen.
And the only way around this is to do something that we, as web developers, really hate.
And that is browser sniffing.
Recent versions of Chrome don't do this anymore, the same as Safari, so I also left Chrome out of this if statement, but you basically have to parse out the version of WebKit to figure out whether you can support WebSockets or not.
And that's really annoying because there's really nothing you can do about it, except browser sniffing.
And then there's this bug for mobile devices, and that is writing to a closed WebSocket connection can also crash your whole device.
This is kind of a serious bug, because WebSockets can be really great for mobile devices: you don't have the overhead of polling every single time, so it doesn't drain your battery as much as polling does.
But when we're using mobile phones, we usually switch between apps, going back and forth between our email, web browsers, Twitter, Facebook.
These are the circumstances where these crashes are happening.
But luckily, for this bug, there's actually a way around it, and that is to add a small set timeout before you send a message.
The problem is that we're building real-time applications here, and in order to fix these bugs, we actually have to add some latency just to prevent certain browsers from crashing.
Even though we're setting a setTimeout of zero, most browsers will normalize it to 10 or 15 milliseconds of delay, and that's not that good.
So what you might want to do, if you want to implement this fix, is to just do it for mobile only.
So you don't have to delay on desktop devices, and only a small delay on mobile browsers.
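A sketch of that mobile-only workaround; the user agent pattern here is a rough assumption, not an exhaustive detection list:

```javascript
// Guard against the crash-on-closed-socket bug by delaying sends
// slightly, but only on mobile, so desktop stays latency-free.
// The user agent regex is a rough sketch, not exhaustive detection.
var isMobile = /Android|iPhone|iPad|iPod/i.test(
  typeof navigator !== 'undefined' ? navigator.userAgent : ''
);

function safeSend(ws, data) {
  if (!isMobile) return ws.send(data);

  setTimeout(function () {
    if (ws.readyState === 1) ws.send(data); // 1 === WebSocket.OPEN
  }, 0);
}
```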
So another bug is something that we probably do a lot when we're browsing the web, and that's pressing the escape key.
We have these modal dialogues that pop up with, hey, we've got some cool offer for you, do you want to subscribe here? And we're like, nope.
But the problem is that, in Firefox, even if you press escape after the page is loaded, it will still disconnect your WebSocket.
This is not something that you want, especially for games, because the way to get to your options page or your settings is pressing escape.
And boom, your whole game is down.
They fixed it in Firefox 20, but it is still something that will affect you when you're building games that target slightly older browsers.
There is one thing that we can do about it.
It's really simple, and that is just to intercept and prevent this event from happening.
So I'm using jQuery here just to listen to the keydown event.
When we notice that it's the escape key that has been pressed, we just prevent its default action.
And that's all it takes to stop Firefox from killing the connection.
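The speaker's jQuery snippet boils down to something like this plain-DOM equivalent:

```javascript
// Prevent Firefox (< 20) from killing the WebSocket when escape is
// pressed, by cancelling the key's default action.
function preventEscapeDisconnect() {
  document.addEventListener('keydown', function (event) {
    if (event.keyCode === 27) { // 27 === escape key
      event.preventDefault();
    }
  });
}
```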
And then there's yet another Firefox bug.
It's amazing how many bugs one browser can have.
And this bug is really annoying because it can create ghost connections.
And that is only when you connect during the closing event of your WebSocket.
And the biggest problem with this is that when you close down your tab, the connection will still be alive.
And when you close down all your other tabs, it will still be alive.
But only after you fully closed Firefox, your connection will be closed.
So this can lead to really strange behavior on your server, where you're just debugging and, hey I expect one connection, and suddenly getting 12.
Where does it come from? It's just a small bug in Firefox.
But there's nothing that you can do about this at this point.
It's still an issue.
And then we always have these great mobile devices that we carry with us every single time, and mobile devices, or our mobile providers, want to optimize everything for us.
Because they are so kind, they don't want us to use all our data and pay that much money, and stuff like that.
So they implemented these glorious caching proxies, where they store all the assets from servers and stuff like that.
But the problem with these caching proxies is that they don't support the WebSocket protocol.
So it could be that one of your mobile providers, like O2 or Vodafone, actually drops the connection when you want to connect using WebSockets, just because they don't understand it.
Your server might understand it, but they don't.
So what you could do is completely block out any mobile device, but that's not something I recommend.
Falling back to a different polling system is a better solution if you want to fix this, rather than blocking out everything.
And then there's the user environments where our browsers run in.
And these are especially harmful for WebSockets, because virus scanners want to read all the data that comes in from the web, and figure out whether you're downloading any viruses, or doing anything that should be forbidden by their rules.
So what they do is, not understand WebSockets, so they just block it.
They can't decode it, so it must be bad.
Must be a virus.
And that's really painful, because these kinds of things are really hard to detect; it's only certain virus scanners that have these issues, like older versions of the Norton virus scanner, and AVG as well.
There's nothing we can do about it, we can just hope that we're not having these users that run into these issues.
But it's not just the users, it's also the network that we're on.
We've got user firewalls, but also the network firewalls of the companies that we work at, or even the server firewalls that don't understand WebSockets.
It's the same thing, you cannot connect.
Some firewalls even go as far as blocking polling as well.
And there's a really infamous firewall that does this, called the Blue Coat firewall.
That's really cool as well.
So it's not just WebSockets; we have to deal with firewalls for polling, too.
And then at the server level, we have to deal with out-of-date load balancers.
There might be people here that are still using Apache 2.2, which
doesn't support WebSockets.
They only added it in 2.4.
Even older installations of nginx that don't understand the proxying of WebSockets.
If they do support it, it's usually only when your load balancers are in TCP mode, so you can't do any proper load balancing with it, or route WebSockets based on the host headers and stuff like that, because they just don't understand it.
So it's kind of painful just to use WebSockets, this new awesome technology, but you have to deal with all these kind of issues.
And you might almost forget about this new technology, called server-sent events.
And it kind of sneaked into the HTML specification.
Not a lot of people are actively advocating it, but it's actually pretty freaking awesome.
Because it has decent browser support, it's kind of the same as WebSockets, but even really, really old versions of Opera support it.
Opera 8.5 supported an older specification of this, but it still supports streaming of data.
But the only problem with a server-sent event is that it only allows you to stream data from the server to the client, and not from the client to the server.
So if you want to have a full real-time system, you might still need to do the post hack that I explained earlier.
But in most cases, depending on the app that you're building, you might not even need your users to send you data.
Like analytic sites, or reporting dashboards, or notifications, stuff like that, they're all perfectly suitable for server-sent events.
And the protocol is super tiny.
And it's really human readable.
You can read through it in 25 minutes.
It's just plain text.
It's really, really easy to debug, because it's plain text.
It supports comments, so you can add comments to your data that you are sending.
Not a smart thing to do, but it's possible.
You can send different events.
It supports recovery from dropped messages, because you can send down an ID with each message, and when the client detects that an ID is missing, it will do a new request to the server with the ID of the last message that it received, and you can just send the missing data.
It also supports retry, so the EventSource will automatically reconnect when your connection has been dropped.
And that's really cool.
Again, really simple API.
It basically supports two handlers and an error handler; it doesn't even have a close handler, because it will just reconnect automatically. It does have a close method to close the connection, but you don't receive any close events.
You have a message handler, which will receive the default data fragments.
And if you specified an event, you can add in a handler for the specified event, and listen for that specifically.
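A small sketch of that API; the `/stream` endpoint and the `status` event name are assumptions for illustration:

```javascript
// Basic EventSource usage: a message handler for default data frames
// and a named handler for a custom event. The /stream endpoint and
// the "status" event name are assumptions.
function listen(url) {
  var source = new EventSource(url);

  source.onmessage = function (event) {
    console.log('data:', event.data); // untagged "data:" lines land here
  };

  // fired for frames the server tagged with "event: status"
  source.addEventListener('status', function (event) {
    console.log('status:', event.data);
  });

  source.onerror = function () {
    // there is no close event; errors are all you get
  };

  return source;
}
```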
But the problem with this technology is that it's not cross domain.
Or is it? In later revisions of the specification they finally added support for CORS headers; you can check for withCredentials in the prototype of EventSource, and connect to your server with that.
But you might also want to have cross-domain support for older browsers.
And there's a really neat way to do that.
And again, we can use glorious iframes for that.
And the way to do this is just to create an iframe which points to a page hosted on your server, and use cross-document messaging to communicate with this new iframe that you've created.
So your iframe receives messages from the server and pushes them to your parent page, where you can listen for messages, and therefore you have a cross-domain solution for EventSource.
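The two halves of that bridge could be sketched like this; the origins and the `/stream` endpoint are example assumptions:

```javascript
// Cross-domain EventSource for older browsers: an iframe hosted on the
// streaming domain opens the EventSource and relays each message to
// the parent page with postMessage. Origins here are example values.
function bridgedSource(iframeUrl, allowedOrigin, onData) {
  var iframe = document.createElement('iframe');
  iframe.style.display = 'none';
  iframe.src = iframeUrl; // page on the other domain that runs EventSource
  document.body.appendChild(iframe);

  window.addEventListener('message', function (event) {
    if (event.origin !== allowedOrigin) return; // ignore other senders
    onData(event.data);
  });
}

// Inside the iframe page, the other half would be roughly:
//   var source = new EventSource('/stream');
//   source.onmessage = function (event) {
//     parent.postMessage(event.data, 'https://your-app.example');
//   };
```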
But again, this solution is also riddled by firewalls, and they can actually buffer up the response.
And it can be annoying because you will just not receive any data information.
It will not crash or something like that, you will just get nothing.
So it's really a wise thing, when you start implementing EventSource, to always send a starting message, and listen for it on the client side.
If you don't receive the starting message, you can assume that all messages will be buffered, and that's basically a no-go.
Another thing that's slightly annoying is reconnection bugs.
EventSource comes with a built-in reconnect, but it's not something you can trust.
On Chrome, when you go away from your PC or laptop, and it goes into sleep mode, and you go back to your computer again, you wake it up.
It will not reconnect, it will not send any error events, it will just silently set the readyState to two, indicating that it's closed, but you don't get any notification of it.
And another thing: there's a problem on Macs where you can end up with inactive connections.
It's really Mac-only, but it's not something we can fix.
It's not something we can prevent, so you could sniff the user agent strings again: if you're on a Mac, don't use it.
But it's not that bad.
As I mentioned before, cross domain is really important when you're building real-time applications.
But the problem is that a lot of people don't really understand the specification of access control.
Because when it's done right, it's really powerful.
The only thing that your server has to do is send the correct headers.
Which are the Access-Control-Allow-Origin headers, and Access-Control-Allow-Credentials header in order to indicate that the client is capable, or allowed, to send cookie headers.
But yeah, this looks all right.
And this is also an all-right example of it, but the specification also allows you to send a star as the origin, basically to indicate that I'm accepting any origin that wants to connect to our server.
So when we do that, it might seem to work, but it doesn't.
There's this really small edge case in the specification where they state you cannot use the star symbol for allowed origins if you also want to send credentials.
So the real problem here is that it's causing an origin mismatch; what you should have done is send back the origin header that you received from the client, and then it can successfully connect.
And even a lot of frameworks that are currently out there are having a lot of issues with CORS, because they simply forget this really tiny detail in the specification.
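Echoing the origin back could be sketched like this; the whitelist approach and example origins are assumptions, not part of the spec:

```javascript
// Computing CORS response headers: with credentials allowed, the
// wildcard '*' origin is forbidden by the spec, so validate the
// client's Origin header against a whitelist and echo it back.
function corsHeaders(requestOrigin, allowedOrigins) {
  if (allowedOrigins.indexOf(requestOrigin) === -1) return null;

  return {
    'Access-Control-Allow-Origin': requestOrigin, // never '*' with credentials
    'Access-Control-Allow-Credentials': 'true'
  };
}
```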
And the last thing that you want to do with real-time applications is introduce pointless requests and pointless overhead.
And CORS will do that if you do not send your requests with a Content-Type header of text/plain.
If you don't send that, the browser will first send a preflight request to the server using the OPTIONS method, where you send back all the access control headers that are allowed, the methods that are allowed, the headers that are allowed.
It's really annoying, because it's just a pointless request, and it can be prevented with a simple change in your code.
But the biggest problem with most real-time applications comes from reconnect.
And this is mostly because it's not something that most people see as something that's important.
And it's often missed by developers.
And even when it's implemented, it's implemented incorrectly.
Because we are all connected with our glorious real-time system, and our real-time system can support hundreds or even thousands of concurrent connections and happy users will all be connected.
But then your server goes down, and then what happens? Most people will just reconnect on an interval.
They figure out the connection is down, nah we'll wait a bit and connect again.
Servers are still down, eh wait a bit, connect again.
But the problem with this is, that if your server goes down, it goes down for everybody.
And you basically are creating this huge attack on your server.
Because every connected client will attempt to reconnect at the same moment, every single time.
So your operations team will have some issues getting your server back online.
The proper way to implement this is to use an exponential backoff algorithm, which is also randomized.
The benefit of an algorithm like the one I've included here is that it increases the interval with each attempt, and because it's randomized, not every connected user will reconnect at the same time.
So instead of flooding your server, or even DDoSing your server to death, all requests will be nicely distributed, as they should be.
So this is a good way to get your server back online.
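The randomized exponential backoff the speaker describes could be sketched like this; the base delay and cap are assumptions you'd tune for your own system:

```javascript
// Randomized exponential backoff: the delay grows with each attempt,
// and the random factor spreads clients out so they don't all
// reconnect in the same second after an outage.
function backoff(attempt, base, cap) {
  base = base || 1000;  // 1s initial delay (assumption)
  cap = cap || 30000;   // never wait longer than 30s (assumption)
  var delay = Math.min(cap, base * Math.pow(2, attempt));
  // jitter: pick a value between half the delay and the full delay
  return Math.round(delay / 2 + Math.random() * (delay / 2));
}

// usage:
//   setTimeout(reconnect, backoff(attempts++));
```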
But the problem is, server-sent events do their reconnect on an interval.
So if you're using server-sent events, don't use the built-in reconnect option.
Turn it off.
When you notice there's an error, cancel the request and do your own reconnection.
Because you will just kill your server if your site will get really popular.
Another thing that's often forgotten is the implementation of a heartbeat system in a real-time system.
The reason for this is, again, implementations of EventSource where you cannot detect that your connection has gone down.
So you have to continuously check if you've received data.
If you have not received anything within a given interval, you can assume that the connection is dead and you have to restart again.
And from the server-side, it's the same thing.
Because your server also needs to know whether you are genuinely disconnecting as a user, or whether you are just in a polling cycle.
So what people mostly do is, on the server they set some sort of state, somewhere in a session, where they know it's in a polling cycle and a new poll can be expected within, say, 50 seconds, and when it's not there, they fire off an event so you can properly clean up all the data in your database or on your server again.
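The client side of such a heartbeat could be sketched like this; the 20-second interval is an assumption you'd match to your server's ping rate:

```javascript
// Client-side heartbeat: every received message resets a timer, and if
// nothing arrives within the interval we assume the connection is dead.
// The interval value is an assumption; match it to your server's pings.
function heartbeat(interval, onDead) {
  var timer = null;
  return {
    reset: function () { // call this on every message from the server
      clearTimeout(timer);
      timer = setTimeout(onDead, interval);
    },
    stop: function () { clearTimeout(timer); }
  };
}

// usage:
//   var beat = heartbeat(20000, reconnect);
//   ws.onmessage = function (event) { beat.reset(); handle(event.data); };
```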
And related to reconnecting, there's support for going offline.
And it's something that, again, a lot of real-time systems are missing.
It's really easy to implement because the browser already ships with these events, in newer versions at least, so you can just listen to the offline and the online events and make decisions if you need to reconnect or disconnect based on that.
But again, there are older browsers that we need to support, so what we can do is request a random image from the internet, and if we get a load event, we can assume our internet connection is still working.
So you can do this on a polling interval.
Again, it's eating requests, and it's not something that you want to do on really old browsers, but it can be useful for certain applications.
If you don't want to go the image route, you can also, again using a polling system, just do HEAD requests to your favicon or something like that.
Last but not least is the connection limit to your server.
I've spoken about this before.
Browsers have a certain limit on the number of connections that you can make from a single web page.
New browsers support a lot of them, six to eight is normal now, but back in the day it was two to four in older browsers.
Luckily because this problem exists for such a long time, there are already ways around this.
And that is to connect to a different sub-domain for each connection.
Even a browser that allows only two connections per host will allow another two for each extra sub-domain.
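The sharding itself is a one-liner. The `rt` prefix and the DNS setup (numbered sub-domain records all pointing at the same server) are assumptions for this sketch.

```javascript
// Sketch: spread real-time connections over numbered sub-domains so each
// one gets its own per-host connection pool (hypothetical naming scheme).
function shardHost(base, shards, counter) {
  return 'rt' + (counter % shards) + '.' + base;
}

// Usage in a page (not run here):
// var n = 0;
// new WebSocket('wss://' + shardHost('example.com', 4, n++) + '/stream');
```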
In addition to that, you can also use something called inter-tab communication: you set up one real-time system in a tab, and you check whether any new tabs are opened to the same URL.
And send all the data from one tab to another, so this new tab doesn't have to open a connection.
And one of the ways to do that is to implement shared workers, which is a new specification.
It's kind of like web workers, but these workers are shared between every tab that's opened.
And you can just broadcast information for tabs between that.
But the problem with shared workers, again, is that it costs another request, because the browser needs to fetch the worker script and run it.
So you might think: we can inline this, because we also have Blobs in the browser now, and we can do really cool things with binary data; we can put the worker script in a Blob, create a URL from it, and connect to that as a worker.
But the problem with this is that createObjectURL, as seen here, creates a new URL every single time.
And that will not work with shared workers, because shared workers need to point at the exact same script URL every single time in order for the browser to figure out that they should actually be shared.
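The shared-worker pattern can be sketched as follows: one worker owns the single real-time connection and fans messages out to every connected tab. The hub is plain JavaScript; the file name and endpoint in the comments are assumptions.

```javascript
// Sketch: message hub for the shared-worker side (hypothetical names).
function makeHub() {
  var ports = [];
  return {
    add: function (port) { ports.push(port); },
    broadcast: function (msg) {
      // Fan the message out to every tab that has connected so far.
      ports.forEach(function (p) { p.postMessage(msg); });
    }
  };
}

// --- realtime-worker.js (illustrative, not run here) ---
// var hub = makeHub();
// onconnect = function (e) { hub.add(e.ports[0]); };
// var es = new EventSource('/stream');           // the one shared connection
// es.onmessage = function (ev) { hub.broadcast(ev.data); };
//
// --- in each page ---
// var w = new SharedWorker('realtime-worker.js'); // must be a stable URL
// w.port.onmessage = function (ev) { /* data arrives, no extra connection */ };
```

Note the stable script URL in the page wiring: that is exactly why the Blob-URL trick above fails.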
Another way to do this is to use, or abuse, localStorage: every time you set an item in localStorage, you get an event.
And this event is actually broadcast to every other tab that's open on the same domain.
So you can listen to that, you can figure out which key was stored and what the value of it is, and you can use that to retrieve the data.
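A minimal sketch of that pattern; the key name `rt:msg` is an assumption. One caveat worth noting: the storage event fires in the *other* same-origin tabs, not in the tab that wrote the value, so the sender must handle its own message directly.

```javascript
// Sketch: cross-tab messaging via localStorage (hypothetical key name).
function sendToOtherTabs(storage, payload, now) {
  // Include a timestamp so the stored value always changes; an unchanged
  // value may not fire the storage event in some browsers.
  storage.setItem('rt:msg', JSON.stringify({ t: now, payload: payload }));
}

// In each page (not run here):
// window.addEventListener('storage', function (e) {
//   if (e.key === 'rt:msg') handle(JSON.parse(e.newValue).payload);
// });
// sendToOtherTabs(window.localStorage, { type: 'chat', text: 'hi' }, Date.now());
```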
And then, of course, we still have polling.
You can set cookies and poll them for changes, instead.
But it's a really horrible way to establish cross-tab communication, because you're basically bloating your whole site with pointless cookies, and all of those cookies get sent back to the server with every request as well.
And one of the most important things when you're building real-time applications is that you always connect over HTTPS, and WSS if you're using WebSockets.
Because it's basically the only way to get a really rock-solid connection without interference from firewalls or virus scanners: the connection and all the data sent over it are encrypted, so they cannot sniff it, and they will usually just allow it.
So using a secured connection will increase your connectivity in your real-time systems.
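In code this is simply a matter of always building the encrypted URL; the host and path below are placeholders.

```javascript
// Sketch: always use the encrypted scheme for the real-time endpoint.
function socketUrl(host, path) {
  return 'wss://' + host + path; // wss:, never ws:, so proxies and AV leave it alone
}

// Usage (not run here):
// new WebSocket(socketUrl('rt.example.com', '/stream'));
```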
You might think: yeah, frameworks solve all of this. Unfortunately, frameworks exist, but they don't fix all of these issues.
But, no buts.
They don't fix anything.
So the next big thing of real-time could be HTTP/2 or SPDY, which also supports the server pushing data back to the client.
Maybe WebRTC, but WebRTC only gives you user-to-user connections.
So we're still waiting for someone who's insane enough to implement WebRTC as a client on the server as well, so we can do client-to-server connections using the WebRTC data channels too.
But, yeah, that's all I got.
[APPLAUSE] Come along to my lounge.
Lots of information.
It's quite sad, isn't it? It's quite sad but it's also a really great adventure.
I wish I had heard this talk last year, because I tried to run an interactive quiz based on server-sent events at An Event Apart.
And it worked fine at Velocity, and I tested it at home, but when I got to do it in front of Zeldman, my finest hour, it all failed, and how badly that talk went is still there when I close my eyes.
I wish I would've-- Nightmarish. --seen your talk.
Are there any good parts? At all? Maybe.
Maybe in the future, I mean-- We haven't found them yet.
They might be there somewhere.
I mean, all these fixes are still being discovered, like fixes for spinners, fixes for crashes, and stuff like that.
So as long as we keep pushing all of these technologies forward, we will find new solutions to all of these bugs.
And make it more doable in the real-time web.
Is there something that can handle it all for us? I mean, Socket.IO has been
quite popular in this space.
Socket.IO is indeed
quite popular, but it doesn't solve all these issues.
It doesn't protect you against WebSocket crashes.
It also supports only a limited set of transports.
Socket.IO 1.0 recently came out, and they reduced the number of transports that they supported.
Even for the [INAUDIBLE] so they currently only support WebSockets, JSONP polling, and XHR polling.
So they've ditched their Flash support? They ditched Flash, they also had HTML file streaming, and they also ditched that.
So would Flash have fixed that issue, with the autoproxy thing, or was it a very system level crash you were seeing, then? It's a system level crash.
Are the latest browsers becoming bug free? Are the problems more legacy, now? Yes and no.
But it really depends on how much adoption is being pushed.
For example, the bug that I've told you about in EventSource, where Chrome is not able to reconnect.
It's still an open issue in Chrome as we speak.
And it's a really annoying bug because it just doesn't reconnect, and it's one of the most beautiful things about the technology.
Most WebSocket implementations are reasonably bug-free now.
It's just the previous browsers we have to worry about.
Are there APIs that we're missing, to make this stuff easier? Or is it just stuff that needs to be fixed? Who do we need to be shouting at if we want this stuff to become easier? I think that every browser needs to be shouted at.
I mean, crashing WebSockets connections is just really horrible.
Same as connectivity, missing connectivity in browsers.
It's one of the most vital parts of a real-time system, and it's almost like these browser vendors didn't test it enough.
The Extensible Web Manifesto says that as browser vendors, we should be offering the lowest level tools that we can.
Is that hurting WebSocket? Given all the bumps in the road it exposes, with different types of networks, the reconnect stuff, should we have been given something more high level? I think it should have been even more low level.
Because the thing with the WebSocket specification is that the blocking of WebSockets was already known when they wrote the specification, but they didn't do anything about it.
So if we had been given really raw TCP connections instead, we could have fixed it ourselves, but now we're stuck with a broken specification which doesn't solve it.
So what stopped the raw sockets stuff? I guess it was security? I honestly don't know.
I do know that Chrome extensions do support raw TCP connections.
It must be a security related issue.
I'm guessing, based on that.
The server-sent event stuff, should that have, it's kind of same subject, worked across tabs automatically? Having to use a shared worker for that, is that good, in that you're able to use all of the different parts to build something, or should it have done some of that stuff out of the box? I think it should have done it out of the box, because a browser could be smart enough, just to detect that you're connecting to the same endpoint.
Is there a use case for having two separate connections? Not that I'm aware of.
Server-sent events is a high-level abstraction on top of-- I guess long polling is what it's similar to.
Does that fix the back-off issue, if you're needing to reconnect? Does that do smart things there, or is it-- No, server-sent events just use a fixed reconnect interval.
And it's three seconds by default.
So you're going to still get hit by that-- You still get hit by it, depending on the amount of traffic that you have.
So is there any point in using WebSockets at all? Or are we just hurting ourselves by doing that? It really depends on what kind of application you're building.
If you're building games, you really want to have low latency, and you might need binary support, so a WebSocket is basically the only thing that you can use today.
So you just need to make sure that you only support a certain range of browsers.
With the binary data stuff, if you want to use binary data, you're having to use WebSockets, then you take on all the extra pain.
Is there a higher-level equivalent for streaming binary data? Or do we need one? I don't know.
You might need it.
But I also forgot to mention that the latest XHR specification also supports binary.
So, there's a part of the spec for polling with XHR? Yeah.
Do we need some kind of-- because with server-sent events, it's text that you get back.
You can send JSON chunks, but do we need more streaming data formats on the web? Do we need a streaming equivalent for JSON, or some other kind? Is there work being done in this area? There's nothing being done in this area.
I was actually hoping to have WebRTC working for servers as well, because that fixes a lot of issues.
Because WebRTC can actually work across these broken firewalls and networks, and it fixes all of these problems that we have with server-client communication.
So if we're using secure connections [INAUDIBLE] for all these things, how many of the problems go away? Is it all of them? Is it most of them? Most of them go away.
You can go as far as 95% of all of your connections will be established, but you still have like 5% of all of your users, which will basically be fucked if they can't get any connection.
That's a happy note to end on, so.
Thank you very much.