Modern Progressive Enhancement by Jake Archibald
What's the difference between a hippo and a zippo?
One is incredibly heavy and the other's a little lighter.
That's made me feel a whole lot better actually because I've had a bit of a bad experience so far because part of my conference setup, just before speaking, I like to go to the toilet and do I like to call a terror piss, which is getting all the panic, all of it, just turn it into urine, and just send it as far away from me as possible.
Are you hinting you want me to tell more jokes while you go for a piss? No, I couldn't possibly, and I've got this all attached now, and I know that the sound will be going through.
No it's fine.
But the whole thing is that during my usual allotted time for doing that, my Wiimote stopped working, and I know it's not an important part the talk but it was bugging me, so I thought what's the worst that can happen? [LAUGHTER AND APPLAUSE] Oh, Bruce, man, don't do that.
You know so I just I restarted my machine thinking, you know, I can set everything back up in time, and it just took, it went just dark, and then pressed the on button and nothing happened.
Pressed the on button again, and it came on, and OS X does this little bar as it's loading? It got halfway through, and then the screen went black again.
And it came back up, and it just started and it stopped once again at the halfway point.
So you know the whole panic of that I've just been sort of glued to that chair, couldn't go for my usual thing.
So everything's a panic.
The guy who helped me set up the laptop, just as a kind of friendly thing as I was leaving, he just kind of went, Good luck, mate, and I'm aware of how sweaty my back is, so it's all, it's all going on.
No, don't touch.
Don't even look at it.
OK, we'll let you go straight on so you can go to the toilets immediately after.
If everyone could avoid going to the toilet straight after so he could run and not have to wait in a line.
So ladies and gentleman, the man from Google, Jank Architect.
[APPLAUSE] Ooh, yeah.
I preferred it when I first spoke here.
I think I was working at the BBC, and no one expected me to fix all the Chrome bugs, so that was much nicer.
OK so if you enjoyed Scott's-- Scott's talk this morning, you'll like this one because it's the same.
OK so it's not totally the same.
I do have a slightly different point of view so it will be worth it.
At least that's how the argument's represented on, you know, in 140 characters on Twitter, the home of reasonable debates.
Which is great, but I do think it's easier said than done.
That's another potential outcome.
I mean you can fix this if you know the potential breakpoints and code defensively for each one, but you don't always know where they are.
Who here still has to support IE 8 as part of their day job? OK, so sort of 15%, 20%.
Just as a kind of control, raise your hand if you refuse to take part in conference polling.
[LAUGHTER] OK, fair enough, really just flat.
We heard about this earlier.
It was popularized by the BBC.
They called it cutting the mustard.
Yeah super simple to implement.
You could test actual features that you use, and you should do that throughout your code, but right at the start I think it's OK to kind of cheat a little bit right at the start, and sniff out a kind of modern looking browser.
Here I'm using the page visibility API which I found by reverse engineering the caniuse data set because it's the one feature that doesn't work in IE 9 and below and doesn't work on the old Android webkit browser.
Although these boxes are green, that little yellow symbol there means it needs a webkit prefix, so you just don't test for the webkit prefix.
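That check can be sketched like this (my own shape, names are made up, not the BBC's exact test): `visibilityState` from the Page Visibility API is missing in IE 9 and below and in the old Android WebKit browser, and by deliberately not checking the webkit-prefixed version, those browsers fail the test and stay on plain HTML and CSS.

```javascript
// "Cutting the mustard" sketch: takes the document object as a parameter so
// the check is easy to exercise; in a real page you'd pass `document`.
function cutsTheMustard(doc) {
  // Page Visibility API: absent in IE <= 9 and old Android WebKit.
  // Deliberately ignore 'webkitVisibilityState', so prefixed-only browsers
  // fail the test and get the plain HTML and CSS experience.
  return 'visibilityState' in doc;
}

// In the page, something like (script name is made up):
// if (cutsTheMustard(document)) {
//   var s = document.createElement('script');
//   s.src = '/js/enhancements.js';
//   s.async = true;
//   document.head.appendChild(s);
// }
```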
So once you do that, these browsers are just running plain HTML and CSS, you've protected yourself from the real bite-y parts of these browsers, the horrible bits.
But the content still works.
Developing becomes way easier.
There's no reason for your stuff to work without it.
Progressive enhancement is about building up from a baseline, but each project will have different baselines.
It isn't just about browser capability, it's user capability in terms of cognitive, sensory, motor.
It's about the device's form factor, input, connectivity, and we set and adjust these baselines in terms of our users.
Build on top of what's already there.
This is SVGOMG, which I built. It does its work entirely on the client, minifying SVGs.
I mean, of course, you could have replaced this with just the kind of file inputs that post the SVG to the server.
It does the work, and it posts it back.
It doesn't really solve the problems that this aims to solve.
With progressive enhancement, every phase of the enhancement absolutely must have a real world user that benefits from it.
And in a world where everyone has modern browsers, is progressive enhancement dead? Earlier in the year I bought a PS4, and a particular feature of the PS4 caught my eye.
When downloading a game, you could play level one even though level 30 was still downloading.
Well, whoopie shit.
[LAUGHTER] The web has had this feature for 20 years.
This is the full HTML spec loading here.
It's like a 3 megabyte document.
It's a throttled connection.
But it gets on screen after only 20K has downloaded.
And this is what makes the web feel fast.
Browsers embrace streaming.
You don't need like a lengthy install step just to get somewhere.
Whereas for things like consoles, the PS4, the web has been this kind of awkward add on.
It was mostly used for just online gaming at first.
And I am not interested in online gaming at all because I've never played a game that I thought would be better with the addition of verbal abuse from a child.
I can't play any game with these headset communication things.
It's like being on the phone to the Westboro Baptist children's choir, you know, it's this torrent of abuse in high pitched voices.
But what do we do with this, this amazing web feature? We web engineer away with this all or nothing mentality.
Performance matters, like it really actually matters.
AutoAnything, they cut their load time in half.
They saw a 50% jump in revenue.
The Obama campaign, a 60% speedup, a 14% increase in donation conversions.
Walmart, every 100 milliseconds grew incremental revenue by 1%.
Firefox, this is their download page, a 2.2 second reduction,
and they saw a 15% increase in download conversions.
And when I tell people these facts, the counter argument is often, Well actually I'm building a web app, not a web site, I think you'll find.
And there's no definition, really, for what a web app is versus a site, and too often it's used as an excuse for poor performance or bad accessibility or not working in particular places that it should work.
But at the day users do not stare at a blank loading screen and think, Well thank Christ this is a web app and not a web site otherwise I'd be having a bad experience right now.
And on that note, I want to take a closer look at Talky.io
because the author of this app, site, whatever, is a big framework fan.
[LAUGHTER] Yeah, this was hastily recorded.
It's right next to each other.
But the author of this, he gave this really eloquent and smart anti progressive enhancement talk in Brighton last year, and I really hated it so I really want to get back at him.
And that's what this talk is.
It's a kind of petty form of revenge.
No, but seriously I got his permission to use it as a use case, and he's actually changed his point of view on a lot of this stuff, and a lot of the stuff I'll talk about today will appear in the next version of Talky.
Anyway we will take a look at how long it takes to render, and we're going to use WebPagetest because it's brilliant.
So we put the URL in, run it in Chrome, and we're going to set the connection down to 3G.
I always think 3G is the best connection type to test on because although some of your users will be mainlining the internet through a fiber connection in the back of their head, many will have something much slower.
Increasingly many will have a mobile connection.
I know that we have 4G now, but even 4G capable users will be on 3G or worse around 20% of the time in the Netherlands.
Although it does depend on what mobile network you're on.
If you're on Tele2, it's 50% of the time.
I guess that's a much worse network.
I don't know.
I assume that mobile network is just an old man in a field holding an umbrella in the air connected to a car battery.
I don't know, but they seem much worse.
Actually I shouldn't take the piss because the worst in the Netherlands is actually the average in the UK.
It's 50% for us.
So Scott had some slightly different figures earlier on.
So these are figures for actual users that are already 4G capable, but you'll have a lot of users that aren't, that will be on slower connections all the time.
So anyway going to run the tests, and you get results.
You get results for both first load and a cached load, and it's the first load that's much more interesting because you can't rely on the cache for the performance because stuff falls out of it all the time, just as users browse around the web.
But we as developers invalidate stuff all the time.
You know we have a culture of updating and launching frequently-- which is great-- but that means every time you do that you're invalidating a load of files.
So we're going to have a closer look at this filmstrip view.
We show screenshots of all of the site as it loads, and we can see here that it took 8 seconds to get to a meaningful first render.
That's 8 seconds of blank screen.
But let's put this into perspective.
This isn't the worst.
While the PS4 may have embraced progressiveness the PlayStation store has not.
It takes almost 16 seconds to get to a meaningful render.
And the PlayStation store is a site, right, as much as Amazon is, even though they're trying to be app-y.
But so 16 seconds is clearly ridiculous, but you know Talky is an app, so is 8 seconds worth it? We must consider load time in terms of what we get, what we actually get at the time.
And we didn't get a slick, feature full video chat app.
We got that, some text on the page.
The video chat stuff is all an interaction away.
Adobe spend a lot of effort improving the start up time of Photoshop because a few versions ago it was absolutely ridiculous to stare at this for 30 seconds just to get this.
Because what can I actually do with this? There are a lot of buttons that don't do anything.
A load of menu items are disabled.
All I can do at this point is create a new image or open an existing image, and that doesn't require a whole bunch of upfront loading.
It's alarming to me that this decade's web apps are willfully making the mistakes of last decade's native apps.
We need to let users play level one as soon as level one has downloaded.
When downloading and displaying content don't wait until you've buffered everything before you show anything.
But so how would we do this for Talky.io?
Talky's source code is shaped like this which a lot of the web is.
Absolutely no content in the markup.
Don't do that.
You know, put some content in.
Put some content in the markup.
An empty or missing body tag is really a missed opportunity.
So what should the content be? Well, the content to make that look like that.
If you're a content rich site such as Twitter for example, is it a site? Is it an app? No one knows.
Put all the markup in to make that look like that.
And that's what they did.
And if you absolutely can't get content in your markup, at least use markup for the basic UI around it, the page shell.
But hear this.
A splash screen doesn't count as a first render.
It is an admission of failure.
It is a gravestone commemorating the death of your performance.
[LAUGHTER] And don't put the names of everyone responsible on it.
[LAUGHTER] Same goes for spinners and loading bars.
These are failure cases.
I mean here's a case a little bit closer to home.
Gmail has a loading bar, and it's such a problem that they have like a-- they offer up a whole different implementation using basic HTML in case you want to do that, and that's a whole separate thing that they have to maintain.
But what do I get after this loading bar? Do I get an HD movie or an immersive WebGL experience? No.
I get a list of some texts.
And I realize Gmail is complex, right, and it was a game changing web app, especially when it was first launched, but this is classic waiting for level 30 until I can play level one.
Anyway, what's next? As Steve Souders always says, Unblock your scripts.
Scripts will block rendering of subsequent content by default, so we can stop that.
As we heard earlier we can put async on there.
Alternatively you can put defer.
Async will execute as soon as the script loads, and they can execute out of order if you've got multiple async scripts, whereas defer will defer execution until DOM ready, and it will execute the script in order, which is quite nice.
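In markup, the two non-blocking options look like this (file names are made up):

```html
<!-- async: executes as soon as it downloads; multiple async scripts
     can execute out of order -->
<script src="/js/analytics.js" async></script>

<!-- defer: executes after the DOM is ready, in document order -->
<script src="/js/app.js" defer></script>
```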
And this is actually a feature that came from Internet Explorer.
They've had this since IE 4 back in 1997.
Unfortunately this implementation had a bug.
That meant if your first script modified the DOM, then at the point it modified the DOM, it would start executing the second script, like halfway through your first script.
So everything would just blow up.
And unfortunately this bug actually carried through to IE 5 and IE 5.5 and IE 6 and IE 7
and IE 8 and IE 9.
[LAUGHTER] But they did fix it in IE 10.
So that's OK.
But if you need to support IE 9 or below then you cannot use defer.
It will catch you out, and it will be really difficult to work out what's going on, so I would stick with async.
But once we can forget about IE 9 then, yeah, defer is a much better solution.
But this does present a small problem loading async and using content in the markup because we've got this button at the top here, this Let's Go button where you type in the name of the video call you want to make and you click that.
And that button is going to be available before the script has downloaded and executed.
So we could hide the button until the script loads.
We could put a spinner there or something.
Lie to the user.
Go full Volkswagen, you know, fake it.
Pretend you're ready.
[LAUGHTER] Because you will probably get away with it.
But don't leave it to chance.
You don't need to do anything.
Otherwise make a note of the room name that the user enters and store that in a global variable.
Tell you what, I'll just make that happen right now.
And we have to do something in the meantime, though.
And I think the right thing to do is to react to the user's click.
Change the color of the button, do a little transition or something, an active state.
I wouldn't show a spinner unless another second passes until that script loads.
Because the action and maybe like the animation of the button clicking, that will buy you an extra second before you have to kind of admit that you are lying, and no I'm not actually ready at all.
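That idea can be sketched like this (the IDs, class name, and the global variable are all made up): a tiny inline script that runs before the main script arrives, reacts to the click, and stashes the room name for the app to pick up later.

```html
<form id="create-room">
  <input id="room-name" type="text" placeholder="Room name">
  <button type="submit">Let's go</button>
</form>
<script>
  document.getElementById('create-room').addEventListener('submit', function (event) {
    event.preventDefault();
    // note the room name so the main script can act on it when it lands
    window._pendingRoom = document.getElementById('room-name').value;
    // react to the click: an active state buys you a second before you
    // have to admit you're not actually ready
    event.target.querySelector('button').classList.add('is-pressed');
  });
</script>
```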
With SVGOMG, I'm actually really lazy.
If the user interacts before the main script arrives, I do this.
It may as well be this.
Well, I'm not just betting; I'm using analytics to check, and it's happened four times.
Two of them were me checking the analytics were working, and one was like an old version of IE 8.
I don't really care.
The other one seems to be genuine, so you could say it's happened once out of thousands.
I don't know, it's kind of like, if you've got a horrible user experience there, but no users actually experience it, is it actually a bad experience? I'm going to say no, and just let myself off with that.
So if Talky put content in their markup and unblocked the script, how quickly would they get to render on 3G? It was eight seconds before, and now it is-- [DRUM ROLL] [BUZZER] We have solved nothing.
So what's going on here? Let's have a look at the network waterfall.
So this is it.
The script was blocking rendering for two seconds, and we fixed that, but it wasn't actually the main problem.
This is the CSS for the font.
Because it's to a different origin it has to go through a DNS look up and SSL negotiation, that's the shorter bit of the blocks there, the thinner bit.
And all it does is it gives us a redirect to another origin.
So we have to go through SSL and DNS again.
Two seconds is wasted on the redirect alone, and then the CSS delivers like 200K of inlined font, you know, a data URL in there.
And in-lining fonts like that is never the answer, but this is not actually Talky's fault.
It's out of their control.
This is done by the font provider.
The font provider is being a dick to them, and they're doing it deliberately.
Now why would a font company deliberately give a worse experience to regular users, regular, innocent users like you.
It's not like you would steal a car is it? Or a handbag, or even a TV.
Yeah, so this whole user experience is brought to you by the letters DRM.
It's adding four seconds onto the actual render to try and stop people pirating fonts.
Does it stop people pirating fonts? Fuck no.
Of course it doesn't.
It doesn't make a difference.
It just makes stuff slow and painful.
We need to work around this.
These DRM style hacks are a performance disaster, but even regular web fonts are a real problem.
I think they're real performance trouble, especially when they're used for body content.
Hiding the text until the font is ready is bad because it's really frustrating when you think, I've got the content here, actually.
Please show me it.
Showing a fallback is great, but maybe it's just me, but the switch from one font to another I find really jarring.
It's a really jarring experience especially if the text I'm reading moves, especially if it changes the line it's on.
We need to sort that out.
And Scott mentioned it earlier, there's a new spec proposed to deal with this, and it's the font rendering controls module, and it lets you specify how long do you want to-- do you want us to block the rendering or do you want us to swap straightaway? And they're going to extend the spec so you can actually provide the numbers yourself and provide the timings.
But the most interesting option that they have is this font display optional.
And that's one where it says, if the font is cached, use it.
Otherwise use a fallback and stick with it.
Don't swap out, but let the browser download it for next time if the connection is good or whatever.
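As a sketch, that option looks like this under the proposed spec (the family name and URL are made up; at the time of the talk this was still experimental and behind a flag in Chrome):

```css
@font-face {
  font-family: 'Body Font';
  src: url('/fonts/body.woff2') format('woff2');
  /* if the font is already cached, use it; otherwise stick with the
     fallback for this page view and let the browser fetch the font
     in the background for next time */
  font-display: optional;
}
```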
And this was inspired by the Guardian who sort of did this thing using local storage, like they'd download the font and put in local storage, and then the next time they load it go, Oh if it's in local storage, we'll use it.
If it's not, we're not going to use it at all.
This is being implemented in Chrome now but it is still in experiments, you know waiting for cross browser agreement on it.
I think it's kind of going to save us when it comes to web fonts.
In the meantime, I would avoid using web fonts for the first rendered content, be that the first screen of your app or what's above the fold, especially for primary content like body text.
So we're going to avoid using web fonts for the first render in this case, but load the CSS async for subsequent screens.
To do that, we're going to use the Filament Group's load CSS that we heard about earlier.
There should be an async attribute on link rel stylesheet, I think.
There isn't, but we should add that.
So we'll inline the load CSS stuff.
So we've replaced the link tag with a call to load CSS to the font CDN, and now we could render without the font but it still loads in the background.
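If you don't want to inline the loadCSS helper itself, a pattern the Filament Group also document gets a similar effect with markup alone; the URL here is a stand-in:

```html
<!-- loads without blocking render (print stylesheets don't block), then
     flips itself over to apply to all media once it has arrived -->
<link rel="stylesheet" href="https://fonts.example.com/fonts.css"
      media="print" onload="this.media='all'">
```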
So if Talky put content in the markup, made their script async, and made that stupid font thing async, what would their render time be? It was eight seconds, and it's now-- [DRUM ROLL] [TRUMPET] That's a 6.4 second
saving on 3G.
And they're actually worse in combination, because while the script is blocking first render, the engine doesn't realize it needs the font data as well, so they kind of add up to be more than the sum of their parts.
Anyway, 2.6 seconds
is pretty good for 3G.
On a 5 megabit connection, which a lot of people have, these changes would have made it go from 2.5 seconds down
to half a second, and that's huge as well.
It was born out of date.
It's locking you into the mistakes of last decade's native apps.
It's great to see things like FastBoot in Ember, and React.
These things can render on the server, and then the client side part of it can re-render it again, which is slightly odd that it has to re-render.
I'd rather it picked up where the server left off, but at least that gets you to that first render a bit quicker.
But even on the server, this all or nothing approach is troublesome because what it will do is it needs to sit there and work out what the whole HTML is going to be before it can start sending any of it.
So I'm actually a lot more excited about things like DustJS, which is being taken on by LinkedIn.
And rather than being all or nothing, DustJS takes a streaming approach.
So with Dust, and this is what a Dust template looks like, it would just send the Hello World heading out straight away, and the opening div tag, and then it gets to the foo value.
And if it is just a string or like a number, it will HTML-escape it, output it, and then it will carry on.
But if foo is a reference to a stream, then it will pipe that stream, and it'll add a transform stream onto it to escape the HTML and start sending it to the browser.
So even if foo is like a 100 K worth of string or eventual string, the user will see it as it's piped down, and they won't have to wait until the whole thing is ready.
This is really good when you're getting some data from databases or getting data from servers or services over HTTP and transforming them.
And then say bar is a promise, it will wait for the promise to resolve, and it'll output the eventual value.
So you can just, as soon as you get your request, you can go and set off everything that you need from elsewhere like databases and other services, but you can start rendering before any of that returns.
Like don't buffer everything before sending anything.
Instead flush early and flush often.
I really wanted to make a poop joke here, but I couldn't, I couldn't work one out-- oh that was the poop joke.
Oh, shit yeah, OK.
Poo jokes all the way down.
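The streaming idea can be sketched like this (my own shape, not Dust's actual API): a template is a list of parts, where plain strings are flushed immediately and functions stand in for lazily pulled values, which get escaped and sent only when rendering reaches them.

```javascript
// Escape a value before it goes into HTML output.
function escapeHtml(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}

// Yield the response in chunks: static markup goes out straight away,
// dynamic values are pulled, escaped, and sent as rendering reaches them.
function* renderStream(parts) {
  for (const part of parts) {
    if (typeof part === 'function') {
      yield escapeHtml(String(part())); // a slow value: pull it lazily
    } else {
      yield part; // static markup: flush immediately
    }
  }
}

// Each chunk can be written to the response as soon as it's yielded, e.g.
// for (const chunk of renderStream(template)) res.write(chunk);
```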
Anyway where am I? Oh, so using all the stuff that I've mentioned so far, I was seeing SVGOMG render in around 2.6 seconds,
but I wanted to see how close to native performance I could get with the web, especially on mobile.
And the only render block I had left after doing everything that I mentioned so far was the request for the CSS, not the font CSS, just the regular site CSS.
But yeah, you can do better here by just rendering with the level-one CSS, rather than the CSS for the whole app or site.
So to fix this, I did this.
So that's async loading most of the CSS, but in-lining just a little bit for that first render.
It's just a tiny bit of CSS.
This is pretty easy to do on, like, low content apps if you want, but it's a little bit trickier for sort of high content sites.
But we're fighting for some changes to the HTML spec to make this easier, as Scott mentioned earlier.
So he was using two style sheets in the example, but I would kind of, I would see it being used with many more.
So you'd have like a style sheet include for your article, and straight after that you would have your article.
And then later on you would have the style sheet for your comment CSS, and then you would have your comments, and so on and so on.
So rather than having one big style sheet that blocks the whole page, you have a little style sheet per component that's just included just before the first use of that component.
And then the browser can render in stages, because each style sheet only blocks the rendering of the content after it while it downloads.
So it can render the article even though the comment CSS and the sidebar CSS is still downloading.
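So the shape being argued for looks something like this (file names are made up): each link only blocks the content after it, so the article can render while the comments CSS is still downloading.

```html
<head>
  <link rel="stylesheet" href="/css/site-header.css">
</head>
<body>
  <header><!-- site header --></header>

  <link rel="stylesheet" href="/css/article.css">
  <main class="article"><!-- article content --></main>

  <link rel="stylesheet" href="/css/comments.css">
  <section class="comments"><!-- comments --></section>
</body>
```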
And this is especially useful over HTTP 2, which you get a lot of performance out of smaller requests.
So these stylesheets could be pretty small, and you're not going to have the overhead of the extra request in HTTP 2.
And also you get benefits like if you modify the CSS via sidebar, it doesn't mean you invalidate the cache for your article and comment CSS as well.
This is actually how Internet Explorer behaves today.
It does this.
I really think it's the right thing, and so in Chrome we've started work to change it so we do the same.
Scott's-- I thought Firefox did the same, but Firefox actually just async loads the CSS.
I would rather Firefox move to the IE model as well.
And we'll get the spec changed in the hope that Safari might do it eventually.
In the meantime, load CSS is the way to go.
So by in-lining the CSS and loading the rest async, that took my load time from 2.6 seconds down to--
[DRUM ROLL] [TRUMPET] So it was a second of saving on that, just that little modification.
And when you come to 1.7 seconds,
I mean most of that, 70% of that is DNS, SSL, which is just slow over 3G.
There's not a whole lot you can do about it.
But only 5K of content is needed for that first render and all part of one request.
And that shouldn't be a shock, right, because look at it.
There's not a lot to it really.
Like Talky's initial render and Photoshop's initial UI even, there's not a lot there.
There's not a lot it can do.
All the heavy functionality is an interaction away.
But even then we can optimize for the first interaction as well.
So this is the waterfall for SVGOMG.
So there's the initial connection and HTML, and that's the 5K of content for the first render.
And as soon as this has arrived, the user can start interacting with the site.
They can select an image to optimize.
They can drag one into the browser or load the demo image.
But then in the background, three other scripts download as workers, and these are the heavier scripts, the ones that deal with SVG minifying and parsing, gzipping, syntax highlighting.
But the user doesn't care that these are loading in the background.
If the user manages to select an SVG before these scripts have loaded, it doesn't matter because as far as the user knows the app is minifying the SVG.
In actual fact it's waiting for the scripts to arrive that know how to do that, and then it does that.
But in 99% of cases the script will load before the user selects an image.
When I was a kid at school they had someone come in to teach us about the dangers of smoking.
And I went home full of all this new information because I was really young and naive, and I related it all to my dad who is a smoker.
And I was like, Dad, did you know that cigarettes contain all these harmful chemicals, and it causes 100 cancers and you're damaging society.
And then I finished on the big fact I learned that day, and I said, Did you know every time you smoke a cigarette, you reduce your life by 10 minutes? And my dad thought to himself for a moment, and then he said, Can it be this 10 minutes? [LAUGHTER] And despite being a total smart-ass, he kind of had a point.
The time people hate wasting the most is right now.
They care less about what happens after that.
I think adding a few hundred milliseconds to total load time is worth it if it decreases the time to first render and interaction.
The users don't notice the extra.
They're distracted by having something they can use right now.
So it's fine to take a little performance hit in total load time.
So I'm making myself laugh.
I don't know if anyone else noticed, but during Chris's talk earlier, he said a couple of times, If you do this, then you take a little performance hit there, and if you do that, you take a little performance hit there.
And I kept hearing little performance Hitler.
And I can't get this idea out of my head.
I think maybe Ilya Grigorik is Little Performance Hitler, I don't know.
He'll just run on and tackle you until you do it the right way-- the Reich way, I guess it would be.
No one groaned at Bruce's joke.
How come that got a groan? Oh, it's because I'm not a dad.
Using the stuff mentioned so far, you can make your sites and apps work really fast on Wi-Fi.
You can make your stuff render in half a second even with an empty cache.
You can be fast on 3G, you know render in 1.7 seconds with
an empty cache.
But there's one more extremely common connection type that we haven't hit before, and it is-- it's this.
No matter what the resource is, when you're on Lie-Fi, it is going to take minutes, and then it is going to fail.
Offline is a problem, but it's kind of OK because at least you get quick answer.
You know, can I have this page? No.
You're like, oh.
I'll deal with that.
Lie-fi is like offline but it trolls you by pretending it's online.
That's the problem.
And I've used one bar as an indicator here, but it's not always that.
I mean, I get this quite a lot, maybe it's because mobile connectivity in the UK is poor, but has anyone else had this? Where your phone says it has a full signal but can it squeeze a byte down it? No? Yeah, yeah.
OK, so it's not just me.
When your phone's got this kind of connection, it's like a one legged dog.
It thinks it can play fetch but it can't, and you have to watch it drag itself along the ground with its one leg, and it's horrible.
This is actually me being lie-fi'd to only last week.
Look at how miserable my face is.
This what lie-fi is like.
I've actually been driven to drink, but it's OK.
I'm in a bar.
They cater for that quite well, so.
We need to save users from this.
Progressive enhancement needs to evolve.
The majority of users now have a frequently auto-updating browser, and enhancing based on browser features is becoming less of a challenge.
Varying network conditions have always been a bigger issue.
And as mobile usage increases around the world it's an even bigger issue.
Offline first is progressive enhancement with the network.
Once again progressive enhancement proves to be the better option than graceful failure, graceful degradation, because if you go to the network, and then if that fails, you provide some kind of offline experience.
The lie-fi users are still left in limbo.
They still have to wait minutes for that initial connection to fail.
I think there's been a library recently released called UpUp.
I think it's UpUp? Yeah? OK.
And that says here's how you make your stuff work offline first, and it's a really good library, but it's online first, and that's why it really, really grates with me, but-- Yeah, if you're doing online first, you're still going to leave the lie-fi users in limbo.
And you create this weird situation, because I actually did this with the Lanyrd site, like I did it the wrong way round.
And you create this situation where you put airplane mode on just to get access to the data that's already on your device.
And that's really frustrating.
So although the network is a natural place for progressive enhancement to concentrate, we haven't really done anything with it because it was near impossible.
But that's no longer the case.
Service Worker landed in Chrome in late January, and it will arrive in Firefox in the next couple of months.
So from a page, you'll call this to register the service worker.
Inside the service worker you get a series of events, and you also get a cache that you can sort of add and remove items from as you so desire.
You get “install” when the browser sees this version of the service worker for the first time, and “activate” when install is complete and the service worker can start controlling pages.
The most interesting one is “fetch”.
You get this whenever one of your pages is requested from the network, and then you get it for every request that that page makes, even if they're to other origins.
And like other browser events you can prevent the default and do your own thing.
And the next time the page loads, it's controlled by the service worker.
A page makes a request, just take it out of the cache and send it back.
You don't need the network at all.
And the service worker doesn't do any of this for me.
It's not an inbuilt behavior as such.
And even though I'm specifying everything, it's not a lot of code.
I'm not going to go into depth with it, but on install it says, hey, I want you to wait until I open this cache called static-v1 and add a load of items to it, and once that's done, the install is complete.
And you can even add items for other origins as well.
And then on fetch it says, hey, when one of the pages makes a request, look for a match for that request in the cache.
If you get something use it, otherwise go to the network.
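A sketch of that install-and-fetch pairing (the cache name, the asset list, and the `cacheFirst` helper are my own illustration; the pattern is cache first, network as fallback):

```javascript
// sw.js — install everything up front, then answer requests from the cache.
const STATIC_CACHE = 'static-v1';           // illustrative cache name
const ASSETS = ['/', '/app.js', '/app.css']; // illustrative asset list

// Pure helper: use the cached response if we have one, otherwise hit the network.
function cacheFirst(cached, fetchFn) {
  return cached || fetchFn();
}

if (typeof self !== 'undefined' && self.addEventListener) {
  self.addEventListener('install', event => {
    // Don't count this version as installed until the cache is populated.
    event.waitUntil(
      caches.open(STATIC_CACHE).then(cache => cache.addAll(ASSETS))
    );
  });

  self.addEventListener('fetch', event => {
    event.respondWith(
      caches.match(event.request)
        .then(cached => cacheFirst(cached, () => fetch(event.request)))
    );
  });
}
```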
And I'm not going to deep dive into the API.
I'm just showing that it's not a lot of code, but you're in full control.
And that takes the performance of SVGOMG second visits from between one to two seconds depending on connection to-- [TRUMPET] As good as instant.
And that's how fast it is on 3G, that's how fast it is offline, and that's how fast it is on Wi-Fi.
The network no longer has any impact on the performance at all.
Users didn't have to go to an app store for this.
They didn't have to install and wait.
All of the installation steps happen in the background while they were actually using the app or the site.
But you can't just serve content from the cache forever because we'll add new features, right, we'll fix bugs or whatever.
We don't want users stuck on some old version.
This is our install code from before, and every time the user visits a page on your site, the browser will check for an update to Service Worker in the background after the page is loaded.
And if the file is different in any way-- if it's a single byte different-- it will consider it to be a new version.
When the browser next loads the page it will see that difference, and it will start a new service worker for that.
And it runs alongside the old version because the old version is still being used by the user right now.
And this new one isn't ready yet because it needs to go to the network.
It needs to do its install step, get all the assets, including these new changed ones, and then it puts them in the cache.
And this is why we name the cache something different because we don't want to disrupt the one that's already there being used by the user.
Now by default this new version will wait.
It will wait until the old version isn't being used anymore, but once the pages using the old version close, there's nothing left to control.
The old version isn't needed anymore.
It becomes, well, redundant really.
And it goes away.
[LAUGHTER] And then the new version can move in and take over, and everything's great.
It can start controlling pages, and its first order of business is the activate event, where it can get rid of the old versions' caches.
That's what this slightly gnarly piece of code is doing here.
It's going through all the caches that are there and deleting any that it's not expecting to be there, because you're not necessarily upgrading from the previous version-- the user could be coming from several versions ago.
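That slightly gnarly cleanup can be sketched like this (the cache names are illustrative, and the `cachesToDelete` helper is my own way of splitting out the decision):

```javascript
// sw.js — activate handler that deletes caches belonging to older versions.
const EXPECTED_CACHES = ['static-v2']; // illustrative: caches this version uses

// Pure helper: any cache we aren't expecting belongs to some old version.
function cachesToDelete(allNames, expected) {
  return allNames.filter(name => !expected.includes(name));
}

if (typeof self !== 'undefined' && self.addEventListener) {
  self.addEventListener('activate', event => {
    event.waitUntil(
      caches.keys().then(names =>
        Promise.all(
          cachesToDelete(names, EXPECTED_CACHES).map(name => caches.delete(name))
        )
      )
    );
  });
}
```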
Waiting for the old app to go away before starting a new version like this, that's playing it safe.
But sometimes you're happy to take over quicker than that, and then you can do this.
You can call self.skipWaiting(), which is basically saying, hey, don't worry about it, I'm just going to move in and take over, and everything will be fine.
On SVGOMG, if I find out about an update and the user hasn't interacted with the page, I just refresh the page to pick up all that new stuff.
We can see that happening here.
The user goes to drag a file, page refreshes, so now they're on the latest version.
However, if the update is a little bit slower than that and the user manages to drag a file in and start using the site, refreshing would be disruptive, so you can just show a message inviting them to refresh.
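A sketch of that take-over-and-refresh behaviour (the interaction tracking and the `shouldAutoRefresh` helper are my own illustration of what's being described, not the actual SVGOMG code):

```javascript
// Pure helper: only auto-refresh if it won't disrupt the user mid-task.
function shouldAutoRefresh(userHasInteracted) {
  return !userHasInteracted;
}

// sw.js side: don't wait for pages using the old version to close.
if (typeof self !== 'undefined' && self.addEventListener) {
  self.addEventListener('install', () => self.skipWaiting());
}

// Page side: when a new worker takes control, reload if the user hasn't
// started doing anything yet; otherwise show a "refresh for updates" notice.
if (typeof navigator !== 'undefined' && navigator.serviceWorker) {
  let userHasInteracted = false;
  document.addEventListener('pointerdown', () => { userHasInteracted = true; });

  navigator.serviceWorker.addEventListener('controllerchange', () => {
    if (shouldAutoRefresh(userHasInteracted)) {
      location.reload();
    }
    // else: show a message inviting the user to refresh
  });
}
```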
But you can also do more complicated things.
You could use IndexedDB to track the difference between the version you've got now and the new version, and you might even have a changelog there, so if you know the new version has an important security fix, you might just refresh the page.
I mean that's a bad experience, but it's better than the user being left on this slightly dodgy version.
Or if it's just a minor tweak to a feature that's not used very often, maybe not tell the user at all.
You're in control of the user experience so you can do what you want.
So what we saw before was just a version update, like getting a new binary from the app store, I guess is the equivalent.
But some updates are dynamic and continual.
For example, the page could make a request, which the service worker could fulfill from the cache, but after doing that, it could then go to the network to find a new version of that file and put in the cache.
This is good for the things that don't always need to be completely up to date.
Things like avatars.
Things that change, but if they've got one version, it doesn't really matter for one view.
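That serve-then-update pattern is often called stale-while-revalidate; here's a sketch (the cache name and the `preferCached` helper are illustrative):

```javascript
// sw.js — respond from the cache, but refresh the cached copy in the
// background so the *next* view gets the newer version. Good for avatars.
const AVATAR_CACHE = 'avatars-v1'; // illustrative cache name

// Pure helper: use the cached copy if we have one, else wait for the network.
function preferCached(cached, networkPromise) {
  return cached || networkPromise;
}

if (typeof self !== 'undefined' && self.addEventListener) {
  self.addEventListener('fetch', event => {
    event.respondWith(
      caches.open(AVATAR_CACHE).then(cache =>
        cache.match(event.request).then(cached => {
          // Kick off a network fetch that updates the cache for next time.
          const networkUpdate = fetch(event.request).then(response => {
            cache.put(event.request, response.clone());
            return response;
          });
          // Don't let a failed background refresh become an unhandled rejection.
          networkUpdate.catch(() => {});
          return preferCached(cached, networkUpdate);
        })
      )
    );
  });
}
```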
But for cases where the user must have the newest thing, this is the pattern I really like.
The page sends two requests.
One is fulfilled from the cache, and that's the first render which is really quick, and it's also the offline experience if the user has no connectivity.
Whereas the other request goes to the network, and if it returns it goes to the cache and the page, and the page updates the rendering.
And it's kind of up to you how the page handles this double dose of data.
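The page side of that double-request pattern might look like this (the `?cached` URL convention, the stub render functions, and the `countNewPosts` helper are all my own illustration, not the actual demo code):

```javascript
// Pure helper: how many posts from the network aren't already on screen?
function countNewPosts(shownIds, freshIds) {
  return freshIds.filter(id => !shownIds.includes(id)).length;
}

// Hypothetical stubs for whatever rendering the page does.
function renderPosts(posts) { /* draw the posts into the page */ }
function showNotice(msg) { /* show a small banner to the user */ }

if (typeof document !== 'undefined' && typeof fetch !== 'undefined') {
  const shown = [];

  // Request 1: the service worker answers this from the cache — instant
  // first render, and the offline experience if there's no connectivity.
  fetch('/posts?cached')
    .then(r => r.json())
    .then(posts => {
      renderPosts(posts);
      posts.forEach(p => shown.push(p.id));
    });

  // Request 2: goes to the network; if it returns, the cache and the page
  // both get the fresh data, and the user is invited to scroll to it.
  fetch('/posts')
    .then(r => r.json())
    .then(posts => {
      const fresh = countNewPosts(shown, posts.map(p => p.id));
      if (fresh > 0) showNotice(fresh + ' new posts — scroll to see them');
    });
}
```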
As a kind of demo of one thing you can do, I created this pseudo-Twitter sort of thing-- a series of micro posts.
But here's how it loads.
Straight away you get data from the cache, instantly, but then it's going to the network in the background, and if results come back it just says, Hey, there's some stuff there.
You invite the user to scroll to see it.
This is good for streams of content, you know, because the user can just scroll to get the new stuff.
For other cases you might want to tell the user, Click here to see the new posts, or update the text or whatever.
We absolutely must protect users from these varying conditions in the network.
Unfortunately sometimes when I suggest this, I get this response.
Users don't stare at websites and think, Oh, thank God this is a website otherwise I'd be really annoyed.
Even in sites we should be making use of this offline first stuff.
People use native app versions of sites like news sites because they have things like notifications and offline support.
And these are things that we could just do on the web now.
Something like Wikipedia, right? I gave this a go actually-- had a stab at what offline Wikipedia might look like.
Here it is added to my home screen, but it's just a website.
Excuse the poor design.
I literally do not know what I'm doing at all.
But if we look at an article on Mad Max, there's a button in the top right that's Read Offline, and you can click that, and then sometime later I can get rid of that tab, and later when I have like next to no connectivity I'll be able to just call it up instantly.
There it is in my cached articles, and it loads instantly.
So I might cache things that I know I'm going to need on a trip or something like that.
But let's say I wanted to navigate further.
I want to look at the article on Charlize Theron, so I'll click there.
And we get a problem with this, because we've still got next to no connectivity-- we're in lie-fi-- but this article isn't cached.
We can't cache all of Wikipedia, right, that would be ridiculous.
But we're actually working on a feature that will let you improve the user experience here.
And this is only in Chrome dev at the moment.
And we're still working on the spec, but here's what you can do with it.
So back here once again, click the link.
But this time, after a couple of seconds, we'll say to the user, hey, this is not looking very good, we're having trouble downloading this.
Would you like us to download it in the background for you and let you know when it's ready? So tap that, and the site says, yo, can I show you notifications? This is a good point to ask for permission.
It makes sense, so yeah, fair enough.
And then I can close the browser, put the phone in my pocket and go do something else with my life.
And then it vibrates, I get a notification, it tells me the article is ready.
I can click on it because it's cached, and it just appears instantly.
But implementing this feature didn't require any clever server stuff.
This was a combination of low level features working together to do something.
This is background sync.
All background sync does is it lets you register to run a piece of code, either now if the user has connectivity or when they get connectivity.
And you could use this to send data.
In fact that would probably be the main case: the user clicks Buy, or they click Send on an email or Send on a chat.
You'll just get background sync to do that.
So even if they navigate away or you don't have connectivity, it will still send in the background.
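A sketch of one-shot background sync for that send-a-message case (the tag name, the in-memory outbox helpers, and `sendQueuedMessages` are illustrative; a real app would persist the outbox in IndexedDB so the service worker can read it later):

```javascript
// Pure outbox helpers (in-memory here purely for illustration).
function queueMessage(outboxList, msg) {
  return outboxList.concat([msg]);
}
function drainOutbox(outboxList, sendFn) {
  outboxList.forEach(sendFn);
  return [];
}

// Page side: queue the message, then ask for a one-shot sync.
let pendingOutbox = [];
function sendChat(text) {
  pendingOutbox = queueMessage(pendingOutbox, text);
  if (typeof navigator !== 'undefined' && 'serviceWorker' in navigator) {
    navigator.serviceWorker.ready.then(reg => {
      if ('sync' in reg) reg.sync.register('send-outbox');
    });
  }
}

// Hypothetical stub: drain the persisted outbox and POST each message.
function sendQueuedMessages() {
  pendingOutbox = drainOutbox(pendingOutbox, () => {});
  return Promise.resolve();
}

// sw.js side: fires now if the user is online, or when connectivity returns.
if (typeof self !== 'undefined' && self.addEventListener) {
  self.addEventListener('sync', event => {
    if (event.tag === 'send-outbox') {
      event.waitUntil(sendQueuedMessages());
    }
  });
}
```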
But yeah, you can also use it to download bits of content.
It's still a couple of months away from hitting stable.
We need to flesh out the spec, and we need to see if other browsers like it as well-- get their thumbs-up on it.
We need to think about the privacy thing pretty-- well, we have thought about the privacy thing pretty seriously, and that was one of the things that delayed us.
I mean, native apps get to run background code pretty much all the time, right, but the web has a higher standard of security and privacy, so we want to keep that.
For these one-off bits of code that are likely to just run straight away, we're hoping we can do that without asking for permission, but regular background syncs will definitely require some kind of permission.
So Service Worker lets you get these interesting new features-- you know, offline first-- lets you get on screen in 150 milliseconds, and you can do this today in Chrome, which depending on what you're building covers either a large portion or the majority of your users.
And Firefox will get it soon.
IE said it's on their to do list, but they don't have a date for it.
Apple, as usual, kind of working in secrecy, so we don't really know.
But don't let that stop you.
There is a way that you can use it safely today.
I'm sure you've heard of it.
It's progressive enhancement.
Service Worker has been built from the ground up to be progressive enhancement friendly.
Even on browsers that support Service Worker, it's not going to be there on that first page load.
You're going to have to work without it.
So if you want Safari to implement this, use it because then your stuff will be faster in Firefox and Chrome, and that will force their hand.
They'll either implement it or be slower.
I mean, people don't expect things to work offline, though.
It's going to take a while for people to get used to that.
Do you have those toilets on trains here that have the sort of large, revolving door? I don't know.
You get inside and you're greeted with these buttons, this kind of control panel like this, and you have to press the flashing-- press the D, then you press the flashing L-- I mean the instructions are all there, and they're in Braille as well, so even blind people know they have to wait for the flashing L which is really useful.
I don't trust this.
I don't trust this because it has failed on me.
I was sat on the loo, and I was slowly revealed to the carriage like a bad game show prize.
[LAUGHTER] I probably didn't hit the buttons right.
It was probably my fault, but I prefer doors I can lock and check, but I didn't have that kind of physical check here, and that's similar for offline.
It's difficult for users to trust that the web can work offline.
One thing we're doing-- and this is the last thing I'm saying-- I'm about to be heckled by Bruce-- is if you visit a site a couple of times, and it works offline, it has the manifest, Chrome will start saying, Hey do you want to add this to your home screen? And once it's on your home screen it feels like a native app so user expectations start to change.
It's like, Oh, yeah, that looks like a native app.
I would expect that to work offline.
So that's my challenge to you all.
Save users from lie-fi.
This is how we beat native apps at user experience.
Thank you very much.
[APPLAUSE] Because I'm a nice guy, I'm not going to invite you to sit on the hyper moisture absorbent comfy chair.
I'm going to banish you to the toilet.
No, no, I-- I've got-- I think I've got-- oh, OK fair enough.
No, you're out of time.
We've got to clear out