Fronteers — professional association for front-end developers

Delivering Responsibly by Scott Jehl

Transcript

(Scott Jehl) Thanks so much Bruce.

That was hilarious.

Good morning.

Thanks so much for having me here.

I'm really excited to be in Amsterdam.

This is my second time here this year, and I just love this city.

You're so lucky to live here.

It's beautiful.

Today I'm going to be talking to you about delivering fast, resilient, and accessible sites.

I'll cover some challenges that we face in doing that.

Some practices and upcoming standards that we can use to make sites usable on any device as soon as possible.

As web designers and developers, we each tend to have our specialties and responsibilities.

But broadly speaking, our job is generally to deliver sites that respond to our users' needs.

Which sounds simple enough.

But if you've built sites that need to work across many devices really well and really quickly, you know that it's anything but simple.

It's been shown that our users' requests are often the same, regardless of their browsing context.

But the way that we respond to each of their requests should and can vary dramatically.

And that's because there's an incredible diversity in the means by which people access our sites.

Ultimately we want to deliver services that feel at home and appropriate on any given device.

And on the web today, our sites need to respond appropriately to an increasing number of factors.

It's been said that the web is a hostile medium to work with.

I think it's especially hostile to assumptions.

Assumptions that users, browsers, networks are certain to behave in particular ways or have certain characteristics.

That's because the web isn't any one platform, but more of a continuum.

To quote Jeremy Keith, "a continuum of features and constraints."

Or if that sounds overly tidy to you, you might say it's more of a scatter plot.

There's so much to consider.

So let's talk about some of those features and constraints.

Perhaps most obviously, our interfaces need to respond appropriately to a device's viewport size.

And there's an incredible array of viewport sizes to consider today.

Across devices, viewport sizes have reached a nearly continuous gradient from watches to phones to e-readers, tablets, laptops, desktop, public displays.

And many devices offer two orientations to browse in.

Some offer split screen.

So viewport size isn't necessarily tied to screen size.

It's almost silly to focus on viewport size at all anymore, or any particular sizes because they can be anything.

It's nearly continuous.

So this calls for fluid user interfaces as opposed to a series of fixed designs that we toggle between.

So responsive design.

Along with viewport size differences, we need to pair the fidelity of our interfaces with what can be expected of a device's capabilities.

That means qualifying the application of our enhancements, our CSS, our JavaScript, so that we don't break an already usable experience when we apply them.

Going to let this slide finish playing.

It's a comparison of features between simple and enhanced.

And you can see the sort of non-JavaScript, simpler experience, and then the enhanced one on the right.

And feature support variations and differences happen for all kinds of reasons.

Not just by accident, and not just in older browsers.

For example, millions of people use proxy browsers, like UCWeb in China or Opera Mini.

Now Opera Mini is this really great browser.

Especially if you have a simpler device or you're conscious of your data usage.

And who isn't? So it proxies everything between you and the server to compress it and make it really fast.

It probably also breaks your JavaScript if there's significant complexity there.

And it doesn't support things that we tend to rely on like icon fonts.

So features vary.

Input and output mechanisms vary as well.

Most devices offer multiple means of interacting with a site.

Between hardware buttons, software gestures, mouse, keyboard, assistive technology.

So we need to be careful not to assume that any of these modes of input or output are exclusively in play at any time.

Factors like screen size and touch gestures don't always correlate anymore.

They kind of happen all over the place.

And so those are some device and UI related considerations.

But mobile devices brought other constraints to the web as well, like widely varying network speeds and conditions.

According to Ericsson's mobility report-- this is from 2013, just two years ago-- 60% of the world's connections were sub-3G.

So that's 2G to be clear.

Edge, right? And today that 2G and 3G majority is still very much the case.

This is from Ericsson's 2015 report.

And if you focus on the green areas, that's 4G coverage.

So you can see places like the US, where we have 40% 4G coverage, but still, 3G and 2G dominate.

In Western Europe, 60% 3G.

And the projections for 2020 are encouraging from that report, but only really encouraging if you aim to reach people in the US and Western Europe.

Otherwise that green is going to take a while to fill in.

So great network speeds aren't really evenly distributed yet.

But speed isn't the only consideration with networks, either.

Network reliability and consistency are a huge factor in how we deliver.

And requests that we make across the web come with no guarantee of delivery.

And they can and do drop, hang, time out.

These things happen.

Requests can be blocked deliberately too.

Like in China, where all of Google's services are blocked.

Google Fonts, their JavaScript CDNs, things that we rely on when we deliver our assets may or may not be delivered in China.

Even on a stable connection, when we can rely on the network, requests often fail for more personalized reasons.

We often say that progressive enhancement is not about supporting people who disable JavaScript, because who would do that, right? Well, we might want to relax that position a little bit now.

Because recently we're seeing new and very common mainstream ways that many things can be disabled or blocked.

Some of the most popular ad blockers that you can get for desktop browsers come by default with a lot of popular CDNs blocked.

Things like Privacy Badger, AdBlocker.

So enhancements may not be getting to our users, and we have to be ready for that.

Content blockers are really interesting.

This is a new feature in iOS9 Safari that allows users to block types of assets.

The screen cap is from one of the most popular apps in the App Store right now.

It's called Purify.

It's a content blocker.

And it's still on the home page in the app store.

And this is what you get when you open it up.

It's this panel.

This is all it is.

And the first options you're presented with are some major features of browsers that you can turn off in the name of performance and privacy.

One interesting thing about content blockers is that they block requests, but they don't necessarily disable features.

So JavaScript may be blocked, in that requests to a script just won't go out.

It'll fail.

But JavaScript is still enabled.

So classic workarounds like noscript tags aren't going to help here.

This is more of a network failure condition.

There are other side effects to consider too.

Web fonts.

People have been trained to know that web fonts take a while to load.

So they'll be thrilled that they can easily just turn them off now.

So we have to be ready for that.

Fallback fonts.

Things like that.

And if you do icon fonts, then you really have to think about fallbacks, because they'll get blocked along with fonts as well.

And I think if these things break your site, people aren't going to think that it's their fault that they disabled something.

They're going to think your site is down.

So we need to be ready for these things.

And really, who would blame people for reacting to the web this way these days? Our own practices, for many years, have set the stage for blockers to become enormously popular.

"The New York Times" just had a story come out.

They found more than 50% of all mobile data comes from ads.

They also ran a story on content blockers, and why you should be using them.

So you can expect those to be very mainstream.

So over all, these factors form a really hostile medium.

But it's important to note that these are not just factors that make our jobs harder.

These factors are our jobs.

Our job is to reach people and respond to their needs.

And reach, I think, is the greatest advantage of using web technology.

If we do our jobs well, our sites can reach folks who access the web under very different circumstances than many of us do working at our desks at home.

And that's inspiring to me.

But despite the distinct advantage of enormous reach that we have when we choose web technology, it's common to encounter sites that are built in ways that inhibit that reach, that aren't resilient.

Sometimes that's due to assumptions we make about our users or our networks.

Sometimes it's due to conveniences that we desire as developers.

Some of the most popular web application frameworks that we have today are built on pretty risky assumptions.

Imagine a home purposefully built so that you could not enter it if the power goes out.

Even if all you want is shelter.

Just a roof.

It sounds silly, but that's exactly how we design and build a lot of our web apps today, relying on JavaScript to execute and run properly for content to be delivered.

Often a site's content and basic functionality is reasonable to offer without using any JavaScript at all, but we choose practices that are entirely reliant on the web's most fragile layer.

And it's easy to forget about that fragility, I think.

As developers and designers, many of us work in relatively ideal conditions.

To work efficiently, we need really fast reliable networks.

We need access to latest and greatest devices.

So it's easy to forget that we're often an edge case ourselves, amongst our own users.

And when we forget, the costs are really real, and they transfer to our users.

Today the average page weighs over two megabytes.

I've had this chart in my talk for three years now.

In 2012, I think it was 1.1 megabytes.

And we were really up in arms then too.

So it's growing.

And the figure comes from the HTTP archive, which is a great site.

I think it's based on the top 10,000 sites on the web.

So it's an average.

It may even be optimistic, because we encounter lots of sites, when we click out of Twitter and go to news sites, things like that, that are much heavier than two megabytes.

And this is a real problem, because people access the web over data plans that are capped.

They have monthly or prepaid caps.

So there's a real cost to every byte.

Tim Kadlec recently built this site that calculates that cost.

It's called WhatDoesMySiteCost.com.

It's very plain and direct.

And I ran it through an article on Wired.com just to give you an example.

That particular article was 11 and 1/2-ish megabytes.

And it costs, in some parts of the world, 4 US dollars just to load this page once.

So that's really prohibitive to access.

So weight really matters.

But aside from that, building sites that only work for the luckiest, most fortunate users on the web isn't just bad for business, it's kind of boring too for developers.

The most interesting work that we can do is the challenge of reaching more people and connecting everyone.

That's what the web is all about.

So rather than designing for the best case scenario, we need to design for reality, because that's where our users live: in the real world.

But how do we, as web developers, the privileged developers with these great connection speeds and devices, how do we know what it's like out there? Well, fortunately we have it so good that we've had to build tools to simulate what the real world is like.

These are some of my favorites.

In Chrome's Dev Tools, there is an emulation mode where you can turn on different connection speeds like 3G, and get a feel for how your site loads over that connection speed.

That's really useful.

If you want to use that sort of network throttling in other browsers, you can use a more system-wide throttling solution like Network Link Conditioner, which is a Mac preference pane you can get through Xcode's tools.

Or this other tool, which was named by, I think, an elite-level troll.

It's called Comcast.

And it's a tool that simulates not so good connection speeds like those you might experience if you have Comcast as your provider in the US.

Everyone loves to hate on Comcast.

Another way we can simulate real world conditions is testing with extensions on and blockers.

Put yourself in your user's shoes.

This is what people are actually doing.

Disable things.

Disable JavaScript, fonts.

Try to break your site and then fix it and make it stronger.

On the browser and device testing side of things, we have tools like BrowserStack.

BrowserStack has live device testing.

It's so nice.

These are not just screen shots, but actually often real devices, not just simulators, that you're interacting with in real time.

Of course nothing replaces real device testing in your own hands.

So performance, animations, touch gestures, these are areas where you should not trust just an emulator alone.

So you've got to get a device lab.

And to build a device lab-- there are a lot of great resources out there, but-- I find just searching Amazon to see what's popular at any given time is a great way to know what we should be testing on.

See what people are buying.

Often it's TracFones, and really simple rendering engines.

And the tool I'd recommend most is WebPagetest.org.

This is such a great tool.

I use it throughout the day as I'm building.

The way it works, you can enter a URL and pick a browser and device combination somewhere in the world.

And it will pull up your site and load it, and give you all sorts of rich information about how it loaded.

So these are the sort of tools that we need to be using day to day.

We need to be considering functional testing as a major portion of our development cycle.

Not just a stage where we throw it over the wall to QA and let them test later.

I find that sometimes it takes longer to test the code I write than to actually write it.

And that's probably to be expected.

There's so many factors to consider.

So I've identified some of the factors that we need to think about, and some ways that we can reproduce the conditions that we find out in the real world.

To thrive in a hostile unpredictable environment, we need to build for resilience.

And I think right now might be the most exciting and also challenging time to be building resilient web services.

So for the remainder of this talk, I'm going to focus on delivery.

Because responding quickly is the first step to building anything that works very broadly across devices.

From a technical perspective, I think right now is a really unique time to be building on the web.

We're at a bit of a technological crossroads.

On the one hand, we've spent years developing tools and practices that have just begun to prove their potential.

And we've found these really clever and reliable ways to work around the weaknesses in our delivery pipeline.

But on the other hand, we have some really major infrastructure changes that are just now happening on the web.

And a lot of the practices that we've been building up over the years may no longer be needed.

One new infrastructure change to keep our eye on is HTTP/2, which is the new iteration of the HTTP protocol.

And H2, I'll call it, is not a breaking change, but rather a feature that capable browsers can opt into.

So maybe a better metaphor is more of like an on ramp than a crossroads.

Sort of like a progressive enhancement scenario.

We've been running up against barriers in HTTP/1 for a long time.

And we've developed really great ways to work within its constraints, really creative ways.

In an H2 world a lot of those workarounds will still work, but they won't be necessary.

And some of them will kind of become anti-patterns as well.

It negates the need for a lot of things, really.

Workarounds that reduce requests, like image spriting, concatenating files, inlining files.

We've also been splitting our files across many domains to work around limitations in how many active requests we can have in a browser, which was very low.

HTTP/2 gets away from that limitation.

But a lot of our practices are going to still be necessary.

Things like getting our file size as small as possible, reducing requests, as in not making them in the first place if they're not necessary, because we don't want to load more data than we need to.

CDNs are still going to be necessary.

And of course all the fault tolerant practices that we have.

Progressive enhancement, cross device strategies, they all still apply.

And like most things on the web, HTTP/2 won't work in all browsers.

IE versions prior to 11 will never have support for this protocol.

Nor will older versions of iOS and Android, and Opera Mini has no support yet.

That said, look at this amazing support we have already.

It's crazy.

Across the board, it's actually a lot better supported in browsers now than it is in a lot of the server implementations.

So it's something that we should be factoring into our workflow today.

And at least planning for how we're going to be building with it in the near future.

So to do that, I think we need to prioritize practices that work both today and tomorrow.

One is optimizing file size.

There are a lot of easy and well known ways that we can reduce the weight of our pages and our assets.

To start, we should be optimizing our files so that they're lighter for network travel.

So things like images and fonts, we can use tools to get them as small as possible.

We can be minifying text files.

So removing unnecessary white space in CSS, JavaScript, even HTML can be minified.

And then of course for transfer, we should be using compression.

Gzip to make it travel over the wire as small as possible.

In addition to merely optimizing our images, it often makes sense to offer different versions of our images, based on different browser conditions.

So viewport size, screen resolution, network conditions.

These are factors we can use to decide which image is the most appropriate to deliver.

And our community has worked really hard to develop some new standards lately that make this really easy for us to manage.

So srcset and picture are some new web standards that are built for responsive images.

And they are at least partially supported already in most browsers.

So it's really nice.

There are really nice built-in fallbacks too.

So we can use them right away and know that they'll degrade to an image safely.

And if it's necessary, given the features that you want to use, you can polyfill portions of them as well.

But you don't need to.

The fallbacks are built in.
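
As a rough sketch of what those two patterns look like in markup (the file names, widths, and breakpoints here are just illustrative):

    <!-- picture: art direction, where our media queries decide which asset loads -->
    <picture>
      <source media="(min-width: 40em)" srcset="hero-wide.jpg">
      <img src="hero-small.jpg" alt="Hero image">
    </picture>

    <!-- srcset and sizes: offer candidates and let the browser pick the most appropriate one -->
    <img src="photo-400.jpg"
         srcset="photo-400.jpg 400w, photo-800.jpg 800w"
         sizes="(min-width: 40em) 50vw, 100vw"
         alt="Photo">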

We also have new browser features that expose information about the client to the server with the requests the browser makes.

Client Hints are a new set of headers that we can opt into sending with requests for things like images and stylesheets.

And they let the server know a little bit about the client side, so it can make a more educated decision about how to send an asset back to the client.

I'll give you an example of how that works because I think it's pretty neat.

At the top of the page, in the head, we would have a meta tag that just specifies some of the headers that we'd like to send with subsequent requests for images.

So up top we have DPR, device pixel ratio, viewport width, and the rendered width of the image.

And then we have an image somewhere in the page that comes after that.

And you can see when that goes out to the server, these are the sort of request headers that we see.

The usual headers, like the type of asset that the browser will accept, but also things like viewport width and rendered width. Say the viewport width is 720 and the rendered width of the image is half that, because we've specified, with the responsive images sizes attribute, that it should be rendered at 50% of the viewport width.

So this is a case where the server can make a really nice appropriate decision about what image to load.

And we don't have to specify all the sizes right in our markup like we do with srcset and picture.

So it's really interesting how we can combine these features.
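
A rough sketch of how those pieces fit together in markup; the header names follow the Client Hints proposal described here, and the image path and sizes value are illustrative:

    <head>
      <!-- opt in to sending these hints with subsequent requests -->
      <meta http-equiv="Accept-CH" content="DPR, Viewport-Width, Width">
    </head>
    ...
    <!-- sizes tells the browser the rendered width, which it can report in the Width hint -->
    <img src="photo.jpg" sizes="50vw" alt="Photo">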

So making our delivery payload smaller helps a lot.

Because data costs money, and delivering less makes things load faster too.

But beyond file size, we need to be thinking about how to streamline our path to rendering the page so that it spins up as quickly as possible.

And that's where file size is a little less of a concern than how we prioritize our asset loading pipeline.

A lot of people tend to think that bandwidth is the big speed problem on the web.

But in 2012 Ilya Grigorik from Google wrote that it turns out that in most cases the web's big performance problems have more to do with latency than with bandwidth.

And what Ilya is referring to is the physical distance that our code needs to travel between the server and the browser.

A trip to the server and back takes measurable time.

Sometimes a lot of time.

So even though our code is hurtling around the world over fiber wire and even at nearly the speed of light, that distance takes time.

This is why CDNs matter.

So after optimizing size, we still need to factor in proximity delays.

Round trips to the server can take hundreds of milliseconds at best, or over a second.

And all that adds up.

What's worse is that we often require many round trips to the server, just to start rendering a page.

That's because when a webpage is first requested, the first thing that arrives back at the browser is the HTML.

And that HTML often references external files like CSS and JavaScript that it needs to render the page.

So the browser will then go make another round trip to get those before it can start rendering something.

So we want to minimize the round trips that we make when we're trying to get something rendered onto the screen.

And one way to avoid making round trips is to carry more with you on each trip.

Sort of like carrying groceries back from the car.

Anyone else do that? I do that.

So for a long time now, this is the sort of process we've had to do to make a website render fast.

Bring all the important stuff with us the first time we return.

Now to do that in a way that works in any browser today, we kind of need a workaround.

And Google has this tool called Page Speed Insights.

It's a website.

It has a lot of great tips.

It also analyzes your site.

Gives you really catered tips to your site.

And this one in particular relates to this process.

It says identify and inline the CSS necessary for rendering above the fold content.

So the idea is that we need to avoid making additional trips to the server for anything that will need to render the top portion of the page.

Above the fold, so to speak.

And I know what you're thinking.

That term-- I thought we got rid of it.

Now it's back.

We're talking about it a lot.

And of course there is no fold.

It varies across devices and viewport sizes.

There is no one consistent viewport height.

But I don't think this advice needs to be controversial.

Pages are viewed from the top first, top to bottom.

So Google's just telling us to prioritize getting the top of the page rendered as soon as possible.

In CSS, taking that advice starts to look something like this.

If you can imagine the left is our full style sheet.

You could consider the right to be a subset of that style sheet that's necessary just for rendering the top portion of the page.

So our critical CSS rules.

Visually, that breaks down a little more interestingly, I think.

This is an article page on Filament Group's website, where I work.

And this is the design as it's intended to be shown through the whole page.

And a red line denotes where we would consider the fold to be when we're analyzing this page.

And of course it's arbitrary.

But we tend to use a pretty generous region of the page.

Something like 900 pixels tall.

Which works across all our break points.

And on the right you can see the design with only its critical CSS rules applied.

So just under the red line we start to see the design break off.

So we know it's working.

There's no CSS for those portions of the page.

So we've isolated the CSS for that part.

So how do we know which CSS is relevant to the critical portion of the page? Well, we have a lot of great tools out there.

I can recommend one that we work on at Filament.

It's called Critical CSS.

There's also a grunt version of it.

There's a version that can work with Gulp.

So all those tools that you use in your build process, this falls right into it.

And what it does is it opens up a template of your site in a headless browser and analyzes the full style sheet and figures out which portion of that style sheet is critical to the top portion of the page rendering.

And at Filament, when we work on a client project, we'll run this task on each unique template of our site.

So not every page of the site, but each unique template like the home page, an article template, things like that.

And the configuration for that breaks down sort of like this.

We have a series of templates like home, services, about, and some options for each one.

Here's the URL to the full page.

Here's the URL to the full style sheet.

And here's where we want to write-- here's a file path where we want to write the critical styles to.
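
As a rough sketch, a Grunt-style configuration along those lines might look something like this; the option names and paths are illustrative rather than the exact API of any particular plugin:

    // Gruntfile.js (sketch)
    grunt.initConfig({
      criticalcss: {
        home: {
          options: {
            url: "http://localhost:8000/",          // full page to analyze
            css: "css/site.css",                    // full stylesheet
            outputfile: "css/critical/home.css"     // where to write the critical rules
          }
        },
        services: {
          options: {
            url: "http://localhost:8000/services/",
            css: "css/site.css",
            outputfile: "css/critical/services.css"
          }
        }
      }
    });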

So then we have this isolated file for each template that contains just the CSS necessary for that page.

So once we have that we can start working with it.

We can start taking the advice that was talked about in that Google recommendation.

And to avoid round trips, we're going to include that critical CSS right into the HTML instead of referencing it externally.

This allows us to cram the important rules into that first round trip, and render immediately when it arrives at the browser.

So the way that looks in the head of the page: to bring CSS into any HTML document, we have the style tag, the style element.

And you can just insert styles into it.

But since we have isolated the styles for each template into individual files, we'd probably have something a little more dynamic.

We'd include them with a server-side include or a build process kind of step.

So that's how we would pull the critical CSS into this page and avoid that external request.
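
In the head, that looks roughly like this, with the per-template critical file pulled in by a build step or server-side include (the rules shown are just placeholders):

    <head>
      <style>
        /* contents of css/critical/home.css, inlined at build time or via an include */
        body { margin: 0; font-family: sans-serif; }
        .site-header { background: #263238; color: #fff; }
        /* ...the rest of this template's critical rules... */
      </style>
    </head>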

So that's the critical part of the CSS, but we also need to load the full stylesheet to render the rest of the page.

And we want to do that without blocking page rendering.

So we know that a style sheet, an ordinary style sheet link, is not going to work for us.

Because anywhere in the DOM that a regular old link to a style sheet exists, it's going to halt rendering as soon as it's encountered.

In browsers like iOS Safari and Chrome, that's often immediately, as soon as it's parsed by the browser.

So you don't see the page at all.

It just goes out and fetches that.

Even if it's at the bottom of the body.

So one forward-looking way that we can fetch that style sheet in a non-blocking way is link rel=preload, which is a new feature that's coming.

And we can reference our full stylesheet through that.

Now rel preload is not supported, as far as I know, in any browsers yet.

Maybe by the end of this talk, Jake could have it implemented in Chrome.

But at least for now, it's just a forward looking standard.

Now rel preload will fetch that asset, but it won't do anything with it.

It won't apply it.

So we need to do that part ourself.

And one way we can do that in a supporting browser, a browser that supports rel preload, is to bind to its onload handler and change its rel to stylesheet when it arrives.

And then it will apply as a style sheet.
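
In markup, that pattern looks roughly like this (the stylesheet path is illustrative):

    <link rel="preload" href="css/site.css" as="style"
          onload="this.rel='stylesheet'">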

So that'll work in any browser that supports rel preload.

But again, no browsers do yet.

So one way we could go about this is to polyfill it.

And polyfills use JavaScript.

So let's jump back over to JavaScript and inline the critical portion of our JavaScript as well, which brings me to a question, what kind of JavaScript would be considered critical anyway? Ideally, no JavaScript is critical to rendering a functional usable page.

But having a little JavaScript upfront, running right away really helps us deliver more catered and appropriate experiences, especially to newer browsers.

So I kind of consider it not really critical JavaScript, but high priority JavaScript.

Things like file loading functions, feature tests, maybe polyfills if they're really small and really need to be there.

And just some bootstrapping logic to get the page up to speed and enhanced appropriately.

So for inlining JavaScript, we have the script tag.

So I'm going to inline some JavaScript here as well, just like we did the CSS.

And for this example I'm going to need a file loading function for CSS.

So there's one that we work on at Filament called loadCSS.

It will load any style sheet asynchronously.

That's all it does.

So we can just pass that a stylesheet URL.

And now that we have it, we can start to polyfill that link rel preload.

So in order to polyfill it, we need a feature test to know if rel preload is supported.

This is one that's been proposed in the W3C threads.

But basically it's just a test to see if preload is supported or not.

And if not, we can use load CSS to polyfill it.

So here's how that would look.

I've added an ID to the link so we can find it really quickly.

And in our script tag, I've got a couple of functions up front that I've abbreviated: loadCSS.

Our little preload supported test.

And we can just say if it's not supported, then load that link's href.

And at the end here, I have a noscript fallback, just a link to the full style sheet, in case JavaScript really is disabled.
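
Put together, a sketch of that whole pattern might look something like the following; loadCSS is simplified here (the real one does more to guarantee non-blocking loading), and the path and ID are illustrative:

    <link rel="preload" href="css/site.css" as="style" id="fullcss"
          onload="this.rel='stylesheet'">
    <script>
      // simplified loadCSS: append a stylesheet link without blocking the initial render
      function loadCSS( href ){
        var link = document.createElement( "link" );
        link.rel = "stylesheet";
        link.href = href;
        document.head.appendChild( link );
      }
      // feature test along the lines proposed in the W3C discussion
      function preloadSupported(){
        var list = document.createElement( "link" ).relList;
        return list && list.supports && list.supports( "preload" );
      }
      // if rel=preload isn't supported, fetch the full stylesheet asynchronously ourselves
      if( !preloadSupported() ){
        loadCSS( document.getElementById( "fullcss" ).href );
      }
    </script>
    <noscript>
      <link rel="stylesheet" href="css/site.css">
    </noscript>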

So moving on from CSS loading, I typically include a script loader as well to load the heavier JavaScript that might be necessary in rendering a more enhanced experience.

Things like our DOM framework, other UI improvements.

And for that we can use a script loader like loadJS.

There are others out there just to load an asynchronous file.

Of course there are native ways in browsers that we can load a JavaScript file asynchronously.

And they work really well.

They're really broadly supported.

So that is an option.

Async and defer attributes do work really well.

But what I find is kind of nice about using a script loader is it allows us to qualify whether or not we make the request at all.

So in this case, I'm using an approach called cutting the mustard, where I'm testing for a couple of features.

Sort of a diagnostic test: is querySelector supported, is addEventListener supported.

And if so, we can load our script.

So that's the sort of conditional logic we can use for free with a script loader.
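
A sketch of that qualification step, assuming a small loadJS-style helper is inlined alongside it (the helper and the file path are illustrative):

    <script>
      // simplified script loader: inject a script element so the file loads without blocking
      function loadJS( src ){
        var script = document.createElement( "script" );
        script.src = src;
        document.head.appendChild( script );
      }
      // "cutting the mustard": only request the enhanced JavaScript in capable browsers
      if( "querySelector" in document && "addEventListener" in window ){
        loadJS( "js/enhancements.js" );
      }
    </script>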

So to bring that all together, this is kind of how the head of the page looks now.

We have our critical styles up top.

We have our link to our full style sheet.

And then we have some script to bootstrap the page.

And then our fallback.

So how is this going to change in the near future? There are some ways that http/2 is going to change and simplify or improve this workflow.

And that's because of one particularly interesting feature, I think, called Server Push.

In either protocol, latency is still an issue.

Round trips still take time.

It doesn't matter what protocol we're in.

And that's why we inline things in http/1.

That's why we inline the critical CSS.

Well, in HTTP/2, that's no longer necessary because of a feature called Server Push.

And the way that works is-- it's really neat-- the browser or the client can make a request for a particular asset, and the server can send back additional assets that it knows are relevant to that asset that was requested.

So we could say, can I have this HTML document? And the server will say, sure.

And in addition, you're going to need these CSS files to render it.

And that's something we couldn't do in the past.

So we kind of simulated it with inlining resources into our HTML.

So we're simulating Server Push with that inlining.

So what that might mean that in an implementation on the server side is something like this.

We would still want to extract the critical rules for rendering the top portion of the page, but in a modern browser that supports HTTP/2, we won't want to inline it.

We would want to use Server Push.

So this is just a hacked together PHP version of what that might look like.

Just a little if, else condition.

If the HTTP/2 protocol is in play, then we can just link directly to our critical CSS style sheet.

And then we're going to set a couple link headers.

And this is kind of a clunky way-- at least it took me a while to understand how this works.

But apparently in PHP's implementation of the http/2 server, when you set a link header with a rel preload to any particular asset, it'll push those.

It'll use Server Push.

So here we're saying, link to the critical CSS and also push that down with the HTML file.

And also push the full one.

And then our else condition is to inline it.

So this is for any browser that doesn't support H2.

So you can see that sort of logic could apply.
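
As a very rough sketch of that if/else (the protocol check, paths, and header handling are illustrative; how pushes actually get triggered depends on the server):

    <?php
      // does this connection speak HTTP/2?
      $http2 = isset( $_SERVER["SERVER_PROTOCOL"] ) && $_SERVER["SERVER_PROTOCOL"] === "HTTP/2.0";
      if( $http2 ){
        // preload Link headers that a supporting server can turn into pushed responses
        header( "Link: </css/critical/home.css>; rel=preload; as=style", false );
        header( "Link: </css/site.css>; rel=preload; as=style", false );
        echo '<link rel="stylesheet" href="/css/critical/home.css">';
      } else {
        // older protocol: fall back to inlining the critical rules
        echo "<style>" . file_get_contents( "css/critical/home.css" ) . "</style>";
      }
    ?>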

And on the topic of requesting that full CSS file, there's another feature coming that's being discussed that I think will be an interesting alternative to that link rel preload method of loading our full style sheet.

The Chrome team, and Jake in particular, have been working this out.

And I think it's really interesting.

The idea is that Chrome is considering changing the way that style sheets that are referenced in the body are fetched.

Currently, a style sheet linked anywhere in the page will block rendering immediately during the parse stage.

So that's often before you see anything on the page.

If they change the behavior, it'll render anything that comes before that style sheet in the DOM, but not anything that comes after it.

And that's sort of interesting, because then we could consider maybe loading secondary style sheets that way and blocking whatever comes after it until that style sheet is loaded.

Also interestingly, if Chrome changes its behavior on this, it'll sort of start to line up with how other browsers behave.

iOS unfortunately behaves like Chrome.

It'll block everything.

But most every other browser either requests that body style sheet asynchronously, or blocks in this way that Chrome is proposing.

So I think that's something to keep our eye on.

So we've moved our CSS and JavaScript to a deferred non-blocking sort of workflow.

So we can show the page as soon as it arrives.

But even though the page is ready, a major blocker in perceived performance comes from font loading.

And that's because the default way that some browsers load fonts right now is kind of awful.

Many browsers, like iOS Safari, hide the text of the page while the custom font is still loading.

And we call that behavior the flash of invisible type.

You're probably familiar with it if you have an iPhone.

This is a page on the Fast Company website.

And you can see that the HTML has been delivered.

The CSS has been delivered.

It's ready to show you, but they're still hiding the text because the custom font isn't ready.

And unfortunately in iOS, it ties the timing of how long it will wait to ordinary request timeouts, so it could be 30 seconds before you see text.

Other browsers are a lot better about that.

Three seconds before they show a fallback.

So at least it's not as common across other browsers.

So recently we've been taking an approach to combat this behavior at Filament Group.

I read an article about this.

We're using some JavaScript to listen for font loading events and apply the font when it's loaded.

And in the meantime show a fallback.

So in this particular approach we're using Font Face Observer, which is written by Bram Stein of Typekit.

And it works something like this.

It's sort of like Modernizr in that we're keying off of a class that can be added to the HTML element, or something high up in the page.

So we start with a fallback font, Sans Serif, in this case, for all of our H2s.

And then once the class is in play, we can use our custom font.
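
In CSS, that class-based approach looks roughly like this (the class name and selector are illustrative):

    h2 {
      font-family: sans-serif; /* fallback, shown immediately */
    }
    .fonts-loaded h2 {
      font-family: "Source Sans Pro", sans-serif; /* applied once the font has arrived */
    }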

And the JavaScript that enables that looks something like this.

We use this Font Face Observer script.

We're listening for Source Sans Pro, in this case.

Once it loads, we add a class to the HTML element and enable that font.
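
And a sketch of that JavaScript, using Bram Stein's Font Face Observer (the class name matches the CSS sketch above, and the observer API is recalled from memory, so treat it as an approximation):

    <script>
      // watch for Source Sans Pro to finish loading, then flip the class that enables it
      var font = new FontFaceObserver( "Source Sans Pro" );
      font.load().then( function(){
        document.documentElement.className += " fonts-loaded";
      });
    </script>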

So that enables a sort of progressive font rendering strategy, which isn't always desirable, but we've found it's kind of nice because it lets the default text show immediately.

So this is an example of our website homepage before and after this technique was deployed.

Up top, you can see it's ready to show, but until the custom font gets there, there's no text.

Whereas on the bottom, at one second we have something usable.

In the future, to pull off this sort of progressive font rendering, we have a new CSS property.

So we won't have to use JavaScript at all.

The new name for it is font-display.

It was font-rendering for a while.

It's still being discussed.

But what's interesting is that we can mimic that entire font rendering behavior that I just showed you with one property alone.

So in this case, to mimic it exactly as I showed, font-display: swap is what we would want.

You start with a fallback font.

And once the custom type has loaded, it swaps it in.
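
In an @font-face rule, that looks roughly like this (the source path is illustrative):

    @font-face {
      font-family: "Source Sans Pro";
      src: url( "fonts/source-sans-pro.woff2" ) format( "woff2" );
      font-display: swap; /* show the fallback right away, swap in the custom font when it loads */
    }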

There are other options here.

And I think a lot of people consider those options to be preferable.

And I think with a small timeout, like three seconds, it may be nice to just hide the type altogether, and try to show the custom font when it's available.

There are other values for font-display that say don't use a custom font at all if it's not in cache.

And just wait for the next page load.

So really great tools that we're getting in CSS alone.

So those are some techniques that we can use to improve our delivery already today, and also sort of looking ahead to tomorrow.

And if you're interested, I wrote an article on Filament's site that shows just how much these rendering practices can help.

I did a case study on that article on Wired.com that I showed earlier that costs $4 to load.

And just by optimizing the rendering path alone, I was able to drop almost nine seconds off the time that it takes to get something usable on the screen on 3G.

So currently it takes almost 13 seconds on an Android Chrome device.

Brought it down to a little under 4.

Keeping in mind, that's still with http/1 in play, and I didn't optimize anything.

So I didn't change any image assets that were being loaded.

I didn't shrink the size of the page at all.

In fact, it stayed exactly the same.

Almost 12 megabyte page.

But just by changing the way that we request assets, we can really drop the time down for how soon the page is usable.

So it's a pretty amazing impact that you can have just by making those changes.

So I know it's the first session of the day.

And I got pretty technical in the second half of that talk.

I covered a lot.

I think there's a lot to think about.

A lot of factors to consider.

But I think we need to remember that we can build beautiful complex things on the web that are broadly accessible.

And to be honest, it's harder to build things that way.

But I think building resilient services is also part of our job.

So my name is Scott.

I work at Filament Group.

That's my book if you're interested, and thanks so much for having me.

[APPLAUSE] (Bruce) Scott.

(Scott Jehl) Yes.

(Bruce) Come over to the comfy chair.

We'd have a cup of tea, but I've left my teapot back at home.

I've got some questions from the audience.

Many people are excited by http/2.

Can I ambush you with some questions about that? (Scott Jehl) Sure.

(Bruce) It seems to me and it seems to many people this is a considerable advancement in the way that we're going to talk to the server.

And maybe you get a lot of bang for that buck.

What do I, as a developer, have to do to take advantage of http/2? Or does it just magically work for me? (Scott Jehl) In my experience, I've found that's the tricky part so far because the server implementations for http/2 are catching up a little more slowly than browser support for the protocol has.

But they're out there.

There are Node versions, there are Apache versions and nginx versions that you can use today.

I showed PHP.

Can use that.

That's probably the easiest way to-- at least for me-- I like to just pop open a PHP file and try it out before configuring a more involved server environment.

It's ready to be used.

And it's a sort of opt-in feature for browsers.

So the first request from the browser carries a header that says it's ready to upgrade.

It can accept the protocol, http/2.

And then they make the connection.

So the default is to start with one.

(Bruce) So if I've got a host, or if I work for an organization in which there are Dev Ops people, I just say make it happen.

[INTERPOSING VOICES] (Scott Jehl) And I know some CDN providers have made it easier on us already too.

(Bruce) Funny, you mentioned CDNs.

Another question from the bit at the beginning of your talk about assumptions was, should we just stop using CDNs, given that they'll be blocked by content blockers and they can have lots of latency? (Scott Jehl) No.

I don't think so.

Latency is still the factor that contributes most to why we want a CDN.

We want to get our data servers as close to every user as we can, so the round trip is shorter.

As far as relying on external domains for CDNs? I don't know.

Maybe that's a little riskier? But no, I don't think the practice is going away at all.

(Bruce) Something else that's dear to your heart and mine is responsive images.

With the picture element, we can send bigger pictures to high-DPI screens.

But lots of those will be on ad networks.

Do we need some mechanism, some magical mechanism, that's as yet unspecified, for actually saying this is a low bandwidth environment, so send the cruddiest image? (Scott Jehl) So there are two responsive image standards that we can work with.

There's the picture element.

There are attributes for the image element, srcset and sizes.

With the former, we're very much in control of specifying a particular asset that should be loaded if our conditions are met.

So we're using media queries, which are non-negotiable.

If they apply, whatever asset is associated with that media query will be loaded in the browser.

So that's how picture works.

Image with srcset is a little different.

We're kind of taking a more hands off approach with that one and saying here's some conditions and optional assets that we have available and we let the browser decide, based on what it knows best.

And that could be conditions like network speed and things like that that the browser knows about and we don't.

So in almost any case where you're not art directing the breakpoints of a responsive image, you should probably be using srcset instead of the picture element.

And I think that's at least one way that we can offload that logic to the browser.

(Bruce) There's a lot of talk about some kind of bandwidth media query.

And the people who know about such things say that it's actually not possible.

And I declare an interest here.

But one of the things that we do in Opera is we know that the user knows whether her device is running slowly because of bandwidth.

And it's an option in the browser.

The user can choose, always choose the lower grade image, regardless of what my device is.

I tend to think that this is something that a user opts into, rather than something we, as developers, try to detect and decide.

(Scott Jehl) Yeah, sure.

Because you could be on a 4G connection and high resolution screen, but your data plan has a really low cap.

And you're very conscious of your usage.

So in that case, the browser may not be acting in your best interest.

So I think preferences are always kind of the thing that should trump anything else in that situation.

It would be nice if preferences for data usage got worked right into responsive image formats.

(Bruce) Thank you, and thank you for your book.

Your recent A Book Apart book, is it? (Scott Jehl) Yeah, "Responsible Responsive Design."

(Bruce) Cool.

And "Designing with Progressive Enhancement" is still a great book.

How many years old is it now? (Scott Jehl) It was 2010.

(Bruce) Hopefully, hopefully the perfect storm of so many different devices, content blockers, people being increasingly interested in users outside fast bandwidth countries hopefully now is the time when people will actually start listening to the message of building resilient sites.

(Scott Jehl) Yeah, I think it's all of these things that are just now happening, sort of just exposed conditions that have already been there for a long time.

We've always had heavy reliances on these things that are not so dependable.

And now that something like iOS Safari has a feature that you can just block web fonts very easily.

And people are reading in "The New York Times" that it'll make the pages faster if they do that, suddenly it's a reality that we have to care about.

But it was always there.

Requests were always failing.

It's just out of sight, out of mind.

Now we have to care a little bit more, I think.

(Bruce) Cool.

So care please, people of Fronteers. And give it up for Scott Jehl.

(Scott Jehl) Thank you.

Thanks, Bruce.
