Fronteers — professional association for front-end developers

Faster JavaScript web apps by Alex MacCaw

Transcript

Thank you very much, Paul.

I hope you guys are all fed and watered.

But it's a real pleasure to be speaking in front of you.

And it's also great to see people speaking in a language that isn't their mother tongue.

It's hard enough to give a talk in English, let alone when English isn't your mother tongue.

Anyway, so today I'm going to be talking about faster JavaScript web apps and basically optimizing web performance.

And we're going to be going pretty fast, because there's a lot to cover.

And I'm hopefully gonna leave you with a few practical tips, so by the end of this, you have something to take home.

So speed matters.

And I probably don't have to convince you guys of this.

But anyway, I'll just give you some stats.

Every second of latency adds about a 0.65% increase in bounce rate.

Bing actually found that 2 seconds of latency decreased their revenue by 4.3%.

So it's really easy to actually justify spending time on speed optimizations, because it directly affects your bottom line.

And this is the quote that I love, actually, from Thomas.

"Speed is the second biggest engagement driver on the internet just after perceived speed."

And it's kinda true.

If you can trick the user into believing something is fast, then it's just as good as actually being fast.

And here's a little UI 101.

These are the kind of timings and responses you get for your web apps.

So people are a little bit more forgiving on the web than on the desktop.

So they'll wait a little longer, but not that much longer.

So you can see, 0 to 100 milliseconds, people think it's instant.

100 to 300 milliseconds, sluggish.

And then from then on, it just goes down the drain.

So you really want to be aiming for less than 300, or at most 300 to 500, milliseconds.

That's the kind of speed that you want your page to be served at.

And there's actually a pretty interesting analysis done by this chap at Microsoft called [INAUDIBLE].

And he did a travel site analysis of the top five travel sites.

So this is Priceline, KAYAK, Travelocity, and Orbitz.

And what was kind of interesting is all these sites essentially are meant to do the same thing, right? They take flights, and they compare them.

But their actual markup is pretty different.

So some sites are pretty small to download.

Some sites are kind of big.

In fact, it changes by a factor of four.

So can anyone tell me which site they think was the fastest? Five? So number five? All right.

So, you're right.

So you might think that the site with the fewest bytes downloaded would be the fastest, which would be site three, or that the site with the least JavaScript would be fastest.

But, actually, five is the fastest, even though it's got more JavaScript and bytes downloaded.

And you can see why here.

This is a kind of breakdown of the rendering.

All that purple stuff at the top is the rendering.

And all the gray stuff at the bottom is the network requests.

And you can see site five has remarkably less rendering time and remarkably less JavaScript time, even though it has roughly the same network impact.

So really, the point I'm trying to demonstrate here is that rendering time is hugely important when it comes to your initial page speed.

So this is what they call a resource waterfall.

And you can kind of see in a bit more detail how pages are loaded.

Dark green is the DNS resolution.

Orange is the TCP handshake.

Green is HTTP request.

And blue is the time taken to download a resource.

And you notice that while monocle.io, which is the site being fetched, is still downloading, new requests are already being dispatched.

So you can really see that the time at which assets are downloaded really depends on the structure of your page.

Because the HTML is parsed incrementally.

And you can see here we've got this DNS resolution and handshake, that blue line and the orange bit.

And you can see it's actually kind of a limiting factor for a lot of other requests here.

In other words, bandwidth is not really the limiting factor.

It's actually network latency, the time taken between server and client.

And you can see further requests down the page don't actually need to do that DNS resolution, because of keepalive.

They're using the same TCP socket.

And you can see various events here: DOMContentLoaded, start render, complete.

And overall, you can see about half the time is spent on the network.

About half of it is spent rendering.

So that's our two bottlenecks, basically, network and rendering.

And that's what we're going to address in this talk.

So network is kind of easy to optimize in a way.

You just follow some good defaults, and you should be most of the way there.

I'm going to give you 12 rules for network optimization.

But they basically all boil down to these two things: eliminate or reduce unnecessary network latency, and minimize the number of bytes transferred.

Because the user only gets value when work is painted to the screen.

So here are the tips.

Here are the rules for network.

Number one, quick response.

You need to respond to the request as quickly as possible.

And Google search will actually return the head tag of the page before it's even parsed the request.

Before it's done anything, it'll return that.

And this means that the browser can actually go through that HTML and it knows what resources it's going to fetch subsequently.

Don't preload data.

Don't do loads of SQL queries in that initial index request.

If you do need to preload data, put that in a second deferred script.

I'm going to cover that a bit later.

Use keepalive.

You're probably already using this.

It's enabled by default in pretty much everything.

But it is important.

You probably want to check it is actually enabled.

Keepalive will reuse established TCP connections.

Because your three way SYN, SYN-ACK, ACK handshake is kind of expensive, especially if there's SSL negotiation.

So use keepalive.

Don't redirect.

Now, again, you may scoff.

Like, who redirects at the top level? But actually, about 63% of the world's top websites have a top-level redirect.

So I kind of advise you: don't use www., just have a naked domain.

Don't redirect, and you can actually save up to, like, 200 milliseconds there.

Use a CDN.

A CDN will decrease the number of hops that your request from the client to the server has to go through.

And it'll put the data or the assets as geographically close to the client as possible.

And this is especially true with static resources.

If I were you, I would put all your static resources on a CDN.

There's no reason not to, to be honest.

Again, you got to reduce resources.

It might seem obvious, but the average website has about 90-plus individual resources, which is about 770 KB, which is a lot.

And the vast majority of them are JavaScript and images.

So you have to focus on that when you're reducing resources.

You got to minify JavaScript and CSS.

Actually, gzipping the response will take out a lot of the duplication in there.

But minifying JavaScript will actually rewrite your JavaScript, so you still get space savings there.

You got to concatenate resources.

Again, this is going to reduce the number of requests.

It's going to make things a lot faster.

Because your enemy here is latency and not bandwidth.

You got to gzip responses.

And actually, this one had the biggest effect on my performance out of all the tips here.

Now, for some reason this isn't turned on by default in a lot of servers.

It's not turned on by default in NGINX and Apache, which kind of sucks, but I presume for compatibility reasons.

But to be honest, you've got nothing to lose by turning it on.

And I think, honestly, they should change the defaults.
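
As an illustration at the application layer (not from the talk, which discusses NGINX and Apache; the Express compression middleware here is an assumption), turning gzip on can be a one-liner:

```js
// A minimal sketch: gzip every compressible response in a Node/Express
// app using the compression middleware package.
var express = require('express');
var compression = require('compression');

var app = express();
app.use(compression()); // negotiates gzip/deflate via Accept-Encoding
app.listen(3000);
```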

So you should cache static assets.

And what I usually do is set an expires header on every static asset for a year.

So why a year? Well, the caching spec says that anything longer than a year isn't actually going to be recognized or used.

So a year is kind of the maximum that you can use.

And then what you can do is take a checksum of the source of that asset and append that checksum to the file name.

So whenever the asset changes, then the file name will change.

And you get automatic invalidation.

And I think that is the best way of doing static caching.
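
A minimal sketch of that checksum idea, assuming a Node.js build step and hypothetical file names:

```js
// Append a checksum of an asset's contents to its file name, so a
// one-year Expires header invalidates automatically whenever the
// contents change.
var fs = require('fs');
var crypto = require('crypto');

function fingerprint(path) {
  var source = fs.readFileSync(path);
  var checksum = crypto.createHash('md5').update(source).digest('hex');
  var renamed = path.replace(/(\.\w+)$/, '-' + checksum + '$1');
  fs.writeFileSync(renamed, source); // e.g. app.js -> app-5f3c9a...js
  return renamed;
}

fingerprint('public/app.js');
```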

Cache AJAX requests. Again, you may scoff.

But only 1% of AJAX requests on the web are cached.

Now, you may want to cache at the AJAX layer.

Or you may want to cache at the JavaScript layer slightly above it, by just caching objects; up to you.

But if you do cache at the AJAX layer, you can set the cache property in jQuery to do that.

And that will make sure the AJAX requests are cached.
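
A minimal sketch of that jQuery option (the endpoint is hypothetical; note jQuery only adds its cache-busting timestamp for script and JSONP requests by default):

```js
// Make sure jQuery doesn't bust the HTTP cache. With cache: false,
// jQuery appends a "_=<timestamp>" parameter to every request URL.
$.ajaxSetup({ cache: true });

// Or per request:
$.ajax({ url: '/api/engineers', dataType: 'json', cache: true })
  .done(function (data) { console.log(data); });
```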

Remove duplicate code. Again, another obvious one.

But 58% of the web has duplicate code on it.

I actually found two versions of jQuery on stripe.com.

So it can happen to the best of us.

You got to minimize and compress images.

About half of the bytes on most websites today are images.

Really avoid large images.

Never resize images in CSS.

Remember, PNG is a lossless format.

Often, formats like JPEGs will be smaller.

And you should sprite images when it is necessary.

Use standards mode for IE. Now, this is a header that you can set, the X-UA-Compatible header.

If you just search for standards mode IE, Google it, it'll show you how to set it.

A lot of frameworks like Rails set it for you.

And this will make IE render the HTML a bit more intelligently and actually speed up the request.

You should definitely put this in an HTTP header, not a tag in the page.

Because if you put it in a tag, then IE parses the page, sees the tag, figures out it's got to change its rendering mode, and just starts again.

It sucks.
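
A hedged sketch of setting that header at the app layer (Express-style middleware, an assumption; the talk only notes that frameworks like Rails set it for you):

```js
// Send standards mode to IE as an HTTP header instead of a meta tag,
// so IE never has to restart parsing. `app` is an assumed Express app.
app.use(function (req, res, next) {
  res.set('X-UA-Compatible', 'IE=edge');
  next();
});
```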

Put style sheets first.

So you want the browser to issue that style sheet network request first, because it will block painting until it has the style sheet.

Inline styles invalidate the CSS cache.

Use them sparingly.

Only include the required styles with the page.

Don't share between pages, unless there are, like, obvious styles that are shared across all your pages.

Stop JavaScript from blocking: use defer.

If you can, just use defer on every single JavaScript tag you have.

Again, test it-- Paul was mentioning a bug in jQuery UI and jQuery.

So test in IE.

But you should not be blocking the page.

If you don't have defer on it, then it's going to really, really impact the performance of your app.

And delay loading unnecessary JavaScript, things like analytics, Google Analytics, Mixpanel, that kind of thing.

If it's not needed for the initial page load, delay loading it.

All right.

So the network is fairly straightforward.

But I want to give you a few specific tips that I found especially useful.

ngx_pagespeed is a module developed by Google that will automate many of the network optimizations you should be making.

It'll do some nice things.

It'll do things like image optimization, stripping meta tags, that kind of thing.

It'll minify JavaScript.

It'll defer JavaScript.

It'll minimize images, that kind of thing.

If you're using NGINX, then it's a no-brainer to turn this thing on.

It probably will have a huge performance impact on your site.

Don't use large cookies.

This is something that I have done in the past.

And it's really affected the performance.

This, for example, is a request to monocle.io.

These are just the HTTP headers for this request.

And here is the cookie, right? This is basically almost double the size of all the other headers there.

And it's-- it's not compressed.

So don't store session data in cookies.

Just store an ID in cookies.

And store the session data on your server somewhere in memory, memcached perhaps, that kind of thing.

'Cause this cookie is going to be on every single request to your site.

Now, if you're clever, then you've probably put your assets on a CDN.

And hopefully, you won't have any cookies going into your CDN.

But still-- [AUDIO OUT] --JavaScript.

Now, I kind of mentioned this earlier.

But you don't want to be using a naked script tag.

Due to previous browser behavior, whenever the browser comes across a script tag, it just stops everything, loads the script, and then continues, because of things like document.write.

This day and age, no one should be using document.write.

Just add defer on the script tag, and it'll load asynchronously.

So it'll still be loaded and executed before the page load event.

But it won't block any other resources loading.

So this is key.

And so if you're preloading data, and you want this data to be available on the initial page load, then you're going to have to put that data inside a separate JavaScript file.

Like I say, you don't want to be doing any SQL queries on the initial page request.

So you need to put it in the setup.js file.

This is a little pattern I use.

Um, so you will require this on your index.html page.

And setup.js is not actually a static JS file.

This is a bit of Sinatra.

It's actually an action in your web app.

And it'll basically do the SQL queries that it needs to do.

It'll populate that JavaScript.

It'll return that JavaScript, and the JavaScript is executed on the client.

But the key thing here is this is not on the initial page.

This is not on the initial request.

So this is not slowing down the initial request.

Because you want the browser to be able to say, OK, now we have all the resources.

We can go and fetch all the assets we need.

setup.js is one of those.
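
A hedged reconstruction of the pattern (the talk shows a Sinatra action; the names below are hypothetical):

```js
// index.html includes:
//
//   <script src="/setup.js" defer></script>
//
// The server action runs its SQL queries and responds with JavaScript
// that stashes the preloaded data on a global, e.g.:
//
//   window.preloaded = {"projects": [ ... ]};
//
// Client code picks it up once the deferred scripts have run:
document.addEventListener('DOMContentLoaded', function () {
  var projects = (window.preloaded || {}).projects || [];
  renderProjects(projects); // renderProjects is an assumed app function
});
```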

So, Steve kind of covered this earlier.

But there's a bunch of meta tags you can use to prefetch stuff.

And so dns-prefetch will go and do the DNS resolution ahead of time.

Prefetch will actually go and fetch the page and its assets.

Prerender will actually prerender the page.

And you can actually do this programmatically.

You can just append it to your body like this.

And Chrome will go ahead and prerender it.

For example, in Monocle, which is, like, a sort of social news site, when someone is hovering over a link for more than a few seconds, I go ahead and prerender that page.

That means that whenever they click on that link, that page will be loaded in instantly.

And you can actually see pages being preloaded inside Google Chrome's Task Manager.

This is basically the only way you can see if a page is actually being preloaded.

But it's worth noting you can only prerender one page at a time.

And if you try and prerender another page, then it'll replace the first one.
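
A hedged sketch of the hover-to-prerender trick described above (the timing and event wiring are assumptions):

```js
// Once the cursor has rested on a link for a moment, append a
// prerender hint so Chrome loads the target page in the background.
var hoverTimer;

document.addEventListener('mouseover', function (e) {
  if (e.target.tagName !== 'A') return;
  hoverTimer = setTimeout(function () {
    var hint = document.createElement('link');
    hint.rel = 'prerender'; // remember: only one prerender at a time
    hint.href = e.target.href;
    document.body.appendChild(hint);
  }, 200);
});

document.addEventListener('mouseout', function () {
  clearTimeout(hoverTimer);
});
```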

So now, we've talked a lot about network.

I want to talk about rendering.

And rendering is often unoptimized and is often less talked about.

But it's basically the other bottleneck to your initial page request.

And you can't really talk about rendering without talking about reflows.

So a reflow in a web browser is the process where the rendering engine calculates the positions and geometries of the various elements in the document.

So in other words, it figures out how elements should be displayed before the paint.

And some browsers call this reflow.

Some browsers call this a redraw.

It's just-- it's just terminology.

And this is an example of Gecko rendering the mozilla.org homepage.

It's calculating the layout and positions of all the elements.

So you can see all those elements being calculated.

And then notice, everything is being recalculated again.

At about 16 seconds into that video, something happened which invalidated the layout.

It means it had to recalculate the whole thing.

And this is the kind of thing we want to avoid.

This is the kind of reflow optimizations we want to be making.

So what really triggers a reflow? Well, a lot of things do.

Basically, interacting with the DOM does, adding, removing, showing, hiding DOM elements, scrolling, resizing, adding classes.

In fact, all these properties here trigger reflows.

And basically, you want to access these properties sparingly.

Because they're all going to be very expensive.

So reflows are often expensive.

But you can reduce their impact.

And the three places where you really notice reflows are the first page load, animation loops, and scrolling.

So we really need to optimize all three use cases, make sure they're as fast as possible.

So one tip for optimizing reflows is to change classes as low in the DOM tree as possible.

Now, this will limit the scope of the reflow to as few nodes as possible.

And hopefully, it won't do a full document reflow.

You want to avoid DOM thrashing.

Basically, what this means is if you do a read from the DOM, then a write to the DOM, then another read, then another write, you're going to trigger multiple reflows in there for no good reason.

So what you can do is batch up your reads and your writes.

This will only cause one reflow.
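
A minimal before-and-after sketch (elements is an assumed array of DOM nodes):

```js
// Thrashing: each iteration reads layout, then invalidates it again.
elements.forEach(function (el) {
  var width = el.offsetWidth;          // read, forces a reflow
  el.style.width = (width / 2) + 'px'; // write, invalidates layout
});

// Batched: all reads first, then all writes; one reflow.
var widths = elements.map(function (el) { return el.offsetWidth; });
elements.forEach(function (el, i) {
  el.style.width = (widths[i] / 2) + 'px';
});
```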

Another option is to actually put all your writes in a requestAnimationFrame.

And this is just a bit nicer for the browser.

Because the browser can then execute that requestAnimationFrame whenever it's convenient for it.

And hopefully, that will fall conveniently within the paint cycle.
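
A minimal sketch of that (element is an assumed DOM node):

```js
// Do the reads now, hand the writes to requestAnimationFrame so they
// land in the browser's next paint cycle.
var width = element.offsetWidth; // read
requestAnimationFrame(function () {
  element.style.width = (width / 2) + 'px'; // write, just before paint
});
```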

This is another example of inefficient code.

You're appending three elements to this element, one by one.

Now, this is going to cause three reflows.

So what you really want to do is actually append all those elements to a document fragment, and then append that document fragment to the page.

You're kind of batching up DOM changes.

And that will only cause one reflow.

And then the fastest thing you can do is just actually set the innerHTML of an element.
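
A hedged sketch of the three approaches just described (list is an assumed container element):

```js
// Inefficient: three separate appends, up to three reflows.
['a', 'b', 'c'].forEach(function (text) {
  var li = document.createElement('li');
  li.textContent = text;
  list.appendChild(li);
});

// Better: batch into a DocumentFragment, append once; one reflow.
var fragment = document.createDocumentFragment();
['a', 'b', 'c'].forEach(function (text) {
  var li = document.createElement('li');
  li.textContent = text;
  fragment.appendChild(li);
});
list.appendChild(fragment);

// Fastest, per the talk: set innerHTML in one shot.
list.innerHTML = '<li>a</li><li>b</li><li>c</li>';
```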

And you can see before and after.

Before, in Chrome's Inspector, you can see all the reflows going on.

They're kind of expensive.

After, we've got rid of almost all of them.

So if you are doing animation loops, use requestAnimationFrame.

Don't use setTimeout or setInterval.

The thing is, the browser needs to refresh or paint about 60 times a second-- 60 Hertz.

And if you are slowing that down, then it's going to be really noticeable to the user.

When you're creating a setInterval loop, it's very difficult to know what the right interval is.

Because if it's too small, then you're going to do work that never ever gets painted to the screen.

And if it's too long, then your animation will be really [INAUDIBLE].

So the obvious answer is don't use setInterval, use requestAnimationFrame.

And you can actually just shim this in all the browsers.
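
A minimal version of the well-known shim:

```js
// Fall back to vendor-prefixed implementations, then approximate
// 60 Hz with setTimeout for browsers without requestAnimationFrame.
window.requestAnimationFrame =
  window.requestAnimationFrame ||
  window.webkitRequestAnimationFrame ||
  window.mozRequestAnimationFrame ||
  function (callback) {
    return window.setTimeout(function () {
      callback(Date.now());
    }, 1000 / 60);
  };
```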

This advice is not really limited to animation loops.

If you're deferring any kind of rendering, really, use it.

It's much kinder to the browser.

You should debounce scroll events.

Now, scroll events have to be as fast as possible.

Or rather, your event handlers have to be as fast as possible.

When you're listening to scroll events, those things get fired a lot.

And so you don't-- you probably don't want to listen to every single firing of that event.

So what you can do is you can write this nice debounce function.

This will make sure that even if you call this function 1,000 times, it'll only actually run once there's been a pause.

In this case, the default is 300 milliseconds.

And then when you're adding a scroll event listener, you can just wrap it in a debounce.

Now, you still want to make sure your onscroll event listener is as fast as possible.

And you may even want to put any rendering it does in a requestAnimationFrame.
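
A hedged reconstruction of that debounce pattern and how you might wire it up (updatePositions is an assumed app function):

```js
// However often the wrapped function is called, it only runs once
// there's been a pause (300 ms by default, matching the talk).
function debounce(fn, wait) {
  var timer;
  return function () {
    var self = this, args = arguments;
    clearTimeout(timer);
    timer = setTimeout(function () {
      fn.apply(self, args);
    }, wait || 300);
  };
}

window.addEventListener('scroll', debounce(function () {
  // keep this cheap, and push any rendering into requestAnimationFrame
  requestAnimationFrame(updatePositions);
}));
```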

So be careful with onbeforeunload.

Now, this is a very useful technique to warn the user when, perhaps, there are pending AJAX requests or data hasn't been saved.

And onbeforeunload is an event that's fired as soon as you navigate away from the site to another page.

Now, the thing about onbeforeunload is, if you have it set, it really messes up Chrome's caching, which is kind of a bug.

But there's no way to work around it at the moment.

So, basically, only set onbeforeunload when you need it.

So you can see here, we have a handler which sets the onbeforeunload and remove handler, which removes it.

And then you can see it in a greater context with jQuery.

You can see we're using the global event handlers whenever there's an Ajax request.

And we're adding the handler.

And then when that request is finished, we're removing it.

So that basically means that the default state of onbeforeunload is null, which is what we want for caching reasons.
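
A hedged reconstruction of that pattern:

```js
// Only set onbeforeunload while an AJAX request is in flight, so its
// default state stays null and Chrome's caching isn't disabled.
function handler() {
  return 'There are unsaved changes.';
}

function addHandler()    { window.onbeforeunload = handler; }
function removeHandler() { window.onbeforeunload = null; }

// jQuery's global AJAX events toggle the handler around requests.
$(document).ajaxSend(addHandler);
$(document).ajaxStop(removeHandler);
```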

Essentially, do less on the client.

Do less before domready is fired.

Don't resize images in CSS.

Avoid complex CSS selectors, inline styles.

Batch up DOM communication.

Take elements out of the flow.

So position them absolutely, instead of having them in the normal flow.

Solving a performance problem is kind of like solving a crime.

So first, you have to collect evidence.

And then you have to interrogate suspects.

And lastly, you need to collect some forensics.

And so let's solve a problem.

Let's look for some evidence.

And our crime scene is this app.

This is an app I've been building to help source engineers.

And notice, we have an infinite table down the right-hand side.

Now, this thing will just carry on loading.

There are about 1 million engineers in the database.

So clearly, you can't display them all client side.

So they're basically loaded in as needed when you scroll down this table.

Now, let's open Chrome's timeline and have a look, see if we've got any performance problems.

So we're going to use the Timeline feature in the Inspector.

You can see we're going to select the Frames option, and then we're going to hit Record right at the bottom.

And then we're going to play around with the app.

And then we're going to hit Stop.

And then we can see the result of it.

And Chrome will render this pretty nice table here.

So you can see exactly the frame rate of your app and all the things it's doing, all the, basically, performance problems you have.

And you can see already that we've got some big performance problems here.

Almost no frames here are being rendered at 60 Hertz, which kind of sucks.

And you can see a lot of purple up there.

Purple is bad.

We don't want to see purple.

That's rendering time.

And you can see we've got a specific problem up here with, basically, when we're scrolling down and new records are being loaded in.

So this is a really big problem.

We probably want to try and solve that.

So what we can do is hover over that specific render, see what caused it.

And you can see here, it was actually caused by adding a class with jQuery, and that class was added by this showLoading function.

And that actually triggered a whole document reflow, which is really expensive.

Now, it turns out that actually I had been kind of lazy when it comes to loading indicators.

And I had just been adding a class of loading onto the table and removing that class when loading was finished.

And you can kind of see it here.

It's a little faint loading indicator.

But basically, it turns out that it's a bad idea to add classes for loading indicators.

Because it's causing a whole document reflow.

So we could probably be a bit more clever about that.

Now, when you're detecting a memory leak in Chrome, the first place to really go is the Memory tab in the Inspector.

And the profile of a normal application should look more like a sawtooth, right? So memory is allocated.

And then the garbage collector kicks in and deallocates them.

So you can see that little dip there is where the garbage collector actually kicks in.

Now, that would be a normal application.

This is my application.

All right, so you can see we got a problem here.

This-- the memory is just going up and up and up and up and up.

Now, you can see the garbage collector kicking in.

But actually, it's not having much sort of effect.

Memory is still going up.

And there's kind of a clue here to why.

You can see this line here is all the DOM nodes.

And that's just-- that is just going up and up.

We're just adding more and more DOM nodes to the page.

Now, this memory graph doesn't actually include DOM nodes.

DOM nodes are native memory.

But they're kind of linked.

So there are objects that reference those DOM nodes.

And so you can see, the main problem here is that we're just adding too many DOM nodes to the page.

So it looks like we need to do a bit of manual garbage collection.

So when the user is scrolling that big table, we need to actually delete some of these DOM nodes that are outside the viewport and try and get our memory management under control.

The next tool in our arsenal is Chrome's Profiler.

And you can basically collect some JavaScript CPU profiles.

Same kind of technique as before: you click Start, play around with your app, perhaps just perform a specific action, and then click Stop.

And then you can look at all the data.

And it will show you which operations are taking the longest.

And the sort order is bottom up, so basically listing functions by impact on performance.

And you can also see timings by percentage, which is useful for exposing bottlenecks.

And here you can see there's a little exclamation mark.

And this means the code has not been optimized.

So the V8 compiler has two modes.

It has optimized mode and full mode.

And it tries to run everything in optimized mode if it can.

Sometimes it falls back to full mode.

And this can happen for a variety of reasons.

But in this case, it's because jQuery's extend has a try-catch in it.

And if it sees a try-catch, then it just falls back to full mode immediately.

Now, if you hover over that little exclamation mark, it'll tell you what's wrong, what caused the V8 compiler to drop out of optimized mode.

You can also drill down into events, see what caused them.

In this case, the bit of JavaScript that's taking the longest is this WebKit scrollIntoViewIfNeeded function.

Now, there's not much we can do about this.

Basically, it's kind of expected that any communication with the DOM is going to take a long time.

Heap snapshots, your next tool.

You can take one snapshot, then you can manipulate your app, and then take another snapshot.

And you can see we have two snapshots down at the left-hand side here.

And then what we're doing is we're comparing them.

So you can see, it's a little option down here.

We can select comparison.

And we can select which snapshot we want to compare it to.

And then it's going to show you this interesting delta.

It'll show you the objects allocated, the objects deallocated, and this delta.

This delta is kind of useful if we want to figure out what's causing a memory leak.

If we have a positive delta the whole time, then that means that objects are being allocated.

And they're not being deallocated properly.

So to stop performance crimes, you have to lay down the law.

You have to continually record performance.

And you have to notify yourself when something bad happens.

And Google has this incredible service called PageSpeed Insights.

It'll tell you a lot about the performance of your app, especially the networking aspect of it.

So you can find it online here or you just Google PageSpeed Insights.

And it looks a bit like this.

You just enter your URL in here.

And it'll go off.

And it's got about 100 different rules.

And it will go off and apply these rules to your site, and see which ones your site passes.

And it'll give you some tips on optimizations you should be making.

What's kind of interesting is that they actually have an API.

It's the PageSpeed Insights API.

And they give you, like, a ridiculous 25,000 requests a day.

So basically, a free API to this PageSpeed Insights.

Now, what we can do is we can use this API for continuous integration.

This is a little bit of Ruby that I've written.

But basically, it's a client to that API.

And whenever you run this script, it's going to hit that API.

And the API is going to analyze your site and come back with a bunch of stats, one of which is the page score.

And then we're writing that to a CSV file.

And so what I've done is I've actually tied this in with a post-deploy hook.

So it records data after every single deploy.

And then I can get a really good heads up and identify deploys that really significantly impact performance pretty easily, just by going through the CSV file.
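
The talk shows the client in Ruby; here's the same idea sketched in Node (the v5 endpoint and response shape are assumptions based on the current API, which postdates the talk):

```js
// Hit the PageSpeed Insights API after each deploy and append the
// performance score to a CSV, so regressions stand out over time.
var https = require('https');
var fs = require('fs');

var api = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed' +
          '?url=' + encodeURIComponent('https://example.com');

https.get(api, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    var result = JSON.parse(body);
    var score = result.lighthouseResult.categories.performance.score;
    fs.appendFileSync('pagespeed.csv', Date.now() + ',' + score + '\n');
  });
});
```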

It'd be kind of awesome, actually, if this was a service, a monitoring service.

Um, maybe if any of you guys want to start a startup out there, that might be a good idea.

So in conclusion, have a budget and stick to it.

300 to 500 milliseconds is a good place to start.

Keep performance in mind.

So it's easy to deal with the network stuff at the start.

But don't forget to optimize rendering.

Remember, speed matters.

Thank you very much.

[APPLAUSE] Come on.

That was great.

Thanks very much.

Um, one question that, uh, I saw some people asking about, deferring loading data.

Using that as a technique, does that conflict with search engine optimization? Basically, I think this has come up with Monocle before, which is like, if we don't have the data there in the markup, is that a challenge with getting a place in Google? Yeah.

So this is definitely an issue that you need to be aware of.

monocle.io, if you want to check it out, is actually all rendered client-side.

There's no HTML shipped to the browser.

But it turns out you can actually fix this really simply.

You can fix this in about 10 minutes.

Google have this AJAX crawling spec.

And there's a little tag you can put in the page that tells the Google bot that you support the spec.

And then the Google bot will request your website.

And it will append a little query parameter onto the URL.

So you can detect that and basically serve up a bot-optimized version of the page.

And if you have a look at the Google rankings of monocle.io, you'll see that it's properly indexed.

There's no problem with actually indexing JavaScript apps.
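
A hedged sketch of the server side of that spec (Express-style; the render functions are hypothetical):

```js
// The page declares support with <meta name="fragment" content="!">,
// and the crawler re-requests the URL with ?_escaped_fragment_=
// attached, which the server can detect.
app.get('/', function (req, res) {
  if ('_escaped_fragment_' in req.query) {
    res.send(renderStaticHtml()); // bot-optimized, pre-rendered HTML
  } else {
    res.send(renderClientApp());  // normal client-side rendered app
  }
});
```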

Um, so you've done kind of a bit of work on performance for both Monocle and for sourcing.

Where do you think you got, like, the most impact from the improvements that you landed? Um, embarrassingly, it was probably gzip, actually.

You know, gzip's not turned on by default on a lot of servers.

And I was using Heroku.

And I kind of left that optimization til the end.

And you should just really include it by default.

That actually reduced page load time significantly.

Um, the-- you showed a bit of the developer tools in Chrome.

And I happen to work on them.

So that's-- that's OK, I guess.

But there is a question of, like, what-- what is-- what is the story-- what does the cross-browser story look like? So are there other tools on-- available for other browsers for some of these things? I-- I believe IE has a set of tools.

I've never actually used them though.

Um, presumably, if you optimize for V8, you're going to be optimizing for a lot of JavaScript engines.

But to be honest, I don't have a good answer for that.

Um, we might-- we might get into this when Paul talks, too.

Perfect.

We will? OK.

We will.

Um, it's a good topic.

And so I think we'll-- we'll hit it.

One of the things-- what is your preferred technique for optimizing page load when it comes to dealing with images? Um, basically making sure they're as small as possible, compress them, resize them to the actual size they are on the page.

Um, those are the kind of bog-standard things.

Progressive JPEG? Yes.

Progressive JPEG.

Um, it's a server-supported thing.

You have to make sure your server supports it, right? Not with progressive.

OK.

No.

Um, it's fine as far as the serving aspect.

But, um, yeah.

[INAUDIBLE].

We'll have a perceived user benefit at least-- Yeah.

--is the idea.

Um, cool.

And, um, what is-- have you done something-- so you used-- you showed the PageSpeed Insights API.

Have you built anything-- I'm basically cli-- curious around, like, have you took-- taken a look at over the lifetime of your projects and kind of tracked the performance over, like, weeks and months and being able to kind of see, perhaps, like, you regressed performance somewhere.

So actually, I've just added that.

So unfortunately, no.

I don't have that data.

But it would be kind of a cool web service.

I was thinking just putting together a little Heroku app that you could ping whenever you deploy.

And it would show you a graph over time.

Because it's definitely something you want to keep an eye on.

Yeah.

Yeah, absolutely.

All right, cool.

Well, thank you very much-- Thanks so much.

--Alex.

Thanks.
