Fronteers Spring conference

Interactive panel discussion: Technical performance

This is the panel discussion after a set of three talks on Technical Performance, delivered on April 1, 2016 at the Fronteers Spring Conference in Amsterdam.


(interviewer) There's like a stunned-- a deafening stunned silence on Twitter.

As everyone's watching, they're thinking, what? What the hell is going on there? So first of all, since you've just been talking, how can we detect this and prevent it? Is this the kind of thing that we would be able to understand is happening, and avoid or detect? Oh, that's a really good question, and there's no obvious answer to it.

Because, I mean, I've been using Facebook in this demo, but this is nothing that Facebook is doing wrong, really.

It's just inherent to the techniques that are present on the web and the new web APIs that are there.

(interviewer) Yeah.

So it's not quite clear how this could be fixed.

Facebook could, for example-- in this particular case, they could choose to add some padding to their response bodies so that all the response bodies are about the same size.

But that would be technically really difficult to implement.

Like how do you determine what the correct response size would be? And it would also be terrible for performance, of course, if you just make every document larger.
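
As a sketch of that padding idea (purely hypothetical -- not anything Facebook actually does), a server could round every response body up to a fixed block size, so that only a coarse size bucket leaks to an attacker:

```typescript
// Hypothetical sketch: pad a response body so its length is always a
// multiple of blockSize. Someone measuring transfer size or timing
// then only learns a coarse size bucket, not the exact length.
function padBody(body: string, blockSize: number = 4096): string {
  const remainder = body.length % blockSize;
  if (remainder === 0) return body;
  // Trailing whitespace is ignored by HTML parsers, but it still
  // costs bandwidth -- the performance downside raised above.
  return body + " ".repeat(blockSize - remainder);
}
```

Picking `blockSize` is exactly the hard part mentioned: too small and sizes still leak, too large and every document gets heavier.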

Browsers could also be implementing some kind of padding when it comes to the timing, for example.

Like if you put something in the cache, it resolves a promise, and the browser could maybe wait a couple of microseconds-- or I don't know-- at random before it resolves that promise.

But that might actually also lead to performance degradation in other cases.

So it's a really tough problem and there's no obvious solution to this.

If anyone has any ideas, let us know.

I think I did see a couple of suggestions about adding some random timing into some of these functions to defend against the timing attack.

But again, I guess you're adding performance issues, right?

That's the way you-- Yeah.

But not only that.

Because depending on how it's implemented exactly, you could also still figure it out.

Like if it's truly random, you could just do the same measurement and figure out what the proper value is anyway.
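
That point -- that truly random delays can be filtered out statistically -- can be illustrated with a small simulation (hypothetical function names; this is not any browser's actual behavior):

```typescript
// Simulate a timer that adds uniform random jitter in [0, maxJitter)
// on top of the true duration being protected.
function jitteredMeasurement(trueValue: number, maxJitter: number): number {
  return trueValue + Math.random() * maxJitter;
}

// Repeat the measurement many times and keep the minimum: because the
// added delay is never negative, the minimum converges to the true
// value, so random padding alone does not hide it.
function estimateTrueValue(trueValue: number, maxJitter: number, samples: number): number {
  let min = Infinity;
  for (let i = 0; i < samples; i++) {
    min = Math.min(min, jitteredMeasurement(trueValue, maxJitter));
  }
  return min;
}
```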


So yeah, it's a tough problem.

It's witchcraft.

[CHUCKLES] So Tor-- sorry, can you hear? Yeah.

OK, cool.

So Tor Browser actually made Sniffly much harder, because their JavaScript timer rounds to the nearest 100 milliseconds.

So that's your timing resolution in Tor Browser.
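
Clamping timer resolution like that can be modelled as rounding every timestamp to a fixed 100 ms granularity (a sketch of the idea, not Tor Browser's actual implementation):

```typescript
// Sketch: reduce a high-resolution timestamp (in ms) to 100 ms
// granularity, so events closer together than the granularity
// become indistinguishable to script.
const GRANULARITY_MS = 100;

function coarsenTimestamp(ms: number): number {
  return Math.round(ms / GRANULARITY_MS) * GRANULARITY_MS;
}
```

Two events at 1210 ms and 1240 ms both report 1200 ms, which is why sub-100 ms timing signals disappear -- and why tricks to get around it generally involve amplifying the signal until it crosses a 100 ms boundary.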

Um, I think there are various tricks you can do to get around that, but.

That's cool.

So are there any other kind of practical applications of this that you haven't shown us, that we should be terrified of?

Because now I'm just, now I'm just turning the internet off.

(subject 3) Well, basically any-- Not for everyone.

I should make it-- [CHUCKLES] (subject 3) Any time you have, like, two URLs and the knowledge that one response is larger than the other response-- and if that information tells you something at all-- then you can kind of abuse this trick.

Like for example, you could abuse it against Twitter.

Because there are protected accounts there, and if you visit the profile for a protected user, you won't see any tweets.

You will just get the error message saying, hey, you're not following this user.

But if you do follow them, you will see all the tweets.

So there is a big difference in the response size there.

So just by using this timing attack, you could easily figure out if your current user on your website follows a particular user.

And you can start applying that to a list of Twitter users.

And then you could-- you get into fingerprinting your users and stuff like that.

So it's, it's kind of scary.
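
The decision step of such an attack could be sketched like this: calibrate timings for the two known outcomes, then classify a fresh measurement by which median it lands closer to. The helper names are hypothetical, and actually collecting the durations would use the cross-origin techniques from the talk:

```typescript
// Median of an array of timing samples (in ms).
function median(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Classify a probe duration by comparing it against calibration
// samples for the "small" (error page) and "large" (full timeline)
// responses. Hypothetical sketch of the classification step only.
function classifyProbe(
  probeMs: number,
  smallSamples: number[],
  largeSamples: number[]
): "not-following" | "following" {
  const distSmall = Math.abs(probeMs - median(smallSamples));
  const distLarge = Math.abs(probeMs - median(largeSamples));
  return distSmall <= distLarge ? "not-following" : "following";
}
```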

(interviewer 1) Yeah, it's scary.

OK, wow.

So just as we're getting into the issue of HTTPS as well, and how users can potentially feel safe again-- one day, I hope-- on the web.

They're thinking we should all be using HTTPS on our sites now, right? Just regardless.

That should just be what we're all defaulting to.

Yeah, that'd be great.


And so what is the easiest-- I mean, I know you've talked a little bit about that already, but what is the cheapest, easiest, quickest thing that we can do today to get HTTPS onto our sites? Ooh, that's a really good question.

I actually think it's, um, getting the large hosts-- like, you know the layer cake? Getting the bottom layer.

Um, because that just requires, you know, DreamHost or WordPress or whoever just putting in a switch that says use HTTPS.

And actually, Let's Encrypt-- someone pointed out that Let's Encrypt has partnered with DreamHost to do this already.

Oh, it has? Yeah.


So it should be easier now for us just to get out there and do this? There's no excuse now.

Well, yeah, because certificates don't cost money anymore, so.


So we should just grab hold of them.

So how-- so I'd-- until recently, I've just been using things like CloudFlare.

Just saying, OK, well I'll use CloudFlare and just turn HTTPS on.

Is that still sensible, or should we be doing things differently? Yeah.

Um, so I think CloudFlare has, like, different tiers of SSL.

Like, you can use your own certificate with Let's Encrypt on CloudFlare, or you can use-- they have an option where, like, they manage your SSL certificate.



Indeed, do you guys all think that users understand the importance of HTTPS? And you know, I know that, in Chrome in particular now, the green padlock is getting more and more visible.

And I suspect that other browsers will start kind of championing this a lot more as well.

And I think at the Chrome Dev Summit in November, there was talk of, rather than highlighting things that are served over HTTPS, actually calling out things that are served over plain HTTP and saying, well, this is not secure, as the default.

Do you think that's the sensible thing to do? Do you think that users understand the importance of this yet? Or does it matter? Um, I think most users just see a lock icon.

Like, the most you can really hope for is that they see that one lock there.

There's-- like, I don't think most people know what SSL is under the hood-- Right.

--or, like, what RSA handshakes are, et cetera.

Um, so I do think that, in a world where more and more things are moving to HTTPS, it makes sense to show a bad signal, like a red bar with a cross over it, for sites that are still on HTTP.



You guys kind of agree with that? Do you think it's realistic that that's going to start happening? I mean, are Opera going to do that, for instance? Uh, yeah.

We're actually working on some UI changes related to the address bar.

(interviewer) Yeah.

And as much as I love the concept of URLs myself, um, you know, it's really easy to start faking things like locks in front of your address bar.

Because people just see a lock anywhere on your web page and they think, oh, it's secure, it's fine, even though they're not over HTTPS at all.

So, like Youn said, if you convey a negative message, I think that's much more effective than trying to do something positive.


Yeah, right.

Yeah, I think most users might care about that if they're on their bank site.

And they're kind of starting to be familiar.

But everywhere else, that's not really-- that's not something that they, they think about.


So I kind of want to come to images a little bit.

And again, that's another-- I'm out of my depth in this session because there's so many things that blew my mind in the last three talks.

Um, and a lot of people have been asking about the kind of image formats that these, these techniques can be used on.

I mean, we're obviously looking mostly at JPEG and progressive JPEG there.

Do some of these-- are there opportunities for other image formats as well along the same line, or is it just a completely different ball game? So this, what I showed today only works for JPEGs right now.

Because PNG has interlacing.

So interlacing is kind of like progressive encoding, but you can't influence the creation process of the interlaced layers.

In PNG, it is always Adam7.

Um, so yeah, you can't use it.


Um, PNG will-- interlaced PNG will benefit from HTTP/2, but you can't influence the creation of it.

So you just have to trust that it does a good job, which it does.
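
For reference, the scan layers under discussion are what libjpeg calls a scan script, which `jpegtran` accepts via its `-scans` option. An illustrative (not optimized) script might look like this -- each line lists component IDs, the Ss-Se spectral range, and the Ah/Al successive-approximation bits, and the exact boundaries are what an optimizer would tune:

```
0,1,2: 0-0,   0, 0;
0:     1-5,   0, 0;
2:     1-63,  0, 0;
1:     1-63,  0, 0;
0:     6-17,  0, 0;
0:     18-63, 0, 0;
```

Interleaved scans (multiple components on one line) are only legal for the DC coefficients, which is why the AC passes are split per component.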


And, uh, when we're talking about JPEGs, are there particular kinds of images that this will work better for than others? The kind of assets that are in there? And I'm thinking particularly about the automation of this.

Can you just say, OK, well, here's a bucket of JPEGs.

Because that's the unit of-- that's the collective noun for a lot of JPEGs, right? There's a bucket of JPEGs, and I'm just going to blast through all of them with the same kind of optimi-- do you get the same kind of yield on the optimizations across all of them, or-- That's a really good question, and the answer is no.

(interviewer) OK.

What I showed today is a proof of concept that works well enough for these images that I've used.

And, like I said, the median of medians is 6%.

There are, of course, easily created edge cases that will yield nothing, or even negative results.

However, I'm working on a tool that will automate the scans.txt generation.

So my five scan layer demo today is just to show that it works as a proof of concept.

However, to get the optimal result for every input JPEG, we would need to take a heuristic approach: make assumptions about the JPEG, check the results, see if it is good enough, and then, like in a binary search, narrow down until we finally hit the sweet spot-- the scan layer combination that gives the smallest file size and the best scan results.
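
That narrowing-down step could look something like this sketch, with a hypothetical `encodedSize` callback standing in for actually encoding the JPEG and measuring the candidate scan layer; the assumption is that the measured size grows monotonically with the parameter being tuned:

```typescript
// Hypothetical sketch of the binary-search step: find the largest
// spectral cutoff for a scan layer whose encoded size still fits a
// byte budget, assuming encodedSize(cutoff) is monotonically
// non-decreasing in the cutoff.
function findBestCutoff(
  encodedSize: (cutoff: number) => number,
  budget: number,
  lo: number = 1,
  hi: number = 63
): number {
  let best = lo; // falls back to the smallest cutoff if nothing fits
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (encodedSize(mid) <= budget) {
      best = mid; // candidate fits: try a larger cutoff
      lo = mid + 1;
    } else {
      hi = mid - 1; // too big: narrow downward
    }
  }
  return best;
}
```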

And is this the kind of thing that people should be doing on their own sites-- that the DevOps team should be involved in-- or is this the kind of thing that could be outsourced to Akamai, for instance?

Is that the kind of thing that Akamai would be involved in, or is it a step removed from that? No.

Definitely Akamai is a place to do this, but you can also do it yourself.

It's in every open source JPEG encoder, so you don't have to go and buy this as a, as a ready-made solution.

You can also implement it yourself.

Like we just said, it's-- right now, it's just a proof of concept, so there is no code to do this properly yet.

Um, but I've been-- I'm working on it.

So as always, I'm proof-of-concepting most of what I'm talking about.

So I will put up something on GitHub soon.

(interviewer) Nice.


I think people will be ready for that.

Estelle kind of touched on sourcing image assets at different sizes, and kind of generating them at different sizes automatically on the server as well.

I'm really conscious of use cases where there are things like CMSes in place and there-- you don't really know what kind of content's going to be uploaded.

And often, we kind of collect the content in its largest possible size of resource so you've got the best possible source there, and then churn through, you know, the different outputs of that.

The kind of tools that you've been talking about, does that play nicely with generating lots of different sizes of assets as well? Yes, they do.

(interviewer) There are no issues there? No.


So I've been, I've been actually working with clients who generate, well, more than 180 different sizes out of a master image.

180? Yeah, 180, and their CMS still works very nicely with the optimizations that I've proposed.

(interviewer) Nice.


And so, since we've-- you know, we also have been touching today on HTTP/2.

Do we need to, uh, unlearn a lot of the optimizations that we've done in the past for the way that we deliver our sites? You know, for a long time, we've had this focus on reducing the number of requests and all of those kind of things.

Does that matter anymore that we've done that, or should we be reverting to how we may have done things before and, like, having lots of HTTP requests? Yeah.

When it comes to things like CSS spriting, well, you don't really need to do that anymore over HTTP/2.

But in a certain way, we are already moving towards that.

Because more and more people are using SVG, because we want resolution-independent assets and it's just so much easier to have separate SVG files for each icon that you're using.

But yeah, the same thing goes for concatenating your JavaScript into a single file.

That's not necessarily the best option anymore.

But the main thing is, you have to measure all these things and try it out for your specific use case, because there's no generic advice that works anymore.


And I guess-- because it's difficult to know whether or not you're going to be serving over HTTP/2 or, or not.


So your build output that you have for your site might-- you might want it to be different depending on what the case is.

So are there any-- what's the, what's the answer for that one? There's going to be a depends answer coming along, I'm certain.

But is there any advice for how to deal with that?

I mean, can you conditionally serve things and, and be smart about that? Um, yeah.

I guess in theory you can.

I've never actually done that on the server side.

But, like, I imagine, as HTTP/2 becomes more popular, like, people will make tools to have your server, like, have a different code path for HTTP/2.
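
A sketch of such a code path, assuming a Node-style server where an HTTP/2 request reports its version as "2.0" (the file names here are made up for illustration):

```typescript
// Hypothetical sketch: pick different asset bundles depending on the
// negotiated protocol version reported by the server framework.
interface AssetPlan {
  scripts: string[];
}

function chooseAssets(httpVersion: string): AssetPlan {
  if (httpVersion.startsWith("2")) {
    // HTTP/2 multiplexes streams cheaply: ship small, separately
    // cacheable files instead of one concatenated bundle.
    return { scripts: ["/js/app.js", "/js/vendor.js", "/js/widgets.js"] };
  }
  // HTTP/1.x: fall back to the concatenated bundle to save requests.
  return { scripts: ["/js/bundle.min.js"] };
}
```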

And you think the benefits are worthwhile in being able to serve both, or should you say, OK, HTTP/2 has arrived, so we should just run at that and not worry about how things are optimized for things that are falling back-- (diane) No, definitely.

Like I said in my talk about the initial congestion window, you know, when I ran my tests, I found out that HTTP/2 was actually performing worse without an optimized initial congestion window.

And I was wondering about that.

And I was thinking, what is wrong there? And I talked to other people in my company who were applying HTTP/2 for customers.

And they figured-- they also told me about problems adopting HTTP/2, because specific sites were actually having a problem with the first couple hundred milliseconds of delivery.

And then we talked about this, and we figured out, yeah, it's the initial congestion window.

So we need to tune for that.
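
On Linux, that tuning is typically done on the route itself; an illustrative command (the gateway and interface are placeholders, and 10 segments is just a commonly cited starting value -- measure for your own traffic):

```
ip route change default via <gateway> dev <interface> initcwnd 10
```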

And then we discussed how we can figure that out on their current platform if we can.

So it's kinda nice.

So I think the answer here is, we totally need to ship different assets for HTTP/1 and HTTP/2 to get the best experience in both worlds.

(interviewer) Right.

So our builds are going to get just a little bit more complex for the moment, right? Potentially? Well, the adoption rate for HTTP/2 is really good in modern browsers too.

So I think this is going to be a quick path.

(interviewer) You think so? (subject 1) Yeah.

Although I would note that there are many parts of the world where people don't update their browsers as much.

Like, people have, like, these old phones.

Like Facebook and Yahoo and probably other companies have these labs where, like, developers can go and use the phones that people use in, like, third world countries and emerging markets and see, like, how bad the user experience is.

So if you had to pick one and optimize for that, right now, it would still be HTTP/1.1.

(diane) That's kind of disappointing, but yeah.


I was kind of hoping that we'd be ready to be at that tipping point where we say, OK, well, this is an enhancement we can make-- nothing's stopping us from doing that yet.

OK, all right.

Well, so um, I think we're ready for the break.

So let's just say thanks again to Tobias, to Matias and Diane.

[APPLAUSE] (diane) Thank you.

(interviewer) Thank you, guys.
