#fronteers14

Nicolas Gallagher - Making Twitter UI infrastructure

Fronteers 2014 | Amsterdam, October 10, 2014

Slides

Transcript

Here to present a topic which has been described on the Fronteers site as TBA, so something to look forward to.

Give a big hungover welcome to Nicolas Gallagher.

[APPLAUSE] Hey, good morning.

So I work at Twitter.

And Twitter has, I assume, a series of uncommon challenges that, at the moment, necessitate that we start building a UI infrastructure to make available to the rest of the company.

So I hope that you'll find it interesting to hear about what we're working on.

It's not a success story or a sort of how-to, or anything like that.

It's simply sharing what I've been working on with a colleague of mine, Rob Sayre, who's the tech lead of web engineering at Twitter.

And I'm going to break it down into three sections-- basically a brief history of the problems that we're facing and the way that we're thinking about dealing with them, the principles that we're using to guide the infrastructure work that we're doing, and then a brief overview of the system itself.

So you might be thinking, I thought you did CSS? And I was thinking this, too.

But when I thought about it a little bit, it turns out that something I am quite interested in is infrastructure work.

And I've spent most of my time at Twitter working on it in some capacity.

So before I joined Twitter, I kind of got into dotfiles: automating your machine environment, having install scripts.

And that really sowed the seed of working with the file system, and taking some source files, and generating, like, output.

Once I joined Twitter, I worked at TweetDeck for a few months.

And while I was there, I ported their Python build scripts to Node using Grunt at the time.

Once I moved to the US and worked at Twitter itself, I joined the Cards team and was involved in creating the first new Macaw service outside of Twitter.com.

And Macaw is the name that we use internally for the Scala web server that Twitter created.

At Twitter.com, I worked on some build tools, integrating a CSS preprocessor.

And I did a bit of related work on build tools and test infrastructure for some of our open source projects, too.

And that kind of has really helped inform the little that I know about infrastructure work and tooling.

So a brief history of how we got to the state where we need to build generic UI infrastructure, and the chances that we've had to learn from our prior experiences, and the problems that we've had with our existing applications.

So at Twitter, we build and design a lot of web apps.

Some of the ones that you might think of off the top of your head are Twitter.com and the mobile apps -- we have two mobile apps -- TweetDeck, the recently launched analytics dashboard, the ads platform, Vine.

But we also have a bunch of smaller applications.

And we build web apps for specific audiences: partners, internal-facing dashboards, all kinds of things.

And because we have a predominantly service-oriented architecture, sometimes we even have multiple apps that to a user look like one app.

But they're split up into multiple services.

So building web apps at the scale that we operate at is quite demanding work, quite challenging.

There are many interacting parts.

They interact in many unpredictable ways.

And these complex adaptive systems involve a lot of experimentation and a lot of unpredictable emergent phenomena.

And so there are a lot of pressures on the products that we create as organizations.

We kind of end up with these products with large numbers of people in the company all invested in the outcome, all trying to have their requirements met.

And this means that we can't really think about our applications in a kind of mechanistic, linear way, characterized by simple cause and effect, because small changes can have really large effects.

As we run a whole bunch of experiments, we never really know which ones are going to work, or which ones are going to have big impacts.

So it's very hard, if not impossible, to implement a strategic plan for anything other than the relatively short term.

So rather than resisting the nature of these complex systems and the lack of complete control that you have and that's inherent in them, we need to work with them and bake them into our daily operations.

And this kind of contributes to web engineering being an expensive pastime.

So aside from the actual cost of booting up infrastructure, we also incur the cost of lost engineering time, so people spending time kind of repeating work that's already been done, working a lot on the tasks that aren't the immediate tasks that they set out to do, buggy products, failing our users, dealing with the difficulty of the web platform in its current state, that kind of thing.

And this is compounded by the additional factor that everything is so tightly interconnected, both like in the actual code bases that we currently have, and within the companies themselves.

So engineering is only part of a larger whole, and not really a kind of a distinct role that you can separate from anything else.

Almost everyone who works in engineering is also working with designers, kind of negotiating with product managers, dealing with requirements from business development -- many competing requirements, basically.

So no one really owns the product itself, because it has to service the needs of the entire company and the people who use it.

And those needs are always changing.

And despite that, web apps at Twitter are being commodified.

So we have over 3,500 people working at Twitter.

I think that well over 1,000 of them are engineers.

And I don't think it's very common for a company to be able to afford to maintain multiple high-quality web apps, or to have a requirement for so many new web apps to be created.

So just in the month that I've been working on this project, three separate teams in the company came to me -- and I'm sure they come to other people on the frameworks teams -- and said, hey, we're building a web app, what should we use? Where can we find tools to do X, Y, and Z? So I imagine that a web app is now increasingly an item on a checklist that many teams have.

It's one part of their strategy, and they don't expect that it should take a long time to get them started.

And the service-oriented architecture that we increasingly have also requires more applications.

But when we talk about a web app, there are actually dozens of pieces of complex functionality, processes, workflows, years of accumulated experience and expertise, all wrapped up in this idea of a web app.

And so the frequency at which we need to spin out these new apps means that we can't really afford to do it from scratch anymore, because it's too costly to do that.

And it's a drain on, like, individual time.

And when you notice a pattern being repeated, which is this question, what should we use to build the front end of our app, then you decide that you have to do something about it.

And we can't act as internal consultants or expect our colleagues to accept a lack of consistency or the lack of a provided service.

So it boils down to what can we do to try and enhance the expertise that exists within Twitter? And that expertise I see as more than just individual skill and knowledge about narrow technical aspects.

It's also how we, as an organization, develop, support, and deploy that narrow technical expertise in various different situations.

And so many people at Twitter have many more years of technical expertise in various areas than I do.

And so a good use of my time, I think, is to try and help other people better apply their own expertise to creatively solve the problems that we have, rather than to deal with generic system design and battling with tools.

And, like, an example that I've been using to talk about this is the tunneling shield that Marc Brunel made.

So Marc Brunel is the father of Isambard Kingdom Brunel.

And he developed this innovation called the tunneling shield.

And it was really there to help them dig tunnels under the Thames.

In the 1820s, he came up with this.

And I don't think they finished the tunnel until, like, the 1840s or something.

But the problem that they had was at the time, digging a tunnel meant excavating some land, laying whatever it was that you needed in the tunnel-- for example, a railway track-- and then covering it up.

And you obviously couldn't do that if you were going to dig a tunnel under a river.

And on top of that, if you try and just excavate a tunnel under a river, the riverbed is, like, pretty soggy.

And the soil is very fluid.

So it's prone to collapse.

And so this tunneling shield acted as a support mechanism to provide structure to the tunnel, just long enough and just enough space to allow the masons to lay down the structure of the tunnel itself.

And so the way that it worked was that the shield itself had a series of cells in it.

And individual workers would stand in their little area.

And they'd excavate the wall face.

And once all of them had excavated the, like, five inches of soil in front of them, the shield would be moved forward and the space behind would be filled in with brickwork.

So there'd only ever be a tiny amount of the tunnel that wasn't supported at any given time.

And so the shield itself didn't dig the tunnel, but it allowed Brunel to deploy the expertise of the workforce that he had available to him -- everything they knew about tunneling, excavation, and coordinating as a team -- to make this previously impossible engineering challenge happen.

So our infrastructure project needs to maximize automation to reduce developer error, to reduce the number of regressions that we ship so that we improve the user experience, and to improve the productivity of engineers in general at Twitter.

So we need to rely on this robust automation, so that that support structure can be available to anyone who needs it.

And people can spend more of their time on things that require their intelligence and creativity, rather than constantly figuring out how to build the basic, generic infrastructure -- asset deployment and all that kind of thing.

And the end result, hopefully, is that the to-do list for people spinning up a new web app will be really reduced.

Hopefully, none of it should be for infrastructure.

And one of the great affordances that we have as programmers is the ability to clone automated systems in moments.

And what we would want to do is leverage that to provide infrastructure that serves Twitter's requirements and allows teams to start making the unique aspects of their app almost immediately.

So the project's only about five weeks old.

We started on the first of September.

And it's already undergone a number of refactors and scope adjustments.

So there's nothing, like, particularly stable, I would say.

And so, to help keep it focused, and to really explain the key things that we're doing, I'm going to focus on the guiding principles that we're using, as opposed to the specific tools that we're using today to implement what we're after.

And there are only seven, because that's what I could fit on the recap slide at the end.

But seven is a good number.

So the first is that we wanted to make sure that we focus on solving a known problem.

So we know what our infrastructure requirements are, because we've had them for many years on Twitter.com, TweetDeck, and related applications.

Those are things like optimized assets, bundling, deploying those static assets to a CDN, localization to support all the languages that we have, or that we have to support for all the markets that we deploy applications to, browser testing, unit tests, functional tests, that kind of thing.

So we picked an app to build this infrastructure into as an initial customer, so that, rather than making something in a vacuum, we have a real customer who can be used as a test bed for this.

So we've made our best guesses about the near future as well, and rolled that in, and used that -- the known problems plus our best guess of the near future -- to limit the scope of what we're working on.

And the worst case scenario, from deciding to solve a known problem and picking a customer, for want of a better word, to solve it for, is that you have infrastructure for one application.

And that's better than having no infrastructure for anyone.

We also want to build an environment that fosters the outcomes that we want, rather than focusing on which tools we want to use.

So the outcomes that we want include a shared language between design and engineering teams, simple habit-forming processes and workflows, and performant, appropriately tested applications.

And so those are the things that we're really focusing on trying to find tools to help us realize, rather than starting with, which tools do we think are good? And a sustainable web app is not one that we want to last forever, but one that we can easily adapt, and that can adapt to changes in the company, while still making an impact.

And we think that means a looseness of fit: loosely coupling the layers in the infrastructure and in the application, to make it easier to pull pieces out and replace them without disrupting the high-level workflows and habits that we're trying to get engineers to build up.

And the same goes for the infrastructure itself.

So it's something that we need to iterate on and to use as a silo -- a place where we can accumulate knowledge, experience, curated tools, and workflows that we find are successful for small or large teams.

One of the things that I'm most concerned about is making sure that we use components as the primary unit of scale for building the user interface.

A component is a highly cohesive, functional building block.

And so we want to build the application out of these things.

And our components happen to be made up of HTML, CSS, JavaScript, and Scala, because our servers are Scala.

So our views are predominantly written in Scala.

So each of these components, despite being made up of multiple layers of technology, defines its own API.

So for a given input, it renders the same output.

And this allows us to have like a predictably rendered app.

We can test these units in isolation and have confidence that, wherever they appear in the application, they will render the same way.
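To make that contract concrete, here's a minimal sketch of the idea in JavaScript -- the component name and parameters here are hypothetical, not Twitter's actual code:

```js
// A hypothetical ui-Avatar component: its render function is the API.
// For a given set of parameters it always produces the same markup.
function renderAvatar(params) {
  var size = params.size || 'medium';
  return '<img class="ui-Avatar ui-Avatar--' + size + '"' +
         ' src="' + params.src + '" alt="">';
}

// Because the output depends only on the input, the unit can be
// tested in isolation and trusted wherever it appears in the app.
console.log(renderAvatar({ src: '/img/me.png', size: 'small' }));
```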

And this should help us simplify UI development by avoiding the need to know about the entire application when we're working on these separate parts.

And that really pays off when you have a large team with many people, and you can't actually really know what's going on in the entire application.

And the last thing that you want is to be editing your small part of the app and to be affecting someone else's unwittingly.

And components also help cater for ownership by teams and individuals; iterating in smaller units rather than entire applications; code deletion, because you can just delete a directory and everything disappears; and portability, for reasons related to deletion -- because the whole component, all of its tests, everything that you need to realize it, is in one place, it's easy to pick it up and move it around.

And the end game for the teams that we're looking at is for them to build some kind of product, some kind of application.

That's what they've been asked to do.

They haven't been asked to, like, rebuild infrastructure.

And so that's really what we're trying to spare them from doing.

So we can do the research, and curate tools, and assemble software so that others don't have to.

And one of the principles that we have at Twitter is that when we're faced with many equally good technical options, consistency is the best option, because consistency helps to make it easier for people to contribute code to projects that they haven't seen before.

And that consistency helps us to achieve economies of scale, and to kind of leverage the habit-forming that we're after.

This is related to us looking to use reliable and battle-tested software that already solves our problems, rather than reimplementing it ourselves, so that we can spend more of our time focusing on finding creative solutions to problems that are unique to Twitter, rather than on generic infrastructure issues.

So not everything that we need to do needs to be like engineering in the kind of classic sense of creating specialized tools for particular purposes.

The French anthropologist Claude Lévi-Strauss coined the term bricolage, which he used to describe people -- bricoleurs -- who made use of what was already available to create new outcomes.

And that's really something that I'm looking to do with this.

And finally, we want to provide high-level, and to some degree lower-level, overviews of systems.

So if someone wants to get like a high level view of, like, how does this thing work and what does it do, we want to have documentation that really focuses on that kind of level of things.

And then people can drill down into the granularity later.

And so this is really to try and shift away from having this idea of the internal consultant, or the knowledge sink -- I remember when I first joined Twitter, the kind of situation where you'd say, so who should I talk to about this issue, or this service, or this part of the code base? And a name would pop up.

And then you'd ask around and say, does anyone know where Dave or Samantha works? And they'd say, oh, they left last year.

And so when someone leaves, all that knowledge goes away with them.

And we're really trying to make a company that, or an engineering culture, where more and more of the knowledge is written down and pulled out of individual people's minds.

Because we think that better use of shared knowledge is preferable to bureaucratic control structures and knowledge hoarding.

And we actually have a project called DocBird that autogenerates browsable docs for entire projects.

So you can put a docs directory in a project, use something a bit like Markdown, and this project will build that.

And then there's another application that you can go to and browse documentation for almost any product at Twitter or any application at Twitter.

And this idea is encapsulated in a little quote that I found from the former CEO of HP.

And at essence, it's saying that the core problem is a lack of communication within the company, that there's a lot of knowledge within the company.

And the real value is in trying to make that available to anyone to make more profit or to make better applications.

So, an overview of the system itself -- most of it is Node-based tools.

There's some stuff in Scala-land for the first thing here, which is the automatic component gallery.

So what we really want to have is UI components with defined interfaces, and then a mechanism for creating kind of living style guides or component galleries automatically without having to have, like, special code comments in the CSS or anything like that.

What you want is to be able to point at and browse a page that has all the components that are in the application.

And as soon as you make a component, it should appear in that page without you having to do anything.

You also want browser unit testing and end-to-end testing, because we have to support functional tests in a variety of browsers, right down to mobile phones, plus all the desktop browsers -- a lot of environments -- as well as an asset deploy pipeline, localization, and a simple developer workflow.

And each of these tasks in and of themselves is quite huge.

So, you know, components involve transforming the code, bundling, profiling, optimization, deploying the static assets.

The browser testing involves a unit test framework, web drivers, visual difference testing, binaries for the various web drivers for each browser that you need in the environment, VMs.

The asset deploy pipeline involves code organization, scoping of CSS, JavaScript frameworks, templating, clever splitting of bundles.

Like, a lot of work goes into all of these things.

So the core separation that we have is between browser modules and the infrastructure itself.

And so in theory, the infrastructure could be completely removed and all the source code for the actual UI would be unaffected.

As long as the infrastructure can consume the way that we're cutting up the UI into these components, then we could replace it with anything, in theory.

So it looks something like this at the moment.

So we have a configuration directory; lib, which is all of our own node modules and library code; third-party node modules; scripts, which execute things like builds and tests; and then the source directory.

And so we're really using the file system to reflect the architectural design, rather than for some other arbitrary organizational purposes.

And really, what we're hoping for is to have a light footprint.

So the majority of the source code that we have in the infrastructure and in the application code is source code for Twitter-specific problems, rather than a generic infrastructure.

So the way that I've been thinking about UI components as well is to use this analogy that I got from a photo of a deconstructed clock radio.

So if you imagine that when you disassemble the clock radio, rather than it falling apart into speakers, resistors, capacitors, wiring, and the like, it fell apart into a pile of aluminum atoms, or steel, or a little box full of plastic.

And every time you wanted to build a clock radio, you were told to build it from these core raw materials.

That would just be completely impractical.

And yet, that's how, historically, we've built a lot of our applications.

We've cut them up into HTML, CSS, and JavaScript, and then used those really low-level technology types and had these implicit boundaries around parts of the UI.

So you might look at a TweetBox.

But there's nothing really in the code base that represents that TweetBox.

It's completely created from a pile of raw materials.

And so instead, if we imagine or if we take our user interface to be cut up into these higher order functional components in the same way that the clock radio is, then we can really focus on defining how it is that each functional unit is used.

So when you take a capacitor, the capacitor kind of provides two prongs.

And it's expecting to get slotted into a circuit board.

The speaker is expecting a certain type of wiring.

You know, microchips have a certain number of pins.

They kind of define how they're meant to be used.

And they say, for a given input, this is what I'll do.

I'll provide this output.

And that's exactly how we want to be thinking about building our interfaces as well.

We want to have well-defined boundaries, where the component itself says, this is what I expect to be provided with.

Like, these are the boundaries.

These are my parameters.

And as a result, I will render in this way always.

We want to get as close to that as possible.

So we have this segmentation around functional boundaries, segmentation of responsibility around ownership of these functional units, and to have as many owners of these subsystems as possible.

So the actual browser modules themselves are cut up into third-party modules and our own modules.

And it looks a bit like this, pretty simple.

And our modules exist within the web_modules directory.

So this is our Android app that I've used as an example of how it can be broken down into these functional units, into these UI components.

And so you can see that each one of them exists on its own, but can also be built up of others.

So the tweet itself is built up of avatars and icons.

The Compose widget at the bottom is also built up of icons and text areas of some kind.

So we need to think of them as these composite objects, like a hierarchy of components.

And the way that we're building it is like this, which is very much like how the shadow DOM spec describes the shadow DOM.

So you have these functional tree fragments.

So when you're implementing a component, you will say, this component has a variety of nodes beneath it.

It includes an icon, but you can also have insertion points.

So you say, within my tree, I define a space where I will accept other trees.

When you combine these tree fragments, the rendered composite tree on the right is what gets output to the browser.

But we want to hide the complexity of the tree fragments from developers, so that people can use them simply; if you've looked at custom elements, it's kind of the same principle.

You want someone to say, I need an icon.

And I need an icon of type, you know, tweet-compose, and not have to worry about whether the icon is a span using an icon font, or an SVG.

We want to kind of build this habit that anyone who needs to use an icon to build their application references the icon template and passes in some parameters.

And behind the scenes, whoever owns the icon can decide if and when to change the implementation.

And when that implementation is changed, the consequences propagate across the entire system automatically.
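As a hedged sketch of that principle (the module layout and parameters here are hypothetical, and the real views are Closure templates rather than JavaScript):

```js
// web_modules/ui-Icon/index.js -- the only place that knows the markup.
// Whoever owns ui-Icon can swap an icon font for an SVG here, and the
// change propagates to every consumer automatically.
module.exports = function icon(params) {
  return '<span class="ui-Icon ui-Icon--' + params.type + '"' +
         ' aria-hidden="true"></span>';
};
```

A consumer just references the template and passes parameters -- something like require('ui-Icon')({ type: 'compose' }) -- without ever touching the span inside.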

So someone isn't doing a find and replace across hundreds of individual spans throughout the code base.

It's like the actual HTML for each icon, for each component, is closed off, basically.

So the actual web_modules directory is, at the moment, just completely flat.

And all the UI stuff is prefixed with UI just to help group it, because eventually we'll have, you know, hundreds of JavaScript modules and UI components living side by side.

We have this naming convention -- UI, dash, and then a Pascal-case name for the UI component.

And also another convention: anything that's a page ends in Page.

And then, looking at an individual component, each one has its own unique name, obviously, because it needs to, in the same way that you'd use unique classes.

And because everything's flat, there isn't a hierarchy in the file system, in the same way that you don't really get a hierarchy when you're working with the DOM or the naming that you have available to you in HTML.

That name is used for defining a scope in the CSS.

So all of your styles hang off the icon class-- you know, this icon prefix that you have before the class.

Then we also have the Scala view template.

We happen to use Google Closure templates for this.

And if you're building a more complicated icon that would benefit from being chopped up into further subunits, then you can use a lib directory, in the same way that you would if you've ever written a JavaScript module or Node module.

And all the tests for this component are also in the same directory.

So the entire component and everything about it -- who owns it, documentation for it, every piece of technology that implements it, its interface, its tests -- all of it is in one part of the code base, in one directory.
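So a single component directory might look something like this (the exact file names are an illustration, not our actual layout):

```
web_modules/ui-Icon/
  OWNERS           who owns and reviews the component
  README.md        documentation
  index.js         the JavaScript behavior
  ui-Icon.css      styles scoped to the component's class
  ui-Icon.soy      the Scala view (Closure template)
  lib/             optional subunits for more complex components
  ui-Icon-test.js  the component's tests
```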

And that kind of helps with that portability.

So, a bit like Google, we use OWNERS files.

And OWNERS files are really good, because I think they help to encourage code quality, maintenance of modules, and mentoring in some way. So if one day you're tasked with adding an icon, let's say -- you're working on a feature and you need to add a new icon type -- you can go into the icon folder, mess around with the code, make sure the tests pass.

If you need to talk to someone before you put something up for review, then you can hopefully find out who it is from the OWNERS file, or a GROUPS file, which tells you which team owns it.

But when you do put it up for review, our review tools automatically find out who owns it, ask them to review, and make sure that they sign off on any changes that you make.
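An OWNERS file itself can be as simple as a list of responsible users or teams, in the style of Google's convention (these names are placeholders):

```
# web_modules/ui-Icon/OWNERS
alice
bob
@ui-core-team
```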

Although we're looking to cut the UI up into these functional units and have ownership for each of these units, the reality is that everyone is going to have to play around in each other's sandboxes.

And so if you own the tweet, and some team needs to build a card into the tweet, or some new ad product, let's say, they'll need to integrate that with your tweet.

And they'll need to work with you.

And so OWNERS is one way of helping to facilitate communication between teams, and to make sure that nothing bad happens.

For the browser testing, at the moment, we are using Karma for the unit tests and Intern for the end-to-end tests.

And we have a convention around the file names, so that the tools can kind of automatically extract the unit tests and the end-to-end tests.

So when you create a new module, or a new test in a module, the tests are automatically added to the test runner for the unit tests.

And your functional tests will automatically be run as well.
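A minimal sketch of how a naming convention like that can drive Karma -- the glob pattern and framework choice here are assumptions, not our actual config:

```js
// karma.conf.js
module.exports = function (config) {
  config.set({
    frameworks: ['mocha'],
    // Any file following the *-test.js convention is picked up
    // automatically; new tests need no extra registration.
    files: ['src/web_modules/**/*-test.js'],
    browsers: ['PhantomJS']
  });
};
```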

And the nice thing about Intern is that it takes all of the web driver APIs, so like Android driver, iOS driver, Firefox driver, IE driver, Chrome driver, and the like, and normalizes them to work around the bugs or unexpected differences in the way that they respond to inputs that you provide them.

And so it gives you this normalized interface, which is great.

But also, working in this way with this kind of pattern, I think you can probably see how, if you were doing an end-to-end test on the home page and you wanted to click on a sign-up form and then check that a timeline had appeared, and all that kind of stuff, despite the fact that you're writing a test for the home page, you have to have knowledge about the internals of all of these components that the home page might be made up of.

So here, I'm actually reaching into the sign-up module to find a part of the functional fragment tree and to input something into the sign-up name field.

And so something that we're looking to do is to apply the page object design pattern that's part of the Selenium recommendations these days, but focus it on components.

So each component would define its own methods that reflect the things that a user can interact with.

So it can hide the details of repeatedly telling the browser how to do things, avoid leaking component implementation details into other components' tests, and provide a much nicer way of accessing the behavior of each of these widgets.
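As a sketch of that component-focused page object idea (the names and selectors here are hypothetical), in an Intern functional test you might write:

```js
// A component object: the only code that knows ui-SignUpForm's markup.
function SignUpForm(remote) {
  this.remote = remote;
}

// Expose user-level actions rather than selectors.
SignUpForm.prototype.typeName = function (name) {
  return this.remote
    .findByCssSelector('.ui-SignUpForm .ui-SignUpForm-name')
    .type(name);
};

// A home page test can then say:
//   new SignUpForm(this.remote).typeName('Nicolas');
// instead of reaching into the sign-up module's fragment tree.
module.exports = SignUpForm;
```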

So the asset pipeline is one of the key parts.

And the keystone of the entire asset pipeline is a piece of software called Webpack.

And Webpack is something that Instagram have also had success using to build their application.

And one of the great things about Webpack is that it creates a dependency graph of all of your client side assets.

So it's a module bundler that's kind of initially focused on JavaScript.

But anything that you require in your JavaScript -- images, CSS -- it can extract, and kind of generate non-JavaScript assets for you.

And it actually works pretty well with CSS imports, building a dependency graph from those, too.

And so one of the things that we use it for is to define a series of entry points to the application -- which, again, we're looking to do via a naming convention -- so that Webpack automatically picks up every entry point.

And then you can have a series of-- well, Webpack provides an interface for things that it calls loaders and plug-ins.

And loaders are a way for you to intercept files as it's pulling them in.

And you can do things like preprocess JavaScript, transform CSS, extract images, and that kind of thing; and then there are also plug-ins, which allow you to drop into multiple steps along the compilation pipeline.

And there are many, many steps in the pipeline that it exposes.

And so the combination of loaders and plug-ins allows you to do a lot of custom work on the asset graph as Webpack creates it.

And so the way that we're using it is to create a common chunk.

So we can say, these are all of the entry points.

And we use the commons chunk plug-in to extract all of the code that is common to all of the entry files -- all of the common JavaScript, and all of the common CSS as well -- so we can leverage long-term caching for the common code and have smaller entry points for each individual page.

So when you go to the homepage, you'll load the common chunk and the homepage chunk.

And then when you go to the profile page, you only have to load the small profile page chunk on top of that.
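Put together, a Webpack 1-era configuration for this might look something like the sketch below -- the entry names, loaders, and file names are assumptions rather than our actual setup:

```js
// webpack.config.js
var webpack = require('webpack');

module.exports = {
  // Each page is an entry point; in our case these are discovered by
  // naming convention rather than listed by hand.
  entry: {
    home: './src/web_modules/ui-HomePage/index.js',
    profile: './src/web_modules/ui-ProfilePage/index.js'
  },
  output: {
    path: __dirname + '/build',
    filename: '[name].js'
  },
  module: {
    loaders: [
      // Loaders intercept files as Webpack pulls them into the graph.
      { test: /\.css$/, loader: 'style-loader!css-loader' },
      { test: /\.(png|svg)$/, loader: 'url-loader?limit=8192' }
    ]
  },
  plugins: [
    // Everything shared by the entry points moves into common.js,
    // which can be cached long-term while page chunks stay small.
    new webpack.optimize.CommonsChunkPlugin('common', 'common.js')
  ]
};
```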

So it's really great.

And I would recommend having a look at it.

It also has support for internationalization.

So that's what we use to generate all of the language-specific builds that we have: string replacement in JavaScript files.

So we replace English strings with strings from multiple languages, and bidi-flip the CSS, so that we have CSS that works right-to-left and left-to-right.
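One way to picture that string-replacement step is as a custom Webpack loader working at build time; this is a hypothetical sketch, not our actual localization tooling:

```js
// i18n-loader.js -- replaces _("...") message keys with translated
// strings from a per-language dictionary at build time (hypothetical
// convention; a real setup would also handle plurals, escaping, etc.).
var translations = require('./translations/fr.json');

module.exports = function (source) {
  this.cacheable && this.cacheable();
  return source.replace(/_\("([^"]+)"\)/g, function (match, key) {
    return JSON.stringify(translations[key] || key);
  });
};
```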

And the last piece, the topmost layer, is our workflow commands.

And so we want that to be really simple and stable, so that we can kind of create habits amongst the engineering teams.

So the core workflow -- we want it to be pretty basic.

You make a change.

And we want to get you to the review state as quickly as possible by automatically rebuilding all the assets.

Again, this is something that Webpack kind of helps us out with, where it works out what the incremental compilation needs to be.

And it will just rebuild what was changed. And then there are a bunch of service-level, like, master commands.

So the first time you come into a project, you want to run a command to install various dependencies.

So in our case, that's a Vagrant VM, and provisioning the VM with all of the tools that we need; running a build of the server -- compiling the Scala, that is; running tests; starting the server; starting the client code.

And starting the client code is really the infrastructure that I've just been talking about: it kicks off a watch task that will automatically rebuild the assets and lint your files for you. And then there are generate commands.

So when you saw the example of the icon, there's a bunch of boilerplate, for want of a better word -- various files of all the different types, the way the Scala views import various other packages.

And so all of the stuff that we don't want people to roll by hand every time, we just generate.

So if you want to build a new component, you type in a command like this.

It checks that a component by the same name doesn't exist.

If it doesn't, it spits out everything that you need to get started with that component.

And you could be rendering, like, Hello World within a few seconds of typing in the command.
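A generate command like that can be a small Node script; the sketch below is hypothetical and much cruder than the real tool, but it shows the shape of it:

```js
// scripts/generate-component.js
// Usage: node scripts/generate-component.js ui-TweetBox
var fs = require('fs');
var path = require('path');

var name = process.argv[2];
var dir = path.join('src', 'web_modules', name);

// Check that a component by the same name doesn't already exist.
if (fs.existsSync(dir)) {
  console.error(name + ' already exists');
  process.exit(1);
}

// Spit out just enough boilerplate to render "Hello World".
fs.mkdirSync(dir);
fs.writeFileSync(path.join(dir, 'index.js'),
  'module.exports = function () { return "Hello World"; };\n');
fs.writeFileSync(path.join(dir, name + '.css'), '.' + name + ' {}\n');
fs.writeFileSync(path.join(dir, 'OWNERS'), '');

console.log('Created ' + dir);
```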

And that's really what I'm talking about when I talk about workflows that engender good habits, and that get out of the way and allow people to immediately start building what they initially set out to build, which is some part of the application, rather than working out, how do I get anything rendering? And what do I do? Where do I put my CSS? Where do I put my JavaScript? We actually have many more granular commands than this.

So there are various ways that you can test the client code.

So you can test the asset pipeline.

You can test the styles, like code style, linting checks, and all that kind of stuff, and end-to-end tests, unit tests, and so on.

So just to recap, we're focusing on trying to solve a known problem.

So that way, in the very worst case scenario, we have an application with a good pipeline.

We want to focus on the outcomes and the processes that we want people to be working with, rather than the individual tools.

So hopefully, the outcomes and processes themselves can live on longer than the tools that we use to make them come about.

So for example, those command workflows, if typing in make client-start kicks off Webpack one day and kicks off some other piece of software in a year's time, we don't really want anyone to have to know or worry about that, or to send out emails saying, by the way, the commands have changed.

You want to provide, basically, an interface for dealing with writing the application code and have that be stable.

We need to focus on designing for adaptability, so that we can iterate quickly and cater to the constantly changing requirements that we have as a company; make components the unit of scale, because they're really portable and help us build robust and predictable UIs; and have ready-made solutions, rather than configurable solutions.

So we don't really want someone to have to spend a bunch of time deciding which unit test framework they want.

We curate the tools and just say, this is good enough.

And we don't really need to bikeshed over the specifics.

Making use of what's at hand, so we can contribute to open source projects that do what we need, as opposed to maintaining our own, which is always precarious.

Because you effectively have to maintain that for years.

And you should only really be doing that for things that haven't been solved before. And we want to have plenty of documentation and ownership, to try and provide a means by which we can all communicate.

Because, fundamentally, these are human problems, communication issues.

And when you put people in an organization that doesn't really foster communication, that isn't really designed around how to get people in these individual teams to share information, then it's very hard to do so.

And so we want to design all of that up front to try and create these kind of good environments without people noticing.

So I want to close with a reflection on basically the black boxing of technical work.

So, to use an example: when you walk into a room and you hit a light switch and everything in the room is illuminated, you rarely think about anything other than cause and effect.

Nothing gets in the way of you doing the task that you entered the room for, whether it's going into the bathroom to brush your teeth or have a shower, or going somewhere to sit down and eat, or you know, read a book.

And if it didn't work like that, you'd spend a lot of your energy basically working out how to turn a light on before you could do anything.

And the fact that we rarely think about it at all is an example of this black boxing.

And if you really stop to think about what's going on there, you'd recognize the enormous logistical, scientific, and engineering system that makes it so easy that you could forget that you're even hitting a light switch to illuminate a room.

You've got people searching for and extracting natural gas.

It's being piped all over the world.

It's being burned in power stations, converted into electricity that feeds into a huge electrical grid covering most of the country.

And along those wires that electricity is transmitted.

It's hooked up to your house.

There are entire companies that are set up to provide you with electricity.

And then you walk in and you hit a light switch.

And without even thinking, your room is illuminated.

And whether or not the energy to illuminate that room is coming from someone burning coal in a power station or a wind turbine, you never know.

You wouldn't know when you hit that switch one day that maybe your energy provider has switched to renewables.

And this is an example of black boxing-- reducing things to an input and output, and hiding away the complexity that is inherent in that.

And this is insightfully described by a French philosopher called Bruno Latour.

And he said that blackboxing is a way that "scientific and technical work is made invisible by its own success.

When a machine runs efficiently, when a matter of fact is settled, one need focus only on its inputs and outputs and not on its internal complexity.

Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become."

And that kind of opaque, obscure infrastructure is what has inspired me.

And the lofty goal that we have, and perhaps the measure of success of work like this, is whether engineers ever really notice that it's there, or whether they just start taking it for granted.

And so we might not get there, because software is this kind of murky business.

But it's worth aspiring to something like that.

Thanks.

[APPLAUSE]