860

December 13th, 2024 × #module-federation#microfrontends#web-infrastructure#rust#bundlers

Module Federation Microfrontends with ByteDance’s Zack Jackson

Zack Jackson explains ByteDance's web infrastructure and Module Federation, which allows dynamically linking parts at runtime

Topic 0 00:00

Transcript

Wes Bos

Welcome to Syntax today. We have Zack Jackson on the show to talk to us about Module Federation.

Wes Bos

I just saw Zack, like, a week and a half ago at a conference.

Topic 1 00:07

Zack Jackson works at ByteDance, author of Rspack, which is a Rust-based webpack alternative

Wes Bos

He's the author of Rspack, which is the Rust-based bundler that has the same API as webpack. It's really neat. And just that one talk, which was 20 minutes long, super short talk, blew my mind in terms of, like, all of the stuff he's working on. He works at ByteDance, and was explaining ByteDance, the company behind TikTok and CapCut and all kinds of other neat products, but just explaining the scale of the problems that he's solving. And I just walked away being like, I wanna talk to this guy for, like, nine hours.

Wes Bos

But welcome, Zack. Thanks for coming on. Yeah. Thank you for having me. Give us an introduction of who you are and what you do. I gave a pretty poor one there, but let's hear it from you.

Guest 1

Yeah. Sure. So, I'm an infrastructure architect at ByteDance.

Guest 1

So I work on the web infra team there. I've been in the open source space for, you know, the past couple of years, so I'm probably more well known from that standpoint. So I'm a maintainer of webpack, I'm part of the core team for Rspack, and I'm the creator of Module Federation. So those are the big highlights.

Wes Bos

Awesome. And that's specifically why we wanted to have you on today, to talk about what Module Federation is, because it's one of those terms that I've been hearing going around. And anytime I talk to somebody at a conference that is from a very large company, I always find that very interesting, like, how do you build stuff relatively fast when you have, what, 7 million teams, you know, in all these different builds? And one little thing that you said during your talk was that people are on call for the bundler. Imagine being on call for webpack.

Topic 2 01:23

Module Federation allows dynamically linking JS parts at runtime in a stable way

Guest 1

Like, you have to be the expert on that. And I just like, man, that's because it's true. Like, what happens if the build is breaking? You can't merge something. That's pretty wild. Right? It's kinda similar, like, to CI. You know? Like, where you always have, like, your DevOps. And when CI is down, it's okay. Well, there's only, like, a handful of people in the company who generally deal with it. But when things scale up enough, you know, your actual compilation process becomes equally as crippling, I suppose, because not everybody knows how to fix those. It's also another niche area. And then on top of that, the more exotic the applications are, the more, like, intricate the build process is. So, you know, if you're, like, trying to bundle, say, a native app and you're like, oh, hey, why is this one thing not working under this circumstance? Your general dev doesn't know if it's, like, ESM or something. Usually, devs aren't like, oh, yeah, I know how to, you know, handle, like, a multidimensional output build and, you know, what's going on between the threads or other random things like that. Oh, yeah. So, yeah, it does end up actually being a large portion of the on call, just making sure that builds are functioning correctly. And that was one of the reasons why we ended up building Rspack, was to just reduce that pressure.

Scott Tolinski

Wow.

Scott Tolinski

So how much web does ByteDance do? Because I know so much of my experience with the products is all native apps. So how much of it is web?

Guest 1

All of it. Even the native apps. Really? Really? The native apps that we have, so we have a project called Lynx. We'll be open sourcing it sometime early next year. But Lynx is essentially our in-house React Native. So all the apps are Lynx apps. Everything is a Lynx app. It's all backed off of the same stack. So there's a really good article for, like, some background info on how the company operates. If you just Google, how ByteDance became the world's most valuable startup, it'll be the Harvard Business Review article. It should be the first result. But it goes into really good detail on, like, how the company is structured.

Guest 1

And so one of the big things is, we call it SSP, which is essentially shared services platform.

Topic 3 03:56

ByteDance has a Shared Services Platform allowing specialized teams to focus on a slice of the pie then easily combine parts later

Guest 1

And so the whole idea behind it is really you have, like, very tactical, specialized teams who focus on only one slice of the overall pie. So you'll have, like, an algorithm team or a compiler team or a kernel team and so on and so forth. And then when product needs something, you can just go get the algorithm team, or you can go get the kernel team, and they'll, you know, modify the runtime that all that stuff runs on. So everything's, like, really compartmentalized, but it allows for you to have these really kind of, like, high density skill levels. And yeah. So that's just kind of how, like, the company is structured, but it gives us a lot of flexibility in terms of, like, what we need the products to do. And then on the other end, because of SSP, the whole idea is, like, how do you have a unified architecture? So what we're maintaining here can be adapted and supported in a hundred different ways. But at the end of the day, we have a very consistent stack that's just very porous, but it's not like one giant, you know, one-size-fits-all. It's more like, how do you create the sum of parts that are designed to work independently, but when combined, work very effectively together? And so that's kind of the whole idea behind it. We build all these things as standalone parts that happen to lock into place into a bigger product when needed.

Wes Bos

Wow. So because, like, if people are thinking, oh, what else is there? There's obviously TikTok, the mobile app, the Android app, but then there's the website, and then there's also CapCut, which is, like, a whole full-blown video editor.

Topic 4 05:31

Many ByteDance products are web based, including the TikTok website, the CapCut video editor, the VR headset Pico, and the workflow builder Coze

Guest 1

There's Douyin, which is the Chinese version of TikTok. It's the bigger one. There is Pico, the VR gaming headset. Yeah. Yeah. P-I-C-O.

Guest 1

So we own Pico. There's also an AI code editor, which is kind of like Cursor.

Wes Bos

What?

Guest 1

Yep. There is Coze, coze.com, which is like an AI bot builder, workflow builder tool, or something.

Guest 1

Man, I'm trying to think of the other things. CapCut, Lemon8, Lark, L-A-R-K.

Guest 1

It's like Microsoft Teams, but built better and does more.

Guest 1

So that's where everything that we do, other than the IDE, everything that I do at work is inside of Lark. Calendars, booking flights, looking at payroll, requesting time off, writing documentation, Kanban boards to the extent that we have those type of things, chat, email. Like, everything is in Lark. The only thing that isn't in Lark is the browser and the IDE. Other than that, you basically never leave it. So there's a ton of products, and these are just, like, a handful that I know, but I know that we have over a thousand.

Wes Bos

Wow.

Wes Bos

That's unreal that there's that much. So talking about Module Federation, obviously, you need to be able to share stuff between all of these. So what is Module Federation?

Topic 5 06:56

Module Federation allows dynamically importing chunks from different builds at runtime

Guest 1

So Module Federation is essentially the ability to dynamically link parts together at runtime in some kind of a stable way. So there's two versions of federation that came out. There was v1, which was the one that I had PR'd into webpack back in 2019.

Guest 1

So from that perspective, the whole idea was, you know, it kinda started with an obsession, probably in, like, 2017, because I'd started to work in micro front ends already, but it was the LOSA architecture. So lots of small applications, which essentially meant you mount a bunch of apps onto divs in the browser, and each app is kind of your old school traditional micro front end where, you know, it's a whole bunch of little parts that each have a div. They mount onto the div, and you would, you know, use, like, externals or something. So if everybody needs React, then, you know, you'd have, like, the vendor chunk that sits at the top of the page with, like, React, Redux, Lodash, a few other things.

Guest 1

And then all the parts would just hook into that. And that's kind of where I started to see the pain points, because so this was when I was working at Fiverr, and we had, like, an old Rails app that we were modernizing. And so how we did it was micro front ends. So the whole idea was the Rails app would kind of do a post request of its data, would post it to a render layer, and the render layer would return the head, the foot tags, and then return the markup, and then Rails would render the markup inside of its kind of shell. So the challenge that we started to see with that was, okay, you can use externals, but you have to manually configure it. So if you don't want everybody to duplicate, say, like, Lodash, you would have to externalize it. But then if you do, everybody has to be on the same version of Lodash. And what you're developing with locally usually isn't the same versions of the dependencies you actually have in the vendor bundle, because that ships from, like, a different build. The second kind of challenge that you would see as well is with externals, it's synchronous. So that means you have to have a render blocking tag at the top of the page before any of the other apps run. So whether or not I use Lodash doesn't matter. I have to download it every time, and every piece of shared code has to be render blocking and loaded regardless of if the application's actually using it.
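For context, the externals setup being described looked roughly like this. A minimal sketch, assuming a webpack build and page-level globals like `window.React`; the names are illustrative, not Fiverr's actual config:

```javascript
// Hypothetical LOSA-era micro frontend config: each app externalizes
// shared libraries instead of bundling them itself.
module.exports = {
  externals: {
    // module name -> global variable supplied by the page's vendor bundle
    react: 'React',
    'react-dom': 'ReactDOM',
    lodash: '_',
  },
};

// The host page must then load the shared vendor bundle render-blocking,
// before any micro frontend runs, whether or not a given app uses all of it:
//   <script src="/vendor.js"></script>  <!-- React, ReactDOM, Lodash, ... -->
```

Because externals are resolved synchronously from globals, everything shared has to be on the page up front, which is exactly the bloat he calls out next.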

Guest 1

So this is where I kinda started to see the pain point. It was really difficult to manage and orchestrate.

Guest 1

And how we were doing the render layer as well was, everything was an NPM package. And then you would publish the NPM package, and something like Renovate bot would pick it up and automatically open a PR to, like, the render layer. And you'd have to wait for the bot to open the PR, and you'd merge it. So now when you send a post request with the package name, it knows which thing to render and can send back, you know, the correct assets for it. The problem that we ran into with, or that I had run into with this approach was, one, it was really slow. Like, everything, what I refer to as, like, the double deploy.

Guest 1

So let's take an example of you've got an NPM package. Let's say you have, like, button.

Guest 1

A really small example, but let's just say it's button. And then you have a header.

Guest 1

And now you want to update button, and you need to update it inside of header. So first, you have to pull button, develop on button, push button, release a version of it, then you have to go pull header, install button into header, push header, and then finally, you can actually see it pop up in the render layer, where you have to do another install and deploy. You're looking at, like, three kind of pings back and forth before you can actually get the product into production. So you end up losing a ton of time just trying to push things through the pipeline of, like, a normal package based, or I refer to it as kind of, like, the install-based delivery pipeline.

Guest 1

Most of your time is spent, like, running around the org.

Guest 1

Like, at other bigger orgs, same kinda issue. Oh, I need to, I wanna, let's say it's just header, and you wanna roll out header to, say, 10 or 11 different parts of the org. Maybe you've got, like, you know, community content, you have ecommerce, and you have checkout.

Guest 1

So then just getting that synchronized, that'll take probably six hours if it's just you doing it, to go to each team, nag them, wait on their QAs, run it through the pipeline, which takes, you know, 30, 40 minutes, to actually get it into environment one. And then after that, you gotta run it to a, you know, preview environment and then finally get it to production, and you have to do that three times over. Plus, everybody has a merge queue because there's other things in the product that are also trying to move in. Oh, that probably just kills your, like, enthusiasm. Yeah. Like, you know, some days, I could go to work, and all I would do is just stare at CI and talk to people. Wouldn't actually do any work. It's 10 minutes to fix the typo, and then I'd spend the rest of the day trying to, like, scrape this thing through the company and actually get it into production. So it's just a huge productivity killer. So back at Fiverr, the thing that I was, you know, not happy with was, one, the latency that it takes to actually get things out, and then secondly, the bloat. It's like, okay, well, I'm downloading so much because the externals doesn't work very well, or I have to do this giant render blocking tag with, you know, 300 kilobytes when I'm only using maybe 50 kilobytes of what's in there. And so the idea had come to me, well, why can't we just dynamically import chunks from a different build? So if I already have React, why can't I just get the code that I don't already have, like the actual feature code for header? I have other vendors already because I'm a React app as well. But if I don't have Lodash and they need Lodash, why can't they just supply me the things that I don't already have? And so that's essentially the concept of federation. There's two parts to it. The first half is the compiler part, and what we do in the compilation step is, essentially, we say, alright, what are the things that you wanna share?
So if you say, okay, I want to share React, I wanna share Redux, Lodash, whatever, we will then split your bundle so that there's a React chunk and a Lodash chunk, and those are separate from, like, your main bundle.
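In webpack 5 terms, that sharing half is configured roughly like this. A sketch only; the app name, packages, and version ranges are example values:

```javascript
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'header',
      shared: {
        // Each shared package is split into its own chunk and negotiated
        // at runtime against whatever other federated builds bring along.
        react: { singleton: true, requiredVersion: '^18.0.0' },
        'react-dom': { singleton: true, requiredVersion: '^18.0.0' },
        lodash: { requiredVersion: '^4.17.0' },
      },
    }),
  ],
};
```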

Guest 1

And then there, so there's exposing, and there's sharing. So sharing is for, like, semver-safe NPM packages. You can say, hey, do I want this to act as a singleton? What version of it do I specify that I need? Somewhat how NPM install would work with your package.json, and then we shard the bundle accordingly. And then there's exposing, which is similar to, like, the package.json exports field, where you can say, alright, I wanna expose header to you. So now only what's in the exposes, like the exports field, can I import on the other end? So the sharing's all, like, magic under the hood, and then I could say, you know, like, checkout slash cart, and then they would have exposed their cart component as whatever they wanna call it. And then when we're exposing, the same thing happens. We shard the bundle so that the cart chunk, or that cart module, and whatever things it depends on is separated from the rest of the bundle. So now I'm able to access those chunks independently.
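The exposing and consuming sides might look like this sketch (the URL, component path, and names are hypothetical):

```javascript
const { ModuleFederationPlugin } = require('webpack').container;

// Checkout team's build: expose the cart component under a public name.
// Only what's listed in `exposes` is importable from the other end.
new ModuleFederationPlugin({
  name: 'checkout',
  filename: 'remoteEntry.js',
  exposes: {
    './Cart': './src/components/Cart',
  },
});

// Consumer's build: register checkout as a remote.
new ModuleFederationPlugin({
  name: 'home',
  remotes: {
    checkout: 'checkout@https://shop.example.com/remoteEntry.js',
  },
});

// Consumer's app code then imports it like any dynamic import:
//   const Cart = React.lazy(() => import('checkout/Cart'));
```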

Guest 1

Then the runtime aspect is, essentially, we output another entry point that I just kinda call the remote entry, and that's basically just a special entry point that exports an object with a get and an init. Get is how you would say, cool, give me something that I'm exposing. So that's accessing the exports field. And init is where we kind of do, like, a circular, kind of waterfall series where we go, okay, when I initialize it, I'm gonna pass everything that I have to share. I'm gonna pass it down to this remote entry, initialize it, so it's like, hey, here's everything that's already available in the system to share, and then it's gonna finish its initialization, assign any shared objects that it has onto it, and, basically, it goes through everybody. So everybody waits for the whole system to initialize. Everybody's aware of all the sharing. And in that process, that's when we say, okay, they need semver 2 for this. They need semver 1 for that. Is it a singleton? Okay, then we need to find who has the best version for the requirements of everybody connected, and it kind of does all the negotiating for semver whatever at runtime as the system's initializing.
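A stripped-down sketch of that get/init contract. This is hypothetical toy code, not the actual webpack runtime, which does real semver negotiation across many remotes:

```javascript
// A shared scope that host and remotes populate cooperatively.
const sharedScope = {};

// What a "remote entry" conceptually exports: init() and get().
const remoteEntry = {
  // init: the host passes in what it can share; the remote registers
  // its own shared modules into the same scope, keyed by version.
  init(scope) {
    scope.react = scope.react || {};
    scope.react['18.2.0'] = { get: () => ({ version: '18.2.0' }) };
  },
  // get: return a factory for an exposed module by its public name.
  get(moduleName) {
    const exposed = {
      './Cart': () => ({ render: () => '<div>cart</div>' }),
    };
    return Promise.resolve(exposed[moduleName]);
  },
};

// Host side: initialize the remote, then request an exposed module.
remoteEntry.init(sharedScope);
remoteEntry.get('./Cart').then((factory) => {
  const Cart = factory();
  console.log(Cart.render()); // "<div>cart</div>"
});
```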

Guest 1

And then finally, once everybody's agreed on the versions, we seal the object, which essentially means we do something similar to kind of, like, Object.freeze. Yeah. Where we say, okay, now you can't mutate the version of React you're gonna use from now on, because it's now loaded in memory, and somebody's utilizing it. So if later on, a newer, better version of React comes in, the system can't decide to use that one later, because we already have this version of React in flight. So it's kind of like initializing, and then after that, we start getting the components that we want. And then how you receive that on the consuming end is, it's essentially like a combo of the externals plugin and some special module factory stuff we do, where we say, hey, the external is the remote entry file, and then there's a special factory that'll say, hey, call the get method on there. So when it sees the external, it goes, okay, get the remote entry, do the initialization, and then it goes over to, like, this virtual module that we create on the consumer side when you're importing these nonexistent modules. And that one says, okay, after the external has initialized, then call the get method on it and get the module I want and bring it back to me. Oh. Man. Man. And so that whole

Wes Bos

thing where you're talking about figuring out what versions are needed, that's not part of a build? Like, you don't have to rebuild the entire website? That's all at runtime, one tiny little phase? All at runtime, that's not happening, like, on request, right? It's happening, how often does that happen?

Guest 1

It happens, like, in the browser. As soon as you pull the app up, like, before the app starts rendering, it's actually busy negotiating, because we don't know who's connected to the page at that point, because there's no, like, server back end. There's no static point, so the remotes can all be dynamic. So imagine if you have, like, an A/B test or a dashboard that the user configures. So there's no way for me to say, oh, you're gonna be using modules A, B, C, and D, so I can't pre negotiate that. At least in v1, we couldn't pre negotiate that. So it was, once it gets to the browser, once we start pulling all of them in, they then seal themselves, negotiate who's gonna use what, and then the app starts running. And the whole process there takes, like, 45 milliseconds. So it sounds crazy. Like, a lot of things are happening, but, yeah, it's extremely optimized.

Guest 1

And then we brought out v2. And so v2 was essentially, like, when I had built v1, the whole idea was, you know, I wanna just load chunks from another build that I don't have. And it was very much, I wanna solve my problem, and I didn't really think about, because back then, micro front ends was kind of iframes or just mounting a bunch of apps. So there was not really a good baseline for what's needed, and the stuff didn't really scale. So it was really just, okay, I don't like the problems that I see, and I want them to go away. So I just selfishly built the solution that worked for me, which obviously, like, you know, a couple years on, didn't exactly fit every enterprise need. And, also, the tech grew a lot more than I had expected. I really just expected me to be able to use it and maybe one or two other companies, and I didn't expect about 45% of the Fortune 50 companies to be running on it. So, obviously, like, there were some design oversights back in the day that we ended up correcting with v2. And so, initially, it was, I just want the chunks from elsewhere.

Guest 1

What we had found out, though, is that's not really the main major issue. It's not getting the scripts here, where you just wanna dynamically import things from another build, especially with the introduction of something like, you know, import maps.

Guest 1

You know, it was kind of, I always get the question, well, what's the big difference between, say, import maps and Federation? And I'm like, well, with v1, you know, it was starting to get a little tight. You know? They're getting a little similar. Federation obviously had the one thing going for it that it works in any environment. So this isn't just browser. This works in server. This works on native. This works in embedded systems. You know, it works anywhere that you can evaluate JavaScript. So, you know, it was a little bit more flexible, but still, you know, to the, you know, normal user, that's not a big enough wow factor necessarily to say, well, why not just use import maps? So v2 kinda came out to address this, because what I realized is it's not really, I want to get chunks from somewhere else, because, you know, getting the script is actually the easy part. The thing that we actually want is essentially, like, Express on import. So I want middleware, not just, like, a map where I can statically say, hey, A is A and B is B, but I need something that's, like, context based. So if button is being imported, who's the parent? So if header's loading button, maybe it has different rules that need to be applied.

Guest 1

Or what happens if it fails? Then I need some kind of error callback so that isn't in the user's app code where they have to try catch it. Because what if they're using import from or require or an uncatchable import? I actually need something very similar to, like, Express with a request response flow, where I could redirect the request, or I could intercept it, and I can modify it.

Guest 1

And so that's a lot of what v2 was around. Like, okay, well, what we thought we needed was just the ability to load chunks from somewhere else, and, really, what we need is something similar to, like, an Express-like middleware system on actual dependency and module loading itself.

Guest 1

Now speaking of v2, this is also where it's not always just in the browser, like you were saying, on every request.

Guest 1

So with v2, we introduced this manifest protocol, which is essentially like a JSON graph that helps describe what each part has and what they can provide. It's like a JSON version of the JavaScript entry point, a representation of that. So what that means is I don't have to evaluate this. I don't have to initialize the system to figure out what it's going to do. Since now federation is just an NPM package, we have all the algorithms that do all the share negotiations and figure everything out at runtime.
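A simplified illustration of what such a manifest might contain. The field names here are illustrative, not the exact Module Federation manifest schema:

```javascript
// A JSON description of a remote: what it exposes and what it shares,
// so version negotiation can run anywhere (e.g. on a server or edge
// worker) without evaluating the remote's JavaScript.
const headerManifest = {
  id: 'header',
  remoteEntry: 'https://cdn.example.com/header/remoteEntry.js',
  shared: [
    { name: 'react', version: '18.2.0', singleton: true, requiredVersion: '^18.0.0' },
    { name: 'lodash', version: '4.17.21', requiredVersion: '^4.17.0' },
  ],
  exposes: [{ name: './Header', path: './src/Header' }],
};

console.log(headerManifest.shared.map((s) => s.name)); // [ 'react', 'lodash' ]
```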

Guest 1

We now have that just as an NPM package. So what we do at ByteDance is we actually have a server where, you know, when we deploy everything, we have something called VMock. So VMock is, basically, the in-house version of a project called Zephyr Cloud that I'm part of, and they both actually come from the same source, which was my original project called Medusa.

Guest 1

And so, when I first started working at ByteDance, I went and, like, logged into our CI, or logged into our, like, Git back end. And it was like, oh, hey, welcome back. And I'm like, what do you mean welcome back? This is my first day here. And I go and I, like, check my Git history, and I had commits going back, like, three years. And it turned out that, like, my original Medusa project and a lot of my federation work, they had just pushed the whole history into their monorepo. So, technically, I had been contributing there for a long time. Way before you even got hired. That's hilarious. And it said, welcome back.

Guest 1

So it was pretty funny. And I also, like, saw things in VMock, and I'm like, wait, I recognize the name of that function, and I go and dig into it. I'm like, I remember writing that. Like, that's my typo in that function name. Oh. So it was very funny to go and see it. But, effectively, like, if you wanna get a better understanding of, like, what VMock is, Zephyr is, like, the outside version that's very similar. And, again, they both share the same kind of core. But what VMock is, is essentially, like, a server back end to help manage and control this. Because, yeah, cool, you can have all these remotes and all these parts, but the problem you still run into is the scalability issue. You need some intelligence somewhere along the way, because what happens when you've got, like, in our case, we have over, like, 8 or 9,000 micro front ends. Wow. And some of these things have over 2,000 versions, with maybe over 150 of them active in various products at any given point. So how do you manage who has what, where it is, and then, also, how do you keep it consistent? So with the JSON protocol, what VMock does is we have an edge worker. So the edge worker receives the JSON manifests of the remotes and then runs the algorithms on the edge worker to create, essentially, what we call the module snapshot, which is effectively a lock file. And when that lock file is delivered in the markup somewhere, Federation switches from its nondeterministic mode of, okay, let's see who arrives, and then let's do the circular initialization, negotiate all the versions, figure out who comes from where. That gets turned off, and instead, it goes into kind of, like, hydration mode instead of render mode, where it says, obey the snapshot and never change from it. And all the thinking was already done on the server, so just resume what the server was doing and pull the scripts in and, you know, start the system. Don't actually think and talk to anybody else.
We've already done that for you by just running part of the algorithm back on the server side so we can deliver that from the edge, which also means anytime we wanna redeploy a version or change any part of the system, it's just a change in a SQL table. There is no redeployment. There is no real build, because, technically, by building it, it's already deployed.

Guest 1

And so then it's just a case of, okay, so which part of the module graph do you wanna swap out with what? So production is a SQL row, and that's really it. It's not a zip file that you put on a Lambda or whatever. All that stuff is already deployed somewhere, and the only thing that makes production production is the module snapshot VMock returns. It's just a map of

Wes Bos

so this is the build, and this is the manifest of all the pieces you need to run your build?

Guest 1

Or, like, it's like, okay, here's header, and header provides React. And it has this version available, and it exposes A, B, and C. And then here's Home app, and Home app has React as well, and so on and so forth, and it needs it as a singleton. And so we take those two JSONs, and we go, okay, well, Home is gonna import A, B, C, D, and E. So we go, get me Home's manifest and get me A, B, C, D, and E's manifests, and then we go, generate module snapshot from manifests, and that will output a kind of graph for us. And then we just send that graph down as, like, an inline script or, you know, like, a JSON tag or whatever, and then Federation knows to look for the module snapshot and switch into hydration mode. But, essentially, we're running the same thing the browser does, just minus script evaluation.
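That server-side step could be sketched like this. Hypothetical toy code: the real negotiation in the Module Federation runtime handles semver ranges, singletons, and fallbacks, where this sketch just picks the highest version per shared package:

```javascript
// Given each remote's manifest, pick one provider per shared package and
// emit a lock-file-like "module snapshot" that the browser can obey
// without re-negotiating anything at runtime.
function createSnapshot(manifests) {
  const candidates = {};
  for (const m of manifests) {
    for (const s of m.shared) {
      (candidates[s.name] ||= []).push({ version: s.version, provider: m.id });
    }
  }
  const snapshot = {};
  for (const [name, list] of Object.entries(candidates)) {
    // Naive policy: highest version wins. A real implementation checks
    // every consumer's requiredVersion range before choosing.
    list.sort((a, b) =>
      b.version.localeCompare(a.version, undefined, { numeric: true })
    );
    snapshot[name] = list[0];
  }
  return snapshot;
}

const snapshot = createSnapshot([
  { id: 'home', shared: [{ name: 'react', version: '18.2.0' }] },
  { id: 'header', shared: [{ name: 'react', version: '18.3.1' }] },
]);
console.log(snapshot.react); // { version: '18.3.1', provider: 'header' }
```

Delivered in the markup, a snapshot like this is what lets the runtime skip its "see who arrives" mode and go straight to loading scripts.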

Wes Bos

Jeez.

Scott Tolinski

Freaking evil genius over here. I know. This is a scale I can't comprehend. I mean, you mentioned how many people are using things like Module Federation and micro front ends. But at what point does this scale become necessary for people? Yeah, what is the threshold there at which you need to start having these issues?

Guest 1

So I would say Kent Dodds had a really good way to put this, because somebody had asked him, like, hey, Kent, what do you think of Module Federation? And this was in, like, one of his tweets, and somebody was like, okay, well, what do you think of Federation? Because that's usually a very controversial subject.

Guest 1

And Kent had the best kind of reply to it: it solves a problem I hope I never have. Yeah. That's essentially what it is. If you don't know why you need it, then you probably don't, is the first thing I say. Mhmm.

Guest 1

But I would say, generally, there's two areas where federation comes in really helpful. So the first one is a scale issue. So, okay, the build is just too slow, or, you know, you have these giant merge trains and you're getting bottlenecked here, or maybe your release cycles are really quirky, or you work in an industry like banking or the military or, you know, other industries like that where getting large bundled releases out is really tedious, so decomposing them into smaller, more, you know, liquid parts is very helpful. Also, just size of repo, stuff like that. Like, we have some repos that are over a 100 gigabytes, and it takes about three minutes to change branches because, my SSD, you know, we call that repo the SSD killer. So there's just the thing where the monorepo literally stops functioning at a certain level.

Guest 1

Then there's also the other side of it, which is organizational problems.

Guest 1

So what happens when teams just don't work well together? Like, they just refuse to communicate. Usually, you'll see this in, like, very product led companies where, like, engineering's essentially the crayon, and product just colors whatever they want. So then you have a lot of dysfunction on the engineering side where, hey, you technically could do this with just better engineering processes, but you're not afforded the luxury of it. So the easiest way to make everybody behave together is to split them apart and then just glue it together in the last mile. So those are kind of the two major ones. Then there's a couple others. Like, let's say you white label a product. At any size, those are really useful. Or you do a lot of A/B testing in multiple regions. Having to not, you know, code that in, or even if you have, like, an A/B test team who's separate from your engineering team, usually, you would put that in, like, you know, Optimizely or Google Tag Manager or something. And that conflicts with, like, your React code, because you're essentially just, you know, patching the DOM on top of it, and that also means that your tag manager is competing for compute with the application. So the revenue funnel is competing with the thing that's measuring the revenue funnel. So in cases like that, regardless of size, it's a lot better when you say, well, what if the tag manager just dispatched something that told Federation what to do, and everything is still managed under a centrally aware system that's prioritizing: show the user something before you try to measure the monetization of it. There's nothing to monetize on a white screen. Mhmm.

Guest 1

So coming to size and scale, though, I would say, generally, I see it start to become useful with, let's say, I don't know, I've seen them as small as, like, five or six parts. But, usually, the parts we're talking, it's like checkout, community, commerce. And these are, like, super high risk funnels that you wanna just keep separate, but you don't want, like, if you're in ecommerce, this is a big one, you don't want that page reload when you jump zones, because that has a big revenue hit. Every second that adds on to the transition of something equates to x number of millions of dollars. So that ability to have a single page application like feel without increasing your blast radius, that's also really good. And I've seen those where, okay, well, each team might have, like, 30 or 40 people under it, but there's maybe five teams, or five, you know, director parts, essentially, that split it apart. Usually, I say, like, the golden number's probably when you have, like, 10 to 15 parts is when you start to maybe wanna consider it, or, you know, look at your monorepo and compose it really well. On the monorepo end, though, what I usually tell everybody is start with the monorepo until that starts giving you problems.

Guest 1

And then when you build the monorepo, pretend it's federated. So what that essentially means is don't do weird cross importing in the repo. Like, don't have this package depend on that 1, and you have, like, 4 circular imports Mhmm. Where now you couldn't take this half of the monorepo, split it, and put it in a different code base because it's too tangled. If you maintain really good boundaries in that monorepo and just keep a really well organized 1, I found thinking like it's going to be federated and that the code isn't in the same folder actually makes it so that your monorepo will scale longer. And then if you do wanna split it up, now you've already primed the system where, okay, well, it's still just the same package import. That just becomes a remote entry, not a Yeah. Symlinked file in the monorepo. And then I can easily switch it over to runtime composition from an install-and-link-based composition. Wow. So in my monorepo, I have, like, a utils package, and I throw all my, like, shared

Wes Bos

utils. formatMoney is a very simple function that I have in there, and I use that in all of my 3 or 4 different properties.

Wes Bos

Is that a bad idea for that type of thing to have a utils package that shares utils? How would you publish a utils package in module federation?

Guest 1

So for things like that, for me, it also depends on, like, who owns it and then Yeah. Like, how granular do you wanna get? Because granularity can also backfire on you. I'll see a lot of users who are like, oh, well, let's make everything federated, and we'll have, like, 160 parts to make up, you know, something that doesn't need 160 separate runtime parts. So I always, like, think about the balancing act of it. So 1 is, like, well, how often are you deploying those utils? Is it a major friction point? 2, like, the rate of change on there, does the API contract potentially break more often? Is it something that maybe is just worth duplicating because it's alright if it drifts a little bit? Now if it's something, say, like authentication, that's something you couldn't have drift on. So if it was, say, auth utils, I would probably put that as a remote or as a shared module. That way, I could say, hey, I can always go and get x version of auth. Or if it's, like, you know, kind of the end facing part, you could expose the authentication handler. So everybody just pulls it in and then uses something like VMock or Zephyr to say, okay, you know, home app is gonna use this version of auth. That one's gonna use a different 1, and we can roll back or roll forward. So Zephyr is a little faster than VMock, so I'm gonna go with the Zephyr number. But it takes about 300 milliseconds to change any part of the system. So roughly 50 more milliseconds than it takes to complete a mouse up, mouse down event, and that's your full deployment, like, update to all your users everywhere.
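As a rough illustration of that balancing act, a federation config fragment might pin high-drift-risk code like auth as a strict shared singleton while leaving trivial helpers to be bundled per app. This is a hypothetical sketch; the package names, versions, and CDN URL are made up, and only the general shared-module option shapes follow Module Federation conventions.

```javascript
// Hypothetical federation config fragment illustrating the trade-offs above:
// auth must never drift, so it is a strict shared singleton; a tiny
// formatMoney-style util is cheap enough to just bundle into each app.
const federationConfig = {
  name: 'home',
  remotes: {
    // high-risk, independently deployed part, consumed at runtime
    auth: 'auth@https://cdn.example.com/auth/remoteEntry.js',
  },
  shared: {
    '@acme/auth-utils': {
      singleton: true,      // exactly one copy across the whole federation
      strictVersion: true,  // fail loudly rather than silently drifting
      requiredVersion: '^2.0.0',
    },
    react: { singleton: true },
    // '@acme/format-money' intentionally absent: just bundle it in
  },
};

module.exports = { federationConfig };
```

The point is the asymmetry: the stricter the contract (auth), the more it belongs in centrally negotiated sharing; the cheaper the code (formatMoney), the more duplication is fine.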

Guest 1

Wow.

Guest 1

So, you know, it kinda depends on, well, where do you need that speed, that agility in there? At the same time, I'd usually try to say, well, like, maybe a component library? Okay. Depending on how you do it, that might be a good 1 to put as a shared package, or maybe have, like, a vendor remote who doesn't expose anything.

Guest 1

They just have kind of, like, the old-school vendor bundles. But what if you had a vendor remote where you say, hey, I'm gonna have all the packages that, generally, everybody wants. And then with Federation, I have the whole middleware concept, so I can write a runtime plug-in with the resolveShare hook and say, hey, if the vendor remote has a version that's viable for you, prefer the shared module from the vendor remote over your own. If not, go into negotiation and find the best option available.
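A runtime plug-in along those lines might look roughly like this. The `resolveShare` hook is part of the Module Federation runtime plugin system he's referring to, but the argument shape below is a simplified assumption for illustration, not the verified API.

```javascript
// Sketch: prefer the vendor remote's copy of a shared package when it has a
// viable version; otherwise leave the normal version negotiation alone.
function preferVendorPlugin(vendorScope) {
  return {
    name: 'prefer-vendor-shared',
    resolveShare(args) {
      const { shareScopeMap, pkgName, version } = args;
      const vendorCopies = (shareScopeMap[vendorScope] || {})[pkgName];
      if (vendorCopies && vendorCopies[version]) {
        // Override the resolver so this consumer uses the vendor's module.
        args.resolver = () => vendorCopies[version];
      }
      // Returning args unchanged falls back to the default negotiation.
      return args;
    },
  };
}
```

The useful property is that it's a preference, not a hard dependency: if the vendor remote doesn't have a viable copy, each app still negotiates its own.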

Guest 1

So in those kinds of cases, I might bubble them up a little differently so I can just set preferences on where the module comes from, but still have, like, you know, a bit of noncentralized kind of control.

Guest 1

So in the case of utils, I would say, probably, it's really small, and, you know, that would probably be something I would just consider, like, bundling in. Because is it worth another network request to, like, go out and get the extra chunk parts, or is it worth maybe an extra 3 or 4 kilobytes of bloat per bundle that's using it, plus the tree shaking, which thins it down? So, you know, you kinda wanna balance it out between, like, what's the necessity of it versus, like, what's the excessiveness of it.

Wes Bos

And then, how would you do, like, a major upgrade of of, like like, for for example, React? You're moving to React 19. People wanna start using React 19 features.

Wes Bos

How do you do that? You you can't ship multiple versions of of React, can you?

Guest 1

So you can, but here's the kind of so this is the hole that I dug myself into. So when I created Federation, I really wanted the whole well, I just wanna import stuff. Like, I don't want mount functions or the kind of adapter thing. I didn't want that anymore.

Guest 1

I wanted the whole, oh, you know, you shouldn't need to know how to do anything besides say, I wanna share React, and I wanna expose header, and I could just go import, you know, nav-team slash header, and it's just like an npm package.

Guest 1

That works great except for in this scenario where it backfires of, oh, but React's a singleton. So how do you upgrade the parts? Because you're using the import from, so it's technically 1 render tree. Now if you were to use Federation with, like, the LOSA kind of architecture or adapter solution where you say, okay, I'm gonna render a div that mounts another React app onto the div, and then we just pass the props and, you know, like, componentDidMount or componentWillMount and just, you know, run, render, or hydrate again when the props change, then, you know, you can have any amount of versions of React you want, and they will all recycle the minimum number of React major versions available. But if you're doing it in a single render tree where you're import-from-ing everything, you do kind of lose that ability, because it is utilizing the whole single paradigm, and you're technically in a single render tree.

Guest 1

So for doing multiple React versions, we did come out with, like, a framework paradigm, or framework capability, for Federation, which is essentially, we provide adapters to allow you to say, like, take old React to a newer React, or React to Vue, or, you know, a bunch of other kind of directions for different frameworks. And what we do in there is we actually rebind the routers between them. So your React Router will be aware of the Vue router, or the Vue will be aware of the React Router. So you can click through it, and it routes and behaves like a normal app. But, essentially, what you're exporting is, like, create component adapter, consume component adapter. And then that would say, okay, now you can load multiple versions of React, so on and so forth, but you would have to have some paradigm to say, okay, here's a separate tree. So that's generally how we do it when we need to do, like, upgrades on things.
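A framework-agnostic sketch of that adapter idea: the remote exports mount/update/unmount functions instead of a raw component, so the host can treat any framework or version as a black box inside a DOM node. The function names here are illustrative, not the actual federation bridge API.

```javascript
// Illustrative create-component-adapter: the remote wraps its framework's
// render and cleanup, and the host only ever hands it an element and props.
function createComponentAdapter({ render, destroy }) {
  return function mount(el, props) {
    render(el, props); // e.g. ReactDOM.createRoot(el).render(<App {...props}/>)
    return {
      update: (nextProps) => render(el, nextProps), // re-render on prop change
      unmount: () => destroy(el),                   // tear down the inner tree
    };
  };
}
```

Because the boundary is just "an element plus props", each side can run its own React major (or a different framework entirely) in a separate render tree.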

Guest 1

The other nice benefit as well is inside of the Federation runtime plug-in, we have a hook called loadRemote.

Guest 1

So what that does is that's, like, okay, I'm gonna go and get the module I want to load. So if I want header, now I can get header back. But inside of that kind of middleware pipeline, I know who's loading header. So I've done this before where I can go and look ahead and see, okay, the thing I'm pulling in needs React 19, but the consumer is currently using React 17.
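That look-ahead might be sketched roughly like this. `loadRemote` is the runtime hook he names, but the version lookup and the bridged-proxy shape below are hypothetical wiring for illustration only.

```javascript
// Sketch: wrap loadRemote so that when the remote's React major differs from
// the host's, the caller gets back a proxy that mounts into its own div (a
// separate render tree) instead of the raw module factory.
async function loadRemoteBridged(loadRemote, id, hostMajor, getRequiredMajor) {
  const mod = await loadRemote(id);
  if (getRequiredMajor(id) === hostMajor) {
    return mod; // same major: safe to share one render tree
  }
  return {
    bridged: true,
    // The host renders a div and hands it over; the remote mounts inside it.
    mount: (el, props) => mod.mount(el, props),
  };
}
```

The key decision is made in the middleware, so consumer code keeps writing plain imports and never knows whether it got the real factory or a bridge.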

Guest 1

So when I call loadRemote, I can say, alright, but don't just return the factory function.

Guest 1

Return a new JSX function that says, okay, get their ReactDOM and return, essentially, like, a proxy React component that mounts it, that says, okay, parent app, render a div; thing you're importing, mount onto that rendered div; and, essentially, do it all automatically for you. You don't need the whole mount adapters that you define in user code, but you could do the same thing in the runtime plug-in and say, hey, if the thing coming in needs a different major than the thing it's going into Mhmm. Have the export default be render a div, and then have, you know, the thing you've just loaded be, okay.

Guest 1

Put that as JSX on the render DIV you just created, and then you can, like, magically stitch it together. And if you want to see all of the errors in your application,

Scott Tolinski

you'll want to check out Sentry at sentry.io/syntax.

Scott Tolinski

You don't want a production application out there that, well, you have no visibility into in case something is blowing up, and you might not even know it. So head on over to sentry.io/syntax. Again, we've been using this tool for a long time, and it totally rules. Alright.

Scott Tolinski

So who owns the, like, update process for even, like, minor deps? Not like minor bumps or things like that. Like, is it, like, the team who owns the chunk, or however you refer to it? Like, who owns those choices?

Guest 1

So this is the other thing where a lot of it, it depends. So when I was at other companies, I would say the platform team would own certain upgrade phases.

Guest 1

So we would own things like the application shell. So we had a federated app shell as well. So when you develop locally, you pull the app shell off of stage or production, and we encapsulate localhost in production's app shell. So you're technically developing as if you were using production's code, and your local app is essentially just, like, a zombie application with nothing. And all the providers, all the auth, everything like that is coming from stage or prod. And then when you deploy it, we reverse the process, and we say, okay, now the page is federated into the shell, not the shell is federated around the page. And so we can kind of reverse that. So then the platform team would own the wrapping layers to ensure that telemetry was there, that when we update the telemetry, we would be the ones who would, you know, push the versions to all the product teams and so on. At ByteDance, a lot of this is controlled by the product team. So the product team will say, alright, maybe if I wanna update, say, button or whatever, just so that we kind of have that sanity, because, again, you don't want things changing without your notice unless you do. So, generally, we'll say, okay, the product team owns the lock file or the VMock, you know, the VMock dashboard, and then I can go in there and say, request an update to it.

Guest 1

But I also have, like, Chrome DevTools, which allows QA to assume a certain module snapshot and test it without needing to go in and change VMock or change production or do anything. So any environment can be any environment you want. You can use production as localhost and develop locally, telling the Chrome DevTools, use my localhost remote and replace production, and we can turn hot module reloading on in production. So every time we hit save, the app updates just like in development. But, generally, we say the lock file is owned by the product team that owns it, and when we need to contribute to it, you know, we can either request the update happen or whatever. Now, again, this can also come down to just the dynamics between the teams, because maybe it's something like header, and it's super generic. There's nothing that's really gonna break in there, and maybe the product team doesn't want the overhead of having to approve things or whatever. Then the teams can agree that, hey, we just want the latest of header. Or what they do in VMock is they would say header star, which just means get the latest. So then every time header pushes out a new 1, it's instantly updated. But they can control the rules on, you know, where do they want the governance on their own product and what's going into it, versus where do they want, you know, fewer restrictions on what's flowing in and out.

Wes Bos

Man.

Wes Bos

And do you ever see people use multiple frameworks with this as well? Because often I talk to, specifically at conferences, always people that work at banks or insurance.

Wes Bos

And they say, we have 5,000 devs and 14 Angular apps and 20 React apps and 3 Vue apps, and it's just like, sometimes you need to share stuff. How do you approach that?

Guest 1

So this also works with multiple frameworks. This is 1 of the reasons we brought out that framework bridge system. So, essentially, oh, if you have a Vue app, you can mount it, and the consuming person treats it like React. They're using JSX. They have no idea they're talking to a Vue app. So the consumer receives it in the format that their application is in Okay. Regardless of what the other framework is.

Guest 1

The thing I don't like about the multi framework approach, and this is 1 of the reasons that micro front ends got popular, is, oh, you could use any framework you want anywhere. Yeah. And that's, like, a terrible idea because, yeah, your devs might love it, but your devs aren't the revenue funnel.

Guest 1

So for me, it's like, okay. Well, now I'm loading multiple frameworks which have weight on them.

Guest 1

So yeah. And you can use it for that, and a lot do. It's also really good for, like, transitionary periods. So, oh, if I'm going from, like, Angular to React, boom. Federation lets me link all the parts together, and it behaves like 1. But, generally, I try to say you wanna try to unify on your framework, just so that well, 1, talent pool dilution. If you have to maintain 2 or 3 component libraries that all look the same but are in different, like, formats Yeah. You just have to hire 3x and develop 3x on it. So, generally, I like to say, well, try to keep it all in 1 framework because you have more reusable parts. It's lighter to load. And I think that's what gave micro front ends a bad name, was the Wild West of, hey, we can just use anything anywhere, and it all will work. Which, yeah, it does. But at what cost? So, generally, I don't like that approach. Like, at ByteDance, everything is essentially in React, and that's how we use it, because that gives us the most, like, flexibility and interchange between any part of the company. But we are also very big on unified architecture.

Guest 1

You know, the sum of parts makes a product, and everything there is kind of geared for how fast can we get stuff out and, you know, how efficiently can we. If we can bring down the cost of R&D, we can develop a whole lot more, see what happens, without a bad idea being a negative thing. Oh Yeah. Build 20 apps. And if it takes you 3 months, who cares, if 1 of them is successful?

Wes Bos

ByteDance, when you're writing React, I think I saw somewhere

Guest 1

you have a meta framework that you use. Is that true? So we have 2 meta frameworks. So the big 1 is Modern JS.

Guest 1

And so this was a problem that we had seen: Modern JS didn't get a very welcoming launch, and so I had to translate the posts from Chinese when I read about, like, why it wasn't well received from somebody who read it. But, essentially, the reason was, it's like, this looks super great if you need to do anything at ByteDance, which was kind of true. Like, it was essentially this massive you can go if you go to the Modern JS docs I think it's getting better now as we've reduced the scope of it. But back when I first got there, I could read the docs for 10 minutes and be like, yeah. This is what I know, but what does it do? Because it was Modern JS Doc, which was, like, static doc gen. It was Modern Module, which was, like, a meta framework for authoring npm packages, and it was the Modern framework. So you go there to read the docs, and you aren't 100% sure what it does within the first, like, 5 minutes. You just know it does a lot of things.

Guest 1

So what we actually ended up doing, though, is now Modern Doc is actually RSPress, which is our RS Pack based, you know, static site generation.

Guest 1

Like, you know, all of our RS Pack, Federation all of our documentation sites are all just RSPress.

Guest 1

So we split that part out of Modern JS and said, hey, same idea, but having a one-size-fits-all is just too intimidating and too, like, polluting.

Guest 1

So while it works great for unified architecture, there's that lack of composition, which just adds to confusion about what the point of the product is. So we made RSPress, which is essentially the same idea, just repackaged and reduced in scope out of Modern JS. And then Rslib, which is also the same thing as Modern Module. It's a meta framework designed for authoring npm packages, and now that's got its own name, its own everything, and it's completely separate from Modern JS, but it's effectively almost the same architecture, just refreshed onto, like, a Rust ecosystem.

Guest 1

So modern JS is the web framework but highly composable.

Guest 1

So a big challenge that we had was again, we need to support, like, all sorts of weird environments, like WeChat things and native apps and all sorts of stuff. And so Modern JS is actually just a Remix-based application, but Remix isn't a meta framework. Remix is really and this is what they've all said. Remix is just a library. It's React Router with a render handler.

Guest 1

Then they added a Vite plug-in because users want some meta framework preset. So the Vite plug-in is what takes it from library to meta framework. So, essentially, what we wanted was, okay, let's get, like, a Next.js kind of equivalent of what the Remix library can provide, and then that became Modern JS. 1 of the big challenges that we'd had, though, is, like, Remix at the time was very SSR heavy, and we don't have enough compute to handle the traffic to SSR everything.

Topic 6 45:01

Almost all ByteDance apps are web based, built with React and the internal ModernJS framework

Guest 1

So because we've got, I don't know, like, a couple billion users.

Guest 1

And Yeah. If you have to SSR every 1, you will especially with highly personalized, ever-changing content, you'll effectively exhaust an AWS region very easily. So we'll run out of Lambdas before we can fulfill the spike in demand at any given point. So we needed mechanisms where we could control it to say, well, if I don't need to SSR this part or this route, I shouldn't have to. Or if I need to SSR it, if the Lambda doesn't reply within x milliseconds, it should be able to skip the server render, and then the edge should be able to send back the CSR app, and it'll be able to restore itself in the client.
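The skip-SSR-on-timeout mechanism described here can be sketched as a race between the server render and a deadline. The function names are stand-ins for illustration, not Modern JS's actual API.

```javascript
// Sketch: try SSR within a time budget; if the deadline wins, serve the
// client-side-rendered shell and let the app restore itself in the browser.
async function renderWithDeadline(ssrRender, csrShellHtml, budgetMs) {
  const deadline = new Promise((resolve) =>
    setTimeout(() => resolve(null), budgetMs) // null signals "budget expired"
  );
  const html = await Promise.race([ssrRender(), deadline]);
  return html !== null ? html : csrShellHtml;
}
```

Under spiky demand, this turns SSR into a best-effort optimization instead of a hard dependency: the worst case degrades to a CSR page rather than an exhausted Lambda pool.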

Guest 1

So Modern JS kind of handles a lot of those other Yeah. Mechanics for us, but it's effectively, you know, very similar to Remix, similar API, stuff like that Yeah. But with, you know, micro front end support kinda baked into it for us, you know, all the parts like that. So virtually every app that I see or work on is Modern JS based. Now in China, we have another app called PACE, which is essentially like if you took RSC, Astro's server islands, PPR. Like, if you took all the fancy acronyms that Astro and Vercel have kind of created, it's effectively all of those things combined, but based on Remix, and it's super lightweight. Like, its CLI is literally, run this long bash shell command that, like, makes directories and, like, writes a whole it's like a 1 liner bash that's, like, that big, and it just stencils out the app.

Guest 1

But PACE is really, really cool. It's, like, probably our more modernized, like, super light, adaptable kind of response to server islands, RSC, and so on. But that's primarily only used on, like, Douyin, on the Douyin web, which is the home page for the Chinese TikTok. And I think a lot of that was really just proving ground for, okay, like, how would RSC work? Like, you know, what do we wanna do with it? And then now we, like, fiddled with that for about a year. We're looking at, okay, now we'll pull those features back into Modern JS. So Modern JS is essentially, like, where all the big stuff happens. It's just that we have maybe 2 or 3 other web frameworks that aren't open sourced, where, like, faster iteration or experimentation happens, just because it's cheaper to write a new framework than absorb the risk of changing the 1 that we know works.

Wes Bos

Another thing I wanna ask you about, because I I tried for a year to find this out, and I I did I did finally get the answer, but I'm curious if you know about it, is ByteDance has its own JavaScript runtime.

Wes Bos

And for the longest time, I could not figure out I was asking everyone, like, could somebody please tell me what they're using this for? And then finally, I found some post in Chinese, and I translated it. I found the author of it, and he tweeted a couple lines back at me. But are you aware of that at all? There's a few places that it's used. So the 1 is I think we have something called

Guest 1

I can't remember the name. I think it's Byte Cloud or something like that. But, essentially, we have our own, like, edge runtime.

Guest 1

So, like, I think it's more popular in Asia, but, essentially, kind of like how Vercel has whatever they've got, and Cloudflare has what's theirs? Workerd. Worker. We've also yeah. We've got our own edge runtime.

Guest 1

And then we also have something basically similar to Hermes, which is in React Native. We've got, like, a similar-to-Hermes kind of makeup as well, for, you know, doing whatever, like, Hermes would do, as well. The main differences, though, I would say, are, we've pushed a lot for web standards and making sure it's actually compliant with, like, all the web specifications.

Guest 1

Hermes, I know, doesn't have full standards in their APIs. And then the other thing that we also have is, ours supports the NAPI APIs.

Guest 1

So you can use the tiny runtime, but it's NAPI compatible; it uses, or can use, the NAPI APIs.

Guest 1

And I think we can compile it down to WebAssembly and, you know, all the other kind of things that are useful for it. But, effectively, like, those are the couple areas that I know it's utilized in. Awesome. And for listeners, NAPI is the

Wes Bos

Node API for, basically, kicking out to native APIs that may sit on your machine.

Guest 1

Yeah. This is, like, how RS Pack is really quick. So RS Pack is fast because it uses Rust, and NAPI is where we can bind to like, this is, I think, a challenge with, say, WebAssembly-based stuff is you lose a lot of the speed building it in that way. But if you don't target, like, WebAssembly, what you're usually targeting is the NAPI APIs that you can use. I know NAPI-RS now has WebAssembly targets, so it's a lot more interchangeable.

Guest 1

But, yeah, essentially, that lets you, like, fast access to the C layer, and you can skip the JS Yeah. Communications for Okay.

Guest 1

Various access.

Wes Bos

That's awesome. So 1 thing we didn't even talk about at all, and we'll have to have you come back at some point, is, like, RS Pack entirely. Right? Like so part of the problem at ByteDance was that building was very slow. Right? Maybe you can just give us a little teaser. Maybe people can watch the conference talk. But build times were very slow, and you needed to speed that up. It's really hard now in, like, the modern times to get the full benchmarks because, like, a lot of areas are using ZMock or use Federation.

Guest 1

Yeah. So I had the business measurements on Federation, and this was still when it was webpack based. So we haven't, like, reconsolidated it with the new Rust stack. But I know for a product team that used Federation, you would see a 300% increase in product output. So it's a little different from engineering output, which means, oh, I can spin the wheels in the mud quicker. But product output is more, like, you know, meaningful things that enter the product, you know, that have, like, a positive impact. So, like, actual feature releases. With Federation, we had seen about a 300% increase, and part of this was smaller shards, faster builds.

Guest 1

But the other part was, again, just that ability to interchange it and kind of avoid the double deploy scenario. Mhmm. But still, there were lots of aspects where there were still challenges. So, you know, 1 of the other big areas was really just builds in general. So what we had seen is, for any CI process that takes 35 minutes or over, the mean time to merge is about 1,600 minutes.

Guest 1

So it was kind of also regardless of size. Obviously, the slower the build, the bigger the code base. That does skew a little bit. But when we introduced, like, Rust tooling, you know, that obviously helped kind of normalize our hypothesis here. But, essentially, what we saw is if a build is under 5 minutes, the merge latency is about 20 minutes, regardless of the size of the application or anything. If the build is within 5 to 10 minutes or if the CI is within 5 to 10 minutes, the merge latency is roughly about 3 hours.

Guest 1

Now if it's, say, 30 seconds, it's still 20 minutes.

Guest 1

If it's 4 minutes and 30 seconds, it's still 20 minutes. So there was virtually no improvement beyond getting into the golden zone, and, generally, you're not gonna do a whole lot within 20 minutes, just because of human processes. So, generally, what we try to target is the 5 to 10 minute range for CI to complete, and that gives us a realistic, cool, it's done, people can preview it. And, generally, you know, within 3 hours, that merge will happen. But a major constraint of this really became, okay, like, how much latency are we losing in the product because the build takes 30 minutes, and how long are you staring at it before you change to another branch and you go and do something else, or it's the end of the day, or you forget about it and you get sidetracked? And now that product velocity starts to tank. So getting that curve down to the habitable zone of, you know, 5 to 10 minutes I think it had something like just on a single repo, I did the math on it, but on a single repo, taking it from, like, a 30 minute build down to that zone, it's something like a million hours in latency a year recovered for just 1 product, and that's just 1 of the repos among thousands of them. So a huge amount of, like, actual product value time pushed back in, more iterations, more meaningful iterations.

Guest 1

But the actual reason that we built RS Pack yes, builds were slow, and we saw this latency curve in here. But 1 of the primary reasons for RS Pack was actually cloud costs.

Guest 1

That was the biggest cost center was the cloud. So Oh, really? Because I think, also, labor's not quite as premium as it is here in the US. So manpower is not the major burn. It's compute.

Guest 1

So the primary reason for RS Pack was to reduce the cost of compute.

Guest 1

So what we had seen when we switched everything over to RS Pack was, across the board, an 80% reduction in cloud cost, which was huge. On top of that as well, we'd also seen, like, bandwidth egress costs go down, because we've done a lot of work to improve tree shaking and code splitting and how modules are optimized and where the chunks go. An outside user had told us that they'd seen about a I think it was, like, a $10,000,000 a month reduction in cost Yeah. Just by taking what they had currently, changing the package from webpack to RSpack, and then switching out a couple of the loaders like we suggest. And other than that, it was, you know, just a lift and shift. Took them about 2 hours, and that was and, you know, you're looking at 10,000,000 in egress cost. Being responsible

Scott Tolinski

for something like that is pretty amazing. Like, even being on a team to cause that much of a shift in reduction of cost is out of the scope of my,

Wes Bos

brainpower here. It's pretty amazing. Yeah. We had the folks on from Amazon LLRT, and they were explaining that as well. They're like, simply, if we can cut 30% off of how long some specific things take to run it might just be 30 milliseconds, but compound that by how often it's run, and it's a massive savings.

Guest 1

Yeah. Yeah. I think, like, our own savings just from our own, like, measurements and stuff like that internally, RS Pack's ROI has been in the hundreds of millions of dollars for us, ourselves Oh, wow. In utilizing it.

Guest 1

And, I mean, given how much it cost to develop, this was another crazy thing is, like and the problem was so expensive that the cheapest solution was to write a new bundler from scratch. And that was truly it was cheaper than migrating to a different build solution because, again, we've got, like, 10,000, 20,000 repos.

Guest 1

So Yeah. We had tried to, like I think early on, there was okay, maybe we look at Vite, and we see, hey, maybe Vite could be it's getting popular and all of that. And so there, 1 was the cost of just moving everything. The cost of change management was massive. It would've taken, you know, a huge amount just to switch everything over. On top of that, build is already the biggest on-call area. Now you're gonna swap it out with a different build. And on top of that, we also had, like, things that were nonnegotiable.

Guest 1

Like, I know they're fixing it now with Rolldown, but at the time, it was the whole unbundled problem.

Guest 1

So unbundled, for us, anyway, like, if you have Chrome extensions in the browser or you have anything in, like, the dev workflow where maybe they have their cache turned on or turned off, like, you know, usually, you'll see, like, habitually, that could lead to, like, minutes to hot reload the page because the network requests have to go down again. So, technically, some of that is user error, but the problem is, again, on-call pressure and the fact that it doesn't just magically do it. And then, like, I think this is more, like, issues with, I think, esbuild and a couple others, but, like, the chunk fidelity.

Guest 1

Not being able to chunk things really well sounds really silly, but when you start looking at the revenue pipeline on the larger end, artifact stability and integrity and, like, the ability to chunk things correctly, not make it too granular, not make it, you know, too large, became a really major concern. And, ultimately, the cheapest option was to just rewrite Webpack in Rust. Now granted, there were 4 other bundlers we wrote in between, and architectures we wrote. It wasn't like, oh, let's go and make Webpack in Rust. That was the last attempt after several other design attempts, before we kind of realized, look, there's just nothing out here that has what we need it to do. And, ultimately, everything else we tried was a case of everything that a bundler lacks that we would require, we just found out we're ending up writing Webpack again minus the tests, minus the stability, minus the ecosystem. To the point where it's like, okay, well, if we need so much of these Webpack-type capabilities that we know have worked for the past decade, it's probably just cheaper to port Webpack to Rust. And then on the back end of that, we can look at speeding it up, getting rid of things that we don't like, but starting with a guaranteed, we know it works, 1-to-1 match.

Guest 1

Anything that we change from there, we know we have 60,000 unit tests backing it so that it always works. And it's a lot cheaper to just, on the tail end, decide to make changes afterwards once we know we have an exact drop-in for it. Yeah. Man. And, like, I know we're getting to the end of the hour, and I'll wrap it up soon, but 1 cool thing I wanted to say is that, like, you obviously built

Wes Bos

RS Pack and then also made the API work with Webpack. But 1 kinda interesting thing you said is, like, that's not the only way that we could surface the technology behind this because, like, if you like the way Vite works, maybe we could build something like that, or, like, the engine of this thing could be surfaced in in other ways.

Guest 1

So this is a lot of the hate that Webpack gets, unfortunately. A lot of it, I find, is, like, people just don't like how you configure it. Yep. Like, that's it, you know? And Vite, you know, changed the game in terms of simplicity and streamlining the user-land overhead, massively.

Guest 1

And so, like, we have something called RS Build, which is essentially the Vite-style API, you know, super lean, no loader plug-in config, totally different config layer, and it's just a build tool, similar to how Vite is a wrapper around Rollup and esbuild.

Guest 1

RS Build is a wrapper around RS Pack. So it gives you this really streamlined, oh, if you want React, there's a React plug-in that sets up all 6 pieces that you want. So much smaller surface area to chain things that you want together. It has an environment API as well, similar to Vite, all of that. So we have, like, those layers to separate you from the underlying horsepower of the tool, which framework authors probably still want. End users want that simplicity.
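As a rough illustration of that streamlined surface, here is a sketch of what an RS Build config looks like, based on the project's documented config shape (the file name and options here are illustrative, not taken from the episode):

```javascript
// rsbuild.config.mjs — a minimal RS Build setup sketch.
// Assumes @rsbuild/core and @rsbuild/plugin-react are installed;
// one plugin wires up JSX, fast refresh, and related defaults,
// instead of hand-assembling loaders and plugins webpack-style.
import { defineConfig } from '@rsbuild/core';
import { pluginReact } from '@rsbuild/plugin-react';

export default defineConfig({
  plugins: [pluginReact()],
});
```

The point being made above: the user never touches the underlying compiler; the plugin encapsulates the pieces that would otherwise be several loader and plugin entries.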

Guest 1

But a lot of people go, oh, well, this is just a Webpack port. You know, it's whatever preconceived notions you have.

Guest 1

But the main reason that we had chosen RS Pack, there are 2 sides to it. So 1 is, yeah, there's a plug-in API, and having all of that for free and working right out of the box obviously saves time, saves money, gives us an ecosystem from day 0.

Guest 1

But the second aspect of Webpack is its actual architecture internally, and this is where we ran into the most problems developing bundlers: building a bundler is relatively easy and cheap to do. Designing it and architecting it is not. And because RS Pack is in Rust, the nice thing here is the Webpack part of it that we think of, oh, Webpack, I don't like that. Okay. What you're talking about is the bindings. Mhmm. Because this is a binary. So, technically, there are no deep-seated or connected things. The bindings happen to look like Webpack's front-facing API and config, but the back end of RS Pack is using Webpack's architecture, how they designed their actual compiler to work, and the compiler architecture is totally separate from the user plug-ins, the APIs, the config layers. All of that just happens to be a style.

Guest 1

So we did this. We're working on releasing, in the next year, the RS Pack SDK, which is essentially saying, okay, if you wanna build a custom bundler, do you wanna have to build your own module graph and chunk graph and compiler class and all these parts again? Probably not. So we're gonna release an SDK where you could say, alright, well, here are all the parts, and you can make any bundler that you want. And those bundlers won't have any kind of, you know, connotation to Webpack's design or anything.

Guest 1

Purely, it's just here are the parts, and you can build it. And if you create certain set of bindings, it happens to look like Webpack.

Guest 1

So with that architecture, we have a project that we've been experimenting with called unpack.

Guest 1

It'll be under hardfist, I think, the GitHub handle is hardfist, so it'll be like hardfist/unpack. Yeah. And, essentially, it's using unplugin as the Rust bindings. So instead of using, like, Webpack's bindings or whatever, we said, well, what if we took RS Pack's architecture, the Webpack-style compiler architecture, but we used unplugin as the bindings? It's its API. So instead of using Webpack's, it's just whatever unplugin does, that's the bundler's API.

Guest 1

And in there, we kind of noted, okay, it's about a 100 times faster than RS Pack, but it has 10% of the power of it. So, obviously, again, you know, what makes it powerful, what makes it slow, all of that is really just, okay, well, when you use Webpack, for example, it's applying, like, a 100 plug-ins for you. When you put your little Webpack config in there, when you say, like, target node, that's a node target plug-in and a CommonJS chunk-format plug-in and, like, 6 other plug-ins that make up, oh, make this thing go into Node.js output mode. So when you slim all of that down, you go, okay, well, here's a compiler class, and if you wanna chain things onto it, then you can.
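To make that fan-out concrete, here's a toy sketch (invented plugin names, not RS Pack or webpack internals) of how a single user-facing option like `target: 'node'` can expand into several internal plugins:

```javascript
// Toy illustration: one config option expands into multiple internal
// plugins, which is where much of a full bundler's weight comes from.
function resolveTargetPlugins(config) {
  const plugins = [...(config.plugins ?? [])];
  if (config.target === 'node') {
    // Hypothetical plugin names, for illustration only.
    plugins.push(
      'NodeTargetPlugin',
      'CommonJsChunkFormatPlugin',
      'NodeGlobalsPlugin'
    );
  }
  return plugins;
}

const plugins = resolveTargetPlugins({
  target: 'node',
  plugins: ['UserPlugin'],
});
console.log(plugins.length); // 4 — one user plugin plus three implied ones
```

Stripping that implicit expansion out is what makes a minimal bundler fast, and also what costs it the power described above.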

Guest 1

Obviously, you can make it super fast or simplify the API greatly. But in doing so, you might lose some of the functionality that it can have, just because the user API is more limited, or we're just not adding as much on there to give it that 1-for-1 experience to Webpack. But at the same time, it does kinda prove out that, hey, the only thing that makes a Webpack is the bindings file. So in v2, what we're looking at is, okay, what happens if we potentially look at, let's maybe replace a couple hooks that we know are slow or we know are burdensome. Maybe we get rid of those plug-in hooks or deprecate them, and we bring in something new that can do this in a better format.

Guest 1

Or, you know, what if we start changing the user API a little bit and maybe offer something that's a bit more streamlined? Or what if we use it in tandem with something like RS Build, where we can already say, hey, we can simplify all of this, and we can make the back end of the compiler more oriented to dealing with the complex tasks. And we say, hey, on the end-user side, your main point of use is through RS Build, which is this lightweight, streamlined tool. But, yeah, essentially, there's no real connection between the Webpack architecture and the Webpack config and plug-in API. And for RS Pack, the only real connection there is the bindings.

Wes Bos

Man. I've just had my mind absolutely blown for an hour straight. I could probably go on forever, but we will start to wrap it up there.

Wes Bos

Scott, do you have anything to

Scott Tolinski

add here? I have questions, but I'm sure they would invoke another 30 minutes of discussion here. But, no, I have nothing else to add. Zach, this has been amazing. It's been mind-blowing talking to you. You've made sense of so many things that have been big ol' question marks in my head for so long. So I really appreciate the time you've taken to break all this down for us. It's been really amazing.

Guest 1

Yeah. Thank you for having me. There is 1 other thing I wanted to add that we never covered on the federation side, which was just... Sure. So in v2, while we came out with the whole middleware side of things, there are some other major features that we had brought out, which I would say are probably some of the killer aspects that you don't find with the other solutions.

Guest 1

1 is that we have actual hot module reloading available, which, you know, you can turn on. It works through a Chrome extension. So even if you're in, like, production, you can hit the switch on the Chrome extension. And because production is powered by Federation, we can say, oh, well, React is now the development version of React, and we can source that from a different location. And we have a plug-in built into the system that tells it, hey, if the Chrome extension tells you to use the dev React, you're gonna source the dev React from the Chrome extension instead. It allows you to switch it on and off without actually any build complexity.

Guest 1

1 of the other major things, though, that I didn't cover is that we support remote types.

Guest 1

So this has been probably a major issue with any distributed system: you lose type safety. And this is something that Federation does support: hot-reloadable types and remote types. So if you deploy something to stage or production or whatever, when you link to that manifest, in that manifest we have a piece of metadata that says, hey, here's where the TypeScript bundle is. And when the compiler starts up, it will search through the manifest, and it will resynchronize the types and extract them onto your disk for that version you're using.

Guest 1

And then, obviously, like, if, say, you push a new version to prod or stage and the contracts change, the next time you run a build, your types will also be updated, and you'll get a type validation error, you know, whatever type error, to say, hey, the contracts don't work anymore.
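A simplified sketch of that flow, with invented manifest shapes (the real Module Federation manifest format is more involved than this): the consumer reads where the remote's type bundle lives and re-syncs when the deployed version drifts from what it built against.

```javascript
// Invented manifest shape for illustration — not the real format.
const manifest = {
  name: 'checkout',
  version: '2.1.0',
  metaData: {
    types: { url: 'https://cdn.example.com/checkout/2.1.0/types.zip' },
  },
};

// Where the remote says its TypeScript bundle can be fetched from.
function typesUrlFor(m) {
  return m.metaData?.types?.url ?? null;
}

// If the deployed remote's version differs from what we built against,
// types should be re-extracted so the next compile surfaces any errors.
function contractsChanged(lockedVersion, m) {
  return lockedVersion !== m.version;
}

const url = typesUrlFor(manifest);
const stale = contractsChanged('2.0.0', manifest);
console.log(url);
console.log(stale); // true — resync types before the next build
```

The key idea from the episode survives even in this toy form: type distribution rides along in the same manifest that powers runtime loading, so contract drift shows up as an ordinary type error.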

Guest 1

So that's, like, probably 1 of the coolest features that I've seen come out there.

Guest 1

1 other thing that's really neat with the types, because we got this a lot, which was, well, how do you deal with it when it's not statically analyzable? Like, what happens when you're using our runtime, since Federation is not bound to a bundler anymore? There's a plug-in that helps prepare your chunks, and then there's the runtime that does all the orchestration of it, and the 2 aren't necessarily mutually exclusive. Like, you can use the runtime without a build plug-in, no problem. So, you know, you can consume it anywhere, but a lot of people also use the runtime when, say, it's super dynamic, like the CDN, you know, a database controls which remotes are gonna load in which slots on the app, so you don't know what goes where, or you're using state to kind of set the import that you want. Inside of there, we also have, when the dev server's running, our type engine runs in the background, and the browser will actually send the at-runtime-discovered imports, you know, because we can't analyze them statically. But when you go, hey, load remote home/checkout, we'll go, oh, they're asking for home/checkout, and we'll actually tell the type engine, hey, here's another type that we located that could only be found in the browser, and we post that back to the type engine, and it will synchronize dynamic types as well. So even when you can't statically analyze something, the browser is informing it of any unknowns that couldn't be found during compile.
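Here's a self-contained simulation of that dynamic pattern. This is not the real `@module-federation/runtime` package (though that package does expose `init` and `loadRemote` functions); it just sketches resolving a runtime `"remote/module"` request and recording requests that could only be discovered in the browser.

```javascript
// Toy Federation-style runtime: a registry of remotes plus a log of
// dynamically discovered requests (what a dev server could POST back
// to a type engine, as described above).
const registry = new Map(); // remote name -> table of module factories
const discovered = [];      // requests only known at runtime

function init(remotes) {
  for (const [name, modules] of Object.entries(remotes)) {
    registry.set(name, modules);
  }
}

async function loadRemote(request) {
  const [remote, ...rest] = request.split('/');
  const modulePath = rest.join('/');
  discovered.push(request); // report back: found only in the browser
  const modules = registry.get(remote);
  if (!modules || !modules[modulePath]) {
    throw new Error(`unknown remote module: ${request}`);
  }
  return modules[modulePath]();
}

// A database or app state could have chosen this request at runtime.
init({ home: { checkout: () => ({ render: () => 'checkout page' }) } });
loadRemote('home/checkout').then((mod) => console.log(mod.render()));
```

The real runtime does far more (shared dependency negotiation, manifests, fallbacks), but the shape is the same: nothing about the request has to be known at build time.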

Wes Bos

Yeah. That's sick. Man, so many hard problems to solve in it. That's really fun.

Wes Bos

Cool. Wow. Well, thank you so much for all your time. The last thing we'll ask you is is what you wanna plug to the audience. Do you have anything

Guest 1

else other than the stuff you just talked about all day today? Okay. So, definitely, if you're into micro front ends or anything like that, check out ZephyrCloud.io.

Guest 1

That's kind of the equivalent of VMock for outside. So if you're looking for stuff to manage, and it's not just, oh, manage the assets, it's like Kubernetes for the front end. It's for your own cloud, no markup, and it just helps you manage your cloud and where things go. And it's got stuff like, I think the coolest thing we saw was, they pushed a bad deploy deliberately to a native app, to a back end, and to the front end. And within 1 second, the system automatically recovered from 3 critical failures across all 3 applications. They were all running on Federation, and it was down for 1 second and brought back up autonomously.

Guest 1

So really cool thing to definitely check out.

Guest 1

Another 1 that we've been cooking up at ByteDance has been Midscene.js.

Guest 1

Definitely check it out. It's essentially ChatGPT meets end-to-end testing and browser automation. We have a really cool Chrome extension, which I've used, like, to help manage my stock portfolio and stuff, where I can just tell it, hey, go do this and click around here, and I hit play, and it, like, takes over the browser and does some stuff. But there's actually a library for it too, to do all of your end-to-end testing. That's definitely another cool 1 that's, like, under our web infra hood at the moment.

Guest 1

That's definitely worth checking out.

Guest 1

So that's, like, a vision model, right? I was just telling Scott about this. I was like, there are models now that can, like, look at something and figure out what buttons and actions exist. Yeah. And it's super reliable. So it's got, like, a whole inspector that comes up, but, essentially, it uses machine vision that tags the page and figures out, like, what each aspect is, and then it interprets your prompt and goes, okay, cool, then I should be using this tag for this part of the page. But I've used it to, like, open and close positions. Like, hey, every morning, run a cron job that goes and clicks through each stock in the list if the stock has spiked. You know? Like, sometimes pre-trading, they skyrocket.

Guest 1

So I say, hey, if it's the pre-trading window and the thing has risen by x amount, then I want you to, step 1, click on sell position, step 2, sell 50% of the position, then click review order and confirm, and run that cron job every morning before I get up, to check for any spikes and sell in and out. So I obviously use it for something else, but it's designed for end-to-end test assertion with, like, Playwright or something like that.

Scott Tolinski

This is amazing. I've never seen this before, and I'm gonna be spending some time with it today. So thank you for that. Yeah. It's got lots of cool uses. My,

Wes Bos

nefarious scraping brain just started pinging, being like, oh. Yes. Right. Yeah. Your scraping brain.

Wes Bos

Cool.

Wes Bos

Awesome. Well, thank you so much. Appreciate all this. We'll have you back on at some point.

Wes Bos

You're a fount of information, and, yeah, we'll catch you next time. Alright. Thanks for having me. Good to meet you all.
