HACKER Q&A
📣 pranay01

Pros and cons of V8 isolates?


I was reading this article on Cloudflare Workers https://blog.cloudflare.com/cloud-computing-without-containe... and it seemed like isolates have a significant advantage over serverless technologies like Lambda etc.

What are the downsides of v8? Is the security isolation poor?


  👤 mrkurt Accepted Answer ✓
The downside of v8 isolates is: you have to reinvent a whole bunch of stuff to get good isolation (of both security and resources).

Here's an example. Under no circumstances should CloudFlare or anyone else be running multiple isolates in the same OS process. They need to be sandboxed in isolated processes. Chrome sandboxes them in isolated processes.

Process isolation is slightly heavier weight (though forking is wicked fast) but more secure. Processes give you the advantage of using cgroups to restrict resources, namespaces to limit network access, etc.
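
A minimal sketch of that shape in Node (the entry file name is hypothetical, and this is just one way to do it): each tenant gets its own forked OS process with its own V8 heap cap, and cgroup/namespace limits can then be attached to that process from outside.

```typescript
// One OS process per tenant, so the kernel -- not V8 -- is the isolation boundary.
import { fork } from "node:child_process";

// "tenant-entry.js" is a hypothetical wrapper that loads a single tenant's code.
function runTenant(tenantId: string): void {
  const child = fork("./tenant-entry.js", [tenantId], {
    execArgv: ["--max-old-space-size=128"], // per-process V8 heap cap, in MB
  });
  // cgroup/namespace restrictions would be applied to child.pid by whatever
  // supervisor launched this script (e.g. systemd or a container runtime).
  child.on("exit", (code) => console.log(`${tenantId} exited with code ${code}`));
}

runTenant("tenant-a");
```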

My understanding is that this is exactly what Deno Deploy does (https://deno.com/deploy).

Once you've forked a process, though, you're not far off from just running something like Firecracker. This is both true and a reflection of intense bias on my part. I work on https://fly.io, we use Firecracker. We started with v8 and decided it was the wrong fit. So obviously I would be saying this.

Firecracker has the benefit of hardware virtualization. It's pretty dang fast. The downside is, you need to run on bare metal to take advantage of it.

My guess is that this is all going to converge. v8 isolates will someday run in isolated processes that can take advantage of hardware virtualization. They already _should_ run in isolated processes that take advantage of OS level sandboxing.

At the same time, people using Firecracker (like us!) will be able to optimize away cold starts, keep memory usage small, etc.

The natural end state is to run your v8 isolates or wasm runtimes in a lightweight VM.


👤 dsl
Answering the security question specifically: v8 is a runtime and not a security boundary. Escaping it isn't trivial, but it is common [1]. You should still wrap it in a proper security boundary like gVisor [2].

The security claims from Cloudflare are quite bold for a piece of software that was not written to do what they are using it for. To steal an old saying, anyone can develop a system so secure they can't think of a way to defeat it.

1. https://www.cvedetails.com/vulnerability-list/vendor_id-1224...

2. https://gvisor.dev/


👤 simscitizen
We did pretty much the same thing as Cloudflare does for Workers in the Parse Cloud Code backend many years ago--when possible, we ran multiple V8 isolates in the same OS process. These are some of the issues we ran into:

* Security-wise, isolates aren't really meant to isolate untrusted code. That's certainly the way Chrome treats them, and you would expect that they would know best. For instance, if you go to a single webpage in Chrome that has N different iframes from different origins, you will get N different Chrome renderer processes, each of which contains V8 isolates to run scripts from those origins.

* Isolation is not perfect for resources either. For instance, one big issue we had when running multiple isolates in one process is that one isolate could OOM and take down all the isolates in the process. This was because there were certain paths in V8 which basically handled hitting the heap limit by trying to GC synchronously N times, and if that failed to release enough space, V8 would just abort the whole process. Maybe V8 has been rearchitected so that this no longer occurs.

Basically, the isolation between V8 isolates is pretty weak compared to the isolation you get between similar entities in other VMs (e.g. Erlang processes in the BEAM VM). And it's also very weak when compared to the isolation you get from a true OS process.
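
For contrast with the OOM behaviour above, here's a small sketch of per-isolate heap caps using Node's worker_threads (the file name and limits are illustrative): each Worker runs in its own isolate, and blowing the cap terminates only that worker rather than the whole process.

```typescript
// Each worker thread gets its own V8 isolate with its own heap limit.
import { Worker } from "node:worker_threads";

// "guest.js" is a hypothetical script containing one tenant's code.
const worker = new Worker("./guest.js", {
  resourceLimits: {
    maxOldGenerationSizeMb: 64,  // the isolate OOMs against this cap...
    maxYoungGenerationSizeMb: 16,
  },
});

// ...and only this worker dies; the parent process keeps running.
worker.on("error", (err) => console.error("guest terminated:", err.message));
worker.on("exit", (code) => console.log("guest exited with code", code));
```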


👤 ch_123
> I was reading this article on Cloudflare workers [...] and seemed like isolates have significant advantage over serverless technology like lambda etc.

You are conflating serverless (which is a particular deployment and lifecycle model) with a specific technique for implementing it.

There are inevitably going to be different performance/capability tradeoffs between this and other ways of implementing serverless, such as containers, VMs, etc.

> Simultaneously, they don’t use a virtual machine or a container, which means you are actually running closer to the metal than any other form of cloud computing I’m aware of.

V8 is a VM. Not in the VMware/KVM sense of the term, but a VM nonetheless.


👤 verdverm
> We believe that is the future of Serverless and cloud computing in general, and I’ll try to convince you why.

Reads like an opinion piece. Technically weak on details and the comparisons leave a lot to be desired

> I believe lowering costs by 3x is a strong enough motivator that it alone will motivate companies to make the switch to Isolate-based providers.

depends on so much more than price... like do I want to introduce a new vendor for this isolate stuff?

> An Isolate-based system can’t run arbitrary compiled code. Process-level isolation allows your Lambda to spin up any binary it might need. In an Isolate universe you have to either write your code in Javascript (we use a lot of TypeScript), or a language which targets WebAssembly like Go or Rust.

If one writes Go or Rust, there are much better ways to run them than targeting WASM

Containers are still the de facto standard.


👤 mrwilliamchang
There is also an issue of compatibility.

Under the hood, other serverless technologies like Lambda run lightweight VMs running Linux. Therefore they can easily accept any Linux-compatible container and run it for you in a serverless way.

Cloudflare Workers run a custom, V8-based JavaScript runtime. You can only run code that is compatible with that runtime. For Cloudflare to be able to offer a product that runs arbitrary Linux-compatible containers, they would have to change their tech stack to start using lightweight VMs.

If you want to run JavaScript, then Cloudflare Workers probably works fine. But if you want to run something else (that doesn't have a good WASM compatibility story), then Cloudflare Workers won't work for you.


👤 progval
The JVM has had a similar feature since the early 2000s: https://www.flux.utah.edu/janos/jsr121-internal-review/java/...

I don't know how popular it was in Java's heyday, but it doesn't seem to be used much today. Being tied to a handful of languages may have been an issue.


👤 kikoreis
> Simultaneously, they don’t use a virtual machine or a container, which means you are actually running closer to the metal than any other form of cloud computing I’m aware of.

Because it's positioning a key benefit, I have a lot of issues with this sentence. First, there are many bare metal cloud options (ranging from machine types to full-blown providers). Second, a container doesn't put you any further away from the bare metal than a process.


👤 dgb23
Right now, if you're running WASM on their Workers you pay for a ton of overhead that JS doesn't, and you probably don't get much leverage in terms of performance. This is really unfortunate, so you're stuck with JS for most types of workloads.

But it is likely one of the most accessible compute platforms for web development. Very easy to get started. Everyone and their mother knows JS. Similar API to browsers (service workers). Great docs. Free tier. Tons of other stuff that web devs need on the platform. They are adding object storage, SQLite, messaging, and strong consistency for WebSockets on top of it. Their pricing is extremely competitive and dead simple.

I think there is a chance that more and more stuff gets built there. Isolates are a part of it and might be a competitive advantage, but from a developer's perspective they are less a selling point than an implementation detail.


👤 stavros
I deployed a toy Worker program at work the other day, and, while I'm fairly excited about workers, IMO they still have quite a ways to go, in terms of UX:

* Going from "I have some code" to "it's running" took many, many clicks. Luckily, they have CloudFlare Pages integration now, so you can just throw your code in a repo and the server will run it, PHP-style.

* Only JS is supported, more or less.

* The documentation is fairly good, but more examples would be great.

* Integration with other sites seems lacking. For example, I didn't find a way to redirect one of my site's endpoints to a worker.

I suspect that much of my pain was because I didn't use Wrangler, though, so the above may not apply if you use the canonical way.


👤 atonse
I am really hoping that someone builds an isolate-based FaaS runtime. I think Cloudflare talked about open-sourcing their stuff.

I have 3 products where I’d allow client code to run once we can make that happen.


👤 efitz
The security analysis elsewhere in this thread is correct (v8 isolates are not a security boundary), but I think that may miss a different point.

A cloud provider would be unwise to use v8 isolates as a way to separate tenants from different customers. But there might be many cases where the same customer might benefit from having one of their shardable workloads leverage v8 isolates.

Of course not every single-tenant multi-shard workload is appropriate; it all depends on what isolation properties are needed, and how amenable the shards are to colocation.


👤 mlindner
It's getting to the point of absurdity how much v8 is inserting itself into all sorts of places it has no right being. Especially as the rate at which computers get faster decreases, we should be working to burn less CPU to get what we want done, rather than wasting almost all of it on running a complete web browser just to run our software. It's ridiculous.

It's sad to see cloudflare falling so low. I guess they're heading toward their ultimate destination that is the result of all companies that go public.


👤 IX-103
v8 is going to be slower, and more restrictive about what can run, than containers. It will be much better when you have many relatively small, infrequently used components.

The security model in v8 is no better than that of containers as there are limits to how much isolation you can give to code running in the same process. If you look at how Chrome uses v8, it is only used in carefully sandboxed processes, so it is clearly being treated as untrusted. (Though I still think v8 has done a truly amazing job locking things down for a pure userspace application)

The start-up time mentioned in the article assumes that isolate and context creation is the most significant delay. For JavaScript in particular, the code will need to be compiled again, and any setup code executed. In anything but the most trivial application, that compilation and initial execution will significantly outweigh the isolate and context creation step.

Despite the issues with v8 isolates or other equivalent web workers, I would not be surprised if they become more common than containers. There's a lot of buzz about them, and they leverage skills that web engineers already have. Additionally, many applications can be made more private if small pieces of execution can be distributed to a data custodian of some sort that can run small untrusted bits of code on the data and then apply aggregation or add noise to the result before sending it out.


👤 kylehotchkiss
They don't support every Node API.

I was trying to install PostHog server-side the other day with Remix, which was to be hosted on Workers, but received several errors about Buffer not being available.
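
For a rough idea of the kind of change that forces (the original calls are just an example): Node's Buffer has to be swapped for Web APIs that the Workers runtime does provide.

```typescript
// Node-flavoured code that breaks on Workers:
//   const encoded = Buffer.from(payload).toString("base64");

// Workers-compatible equivalent using only Web APIs:
function toBase64(payload: string): string {
  const bytes = new TextEncoder().encode(payload);
  let binary = "";
  for (const b of bytes) binary += String.fromCharCode(b);
  return btoa(binary); // btoa/atob are available in the Workers runtime
}
```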

This all said, isolates have been really cool to work with. Being able to run lightweight code at the edge has opened a unique set of opportunities like dynamically routing things quickly even for static sites.


👤 wizofaus
Surely if a process is running multiple isolates simultaneously, they're multi-threaded and still require context switching? (Granted, thread switches are less resource-intensive than process switches.) Interestingly, when Chrome runs on Windows desktops it seems to allocate separate processes for each isolate anyway, but I'm guessing this is not baked into V8?

👤 jppope
Is this the future? => Yep, but probably a few iterations away from this.

Downsides? Sure. It can't run many popular languages.

Security? I'm not a security guy, but Cloudflare seems to have pretty good security.

Lastly, I'm a fan of what Cloudflare is building. They're darn close to getting me off of AWS.


👤 mdasen
For me, Cloudflare's decision to go with V8 Isolates was really smart and that blog post is a brilliant explanation of how one looks at trade-offs in engineering. Truly amazing work.

That said, no I don't believe V8 Isolates are the future of computing - and I think I'll explain why by comparing it to shared PHP hosting.

PHP became so big because it solved something that was very important for a lot of people at the time - how can I deploy and run my code without eating up a lot of RAM and CPU? If you deployed Python, your app would be running the whole Python stack just for you. You'd need resources for a Python interpreter, all the HTTP libraries, all your imports, and then your code. On the other hand, if you ran PHP, you'd be sharing the PHP interpreter, the PHP standard library (which was written in C and not in PHP), and the server would only need to run your code. If your code was basically `mysql_query($query); foreach ($results as $result) ...`, then the server was executing basically nothing. All the hard work was in C, all the request/response stuff was in C, and all of that code was being shared with everyone else on the box, so there was almost no RAM usage.

V8 Isolates are similar in a lot of ways. You get to share the V8 runtime and pay its cost once and then just run the code needed for the user. It makes sharing space really good.

So how isn't this the future? Not to knock PHP, but PHP isn't dominating the world. Again, this isn't about knocking PHP, but it's not like people are going around saying "you'd be stupid to use anything other than PHP because of those reasons." Likewise, V8 Isolates aren't going to dominate the world. Most of the time, you're working at a place where you have services getting consistent traffic, and you can put lots of endpoints into a service and just run that service. Long-running processes have their own advantages: well-JIT'd code, local caches, and startup costs paid only once (even if those costs might be tiny in some cases). And I should note that there is work to get serverless some of those advantages as well; I believe Amazon has done work on freezing some serverless instances so that a warm instance can handle a new request. But given the cost of a lot of serverless options, it seems expensive if you're getting consistent traffic.

Again, I think Cloudflare's move with Workers was brilliant. It offers 80% of what you need at a low cost and without needing the same type of high-effort, high-resource setup that wouldn't make as much sense for them. I wish Cloudflare a ton of success with Workers - it's a great thing to try. I don't think it's the future of computing. Really, it's just too narrow to be the future of computing - nothing like that is going to be the future of computing.

If you're worried that you're missing the future of computing if you don't hop on it, don't be. There's no single future of computing, and while V8 Isolates are great for Cloudflare's purposes, I don't think they provide advantages for a lot of workloads. Again, I think that is such a brilliant article and I think Cloudflare made a smart decision. I just don't think it's the future of computing.


👤 hakcermani
Do Dart isolates have any similarity to this? https://dart.dev/guides/language/concurrency

👤 lovingCranberry
Is there anything like this for Python? I would like to execute python user scripts on my machine, but isolate them from each other.

👤 jfbaro
1. Shorter cold starts

2. Secure environment

3. Heavily tested in production

These are good pros of isolates for serverless computing, IMHO.


👤 hexo
In my humble opinion, the future of compute is a retreat from this enormous overengineering.

👤 WaitWaitWha
(I have to admit, my first thought was, why would some vegetable juice mess with the future of computing? And, are we talking the spicy V8, low salt, or the regular?)[0]

[0] https://en.wikipedia.org/wiki/V8_(beverage)


👤 valgaze
I love v8 isolates so far-- I'm building chat tooling with them.

When added to the "edge", they're (insanely) fast, they obliterate the cold-start problem (which is a killer in chat, where you might not have a retry), and as long as what you write can execute within 10-50ms (with ~30s for follow-on queries), it sometimes feels like cheating.

The same way Cloudflare "pushes" configuration to their network, they use a similar mechanism to push code to their edge nodes.

They have killer dev tooling too-- https://github.com/cloudflare/wrangler2

You *DON'T* need to think about regions ever-- just deploy to a lot of small regions instantly & it usually "just works" and is fast everywhere.

For extra credit, you also get access to coarse-grained location information from each "node" in their network that your users connect to (local timezone, country, city, zip code, etc.): https://blog.cloudflare.com/location-based-personalization-u...
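
A minimal sketch of reading that per-request data inside a Worker's fetch handler (field names follow Cloudflare's request.cf object; treat the exact set as subject to change):

```typescript
export default {
  async fetch(request: Request): Promise<Response> {
    // Cloudflare attaches coarse-grained geo data to each incoming request.
    const cf = (request as any).cf ?? {};
    const { country, city, timezone, postalCode } = cf;
    return new Response(JSON.stringify({ country, city, timezone, postalCode }), {
      headers: { "content-type": "application/json" },
    });
  },
};
```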

ex. for chat, you could do something like this to prompt for location info: https://i.imgur.com/0qTt1Qd.gif

Kenton Varda (https://twitter.com/KentonVarda), who was in charge of Protobuf and other projects, gave an overview tech talk; at 10:23 he speaks to isolates: https://youtu.be/HK04UxENH10?t=625

## Downsides encountered so far

- Not a 1-to-1 replacement; think of your code as a highly performant service worker (usual suspects: https://developer.mozilla.org/en-US/docs/Web/API/Service_Wor...)

- Many libraries (Axios, for instance) won't work since they call out to Node.js APIs (this might be a good thing; there are so many web APIs available that I was able to write a zero-dependency lib pretty easily - see the sketch after this list). They're adding bespoke support for Node packages: https://blog.cloudflare.com/node-js-support-cloudflare-worke...

- There's only a tiny bit of customization required for Workers; however, there's a bit of platform risk
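
As a rough example of that zero-dependency style (the URL is a placeholder), the built-in fetch covers what an HTTP client library would normally do:

```typescript
// Zero-dependency replacement for an Axios-style GET; runs on Workers as-is.
async function getJson<T>(url: string): Promise<T> {
  const res = await fetch(url, { headers: { accept: "application/json" } });
  if (!res.ok) throw new Error(`upstream returned ${res.status}`);
  return res.json() as Promise<T>;
}
```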

If you haven't tried it before, it's definitely worthy of further examination.

Re: security, it seems like a pretty good model.


👤 akagusu
Are they the future of computing?

👤 ledgerdev
It seems to me that WASM is clearly better suited technically as the core runtime of the future for serverless platforms... but the question is: are isolates the VHS and WASM the Betamax in this story?