What are the downsides of v8? Is it poor security isolation?
Here's an example. Under no circumstances should Cloudflare or anyone else be running multiple isolates in the same OS process. They need to be sandboxed in isolated processes. Chrome sandboxes them in isolated processes.
Process isolation is slightly heavier weight (though forking is wicked fast) but more secure. Processes give you the advantage of using cgroups to restrict resources, namespaces to limit network access, etc.
My understanding is that this is exactly what Deno Deploy does (https://deno.com/deploy).
Once you've forked a process, though, you're not far off from just running something like Firecracker. This is both true and an intense bias on my part: I work on https://fly.io, and we use Firecracker. We started with v8 and decided it was wrong for us, so obviously I would say this.
Firecracker has the benefit of hardware virtualization. It's pretty dang fast. The downside is that you need to run on bare metal to take advantage of it.
My guess is that this is all going to converge. v8 isolates will someday run in isolated processes that can take advantage of hardware virtualization. They already _should_ run in isolated processes that take advantage of OS level sandboxing.
At the same time, people using Firecracker (like us!) will be able to optimize away cold starts, keep memory usage small, etc.
The natural end state is to run your v8 isolates or wasm runtimes in a lightweight VM.
The security claims from Cloudflare are quite bold for a piece of software that was not written to do what they are using it for. To steal an old saying, anyone can develop a system so secure they can't think of a way to defeat it.
1. https://www.cvedetails.com/vulnerability-list/vendor_id-1224...
* Security-wise, isolates aren't really meant to isolate untrusted code. That's certainly the way Chrome treats them, and you would expect that they would know best. For instance, if you go to a single webpage in Chrome that has N different iframes from different origins, you will get N different Chrome renderer processes, each of which contain V8 isolates to run scripts from those origins.
* Isolation is not perfect for resources either. For instance, one big issue we had when running multiple isolates in one process is that one isolate could OOM and take down all the isolates in the process. This was because there were certain paths in V8 which basically handled hitting the heap limit by trying to GC synchronously N times, and if that failed to release enough space, V8 would just abort the whole process. Maybe V8 has been rearchitected so that this no longer occurs.
Basically, the isolation between V8 isolates is pretty weak compared to the isolation you get between similar entities in other VMs (e.g. Erlang processes in the BEAM VM). And it's also very weak when compared to the isolation you get from a true OS process.
You are conflating serverless (which is a particular deployment and lifecycle model) with a specific technique for implementing it.
There are inevitably going to be different performance/capability tradeoffs between this and other ways of implementing serverless, such as containers, VMs, etc.
> Simultaneously, they don’t use a virtual machine or a container, which means you are actually running closer to the metal than any other form of cloud computing I’m aware of.
V8 is a VM. Not in the VMware/KVM sense of the term, but a VM nonetheless.
Reads like an opinion piece. Technically weak on details, and the comparisons leave a lot to be desired.
> I believe lowering costs by 3x is a strong enough motivator that it alone will motivate companies to make the switch to Isolate-based providers.
Depends on so much more than price... like, do I want to introduce a new vendor for this isolate stuff?
> An Isolate-based system can’t run arbitrary compiled code. Process-level isolation allows your Lambda to spin up any binary it might need. In an Isolate universe you have to either write your code in Javascript (we use a lot of TypeScript), or a language which targets WebAssembly like Go or Rust.
If one writes Go or Rust, there are much better ways to run them than targeting WASM.
Containers are still the de facto standard.
Under the hood, other serverless technologies like Lambda run lightweight VMs running Linux. Therefore they can easily accept any Linux-compatible container and run it for you in a serverless way.
Cloudflare Workers runs a custom V8-based runtime, not a full Node.js. You can only run code that is compatible with that runtime. For Cloudflare to be able to offer a product that runs arbitrary Linux-compatible containers, they would have to change their tech stack to start using lightweight VMs.
If you want to run JavaScript, then Cloudflare Workers probably works fine. But if you want to run something else (that doesn't have a good WASM compatibility story), then Cloudflare Workers won't work for you.
I don't know how popular it was in Java's heyday, but it doesn't seem used today. Being tied to a handful of languages may have been an issue.
Because it's positioned as a key benefit, I have a lot of issues with this sentence. First, there are many bare metal cloud options (ranging from machine types to full-blown providers). Second, a container doesn't put you any further away from the bare metal than a process.
But it is likely one of the most accessible compute platforms for web development. Very easy to get started. Everyone and their mother knows JS. Similar API to browsers (service workers). Great docs. Free tier. Tons of other stuff that webdevs need on the platform. They are adding object storage, SQLite, messaging, and strong consistency for WebSockets on top of it. Their pricing is extremely competitive and dead simple.
I think there is a chance that more and more stuff gets built there. Isolates are a part of it and might be a competitive advantage, but from a developer's perspective they are not the selling point, but an implementation detail.
* Going from "I have some code" to "it's running" took many, many clicks. Luckily, they have Cloudflare Pages integration now, so you can just throw your code in a repo and the server will run it, PHP-style.
* Only JS is supported, more or less.
* The documentation is fairly good, but more examples would be great.
* Integration with other sites seems lacking. For example, I didn't find a way to redirect one of my site's endpoints to a worker.
I suspect that much of my pain was because I didn't use Wrangler, though, so the above may not apply if you use the canonical way.
I have 3 products where I’d allow client code to run once we can make that happen.
A cloud provider would be unwise to use v8 isolates as a way to separate tenants from different customers. But there might be many cases where the same customer might benefit from having one of their shardable workloads leverage v8 isolates.
Of course not every single-tenant multi-shard workload is appropriate; it all depends on what isolation properties are needed, and how amenable the shards are to colocation.
It's sad to see cloudflare falling so low. I guess they're heading toward their ultimate destination that is the result of all companies that go public.
The security model in v8 is no better than that of containers as there are limits to how much isolation you can give to code running in the same process. If you look at how Chrome uses v8, it is only used in carefully sandboxed processes, so it is clearly being treated as untrusted. (Though I still think v8 has done a truly amazing job locking things down for a pure userspace application)
The start-up time mentioned in the article assumes that isolate and context creation is the most significant delay. For JavaScript in particular, the code still needs to be compiled, and any set-up code executed. In all but the most trivial applications, the compilation and initial execution will significantly outweigh the isolate and context creation time.
Despite the issues with v8 isolates and other equivalent web workers, I would not be surprised if they become more common than containers. There's a lot of buzz about them, and they leverage skills that web engineers already have. Additionally, many applications can be made more private if small pieces of execution can be distributed to a data custodian of some sort that can run small untrusted bits of code on the data and then apply aggregation or add noise to the result before sending it out.
I was trying to install PostHog server-side the other day with Remix, which was to be hosted on Workers, but I received several errors about buffers not being available.
This all said, isolates have been really cool to work with. Being able to run lightweight code at the edge has opened a unique set of opportunities like dynamically routing things quickly even for static sites.
Downsides? Sure. It can't run many popular languages.
Security? I'm not a security guy, but Cloudflare seems to have pretty good security.
Lastly, I'm a fan of what Cloudflare is building. They're darn close to getting me off of AWS.
That said, no, I don't believe V8 Isolates are the future of computing, and I think I'll explain why by comparing them to shared PHP hosting.
PHP became so big because it solved something that was very important for a lot of people at the time: how can I deploy and run my code without eating up a lot of RAM and CPU? If you deployed Python, your app would be running the whole Python stack just for you. You'd need resources for a Python interpreter, all the HTTP libraries, all your imports, and then your code. On the other hand, if you ran PHP, you'd be sharing the PHP interpreter and the PHP standard library (which was written in C, not in PHP), and the server would only need to run your code. If your code was basically `mysql_query($query); foreach ($results as $result) ...`, then the server was executing almost nothing of yours. All the hard work was in C, all the request/response handling was in C, and all of that code was shared with everyone else on the box, so there was almost no RAM usage.
V8 Isolates are similar in a lot of ways. You get to share the V8 runtime and pay its cost once and then just run the code needed for the user. It makes sharing space really good.
So how isn't this the future? Not to knock PHP, but PHP isn't dominating the world. Again, this isn't about knocking PHP, but it's not like people are going around saying "you'd be stupid to use anything other than PHP for those reasons." Likewise, V8 Isolates aren't going to dominate the world. Most of the time you're working at a place with services that get consistent traffic, you can put lots of endpoints into a service, and you just run that service. Long-running processes have their own advantages: well-JIT'd code, local caches, and startup costs you pay only once (even if those costs might be tiny in some cases). And I should note that there is work to give serverless some of those advantages as well; I believe Amazon has done work on freezing serverless instances so that a warm instance can handle a new request. But given the cost of a lot of serverless options, it seems expensive if you're getting consistent traffic.
Again, I think Cloudflare's move with Workers was brilliant. It offers 80% of what you need at low cost and without the kind of high-effort, high-resource setup that wouldn't make as much sense for them. I wish Cloudflare a ton of success with Workers; it's a great thing to try. I just don't think it's the future of computing. Really, it's too narrow to be the future of computing; nothing like that is going to be the future of computing.
If you're worried that you're missing the future of computing if you don't hop on it, don't be. There's no single future of computing and while V8 Isolates are great for Cloudflare's purpose, I don't think it provides advantages for a lot of workloads. Again, I think that is such a brilliant article and I think Cloudflare made a smart decision. I just don't think it's the future of computing.
These are good pros of isolates for serverless computing, IMHO.
When added to the "edge", it means they're (insanely) fast and obliterate the cold-start problem (which is a killer in chat, where you might not have retries), and as long as what you write can execute in 10-50ms (with ~30s for follow-on queries), it sometimes feels like cheating.
The same way Cloudflare "pushes" configuration to their network, they use a similar mechanism to push code to their edge nodes.
They have killer dev tooling too-- https://github.com/cloudflare/wrangler2
You *DON'T* need to think about regions ever-- just deploy to a lot of small regions instantly & it usually "just works" and is fast everywhere.
For extra credit, you also get access to coarse-grained location information from each "node" in their network that your users connect to (globally you can get access to coarse-grained local timezone, country, city, zip code, etc.): https://blog.cloudflare.com/location-based-personalization-u...
e.g. for chat, you could do something like this to prompt for location info: https://i.imgur.com/0qTt1Qd.gif
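A sketch of consuming those fields (the `request.cf` property names shown are real Workers fields; the `greet` helper and its fallback values are made up for illustration):

```javascript
// In a Worker, `request.cf` carries coarse-grained location data for
// the connecting client. This helper just formats a few of its fields;
// the fallback values are invented for the example.
function greet(cf = {}) {
  const city = cf.city ?? 'somewhere';
  const country = cf.country ?? 'XX';
  const tz = cf.timezone ?? 'UTC';
  return `Hello from ${city}, ${country} (${tz})`;
}

// Inside a Worker fetch handler you would call: greet(request.cf)
const banner = greet({ city: 'Lisbon', country: 'PT', timezone: 'Europe/Lisbon' });
console.log(banner);
```

No geolocation API call or extra latency is involved; the fields arrive attached to every request.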
Kenton Varda (https://twitter.com/KentonVarda), who was in charge of Protobuf and other projects, gave an overview tech talk; at 10:23 he speaks to isolates: https://youtu.be/HK04UxENH10?t=625
## Downsides encountered so far
- Not 1-1 replacement, think of your code like a highly-performant service worker (usual suspects: https://developer.mozilla.org/en-US/docs/Web/API/Service_Wor...)
- Many libraries (Axios, for instance) won't work since they call out to Node.js APIs (this might be a good thing; there are so many web APIs available that I was able to write a zero-dependency lib pretty easily). They're adding bespoke support for packages: https://blog.cloudflare.com/node-js-support-cloudflare-worke...
- There's only a tiny bit of customization required for Workers; however, there's a bit of platform risk
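For the service-worker comparison above: a minimal Workers-style handler is just web-standard Request in, Response out. The route and payload here are invented for illustration; Node 18+ ships the same Request/Response globals, which is how the last lines exercise it outside a Worker.

```javascript
// A minimal Workers-style handler: plain web-standard Request in,
// Response out. No Node-specific APIs involved.
async function handleRequest(request) {
  const url = new URL(request.url);
  if (url.pathname === '/hello') {
    return new Response(JSON.stringify({ hello: 'world' }), {
      headers: { 'content-type': 'application/json' },
    });
  }
  return new Response('not found', { status: 404 });
}

// In a Worker (module syntax) you would export this as:
//   export default { fetch: handleRequest }
// Node 18+ provides the same Request/Response globals, so we can try it here:
handleRequest(new Request('https://example.com/hello'))
  .then((res) => res.json())
  .then((body) => console.log(body));
```

If your code sticks to these web APIs, it ports between browsers, service workers, and Workers with little friction.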
If you haven't tried it before, it's definitely worthy of further examination.
Re: security, it seems like a pretty good model.