HACKER Q&A
📣 ankush38u

Has serverless matured enough for creating user facing APIs?


I'd love everyone's opinion on whether serverless functions (like AWS Lambda or GCP Cloud Functions) have reached the point where you can build scalable, user-facing APIs for apps with millions of DAU. Please also share your views on:

1. Database support available to Lambda (given the connection exhaustion issue)
2. The learning curve for new database technologies like DynamoDB
3. Vendor lock-in: some of these technologies, like DynamoDB and RDS Proxy, are only available from AWS
4. Do you consider cold starts a big enough issue to not move to serverless?
5. Your choice between traditional microservices (Docker on ECS, Cloud Run, or Kubernetes) vs. Lambda functions for user-facing APIs
6. Cost comparison of the two approaches at scale


  👤 andrew_ Accepted Answer ✓
It's been mature enough for at least four years.

1. Not an issue for me. Connection reuse support in Lambda is quite good.

2. NoSQL is a good skill to keep in your bucket anyhow. DynamoDB is a different approach, but many of the same tenets you'll find in other NoSQL databases still apply. Tools like dynamodb-toolbox [1] help greatly with the paradigm shift into Dynamo.

3. True. Ask yourself how much this matters. How likely is it that you'll need to support another cloud provider for a single product? In 20 years I've seen a platform provider switch exactly once. And DynamoDB can be exported easily.

4. Nope. But there are things to learn about cold starts: how to structure code, where to initialize things, which things should be singletons, etc. (see the sketch at the end of this answer).

5. Depends on the situation and needs. The right tool for the right job, if you will. I've written GraphQL servers running on Lambda that serve 300k users daily. I've also done the same using Fargate/ECS et al. Most of the decision revolves around complexity of execution and cost factors (e.g. the cost and complexity of frequently running Lambdas to process data versus a Fargate service). You're getting into software architecture now.

6. Again, depends on the situation. You'll need to start thinking about what individual services/components/things are doing, what they need, and how they need to run. Gather that information, and then start cost comparisons using the pricing tools the provider has.

[1] https://github.com/jeremydaly/dynamodb-toolbox
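
To make points 1, 2, and 4 concrete, here is a minimal sketch of the "initialize outside the handler" pattern. The table name and key shape are hypothetical, and it uses the plain AWS SDK v3 DocumentClient rather than dynamodb-toolbox itself, but the structure is the same.

```typescript
// Minimal sketch: client created once per container so warm invocations reuse it
// (and its keep-alive HTTP connections) instead of paying setup cost per request.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, GetCommand } from "@aws-sdk/lib-dynamodb";
import type { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from "aws-lambda";

// Initialized outside the handler: this runs once per cold start, not per invocation.
const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler = async (
  event: APIGatewayProxyEventV2
): Promise<APIGatewayProxyResultV2> => {
  const userId = event.pathParameters?.userId;
  if (!userId) {
    return { statusCode: 400, body: "missing userId" };
  }

  // Single-table style lookup; the key design is the part DynamoDB makes you think about.
  const result = await ddb.send(
    new GetCommand({
      TableName: process.env.TABLE_NAME, // assumed to be set by your deployment tooling
      Key: { pk: `USER#${userId}`, sk: "PROFILE" },
    })
  );

  return {
    statusCode: result.Item ? 200 : 404,
    body: JSON.stringify(result.Item ?? { message: "not found" }),
  };
};
```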


👤 SvenAl
Hey there ... I've been building serverless applications for four years now and went through a steep learning curve while the technology was very young. I think today you can easily build scalable serverless services using Lambda and other offerings. There are nuances around avoiding specific bottlenecks and working around them, but you'll probably only spot them once they happen.

The database is one of the most important choices you can make. Any database that requires a persistent TCP/IP connection and doesn't expose an HTTP API is effectively out of the picture: each concurrent Lambda container opens its own connection, so a traffic spike can exhaust the database's connection limit. See more: https://www.webiny.com/blog/using-aws-lambda-to-create-a-mon...
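
If you do need a relational database from Lambda, here is a hedged sketch of the usual mitigation: one connection per container, reused while warm, ideally pointed at RDS Proxy (which the OP mentioned) so total connections are multiplexed. The connection string and query are placeholders.

```typescript
// Sketch of the per-container connection pattern for a TCP database (PostgreSQL here).
// The core problem: every concurrent Lambda container holds its own connection, so
// 1,000 concurrent executions means ~1,000 connections unless something like RDS Proxy
// multiplexes them.
import { Pool } from "pg";

// One pool, max one connection, created once per container and reused across warm invocations.
const pool = new Pool({
  connectionString: process.env.DATABASE_URL, // ideally an RDS Proxy endpoint
  max: 1,
});

export const handler = async (event: { userId: string }) => {
  const { rows } = await pool.query("SELECT * FROM users WHERE id = $1", [event.userId]);
  return { statusCode: rows.length ? 200 : 404, body: JSON.stringify(rows[0] ?? null) };
};
```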

The learning curve for DynamoDB is steep, but nothing that a senior developer can't tackle in a few weeks. It's a worthy skill to have, especially if you work in the AWS ecosystem.

Vendor lock-in will always be there, but really, don't worry too much about it, especially in the beginning. You can protect yourself by abstracting your business logic and keeping a layer between that logic and the underlying serverless services. If you do need to move later, the move will still be a bit painful, but much less so.
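
A minimal sketch of what that layer can look like (all names are hypothetical): business logic depends on an interface, and only one small adapter knows about DynamoDB, so a later move means rewriting the adapter rather than the logic.

```typescript
// Hypothetical port/adapter sketch: business logic imports the interface;
// only the adapter knows it's DynamoDB underneath.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, GetCommand, PutCommand } from "@aws-sdk/lib-dynamodb";

interface User {
  id: string;
  email: string;
}

// The "layer": business logic only ever sees this interface.
interface UserStore {
  get(id: string): Promise<User | null>;
  save(user: User): Promise<void>;
}

// The only piece you rewrite if you ever leave DynamoDB.
class DynamoUserStore implements UserStore {
  private readonly ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
  constructor(private readonly tableName: string) {}

  async get(id: string): Promise<User | null> {
    const { Item } = await this.ddb.send(
      new GetCommand({ TableName: this.tableName, Key: { pk: `USER#${id}` } })
    );
    return (Item as User | undefined) ?? null;
  }

  async save(user: User): Promise<void> {
    await this.ddb.send(
      new PutCommand({ TableName: this.tableName, Item: { pk: `USER#${user.id}`, ...user } })
    );
  }
}

// Business logic stays provider-agnostic.
async function registerUser(store: UserStore, user: User): Promise<void> {
  if (await store.get(user.id)) throw new Error("user already exists");
  await store.save(user);
}
```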

Cold starts are not a problem at all if your bundle isn't overly big. If needed, you can always keep a few function instances warm with provisioned concurrency.
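
For reference, a hedged sketch of what "a few warm instances" can look like with the AWS CDK; the runtime, asset path, and count of 5 are illustrative assumptions, not a recommendation.

```typescript
// Sketch: attaching provisioned concurrency via the AWS CDK (v2).
import { Stack, StackProps } from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { Construct } from "constructs";

export class ApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const fn = new lambda.Function(this, "ApiFn", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset("dist"),
    });

    // Provisioned concurrency is attached to a version/alias, not the $LATEST function.
    new lambda.Alias(this, "LiveAlias", {
      aliasName: "live",
      version: fn.currentVersion,
      provisionedConcurrentExecutions: 5,
    });
  }
}
```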

For cost, benchmarks, and similar, check out this page: https://www.webiny.com/docs/performance-and-load-benchmark/i...

Disclaimer: I'm one of the authors behind Webiny, an enterprise serverless CMS. Happy to answer any additional questions. Hope this helps!


👤 verdverm
If you are building a full API, will you deploy it as a cohesive unit or as individual endpoints?

What advantages does serverless offer over an equivalent container-based scale-to-zero setup?

If you are serving 1M DAU, you probably aren't going to worry about cold starts. If cold starts are an issue, scale to one, or use container orchestration.

I've never seen the appeal of serverless. I want more control over the environment, like the language version used, and the container-based alternatives have the same capabilities, so I reach for them.

Serverless will generally be more expensive if you are constantly getting requests; it's better for rarely called endpoints. Are you worrying about scale, costs, and runtime properties too early in development?
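
To ground the "more expensive under constant load" point, a back-of-the-envelope comparison. Every rate and sizing number below is a placeholder assumption you'd replace with your provider's current pricing; the shape of the formulas is the point, not the figures.

```typescript
// Rough monthly cost shapes for steady traffic. All rates are assumed placeholders.
const REQUESTS_PER_SECOND = 200;
const AVG_DURATION_SEC = 0.05;  // 50 ms per invocation (assumed)
const MEMORY_GB = 0.5;          // 512 MB function (assumed)

const SECONDS_PER_MONTH = 60 * 60 * 24 * 30;
const requestsPerMonth = REQUESTS_PER_SECOND * SECONDS_PER_MONTH;

// Placeholder per-unit rates -- look these up, they change and vary by region.
const PRICE_PER_MILLION_REQUESTS = 0.2; // assumed
const PRICE_PER_GB_SECOND = 0.0000167;  // assumed
const PRICE_PER_VCPU_HOUR = 0.04;       // assumed
const PRICE_PER_GB_HOUR = 0.0045;       // assumed

// Serverless: you pay per request and per GB-second of execution time.
const serverlessMonthly =
  (requestsPerMonth / 1_000_000) * PRICE_PER_MILLION_REQUESTS +
  requestsPerMonth * AVG_DURATION_SEC * MEMORY_GB * PRICE_PER_GB_SECOND;

// Always-on container: you pay for provisioned vCPU/GB-hours whether or not requests arrive.
const CONTAINER_VCPU = 1;   // assumed sizing
const CONTAINER_GB = 2;     // assumed sizing
const HOURS_PER_MONTH = 24 * 30;
const containerMonthly =
  CONTAINER_VCPU * PRICE_PER_VCPU_HOUR * HOURS_PER_MONTH +
  CONTAINER_GB * PRICE_PER_GB_HOUR * HOURS_PER_MONTH;

console.log({ serverlessMonthly, containerMonthly });
// With steady traffic the per-invocation charges add up; with rare traffic the
// container's fixed monthly cost dominates instead -- which is the trade-off above.
```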