HACKER Q&A
📣 tdfirth

Does anyone else find the AWS Lambda developer experience frustrating?


Hey HN,

I've been using AWS lambda a bit recently, mostly as a way to glue together various bits and pieces.

Maybe I'm doing this wrong but does anyone else find the experience to be really frustrating?

I can unit test bits of the code just fine, but at some point I always end up stuck in a slow feedback loop where I deploy the code, do some manual invoking, go and dig through the logs in CloudWatch, add another print statement in my lambda... and so on.

What I want is to run the lambdas locally, ideally more than one, and then exercise them with streams of test events (perhaps captured from a real environment). It would be quite cool if I could define BDD style tests around them too.

Anyone have any suggestions or share my frustrations?

I have heard localstack is quite good although I haven't given it a go yet. Would that work for me? I did try SAM but I was a bit underwhelmed and I don't want to use a separate IaC tool for these.

Alternatively, do other FaaS providers solve this problem?

Thanks for any help.


  👤 jiggawatts Accepted Answer ✓
You've discovered what many other people have: The cloud is the new time-share mainframe.

Programming in the 1960s to 80s was like this too. You'd develop some program in isolation, unable to properly run it. You "submit" it to the system, and it would be scheduled to run along with other workloads. You'd get a printout of the results back hours later, or even tomorrow. Rinse and repeat.

This work loop is incredibly inefficient, and was replaced by development that happened entirely locally on a workstation. This dramatically tightened the edit-compile-debug loop, down to seconds or at most minutes. Productivity skyrocketed, and most enterprises shifted the majority of their workload away from mainframes.

Now, in the 2020s, mainframes are back! They're just called "the cloud" now, but not much of their essential nature has changed other than the vendor name.

The cloud, just like mainframes:

- Does not provide all-local workstations. The only full-fidelity platform is the shared server.

- Is closed source. Only Amazon provides AWS. Only Microsoft provides Azure. Only Google provides GCP. You can't peer into their source code, it is all proprietary and even secret.

- Has a poor debugging experience. Shared platforms can't generally allow "invasive" debugging for security reasons. Their sheer size and complexity mean that your visibility will always be limited. You'll never be able to get a stack trace that crosses into the internal calls of platform services like S3 or Lambda. Contrast this with typical debugging, where you can even trace into the OS kernel if you so choose.

- Is generally based on the "print the logs out" feedback mechanism, with all the usual issues of mainframes, such as hours-long delays.


👤 andreineculau
I've been in the AWS world for 4+ years now and my immediate feedback is: don't run any local emulators. None! Write unit tests for the internals and test your system end-to-end. I say that both because AWS services can have unpredictable behavior that you need to account for, and because local emulators are at best functional but in reality far from emulating the AWS world 1:1 (especially the unpredictable behaviors I mentioned). So instead, optimize for many local unit tests and many live end-to-end tests (which implies many deployments and parallel environments: prod, staging, dev, etc.).

When it comes to Lambdas, given the reasons above, there's only one thing that can improve the experience: a PROXY. Before I went on parental leave I had the idea of creating a proxy lambda which can be configured with an IP and port number. That IP and port point to your local dev environment. This way, when developing, you can instruct a live system to short-circuit and proxy calls to a local lambda available on the respective port. Trigger end-to-end tests by invoking AWS services that eventually call the proxy lambda; it forwards the same environment/context/input to your local lambda, and your local lambda's output flows back through the proxy lambda to the caller.


👤 iends
"Think of the history of data access strategies to come out of Microsoft. ODBC, RDO, DAO, ADO, OLEDB, now ADO.NET – All New! Are these technological imperatives? The result of an incompetent design group that needs to reinvent data access every goddamn year? (That’s probably it, actually.) But the end result is just cover fire. The competition has no choice but to spend all their time porting and keeping up, time that they can’t spend writing new features. Look closely at the software landscape. The companies that do well are the ones who rely least on big companies and don’t have to spend all their cycles catching up and reimplementing and fixing bugs that crop up only on Windows XP." - Fire And Motion, Joel on Software

In my own work with serverless I think about this a lot. In my day job, we built a new service that costs us a few dollars per month to run on Lambda; the non-serverless alternative would have been around $100 a month in AWS costs. However, the operational complexity is extremely high and we are also coupled to another service via Kinesis. For a billion dollar business, the trade-off doesn't seem worth it.

I wonder how much of serverless is just AWS firing at Google (and Heroku, DO, etc.) and other competitors. It certainly hasn't made my life as a developer easier. It's certainly much cheaper for some use cases, but the complexity of the system goes up very quickly, and you end up having to manage a lot of that complexity using an inferior tool like CloudFormation (or Terraform).


👤 anfrank
SST is an open-source framework worth checking out if you are looking for a lightning-fast local dev environment for serverless. https://github.com/serverless-stack/serverless-stack

I often see people adopt one of two development patterns:

1. Locally mock all the services that your Lambda function uses. Like API Gateway, SNS, SQS, etc. This is hard to do. If you are using a tool that mocks a specific service (like API Gateway), you won't be able to test a Lambda that's invoked by a different service (like SNS). On the other hand a service like LocalStack, that tries to mock a whole suite of services, is slow and the mocked services can be out of date.

2. Or, you'll need to deploy your changes to test them. Each deployment can take at least a minute. And repeatedly deploying to test a change really slows down the feedback loop.

SST lets you develop your Lambda functions locally while connecting to other AWS services, without needing to mock them locally. Behind the scenes, Lambda requests are streamed to your local machine, your local function runs, and the response is streamed back to Lambda. So you can iterate on your function code without redeploying.

Think of your Lambda functions as being live reloaded. You can see it in action here — https://youtu.be/hnTSTm5n11g

I’m the core maintainer of the project. And folks in our community are saying using SST feels like "working in the future" ;)

I'd love for you to give it a try. We are also part of YC W21.


👤 windowshopping
I find all of AWS frustrating. Their interfaces feel like they were designed by an 80s IBM analyst.

👤 fookyong
I was completely lost with AWS lambda until I discovered https://serverless.com - the fact that you haven't found them yourself suggests they need to do a much better job of marketing themselves!

I run my lambdas locally with a single command: serverless offline

If the lambda has an http endpoint, it creates the endpoint at localhost/ and I'm good to go; it even does live reloading as I edit my code.

If the lambda runs off AWS events, I can invoke the lambda locally with a command, and point it to a JSON file of a simulated AWS event. I get my local rails app to create these AWS event JSON files, so that I can test end to end locally. Works well for my purposes.

To deploy I just run: serverless deploy --stage production

Which sets up all the necessary additional services like API Gateway, cloudwatch etc.

I can't imagine using AWS lambda any other way.


👤 astashov
I recently migrated from Cloudflare Workers to AWS Lambda + Dynamo for my relatively large pet project.

It was surprisingly hard - the local development of lambdas is still very raw, documentation is scarce, and various small issues keep appearing here and there. I should probably write a blog post about how to set up a decent developer environment for AWS CDK and Lambdas, because there's not much on the Internet about it.

I set up the whole AWS infrastructure via AWS CDK. I have one TypeScript file that creates a stack with lambdas, DynamoDB tables, API gateways, S3 buckets, wires up Secrets Manager, etc. - all of that in 2 environments - dev and prod.

The CDK CLI can also generate a CloudFormation YAML file from the CDK app (via `cdk synth`), which can be fed into SAM. So, I generate a `template.yaml` CloudFormation file this way, then run SAM like `sam local start-api`, and it runs exactly the same lambda locally as in AWS, using that CloudFormation YAML file. SAM also supports live reload, so if you change any source files, it will automatically pick up those changes.

So, an okay-ish developer experience for lambdas is possible. There are caveats though:

* I couldn't figure out how to use "nameless" DynamoDB tables (i.e. when CDK assigns the name to them automatically), because if I omit a table name in the CDK template, then the local lambda and the lambda in AWS assume different names for some reason.

* Locally, binary outputs don't work. It ignores `isBase64Encoded` for some reason, and the lambda just returns Base64 output instead of binary.

* And the main problem - local lambdas are SLOW. It seems like it restarts some container or something under the hood on each API call, which adds 2-3 seconds to each API call. So, the calls that should be like 30ms, are actually 2 seconds now. This is super frustrating.


👤 css
I have not had difficulty writing code for AWS Lambda. I just write everything locally and run the code by invoking the entry point with the proper event and context dictionaries. For debugging, I just attach a debugger and run the entry point like any normal script.
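
A minimal sketch of that pattern (the handler name and event fields here are made up for illustration):

    # handler.py - a hypothetical Lambda entry point
    def lambda_handler(event, context):
        name = event.get("name", "world")
        return {"statusCode": 200, "body": f"hello {name}"}

    if __name__ == "__main__":
        # Run the entry point like any normal script; attach a debugger here if you want.
        fake_event = {"name": "local-test"}
        print(lambda_handler(fake_event, None))  # context can be None for simple handlers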

I don't know why you need to deploy to test Lambda code; you can hit remote AWS stuff from local. The AWS SDK picks up your local config's IAM role the same way as it picks up the one granted to the Lambda itself. You don't need localstack for this, just an account in your AWS organization with the right role attached.

Packaging dependencies was a little weird to figure out, but the docs [0] are very good. A simple shell script can do the packaging work; it's just a few lines to make the zip file.

[0]: https://docs.aws.amazon.com/lambda/latest/dg/python-package-...


👤 msluyter
Not a full solution, but when I was doing this I really got to love the awslogs utility:

https://github.com/jorgebastida/awslogs

It allows you to stream Cloudwatch logs from the command line, so you can grep them, save them to files, etc... (The web based Cloudwatch interface is terrible.)

Another suggestion is to try to modularize the core business logic in your lambda such that you separate the lambda-centric stuff from the rest of it. Obviously, though, if "the rest of it" is hitting other AWS services, you're going to hit the same testing roadblock.

Or you can try mocking, which may or may not provide much value for you. There's a Python library for that (moto), but it wasn't 100% up to date with AWS services/interfaces last I checked. Might be worth a try though.

https://github.com/spulec/moto
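
For what it's worth, a rough sketch of what a moto-based test looks like (bucket and key names are made up; newer moto releases expose a single mock_aws decorator instead of the per-service ones):

    import boto3
    from moto import mock_s3  # on recent moto versions: from moto import mock_aws

    @mock_s3
    def test_reads_object_from_s3():
        # Everything inside this test hits moto's in-memory fake, not real AWS.
        s3 = boto3.client("s3", region_name="us-east-1")
        s3.create_bucket(Bucket="my-test-bucket")
        s3.put_object(Bucket="my-test-bucket", Key="in.txt", Body=b"hello")

        body = s3.get_object(Bucket="my-test-bucket", Key="in.txt")["Body"].read()
        assert body == b"hello"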


👤 endgame
Suggestions:

1. If you are building APIs and using Lambda functions as targets from an API Gateway API, look into libraries like serverless-wsgi (Python) or wai-handler-hal (Haskell) that translate between API Gateway request/response payloads and some kind of ecosystem-native representation (see the sketch after this list). Then, as long as you're writing code where all state gets persisted outside of the request/response cycle, you can develop locally as if you were writing for a more normal deploy environment.

2. Look into the Lambda Runtime Interface Emulator ( https://github.com/aws/aws-lambda-runtime-interface-emulator... ). This lets you send invoke requests to a fake listener and locally test the lambda more easily. While the emulator is provided in the AWS container base images, you don't need to run it inside a container if you're deploying with zip files. (The AWS-provided container images automatically enable the emulator when not running in a Lambda runtime environment and use Docker for port remapping, which is nice but not at all required.)

3. Get really good at capturing all requests to external services, and mocking them out for local testing. Whether this is done with free monads, effect systems, or by routing everything through gateway classes will depend on your language and library choices.
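
As a rough sketch of option 1, assuming the Python serverless-wsgi package and a trivial Flask app (route and names are illustrative, not from the original comment):

    # One module for brevity; the Flask app and the Lambda shim are often split up.
    from flask import Flask, jsonify
    import serverless_wsgi  # pip install serverless-wsgi

    app = Flask(__name__)

    @app.route("/hello")
    def hello():
        return jsonify(message="hello")

    def handler(event, context):
        # Lambda entry point: translate the API Gateway proxy event to WSGI and back.
        return serverless_wsgi.handle_request(app, event, context)

    if __name__ == "__main__":
        # Local development: run it as an ordinary web server, no Lambda involved.
        app.run(port=5000)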


👤 dmlittle
If all you need is the ability to run a lambda function's code locally, you might be interested in docker-lambda[1]. I haven't really used localstack or SAM, but a couple of years ago when we needed to run some lambda functions locally for development, docker-lambda worked well enough.

[1] https://github.com/lambci/docker-lambda


👤 sonthonax
Don’t bother with Localstack, it’s a bunch of Docker images combined with Python’s Moto library.

AWS has Step Functions, which is a vaguely functional and declarative language for linking your lambdas together. It's a bit verbose and annoying to write, but at least it makes your lambda setup repeatable.

The state machine engine that actually runs your step functions is publicly available for local use.

https://docs.aws.amazon.com/step-functions/latest/dg/sfn-loc...


👤 _6pvr
I've found both AWS Lambda and all Azure serverless tooling to be extremely frustrating. "Just don't do local development" is the feedback I see a lot, which just seems bonkers to me. Like the orgs that don't run tests locally and just use CI to validate builds. If your workflow involves long periods of waiting to test even very basic functionality, it is (to me) totally broken.

We have containers which unify local environment and production environment. In my opinion (and experience), there aren't any more efficient shared mediums to work with.


👤 kaishiro
It sounds like you're committed to Lambda, but we've spent extensive time across both Lambda and Google Cloud Functions, and my off-the-cuff, somewhat biased opinion is that GCF gives a significantly better development experience than Lambda. I note biased because a majority of the work we do lives in RTDB/Firestore.

However, in response to your specific pain point - after initializing a new functions project you can immediately run 'npm run serve' and have local execution firing for any scaffolded endpoints. These - by default - will connect to a project-associated RTDB/Firestore instance without the need for additional service account/IAM configuration.

I've always enjoyed that this was a part of the mainline dev flow - and it removes a huge barrier of entry for me (mostly mental) to spin up little toy projects or test endpoints whenever the thought crosses my mind.


👤 sharms
I share these frustrations and don't have a great answer. When integrating N cloud services, at some point it gets incredibly hard to build out fast feedback / local environments. The best way I know how to deal with it is use less services and ensure the ones you stick with are simple.

👤 mozey
I write Lambda functions in golang. In dev I run the function as a local HTTP server. In prod I map a custom domain to the lambda; routing is done internally to the function. API Gateway is only used for the top-level route. This workflow is enabled by https://github.com/apex/gateway

In principle I try to avoid Amazon Web Services that lock me into the platform. So, for my dev stack I run a few other containers to give me the equivalent of the prod environment: RDS (MySQL or PostgreSQL), ES (OpenDistro), S3 (Minio), and SES. https://github.com/mozey/aws-local

Dev is easy because I can test everything locally, and the resulting code should be portable to any other cloud.

Obviously this approach might not be feasible if you choose to use AWS services that do not have equivalent self-hosted options.


👤 i_v
The best solution I've found for local testing is the Lambda Runtime Interface Emulator (RIE)[1]. It's basically a little Go wrapper[2] that runs your handler and makes it callable through an HTTP API. I considered writing a test harness around this for convenient integration testing but in my recent evaluation, I ended up avoiding Lambda (it's been really nice for small one-off tasks but was too expensive for the most recent and somewhat fringe use-case I considered using it for).

[1]: https://docs.aws.amazon.com/lambda/latest/dg/images-test.htm... [2]: https://github.com/aws/aws-lambda-runtime-interface-emulator


👤 jmb12686
That can be true if you are using tightly coupled event sources (like S3, SNS, etc.) where you need to inspect the incoming object. If you are building an HTTP / REST API, try to decouple the code as much as possible from the API Gateway invocation by using Express.js / aws-serverless-express to aid in local testing and debugging. Then testing and debugging locally becomes much easier.

👤 reilly3000
There are a few ways to improve the feedback loop, like SAM, Stackery, and Serverless Framework. That said, containers on Lambda (or GCP Cloud Run) are where it’s at. Just code your code, run local unit tests, and push the container up. The most important thing is to clearly define a mock request from your invocation source as your first step of the project. There is a vast difference between SQS, ALB and API Gateway events that need to be accounted for, and trip up people on deployment. I usually have those in my tests folder for every event the code will handle, and build from there.

Something like `aws lambda invoke --function-name my-fn --payload file://myevent.json out.json` goes a long way, because you get the full response object back.
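
A hedged sketch of that "one captured event per source" idea, assuming a pytest setup and a tests/events/ folder of captured JSON payloads (paths and handler name are hypothetical):

    import json
    import pathlib
    import pytest
    from handler import lambda_handler  # hypothetical module containing your handler

    # One captured event per invocation source, e.g. tests/events/sqs.json,
    # tests/events/alb.json, tests/events/apigw.json.
    EVENT_FILES = sorted(pathlib.Path("tests/events").glob("*.json"))

    @pytest.mark.parametrize("event_file", EVENT_FILES, ids=lambda p: p.stem)
    def test_handler_accepts_every_captured_event(event_file):
        event = json.loads(event_file.read_text())
        # The envelopes differ a lot between SQS, ALB and API Gateway;
        # the handler should cope with each one without blowing up.
        assert lambda_handler(event, None) is not None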


👤 fifthofeight
Have you tried using serverless/serverless-offline? https://www.serverless.com/

👤 swyx
> Alternatively, do other FaaS providers solve this problem?

I worked on Netlify Dev, which does offer a simulated lambda environment for testing locally: https://news.ycombinator.com/item?id=19615546

I was also quite disappointed to see that AWS Amplify did not offer a running dev server for lambdas; the best you could do was a local one-time manual invoke, and even that often failed (mismatched with the live lambda environment, where it succeeds).


👤 munns
In a direct reply to OP: there is definitely a lot that goes into learning serverless tech like AWS Lambda.

I lead Developer Advocacy for Serverless at AWS and we're constantly looking at ways to help streamline and simplify this process for you. You can find a lot of information on a site we've launched to roll up lots of resources: https://serverlessland.com .

Right now the best dev process is a mix of using local emulation via AWS SAM CLI (http://s12d.com/sam) or tools like localstack. But it's true that you'll never be able to completely mimic all the various services and their functionality.

Would love to know more of what left you feeling underwhelmed by SAM. It has almost all the functionality of CloudFormation, which is quite a lot.

Thanks, - Chris Munns - Lead of Serverless DA @ AWS


👤 night-fall
I don't have any suggestions but can relate on the frustration part.

We recently started using AWS Athena and I was shocked to find that no local test environment is offered by Amazon. Eventually I built a test container that uses Hive and Presto and allows my integration tests to deploy their test data to the Hive filesystem for Presto to access. Unfortunately, Presto deviates from Athena, so I can only approximate the production environment during development, which leads to much lower confidence than we would have had otherwise.

Essentially, after all the tests are green, I'm still not sure if my code is production ready. Which is bewildering to me, we made all this progress in the past decade with continuous deployment, this just seems like a large step back.

It's not like it's that hard to fix for Amazon, they could at least offer local test containers for their cloud offerings. They just choose not to.


👤 ldoughty
Probably lost in all the replies, but you can use AWS SAM local[1] to run a lambda locally.

There's also a Docker container to mock DynamoDB locally.

When I'm just working on logic... unit tests are best, but you could also keep the lambda invocation entry point as lambda_handler, then write your own main() function to do some setup (like assuming the same role as the lambda execution role and setting up env vars)... then end by invoking your lambda_handler locally... Basically main() is your pathway to running the lambda locally, with only the minimum code required to "fake" being an AWS Lambda.

1: https://docs.aws.amazon.com/serverless-application-model/lat...
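
A loose sketch of that main() idea, assuming boto3 and a Python handler (the role ARN, env var names, and event are placeholders):

    import os
    import boto3
    from handler import lambda_handler  # hypothetical module containing your handler

    def main():
        # Assume the same role the Lambda runs under, so local calls see the same permissions.
        creds = boto3.client("sts").assume_role(
            RoleArn="arn:aws:iam::123456789012:role/my-lambda-execution-role",
            RoleSessionName="local-dev",
        )["Credentials"]
        os.environ["AWS_ACCESS_KEY_ID"] = creds["AccessKeyId"]
        os.environ["AWS_SECRET_ACCESS_KEY"] = creds["SecretAccessKey"]
        os.environ["AWS_SESSION_TOKEN"] = creds["SessionToken"]
        os.environ["TABLE_NAME"] = "my-dev-table"  # whatever env vars the function expects

        fake_event = {"detail": {"action": "ping"}}  # minimal stand-in event
        print(lambda_handler(fake_event, None))

    if __name__ == "__main__":
        main()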


👤 nindalf
> do other FaaS providers solve this problem?

Cloudflare's Workers have a good dev experience. It takes a second to build and locally preview a function. It live-reloads when you make a change. Later, it takes a second to build and deploy. On the flip side, Workers can't run arbitrary Docker containers. Your code has to be JS/TS or WebAssembly.

I'm not affiliated with Cloudflare, just a satisfied customer.


👤 waterside81
You're not alone. I've had this exact experience, so much so that we stopped using AWS Lambda and switched to using the various Amazon API calls individually and gluing together a pipeline using a job queue.

It's just so slow to debug and develop on AWS Lambda


👤 atraac
We run Azure Functions with C#. It's not perfect, but it's very easy to test, runs with a single click locally, deploys fairly easily to Azure as well (though it has some quirks), and can use 99% of our codebase as a dependency without any issues.

👤 jollybean
Yes, AWS serverless is essentially half-baked for this reason. It would be considerably more powerful - to the point of changing how we make apps - if there were a local environment provided by them that could really make testing and deployment seamless. Being able to see a debug view of 'live' lambdas in your own CLI etc. would be ideal.

👤 terom
AWS Lambda is a great replacement for cronjobs when gluing various AWS infrastructure components together, but I would hesitate to use it for an actual application.

Pre-packaged lambda functions are a pain to deal with if you're standardizing on something like terraform for all AWS infra provisioning - you need to jump through extra hoops to repackage the lambda functions into something you can manage.

Replacing the zip deploys with Docker images is a good first step. I just wish they'd support inter-account ECR -> Lambda usage - ATM you still need to replicate the images to each account separately.


👤 TruthWillHurt
We're using it "the right way" with SAM, but still very frustrating.

We had to detect if running locally or deployed in our code because Cognito auth was giving us trouble when running locally, and Step Functions can only be tested deployed.

CloudWatch logs take several minutes to appear, and funcs take some time to package and deploy (even with caching) - making the feedback loop very slow.

I personally think the best way is to develop serverless funcs on platform - using the built in editor. You get logs instantly. But can't get devs to give up their IDEs...


👤 redhale
I completely agree, especially having just come from working with Azure Functions. This likely also applies to other runtimes, but definitely for dotnet, you can run all Functions in a solution locally via the default F5 debugging feature in Visual Studio. Stellar developer experience.

👤 garethmcc
Hi tdfirth. I am with the team at Serverless, Inc, creators of the Serverless Framework. I can totally understand the frustration, especially with new development architectures such as serverless.

Over the years of building serverless applications, some unique differences have emerged compared to traditional applications that you build entirely locally and then "throw over the wall" to be run on a server somewhere. And for this reason there are many teams and organisations working to make the developer experience better, including making deployments to the cloud to test easier and faster. Unlike mainframes of old, new code can be deployed in seconds, and logs are available in less than that when testing invocations of your Lambda functions. For example, the Serverless Framework has a command to deploy just the code changes of a Lambda and does so in seconds. There are other tools that do similar.

It's a really broad topic to get into, but I'm more than happy to do so if you wish, or to help answer any specific questions you have. If it's easier to ask privately, feel free to do so.

And I would just like to leave off saying that I am a proponent of serverless development not because I happen to work for a company bearing the name, but because I sincerely think it's the future. They just happened to spot that and asked me to join them.


👤 rodriguezrps
I actually find it extremely easy to develop with lambdas; there are many options for development, testing, and debugging. Things like Serverless, LocalStack, and even Amplify can help you with local testing and local development.

For debugging in the cloud, besides CloudWatch you can use X-Ray and get a full picture of your services and segments.

And if this isn't enough and you really need to test in the cloud, then use the CLI or Serverless to upload your lambda (a few seconds tops),

invoke the lambda with test data via the CLI: https://docs.aws.amazon.com/cli/latest/reference/lambda/invo...

and tail the logs via AWS CLI.

https://awscli.amazonaws.com/v2/documentation/api/latest/ref...

I mean, those steps aren't much different to running tests locally, just create a script for it =)


👤 penguin_linux
Try knative on k8s.

1. There is some great tooling out there to shorten the dev loop on k8s. I use tools like Skaffold or Apache Camel K, but Telepresence looks interesting too.

2. Minikube solves the "Localstack" issue. It's an official project that runs the exact infra, not just a replica of the APIs.

3. Logging is far more straightforward than CloudWatch.

4. It runs the same everywhere.


👤 iainctduncan
Yes, I tried it once for a small job for a client, thinking this was so small that it would be really nice not to deal with servers or anything. Boy was I wrong. NEVER AGAIN. God what an unpleasant workflow. Even on a tiny job the disruption to my usual productive code workflow ate up way more time than standing up an EC2 instance and installing stuff. ugh.

👤 tim333
I'm reminded of a suggestion from someone else a few days ago, if you don't mind PHP:

>https://bref.sh/ -- it allows running PHP apps on AWS Lambda effortlessly. With all the serverless craze, it's a very important piece of the ecosystem. After writing PHP apps for two decades, I am absolutely stunned how easy it is. Amazon blogged about it at https://aws.amazon.com/blogs/compute/the-serverless-lamp-sta...

(comment https://news.ycombinator.com/item?id=26827819)


👤 fiznool
If you are ok using a ‘higher level’ serverless provider, vercel offers first class support for running and debugging serverless functions locally, in an environment which closely mirrors their production environment (which I believe is mostly AWS under the hood).

See the following for more information:

https://vercel.com/docs/serverless-functions/introduction#lo...

I tried a few solutions earlier in the year for a side project (vercel, netlify functions, Cloudflare workers, begin) and found vercel to be the outstanding choice for developer experience, features (wildcard DNS for example is fully supported on their free plan) and support.


👤 _pdp_
We operate a serverless infrastructure (lambda + dynamodb) at secapps.com with a super small team, and we could not be any happier. Mind you, currently, we have over 30 services (not microservices but products we sell to clients) and well over 30 applications.

While you should take my comment with a grain of salt, for me it is a live testament that the serverless development paradigm works.

That being said, I also believe it depends on how exactly you work and develop products/features. That is probably more important than the technological approach used. We have a very strict mental model about lambdas which does not quite fit the mental model used for microservices.


👤 heavyset_go
Yes. I only use Lambda when forced to by a client. 99% of the time it's a poor fit for their use case, anyway, but they've still got to have it to make someone happy.

I feel as though there are similar amounts of friction across AWS offerings that make them unpleasant to work with in general, too. I'm numb to it now, but imagine my surprise when I actually went to use it for the first time after being hyped up about AWS.

For these reasons, and because you pay a significant premium for network traffic on AWS, I never use AWS for personal projects and I'm very happy with that choice.


👤 granttimmerman
(Disclaimer: I work on Google Cloud Functions)

With Google Cloud Functions, you can run your function locally using the Functions Framework libraries (available in all Cloud Functions languages) [0]. These libraries are used in production Cloud Functions, so you're not emulating the runtime. The feedback loop / DevX is pretty simple: just one command runs the function locally, like a library, in every language.

We have some guides on local development here [1]. Our samples have system tests that use the FF.

You also can bundle your function in a container after building it with `pack build` [2]. This provides an exact container environment with things like environment variables and system packages.

We're working on improving the eventing DevX, but we have some `curl` examples here [3]. Connecting multiple functions together elegantly is still something to be desired though.

[0] https://github.com/GoogleCloudPlatform/functions-framework

[1] https://cloud.google.com/functions/docs/running/overview

[2] https://cloud.google.com/functions/docs/building/pack

[3] https://cloud.google.com/functions/docs/running/calling
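
For reference, a minimal sketch of what an HTTP function looks like with the Python Functions Framework (the function name is arbitrary); the same file runs locally via `functions-framework --target=hello_http` and deploys unchanged:

    # main.py
    def hello_http(request):
        # `request` is a Flask request object; the return value becomes the HTTP response.
        name = request.args.get("name", "world")
        return f"Hello, {name}!"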


👤 pensatoio
Here are a few libraries that make testing lambdas (and simulating adjacent infrastructure like queues) very easy in tests.

localstack

mintel/pytest-localstack

charlieparkes/boto3-fixtures


👤 gfxgirl
I found it extremely frustrating, in that what I expected/hoped for was that I'd write a function that got called with some object containing the headers and body of an HTTP request, to which I'd output a response.

Instead what I got was a (massively over engineered?) front end (the API Gateway) where the default is to add 10s or 100s of rules to try to have Amazon parse the request. As a JS dev I can only guess this was designed by Java programmers where strict typing might suggest such a convoluted solution?

It took a while to figure out how to turn off all that stuff and just pass the request to my lambda JS based function.

I feel like API Gateway should be doing absolutely nothing except routing requests to the appropriate service. That should be the default and all that parsing should be a separate service. In other words it should be

API Gateway (no parsing) -> lambda

and if you want parsing then you'd

API Gateway -> Parsing service -> lambda

Or for that matter, just make the parsing a library you can use if you want.

OTOH I'm not a network person, so maybe there are valid reasons for API Gateway to have such a giant configuration.


👤 kerryritter
I typically write my Lambda functions as Nest apps, develop locally, and then deploy with the aws-serverless-express (or fastify) NPM package, and I enable the ability to trigger kinesis/s3/etc. events locally as well. The fact that my app gets deployed to Lambda doesn't really have any impact on my developer experience. What stops you from working locally?

👤 dragonwriter
> What I want is to run the lambdas locally, ideally more than one, and then exercise them with streams of test events (perhaps captured from a real environment).

lambci has dockerized near perfect replicas of the lambda runtime environments, so as long as you aren’t relying on VPC endpoints tied to specific lambdas, or things like that, you should be able to, even without SAM.

But you can also: (1) deploy separate test copies of lambdas and fire test events at them the same way you would locally, which has full fidelity and no third-party deps. (2) throw test events at your lambda entry points locally and capture and validate what they return.

I do both regularly with python lambdas (I find that the latter is adequate for testing lambdas and their downstream interactions, but the former is useful for validating interactions upstream, particularly—for most of the lambdas I work with—with APIGateway and some auth stuff we have hooked in there.)


👤 purerandomness
Your sentiment is beautifully captured in the essay "The Eternal Mainframe" [1] by Rudolf Winestock.

It gets reposted every year or so here on HN.

[1] http://www.winestockwebdesign.com/Essays/Eternal_Mainframe.h... (2013)


👤 philliphaydon
I’ve got about 300 c# lambdas and dont have any development friction.

Curious what sort of errors OP is running into that cause friction.


👤 mjgs
I had exactly the same experience you describe. I’ve developed serverless code on Netlify and AWS Lambda.

I managed to get what I needed working but it was laborious compared to typical local server development.

I feel like most of the tools were written by devops folks because it's quite easy to run these deployments in production, but the dev experience is lousy. The basics for developers are being able to run the code locally and run everything in a debugger. And that needs to include some sort of way to run services like dbs, queues, etc. That's just not possible as far as I can tell.

They say that cloud is somebody else’s server, well serverless is just somebody else’s monolith, and you don’t get a copy of it to develop on.

Not sure if it helps but here’s a demo IoT project I developed, has unit and integrations tests, it’s simple to understand, deployed using serverless:

https://github.com/mjgs/serverless-books-api

A few things I found useful:

netlify-cli - CLI tool for interacting with the Netlify API, great for deploying to Netlify when you aren’t building your site on their infrastructure.

netlify-lambda - CLI tool for building and running your Netlify functions locally.

serverless framework - A CLI and web tool to build and deploy serverless architectures; includes support for Amazon, Azure, and Google, among others. Makes it easy to configure and provision all the components of your Cloud Native application.

These were taken from a blog post I wrote which some might find relevant:

Cloud Native web application development: what's it all about?

https://blog.markjgsmith.com/2021/01/12/cloud-native-web-app...


👤 pc86
Is AWS a requirement? I've just started a project that is roughly... 75% Azure Functions and so far it's been a dream. Between the triggers and the outputs, and being able to run functions locally in VSCode I feel like it's the perfect way to "glue together various bits and pieces" as you said. Obviously a lot of the triggers are Azure-specific but between HTTP and Service Bus Queue/Service Bus Topic you should be able to do just about anything you need.

On the plus side of having a lot of the pieces in Azure, if you're lucky enough to have an input and output match (e.g. you want to persist SB Topic messages into a SQL API CosmosDB instance) it's a half dozen lines of code and you're done.


👤 kennu
A big part of this is the switch from traditional programming (just write code) to cloud based programming (combine cloud resources together like "lego blocks"; code is just one block among many others).

This reduces the importance of writing code and shifts focus into new development tools like AWS CDK. More and more of development work is about defining Step Functions logic and setting up various cloud events that hook all the pieces together.

It's not trivial to replicate ALL the hundreds of cloud services at home. So we accept that development is done in batches and deployment takes a while. It could be better, of course, but it's not only about Lambda any more. It's about CloudFormation in general.


👤 Niksko
People shove too much complexity into a lambda. It seems like a great idea to have this infinitely scaling lambda with no infrastructure to run, but what you're trading off is developer experience.

It sounds like your lambdas are calling other lambdas. If this is true, rethink your architecture.

If you have a thin, minimal code wrapper that translates your lambda invocation event into some sort of function invocation, it shouldn't be hard to test. Your external dependencies should be modeled as interfaces that can have fake implementations injected that either return pre-canned responses or implement things in memory.

Without more info about what you're doing, it makes it really hard to know what specifically is stopping you.
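
A rough sketch of that shape in Python (all names, the table, and the event fields are hypothetical):

    from typing import Protocol
    import boto3

    class UserStore(Protocol):
        def get_name(self, user_id: str) -> str: ...

    class DynamoUserStore:
        """Real implementation; the only place that knows about DynamoDB."""
        def __init__(self, table_name: str):
            self.table = boto3.resource("dynamodb").Table(table_name)
        def get_name(self, user_id: str) -> str:
            item = self.table.get_item(Key={"pk": user_id}).get("Item") or {}
            return item.get("name", "unknown")

    class FakeUserStore:
        """In-memory stand-in injected by tests."""
        def __init__(self, data: dict):
            self.data = data
        def get_name(self, user_id: str) -> str:
            return self.data.get(user_id, "unknown")

    def greet(event: dict, store: UserStore) -> dict:
        # Business logic: knows nothing about the Lambda event envelope or DynamoDB.
        return {"statusCode": 200, "body": "hi " + store.get_name(event["user_id"])}

    def lambda_handler(event, context):
        # Thin wrapper that wires up the real dependency.
        return greet(event, DynamoUserStore("users"))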


👤 joshuanapoli
My company uses separate stages (through serverless framework) for each branch of feature development. CI uses these to test, running within AWS infrastructure. It works well for isolating test environments. We lean more on unit testing to debug the service with external dependencies mocked. We try to avoid situations where we are adding log statements one-by-one; CloudFormation deployments take several minutes, so iterating small changes is indeed frustrating. We thought about localstack, but haven't tried it. I'm worried that it will increase our IaC complexity; everything will have to deploy to both AWS and also localstack.

👤 richardanaya
Have you considered just using a lambda-to-HTTP-server adapter? That way you can write a normal HTTP server and test against that locally as usual; then when you publish to a lambda, the adapter turns AWS Lambda JSON events into HTTP requests.

👤 ChicagoDave
What I’m reading from this thread:

- cloud native is weird/hard

- old ways were better!

- misconceptions about boundaries and aspects

Remember folks. When choosing technology, your selections always depend on a list of concerns, including:

- cost

- complexity

- developer audience

- support audience

- change management

- vendor relations

- mapping out dev strategies

- getting consensus from team

No matter what tech is used, you still need to go through planning and design.

More often than not, cloud enables us to hack first. That is never a good strategy.

There’s nothing inherently bad about cloud native. It’s just different and _can_ be highly beneficial to outcomes.


👤 klohto
I personally like SAM, but there is https://github.com/dherault/serverless-offline as well

👤 gorbypark
I am working on a side project (more of a way to learn go than anything) that makes “serverless” functions as simple to deploy as possible. Basically, a developer posts some JavaScript to an API endpoint, and they are returned a function id and a management token. The function can be run at URL.com/functionid and to delete or modify the function the management token must be used. It all runs in a V8 isolate (ala Cloudflare Workers). It’s a self contained go binary, so it should be super easy to run locally, too.

👤 mradzikowski
You can run your code locally and still interact with real AWS services. For example, using Serverless Framework, you can do "sls invoke local", and your calls to DynamoDB etc. will work fine.

The problematic bit is the input event. You can't call a real API Gateway to trigger your Lambda code locally. You need to write/copy the request as JSON and provide it as input. It's more annoying with triggers like Kinesis Streams, but doable.

I personally don't like Localstack, it's safer to use real services.


👤 kadirmalak
Maybe someone else has already mentioned this but anyway... The things I do:

- The Lambda event passed to your lambda_handler is just a big JSON object; grab it, copy it somewhere, and then you can test your code locally. ( lambda_handler(your_event, None) )

- If there are some things you cannot do locally, use some flags to handle them. (You may use MessageGroupId from the lambda event, for example.)

- Prepare a deploy script and use it. (aws lambda update-function-code ...)

👤 mfriesen
I feel your pain. I developed an event handler and messaging buss framework that allows me to run end-to-end workflows in my IDE. The framework is smart enough to know if it is running in the cloud or locally.

The other challenge the framework solves is that payload wrappers are different if there is an SQS component in front of the lambda, or an SNS in front of that. The execution code of the lambda should be completely agnostic to how the payload arrives.


👤 earthboundkid
For developing API Gateway web servers on Lambda in Go, I use an adaptor, so it becomes a standard HTTP request, and I can just write a web service like any other. https://github.com/carlmjohnson/gateway For event based stuff… I have always avoided it for exactly this reason. Seems completely untestable and a pain to get working right.

👤 yankexe
Maybe you should give OpenFaaS a spin. You can run it completely locally on your machine with Docker. It's mostly based on Kubernetes. But you can also test it on EC2 or any virtual machine using faasd.

https://docs.openfaas.com/ https://github.com/openfaas/faasd


👤 haram_masala
One way to make your life easier is to use a JetBrains IDE with the AWS Toolkit extension. Many of the pain points you’re experiencing are greatly eased by that.

👤 nsypteras
My co-founder and I built https://napkin.io for this reason. We thought the experience of onboarding and stitching together services with AWS was too complex. With Napkin you can deploy a Python API endpoint from our landing page with a single button click.

We're actively developing this so any feedback of how we could make this better for your use case is welcome!


👤 tlarkworthy
Part of my frustration with FaaS led me to develop serverless cells, where there is no deploy step.

https://observablehq.com/@endpointservices/serverless-cells

Source code is loaded dynamically each invocation.

Still, there's some work to do; it's still annoying to program. The next hope for making them easy to use is piping the runtime to Chrome DevTools.


👤 srikanthkrish
Isn’t that why aws created docker packaging for lambdas or am I missing something.

The zip file packaging and debugging in the cloud has always frustrated me as well. But I like the Docker packaging; there's still a long way to go, but at least I can somewhat mock some input and output triggers.

I have no idea how people managed the zip file uploads especially for python projects, it’s an insane waste of time uploading and debugging on the cloud.


👤 stadium
I haven't used it yet, but CDK seems to make it easy to build and tear down infrastructure for development and testing. My strongest language is Python and I like the idea of less context switching and being able to spin up an AWS account just for development and delete it after prod deployment.

For lambda-based APIs, I am curious how it compares to CloudFormation and Terraform.


👤 renewiltord
I write Go lambdas and run HandleFunc normally as an AWS user who has assumed the appropriate role.

Then when I'm done, I push `lambda.Start(HandleFunc)`

Frequently, though, when I want to change some code I already have, I just run through the edit-build-push-invoke loop. It isn't optimal but with fast Internet it is not much slower than a Scala build and I can iterate with that.


👤 infinityplus1
Firebase Functions has a local emulator and a unit testing module. The emulator is quite good for testing your code locally quickly.

👤 timbaboon
I am a fan of localstack. I know it isn’t perfect and I hear a lot of the complaints that others have posted. However, it covered everything I’ve had to use so far, mostly lambda, rds, sqs, sns, iam. Sure, won’t work for everyone, and there might be easier ways for me to test, but it fits my needs and has saved me so much time and money.

👤 oldnbitter
Not sure if anyone mentioned this, but I write a good number of Lambdas by adding the boilerplate for them to run through Express.js for local development. This makes it possible to interactively debug with Visual Studio Code pretty easily. It requires a little initial startup cost for a project to wire them up, but works great afterwards.

👤 19h
We’re testing lambdas by running it with Node and enabling the local testing code with an environment variable. Internally we’re just calling the lambda handler with fake parameters to enable bdd like testing. Entire test suite runs before deploy, always worked flawlessly for us.

👤 PaywallBuster
Use serverless framework to deploy your lambdas, use serverless-offline to launch them locally.

👤 meursault
Have you tried the recently added container support? There's a component provided called the Lambda Runtime Interface Emulator that exposes an API for the locally running container. In my experience this is miles better than the original approach.

👤 pier25
You should check out Vercel.

It uses AWS Lambda underneath but it makes the dev experience just awesome.


👤 tenken
https://github.com/localstack/localstack

Why not try this toolchain to locally build/test your serverless application?


👤 tolidano
Depending on the language you're using, one thing that works is:

1) Use a framework to handle the nitty gritty (like Serverless, Zappa, or similar).

2) Definitely use "devDependencies": { "serverless-python-requirements": "^4.3.0" } for Python.

3) Don't be afraid to use Serverless 1.84 instead of 2.xx.

4) I use pipenv (but poetry is probably fine too) and these dev packages: black, ipython, mypy, moto, pytest, pytest-cov, python-lambda-local.

The last one lets me drop this in my Makefile:

    try: pipenv run python-lambda-local -t 60 -f identify handler.py event.json

event.json is a recorded event (or a carefully crafted one).

👤 mdo123
There are some plugins for PyCharm (and possibly other jetbrains editors?) that let you see all your lambdas and invoke them right from the IDE.

👤 popotamonga
With C# lambdas you just click 'play' locally and you can invoke, debug step by step, etc.; it could not be any easier. Which language are you using?

👤 juliopy
Very! serverless-framework helps a little bit, but overall it's missing a lot of what we are used to in the development cycle.

👤 RocketSyntax
I was blown away that the service has been around for so long yet the process of installing packages was so manual

👤 Karuhanga
You can start a local lambda server and invoke the functions without a deployment.

👤 simlevesque
Why can't you test the whole lambda? Just invoke the handler in the test.

👤 k__
There are many monitoring/observability services that help.

- Dashbird

- Thundra

- Lumigo

Cloud9 has direct Lambda integration, which helps too.


👤 brianleroux
begin.com helps address a lot of the issues here. If you want to deploy your own, arc.codes is completely open source, works locally, and generates standard AWS SAM-flavored CloudFormation.

👤 tonfreed
Lambda works fantastically when you're working in AWS' little box.

Processing kinesis streams, monitoring and munging S3 objects, auditing dynamo tables, etc. Web lambdas are not worth it imo, very complex and difficult to get right.


👤 raymond_goo
I swear on glitch.com!

👤 philip142au
You can use serverless framework

👤 edjgeek
Full disclosure: I work for AWS; in fact I work with the Serverless team as a Developer Advocate. This is a great thread and I understand some of the pain points that have been talked about. Serverless changes the model for how we develop and test code. There is a desire to have everything local, but it is tough to build and maintain local emulators for all services. With that in mind, I encourage you to change the mindset from bringing the cloud to the developer to bringing the developer to the cloud.

When building serverless applications, the most tested and iterated upon part of the application is our code, which usually resides in a Lambda function. Testing Lambda functions breaks down into two angles: 1) invocation: testing services invoking a Lambda function, and 2) action: what the Lambda function is doing. This is the only part of the application that should be tested locally through local emulation. The rest of the application is best tested in the cloud.

IMHO the best way to test a Lambda function locally is with AWS SAM: https://aws.amazon.com/serverless/sam/

For testing invocation:

A Lambda function can only be invoked through the AWS Lambda service. SAM has three ways to emulate the Lambda service:

1) invoke - locally invoke the Lambda function one time [https://docs.aws.amazon.com/serverless-application-model/lat...]. This functionality is helpful if you want to mock invoking a Lambda function from a service like S3, SNS, SQS, etc. Simply add an event. To create the proper event structure, SAM provides a command called generate-event. [https://docs.aws.amazon.com/serverless-application-model/lat...]

2) start-lambda - start an emulator of the Lambda service that can be reached via the SDK or AWS CLI [https://docs.aws.amazon.com/serverless-application-model/lat...]

3) start-api - start the Lambda service emulator with a basic API Gateway emulator wrapped around it. This creates a local endpoint for each Lambda function that uses an API GW event source [https://docs.aws.amazon.com/serverless-application-model/lat...]
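
As a small hedged sketch of the start-lambda route (port 3001 is SAM's default; the function name must match the logical ID in your template, and boto3 still wants some credentials configured even though the endpoint is local):

    import json
    import boto3

    # `sam local start-lambda` exposes a local endpoint that speaks the Lambda API.
    client = boto3.client("lambda", endpoint_url="http://127.0.0.1:3001",
                          region_name="us-east-1")

    resp = client.invoke(
        FunctionName="HelloWorldFunction",               # logical ID from the SAM template
        Payload=json.dumps({"name": "local"}).encode(),  # any test event
    )
    print(json.loads(resp["Payload"].read()))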

For testing action:

Using one of the above commands will invoke a Lambda function. The Lambda function will run locally and produce logs, and you can step through the code in an IDE like AWS Cloud9 or VS Code. The Lambda function can also call out to services like DynamoDB, SQS, SNS, etc. that reside in the cloud. Once the Lambda function is working as expected locally, it's time to deploy to a development environment and run E2E tests.

One other tool I would suggest is SAM Logs. SAM Logs can output logs for a specific Lambda function from CloudWatch to your terminal. This is a great way to debug async Lambda functions in the cloud.

I encourage you to visit https://serverlessland.com where we are constantly adding content to help developers with Serverless on AWS. I also have a series of SAM videos at https://s12d.com/sws. Additionally, we host Serverless Office Hours every Tuesday: https://twitch.tv/aws or https://youtube.com/serverlessland. During this time we answer any and all serverless related questions.

Having said all this. Our team is continuing to work towards making the development experience better. Threads like this are critical to our understanding of developer needs and we read them and take them to heart. If you would like to have a longer conversation please reach out to me at ericdj@amazon.com or @edjgeek on Twitter.


👤 chriswarbo
I tend to start with a type-checked language (usually Scala) since the AWS libraries have a lot more structure than the arbitrary dicts-of-lists used in JS, Python, etc. (I've written AWS code in those too, but it's not my preference). One annoyance is that they can take a while to 'spin up', compared to "slow" languages like Python. Ideally Lambda would support (with native SDKs) some well-typed languages which don't rely on runtime behemoths like the JVM (e.g. Rust, Haskell, StandardML, etc.)

I try to use the AWS 'resource API' rather than 'service API', since it's usually easier to understand. The latter can do anything, but deals with fiddly 'Request' and 'Response' values; the former isn't as expansive, but provides high-level things like 'Tables', 'Buckets', etc.

I wrap all calls to AWS in a failure mechanism, and check for nulls immediately. I usually use Scala's `Try[T]` type, which is essentially `Either[Exception, T]`. Note that there are some cases where null is expected, like an empty DynamoDB.get result. Those should be turned into `Try[Option[T]]` values immediately.

I'll aggressively simplify the interface that an application depends on. For example, a Lambda might be completely independent of DynamoDB except for a single function like:

    put: Row => Try[Unit]
Even things which are more complicated, like range queries with conditional bells and whistles, etc., can be hidden behind reasonably simple interfaces. In particular, the application logic should not instantiate AWS clients, parse results, etc. That should all be handled separately.

I'll usually wrap these simple type signatures in an interface, with an AWS-backed implementation and a stub implementation for testing (usually little more than a HashMap). These stubs can usually be put in a shared library and re-used across projects.

My current approach to dependency injection is to dynamically bind the overridable part (e.g. using a scala.util.DynamicVariable). This can be hidden behind a nicer API. The "real" AWS-backed version is bound by default; usually wrapped in a `Try` or `Future`, to prevent failures from affecting anything else.

All business/application logic is written against this nice, simple API. Tests can re-bind whatever they need, e.g. swapping out the UserTable with a HashMap stub.

I tend to use property-based tests, like ScalaCheck, since they're good at finding edge-cases, and don't require us to invent test data by hand.

For each project I'll usually write a "healthcheck" lambda. This returns various information about the system, e.g. counting database rows, last-accessed times, etc. as well as performing integration tests (e.g. test queries) to check that the AWS-backed implementations work, that we can connect to all the needed systems, etc.


👤 sanderjd
Yes.

👤 andyxor
don't use AWS lambdas, problem solved.

👤 tracer4201
I developed on Lambda with SAM and testing was pretty dang quick. IMO, Lambda works great for light weight event processing. I don’t know your use case. I wrote some Java based Lambdas, and despite not being a huge Java fan, the Lambda experience was fairly easy for me. I was essentially processing events coming in from an SQS queue, performing some enrichment, and dropping things in other queues for downstream services to consume. Are you sure Lambda is the right solution for your problem?