HACKER Q&A
📣 llegnaf

How are you using Go to write production-grade back end services?


As the title suggests, I'm interested to see how (and what) companies are using Go to build production-grade backend services.

- What are you using for tests?

- Do you use dependency injection or any mocking frameworks?

- Are you using any routing frameworks?

- HTTP web frameworks?

Would love to know the ins and outs! What is good, what do you not like about Go, any pain points? Anything you would want to improve?

Thanks!


  👤 RabbitmqGuy Accepted Answer ✓
For our backend[1],

- We are using the default test framework that comes with Go: `go test`.

- We usually follow Postel's law, which, translated to Go, would be: `Accept interfaces, return concrete types`.[2] This enables us to pass in fake implementations during tests (a minimal sketch follows at the end of this list). I haven't checked what kind of performance cost (if any) we may be paying by passing around interfaces instead of concrete implementations, but performance has not been a problem so far, so we are happy with our approach.

- We do not use any HTTP web frameworks; we just use stdlib's net/http. We pair that with certmagic[3] for automated TLS certificate issuance and renewal.
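A minimal sketch of the "accept interfaces, return concrete types" pattern from the second bullet (the Mailer/Receipt names are invented for illustration, not our actual code):

```go
// Sketch only: a function that accepts an interface and returns a concrete
// type, plus a hand-written fake used in its test (save as a _test.go file).
package notify

import "testing"

// Mailer is the interface the function accepts; anything that satisfies it
// can be passed in, including a test fake.
type Mailer interface {
	Send(to, body string) error
}

// Receipt is the concrete type returned.
type Receipt struct {
	To string
}

func NotifyUser(m Mailer, to string) (Receipt, error) {
	if err := m.Send(to, "welcome!"); err != nil {
		return Receipt{}, err
	}
	return Receipt{To: to}, nil
}

// fakeMailer satisfies Mailer without touching a real mail server.
type fakeMailer struct{ sent []string }

func (f *fakeMailer) Send(to, body string) error {
	f.sent = append(f.sent, to)
	return nil
}

func TestNotifyUser(t *testing.T) {
	fake := &fakeMailer{}
	r, err := NotifyUser(fake, "a@example.com")
	if err != nil || r.To != "a@example.com" || len(fake.sent) != 1 {
		t.Fatalf("unexpected result: %+v, %v", r, err)
	}
}
```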

I like the performance of Go, it is easy to pick up and it comes with a pretty great stdlib.

What I do not like is the fact that it has nil pointers, and you tend to run into one or two nil pointer dereference errors once in a while.

1. https://errorship.com/

2. https://blog.chewxy.com/2018/03/18/golang-interfaces/

3. https://github.com/caddyserver/certmagic


👤 bithavoc
- Web: No framework. I mostly use https://goswagger.io, but for basic stuff just the standard http library + https://github.com/julienschmidt/httprouter (a sketch follows at the end of this list)

- Testing: https://golang.org/pkg/testing/ + https://pkg.go.dev/mod/github.com/stretchr/testify

- Mocks: https://github.com/golang/mock

- Dependency Injection: None; I'm a current user of https://github.com/uber-go/dig and I regret it.
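To illustrate the "standard http library + httprouter" combination from the first bullet, a minimal sketch (the route and port are made up):

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/julienschmidt/httprouter"
)

func main() {
	router := httprouter.New()

	// httprouter hands the matched URL parameters straight to the handler.
	router.GET("/users/:id", func(w http.ResponseWriter, r *http.Request, ps httprouter.Params) {
		fmt.Fprintf(w, "user %s\n", ps.ByName("id"))
	})

	// The router is an http.Handler, so the stdlib server runs it directly.
	log.Fatal(http.ListenAndServe(":8080", router))
}
```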


👤 WnZ39p0Dgydaz1
I've built a pretty complex and high-performance Go-based microservice architecture (~12 services) as a backend, with testify and gomock for testing. Pretty happy with those choices, never had any issues. The backend doesn't use HTTP, so no opinion there.

I recently started migrating some of the services to Rust for performance reasons. I would say that Go's biggest strength is perhaps its biggest weakness as well: it "just works". Types are loose (no generics), and concurrency is extremely easy. This means I can write working Go code really quickly, but as project complexity grows, the code tends to become kind of a mess.

For example, channel-based concurrency can become hard to reason about if you have a complex service. A few times I ended up putting mutexes in various places just to make it work, despite knowing that it's not the "right" thing to do. Mutexes then come with their own issues. Once you have a deadlock or race condition, good luck debugging it. There are multiple packages and tools to detect race conditions and deadlocks, so this seems to be a common problem. I must've spent days or weeks' worth of time looking at pprof output. You may say it's my fault as a developer for writing sloppy code, and that may be true, but the Go language encourages such code with the decisions it made.
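For readers who haven't hit this: the class of bug being described is what the built-in race detector (`go test -race`) is for. A deliberately racy sketch, plus the mutex fix:

```go
package counter

import (
	"sync"
	"testing"
)

// TestRacy increments an unsynchronized counter; `go test -race` reports the race.
func TestRacy(t *testing.T) {
	var n int
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			n++ // data race: concurrent unsynchronized writes
		}()
	}
	wg.Wait()
	t.Log(n) // result is unpredictable
}

// TestWithMutex serializes the writes, so the detector stays quiet.
func TestWithMutex(t *testing.T) {
	var (
		n  int
		mu sync.Mutex
		wg sync.WaitGroup
	)
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			n++
			mu.Unlock()
		}()
	}
	wg.Wait()
	if n != 100 {
		t.Fatalf("got %d, want 100", n)
	}
}
```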

The same goes for types. I never needed generics, but the fact that typing is so loose means you can get away with a lot of sloppy code without being punished for it. This can be great for moving fast, but may come to bite you later on.


👤 comment_out
My work mainly involves setting up internal enterprise applications.

- The standard testing package

- Manually wired dependency injection, generally a flow like config => databases => models => routes => server (see the sketch after this list).

- https://github.com/gorilla/mux for routing and https://github.com/swaggo/swag for documentation

- As for HTTP frameworks, I just use the standard package, and I really enjoy https://github.com/sirupsen/logrus for logging
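A stripped-down sketch of that wiring order (the Config/UserModel types, the query, and the Postgres driver are assumptions for illustration, not anyone's real code):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"net/http"

	"github.com/gorilla/mux"
	_ "github.com/lib/pq" // driver choice is an assumption for illustration
)

// Invented types to show the wiring order.
type Config struct{ DSN, Addr string }

type UserModel struct{ db *sql.DB }

func (m *UserModel) Count() (int, error) {
	var n int
	err := m.db.QueryRow("SELECT count(*) FROM users").Scan(&n)
	return n, err
}

func main() {
	// config => databases => models => routes => server
	cfg := Config{DSN: "postgres://localhost/app?sslmode=disable", Addr: ":8080"}

	db, err := sql.Open("postgres", cfg.DSN)
	if err != nil {
		log.Fatal(err)
	}

	users := &UserModel{db: db}

	r := mux.NewRouter()
	r.HandleFunc("/users/count", func(w http.ResponseWriter, req *http.Request) {
		n, err := users.Count()
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		fmt.Fprintln(w, n)
	})

	log.Fatal(http.ListenAndServe(cfg.Addr, r))
}
```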

We use IIS. I really wanted to use Go, so I found a way to run Go applications on top of it. That was not fun to figure out, but I can make use of the Windows authentication underneath IIS. It required a custom module that forwards some headers to my Go applications. Also, the version of IIS we use doesn't even support HTTP/2, which sucks.
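A minimal sketch of the Go side of that setup, assuming the custom IIS module forwards the authenticated user in a request header (the header name and port are hypothetical):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Hypothetical header set by the IIS module that fronts this app.
		user := r.Header.Get("X-Forwarded-User")
		if user == "" {
			http.Error(w, "unauthenticated", http.StatusUnauthorized)
			return
		}
		fmt.Fprintf(w, "hello, %s\n", user)
	})

	// IIS reverse-proxies to this local port; the port is illustrative.
	log.Fatal(http.ListenAndServe("127.0.0.1:5000", mux))
}
```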


👤 jimsmart
- Ginkgo and Gomega for tests, though sometimes just stdlib on smaller projects. I like Gomega very much, great matchers, lots of useful helpers.

- Manual DI, passing in values and objects. We've hand-mocked a handful of components: just what's essential for testing.

- Mostly not using HTTP. But a few projects do; some simply use basic stdlib, others use Echo.


👤 tcbasche
At a previous job we did a 'tech experiment' to write a Go REST service. As far as I'm aware it's still in use in a production-level capacity. It used the following:

- gotest

- No DI, but in hindsight this would have been the right thing to do. The dev team is 100% Python, so mocking was more the talk of the office than DI/IoC.

- For routing and HTTP the service used httprouter https://github.com/julienschmidt/httprouter

I think it's a fantastic language; gofmt and gotest are both great utilities, and the short time to create an executable made development turnaround a breeze.

However, I think for the purposes of a simple REST service I would probably use Python/Flask. Less boilerplate, and for a Python team it would have made more sense ...


👤 closeparen
Testify for assert and require. GoMock for mocking, but only when it really adds value over a handwritten fake. Table-driven tests wherever possible (example below). We don't need heavyweight test frameworks.
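A generic example of the table-driven style (the function under test is invented for illustration):

```go
package mathx

import "testing"

// Abs is a stand-in function under test.
func Abs(x int) int {
	if x < 0 {
		return -x
	}
	return x
}

func TestAbs(t *testing.T) {
	tests := []struct {
		name string
		in   int
		want int
	}{
		{"positive", 3, 3},
		{"zero", 0, 0},
		{"negative", -7, 7},
	}
	for _, tc := range tests {
		tc := tc
		t.Run(tc.name, func(t *testing.T) {
			if got := Abs(tc.in); got != tc.want {
				t.Fatalf("Abs(%d) = %d, want %d", tc.in, got, tc.want)
			}
		})
	}
}
```

Adding a case is one more line in the table, which is why this style scales without a test framework.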

Plain HTTP/JSON is encouraged for edge-facing services only, which are typically not in Go. Between services, gRPC, integrated with our metrics, distributed tracing, and auth. When plain HTTP does happen, gorilla/mux.

Wire or FX for DI.

Implicit satisfaction of interfaces is the core of the language. The type system will feel maddeningly obtuse until you learn to use interfaces effectively. It took me a few days to learn the language's structure and start shipping code, but a few years to grok the implications of that simple structure and design software in harmony with it. Interfaces are the key. Think Haskell typeclasses, not Java.


👤 adventured
I'm using Go with Redis for various aggressive caching needs. I batter Redis and Go performs very well. A few other languages would work mostly fine for my purposes; however, I like working in Go and have always had a great experience with its performance.

No traditional testing. Standard library and Redigo. That's it.

No pain points for what I'm using it for. I usually try to avoid complexity in anything I build. This is a rather simple system that is only meant to take a high-volume beating, cache to Redis (content is later retrieved and presented by another part of the application, via another language), and be reliable.
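A bare-bones sketch of the kind of Redigo usage being described (the key, value, TTL, and address are illustrative, not the actual system):

```go
package main

import (
	"fmt"
	"time"

	"github.com/gomodule/redigo/redis"
)

func main() {
	// Connection pool so concurrent writers can hammer Redis safely.
	pool := &redis.Pool{
		MaxIdle:     10,
		IdleTimeout: 240 * time.Second,
		Dial:        func() (redis.Conn, error) { return redis.Dial("tcp", "localhost:6379") },
	}

	conn := pool.Get()
	defer conn.Close()

	// Cache a value with a TTL.
	if _, err := conn.Do("SET", "page:home", "<html>...</html>", "EX", 60); err != nil {
		fmt.Println("set failed:", err)
		return
	}

	// Read it back.
	body, err := redis.String(conn.Do("GET", "page:home"))
	if err != nil {
		fmt.Println("get failed:", err)
		return
	}
	fmt.Println(body)
}
```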


👤 hactually
I've been using the standard router and Gorilla Mux for about 5 years now and have so many snippets that I can compose apps with.

I recently built an LRU-based rate limiter [0] that is compatible with both; it might be useful! Obviously it would need love for multi-host setups, but PRs are welcome.

[0] https://github.com/17twenty/gorillimiter


👤 potta_coffee
I'm using gotest for testing; my testing is really primitive right now, with no mocking frameworks. The only non-standard library I'm using is gorilla/mux for routing. Go feels a little verbose and restrictive coming from Python, but after getting acclimated, I love it. It's extremely productive, performance is nice, and deployment is so easy. The pain points for me were understanding the package system and structuring my project, which I worked through eventually.

👤 sethammons
I've been wanting to do a blog series on how we use Go. Here is a short version.

Tests:

We use the testing package for unit tests and (maybe too much) use interfaces as arguments so we can create test fakes that behave the way we want, letting us validate error paths, logs (yes, we assert on logs), metrics, and, of course, the green/good expected behavior.

We then have acceptance-level testing. These tests ensure the system works as expected. We leverage docker-compose to spin up all our dependencies (or, in some cases, stubs, but only rarely). We then have a custom testing package built atop the stdlib one. It behaves very similarly, but allows for registering test suites, pre- and post-test-suite methods, and pre- and post-test methods, and it generates reports in JSON/XML for QA to keep track of test cases, when they ran, pass rates, etc. As part of our SOC 2 compliance, we have these to back up our thoroughness in testing. Tests can also have labels, so we can run only the tests for a given feature or a given suite. These tests hit the running binary of the service under test, so if it works here, it will work when deployed.

Before a service makes it to prod, it lands in staging. There, a final suite of tests goes through user features and ensures that things are OK one last time. Total black box.

Dependency Injection / Mocking:

I am very, very much against mocking. For that, I did write a blog post; though I think the thing it highlighted most is that I need to write more :). You can google "When Writing Unit Tests, Don't Use Mocks" if you want to read it. When you mock, you create brittle tests that are tied to the assumptions in your mock. Instead, we use "fakes." These are test structs that match an interface and allow us to control their behavior.

You might ask how that is different from a mock. Mocks have assumptions and make your tests more brittle and subject to change when you update the code (which is what Martin Fowler concluded in "Mocks Aren't Stubs"). People tend to write "thing called 4 times, with arguments foo and bar, and will return x, y, z... blah blah blah." Instead, when you use a fake struct that matches an interface, you can make it as simple or complex as needed, and usually simpler is better. Return a result or an error. Validate that the code does what you need.

We also avoid functions as parameters that exist just for testing, i.e., test code passing a custom function that is not the function used in prod. These make it easy to cause nil panics and are kludgy. Fakes get us what we need 99 times out of 100.
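A sketch of the fake-over-mock idea: the fake just matches the interface and returns whatever the test tells it to, with no call-count or argument expectations (all names are invented, not our code):

```go
package payments

import (
	"errors"
	"testing"
)

// Charger is the dependency the code under test talks to.
type Charger interface {
	Charge(userID string, cents int) error
}

// fakeCharger is a hand-written fake: no expectations, just behavior
// the test controls.
type fakeCharger struct {
	err error // what Charge should return
}

func (f *fakeCharger) Charge(userID string, cents int) error { return f.err }

// Bill is the code under test.
func Bill(c Charger, userID string, cents int) error {
	if cents <= 0 {
		return errors.New("nothing to bill")
	}
	return c.Charge(userID, cents)
}

func TestBillChargeFails(t *testing.T) {
	fake := &fakeCharger{err: errors.New("card declined")}
	if err := Bill(fake, "u1", 500); err == nil {
		t.Fatal("expected an error when the charger fails")
	}
}

func TestBillSucceeds(t *testing.T) {
	if err := Bill(&fakeCharger{}, "u1", 500); err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
}
```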

Routing frameworks:

We have folks who don't use them, or who use gorilla/mux or chi (my favorite). They are convenient and make it easier to pass URL parameters. You can, of course, do this without a custom router. I like chi because it is stdlib-compatible.
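For context, the URL-parameter convenience with chi looks roughly like this (the route and port are illustrative):

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/go-chi/chi/v5"
)

func main() {
	r := chi.NewRouter()

	// URL parameters without leaving the stdlib handler signature.
	r.Get("/users/{userID}", func(w http.ResponseWriter, req *http.Request) {
		id := chi.URLParam(req, "userID")
		fmt.Fprintf(w, "user %s\n", id)
	})

	// chi.Router is an http.Handler, so it plugs straight into the stdlib server.
	log.Fatal(http.ListenAndServe(":8080", r))
}
```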

HTTP Frameworks:

Nope. I could see it, maaaayyyybee, if we were writing a bunch of CRUD apps, but we don't. The services my team makes tend to have few routes and not all the CRUD stuff. Even then, if we do a lot of CRUD work, it will only eat up a few days. What we do have, however, is a project skeleton generator, so our projects all start out with the same basic directory structure and entry points. Everyone knows that your app starts in cmd/appname/main.go, for example.

Logging (and errors):

The other thing we leverage in place of HTTP frameworks is a custom logger, plus an experiment we are doing with custom error types. We have logging requirements to play nice in our ecosystem at work. Logs are all structured JSON and have some expected keys, and the logger generates all of that. We looked at all the log packages and none matched exactly what we needed. We can store key/value pairs on the logger and pass that logger around, so you only have to do logger.Add("userid", userID) once and all logs going forward in a request will have it. You get timestamps, app name, and a few other fields for free. You can create a child logger that has its own context kv pairs so you don't pollute its parent (helpful when you go into a function and want to add more detail to logs based on errors specific to that function).

The other thing we are playing with now, on our new project, is a custom error type that stores a map of key/value pairs. We bubble up our custom error type, wrap it with more kv pairs at each bubble-up point, and only log at the top. When the error is logged, we use our logger to extract the kv pairs and, bingo, structured logs with context for each bubble-up point, including potentially relevant kv pairs that are only known down in deep levels.
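A rough sketch of what such a kv-carrying error type could look like; this is a guess at the shape, not their actual package:

```go
package kverr

import (
	"errors"
	"fmt"
)

// Error carries key/value context up the call stack so the caller at the top
// can log once, with fields gathered from every wrap point.
type Error struct {
	msg string
	kv  map[string]interface{}
	err error
}

func (e *Error) Error() string { return fmt.Sprintf("%s: %v", e.msg, e.err) }
func (e *Error) Unwrap() error { return e.err }

// Fields flattens the kv pairs from this error and any wrapped *Error values.
func (e *Error) Fields() map[string]interface{} {
	out := map[string]interface{}{}
	var inner *Error
	if errors.As(e.err, &inner) {
		for k, v := range inner.Fields() {
			out[k] = v
		}
	}
	for k, v := range e.kv {
		out[k] = v
	}
	return out
}

// Wrap adds context at one bubble-up point.
func Wrap(err error, msg string, kv map[string]interface{}) error {
	return &Error{msg: msg, kv: kv, err: err}
}
```

At the top of the stack, one errors.As pulls the *Error out and Fields() feeds the structured logger.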

BuildPipe:

We usually run our tests locally, but when we create a PR, a build is kicked off using BuildKite (the plugin system is really nice). A PR cannot be merged to master until the test suite passes, which includes the acceptance tests from earlier. After merging to master, a fresh build runs again and creates artifacts that are then used by ArgoCD, so we can roll our code out to our kube cluster.

I love Go. It is my favorite language that I've worked in. There are warts, for sure; you can get that list anywhere. There are some oddities around assigning to structs in maps, nil interfaces, shadowed variables, discarded errors that nothing forces you to check, and others. The biggest wart right now is the module system. I think that will improve over time.
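Two of those warts in a compilable sketch (the types are invented for illustration):

```go
package main

import "fmt"

type point struct{ X, Y int }

type myErr struct{}

func (*myErr) Error() string { return "boom" }

// mayFail returns a typed nil pointer, which arrives as a non-nil interface.
func mayFail() error {
	var e *myErr // nil pointer
	return e     // the error interface now holds (*myErr)(nil)
}

func main() {
	// Wart 1: you cannot assign to a struct field inside a map directly.
	m := map[string]point{"a": {1, 2}}
	// m["a"].X = 10 // compile error: cannot assign to struct field in map
	p := m["a"]
	p.X = 10
	m["a"] = p // read, modify, write back instead
	fmt.Println(m)

	// Wart 2: a nil *myErr stored in an error interface is not a nil error.
	if err := mayFail(); err != nil {
		fmt.Println("non-nil error, even though the pointer inside is nil:", err)
	}
}
```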


👤 acifani
- Test -> stdlib + testify/assert + vektra/mockery

- Dependency injection -> manual DI passing values and objects down

- Routing framework -> gorilla/mux

- HTTP web frameworks -> stdlib + go-kit for the general architecture

We then have a pretty large internal library for all things shared.


👤 one2know
Heh, production grade depends more on the people in charge than on the tech choices. The lack of exceptions in Go is the biggest stupid thing in software. Exceptions have always been a much less risky feature than multiple return values. In Go, every single "function" has multiple return values, and the code is littered everywhere with if err != nil... garbage. It will never change, because it is apparent that exceptions are to Go as collection literals and operator overloading are to Java.