From my perspective, this message doesn't provide much actionable information. It doesn't indicate whether the issue is on my end or the service's side, nor does it suggest any specific steps I could take. Often, simply "trying again" yields the same result.
What puzzles me most, however, is that it seems it would be equally unhelpful to the developers who presumably implemented that error messaging.
Why would they do that? This lack of detail doesn't seem particularly helpful for the technical teams either. Without specifics on what went wrong, how do they track down and fix issues effectively?
I'm genuinely curious to understand the rationale behind this practice. Are there benefits to this approach that aren't immediately obvious to end users? Is it a matter of simplicity, or are there technical or business reasons for keeping error messages vague?
I'd love to hear insights from the other side of this: the people who must've written, implemented, and let those error messages go live.
Behind the scenes there might be extensive logging & monitoring for engineers to triage the issue.
There are technically "infinite" different reasons it could be shown. For example, a variable is unexpectedly undefined due to a bug in the code.
So the error message can't elaborate in detail on exactly what's wrong, why it happened, what to do about it, or whether trying again will help.
Behind the scenes (hopefully) the actual error is captured and logged for further inspection.
In other words, the end user sees a generic, friendly error message, while behind the scenes the developers see the actual code error along with other information, such as a stack trace showing how the code reached the path where the error happened.
https://top10proactive.owasp.org/v3/en/c10-errors-exceptions
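To make that concrete, here's a minimal sketch of the pattern, assuming an Express-style Node/TypeScript app (the route, logger, and messages are placeholders, not anyone's actual code): the real error and its stack trace go to the server logs, while the user only ever sees the generic message.

    import express, { Request, Response, NextFunction } from 'express';

    const app = express();

    app.get('/report', (_req: Request, _res: Response) => {
      // Any unexpected throw here (undefined variable, bad data, etc.)
      // falls through to the catch-all error handler below.
      throw new Error('column "total" is undefined');
    });

    // Catch-all error handler: log the real error, show the user a generic one.
    app.use((err: Error, req: Request, res: Response, _next: NextFunction) => {
      console.error('Unhandled error on %s %s: %s\n%s',
        req.method, req.url, err.message, err.stack);
      res.status(500).json({ message: 'Something went wrong. Please try again.' });
    });

    app.listen(3000);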
Well-run sites will have some kind of logging / exception-tracking behind the scenes that captures all the details about the error (stacktrace, request details, etc) for later review.
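For a rough idea of what one of those captured records might hold (the field names are illustrative, not any particular tool's schema):

    // Illustrative shape of a captured exception record.
    interface CapturedError {
      timestamp: string;       // when it happened
      message: string;         // err.message
      stack?: string;          // full stack trace
      request?: {              // request details
        method: string;
        url: string;
        userId?: string;
      };
      release?: string;        // which build/deploy was running
    }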
https://en.wikipedia.org/wiki/Anna_Karenina_principle
?
There is one happy path in a bootloader and many, many, many more unhappy paths. Optimal UX on the unhappy path is orders of magnitude more expensive than UX on the happy path. The same is true of error handling in general, but a vague error message is a low-priority issue compared to other ways your bootloader could be flawed.
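To put the one-happy-path point in code terms, here's a hypothetical result type for something as mundane as loading a config file: one success shape, an open-ended list of failure shapes, and every failure would need its own carefully worded message to get good UX.

    // One success case, many distinct failure cases (and the list is never complete).
    type LoadConfigResult =
      | { ok: true; config: Record<string, unknown> }
      | { ok: false; reason: 'not_found'; path: string }
      | { ok: false; reason: 'permission_denied'; path: string }
      | { ok: false; reason: 'parse_error'; line: number }
      | { ok: false; reason: 'unsupported_version'; found: string; expected: string }
      | { ok: false; reason: 'unknown'; error: unknown }; // the "something went wrong" bucket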
For system/app errors that are more of a crash, users don't really need to know the details, since they can't do anything with them anyway. So a gentle "Something went wrong" is good.
Remember that most applications may show a generic error to the user, but they log the exceptions/real errors in the backend using tools like Sentry etc.
Keep it simple.
I recently worked on an internal app at work, and the UI team didn't want us to provide technical details in the error messages from the backend, because they didn't think it was a good experience for a user to get too much technical jargon. They wanted the conversation to die there. I had to fight to create meaningful error messages that would go to a support team, while they still kept sending something akin to "something went wrong" to the users.
Something like this https://shadcnpro.com/docs/components/default-error
Then the user has the chance to press "Try again".
As for developers, error details may be logged separately elsewhere, so they don't need any specifics about the error from the user.
Finally, if you show error details to the user in production, you may leak sensitive information (PII, or system internals that make attacks easier).
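One sketch of how that's commonly handled (assuming a Node-style NODE_ENV check; the names are invented for illustration): full details on a dev machine, only the generic message in production.

    const isProd = process.env.NODE_ENV === 'production';

    // Build the body of a failed API response.
    function errorBody(err: Error) {
      if (isProd) {
        // Never echo internals (stack traces, SQL, file paths, other users' data) to the client.
        return { message: 'Something went wrong. Please try again later.' };
      }
      // On a dev machine, the gory details are fair game.
      return { message: err.message, stack: err.stack };
    }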
Personally, they make my blood pressure skyrocket; it feels like the frickin' manager or company (HP) is just taunting me: "I had a problem, loser, guess what it is!"
Even if it did, you'd likely not be able to do anything with that information (e.g., a web app returning an HTTP 500).
Details potentially leak information the developer doesn't want out there. For example, in SharePoint Online you wouldn't want the correlation log provided to the end user: it's not useful to the end user, it isn't necessarily limited to just what they did, and it may contain PII.
> unhelpful to the developers who presumably implemented that error messaging
In my example above, the user would relay the date/time and the correlation ID to a technical support individual. Now, if the developer did a poor job of correlating "friendly" error messages to something that can be diagnosed on the backend, that's on them.
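A rough sketch of that correlation idea (the names here are illustrative, not SharePoint's actual behavior): tag each failure with an ID, log the real error under that ID, and put only the ID and a timestamp in the message the user reads back to support.

    import { randomUUID } from 'node:crypto';

    function reportFailure(err: Error): string {
      const correlationId = randomUUID();
      const when = new Date().toISOString();
      // Server side: the full story, keyed by the correlation id.
      console.error(`[${correlationId}] ${when} ${err.stack}`);
      // User side: just enough to quote to support.
      return `Something went wrong. If this keeps happening, contact support ` +
             `and mention reference ${correlationId} (${when}).`;
    }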
case UNEXPECTED_ERROR:
return 'something went wrong'
Sorry my friend, we meticulous perfectionists are a marginalized minority.
---------------
As a dev at another small company supporting users: We have a lot of different failure modes that we try to handle, ranging from the specific (your auth is wrong, you're rate limited, your request is invalid, your request is too big, whatever) to the somewhat vague "there's been an error, please submit a help ticket with error ID #blahblahblah" where #blahblahblah actually corresponds to some backend logging that we can then further troubleshoot with them individually.
Then there's the "Something went wrong" errors, which are fallbacks of last resort... that one last safeguard in a cascading series of try/catches where we know the operation didn't succeed, but not in any failure mode we predicted. Maybe a server or function blew up halfway through an operation, but not in a way where we could've caught a stack dump or logged a request error. Maybe it's a type of crash we've never seen before, maybe a bug in some third-party lib, maybe a file parser died on an edge case in a user-generated file... whatever. We don't have any logging on that and we can't do any useful debugging on it, usually.
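To sketch what that cascade can look like (the error classes and wording here are hypothetical, not the actual handlers): specific messages for the failure modes we predicted, and the fallback of last resort for everything else.

    // Hypothetical known failure modes, each with its own specific message...
    class AuthError extends Error {}
    class RateLimitError extends Error {}
    class ValidationError extends Error {}

    function userMessageFor(err: unknown): string {
      if (err instanceof AuthError)       return 'Your credentials are invalid or expired.';
      if (err instanceof RateLimitError)  return 'You are sending requests too quickly. Please slow down and retry.';
      if (err instanceof ValidationError) return `Your request is invalid: ${err.message}`;
      // ...and the fallback of last resort for anything we never predicted.
      return 'Something went wrong.';
    }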
But you know what? It's still helpful for two reasons:
1) It lets the user know something unexpected happened. Sometimes trying again actually works. But even if it doesn't, it lets them know that there was an unexpected error (and it's almost never their fault). If you don't even tell them that, well, like the terrible Sonos app, either they assume they did something wrong, or they think the operation succeeded when it really didn't.
2) These errors are, for us (or at least should be), few and far between. If we get one report a month about one, we know, OK, there's some corner-case bug in some dark corner that we've yet to catch. We can't really do anything about that, but we'll keep an eye out.
If, however, we suddenly start getting tens or hundreds of reports of these a day, we know something is very, VERY wrong on our infrastructure somewhere, maybe caused by a recent deploy. One customer reporting an undiagnosable bug isn't actionable, but a handful at the same time definitely is, and more than that would probably be an all-hands-on-deck emergency.
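In monitoring terms that's the difference between a trickle and a spike; a crude sketch (the threshold and window are invented, tuned to whatever your baseline is):

    // Count fallback ("Something went wrong") errors in a sliding window and flag a spike.
    const windowMs = 60 * 60 * 1000;    // one hour
    const alertThreshold = 10;          // invented number; tune to your own baseline
    let recentFallbacks: number[] = []; // timestamps of recent fallback errors

    function recordFallbackError(now = Date.now()): void {
      recentFallbacks = recentFallbacks.filter(t => now - t < windowMs);
      recentFallbacks.push(now);
      if (recentFallbacks.length >= alertThreshold) {
        console.warn(`Fallback errors spiking: ${recentFallbacks.length} in the last hour - check the latest deploy.`);
      }
    }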
That's at our scale of a small business. At Facebook/Google scale, you can only hope they have more robust logging and super engineers, and even if that error doesn't help you as an individual user, maybe the 20-millionth occurrence of it this week will lead to some obscure bug being fixed in some 20-year-old part of their code a few months from now, even if they don't tell you about it. That's the hope, anyway. Probably a lot of them just get completely ignored...