The common refrain of "use Python" hasn't really worked fantastically for me: I don't know what version of Python I'm going to have on the system, installing dependencies is not fun, shelling out when needed is not pleasant, and the size of the program always seemingly doubles.
I'm willing to accept something that's not on the system as long as it's one smallish binary that's available for multiple architectures. Right now, I've settled on (ab)using jq, reaching for it whenever tasks get too complex, but I'm wondering if anyone else has found a better way, one that hopefully isn't a complete black box to my colleagues?
It's hard for us to help you without you giving us some context.
150-500 LOC of bash is not that much, especially when you combine bash with standard Unix utilities.
What do you use these scripts for? Are they executed by humans or machines? On which platform: Linux, Mac, Windows, or all of them?
You mentioned jq. Are you parsing JSON from HTTP requests or from files? And why are you parsing JSON with bash?
Without more context, a lot of people will say to use Python, and Python is a good choice if you are running your scripts on Linux, because you probably already have Python 3 installed, and whatever you are writing in 500 LOC of bash can be accomplished with zero dependencies using the Python standard library.
If relying on the Python runtime is really a problem for you, your next best option is Go. Go is easy to learn and easy to use for building small CLI apps with only the standard library, and even if you do need a dependency, you still ship a single binary, so there are no problems with runtime compatibility or dependencies.
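As a sketch of that zero-dependency claim: the kind of JSON filtering people reach for jq for works with just the standard library. (The JSON payload here is invented for illustration.)

```python
import json

# Invented sample of the kind of JSON you might otherwise pipe through jq
raw = '{"services": [{"name": "api", "healthy": true}, {"name": "db", "healthy": false}]}'

# Equivalent of: jq -r '.services[] | select(.healthy | not) | .name'
data = json.loads(raw)
unhealthy = [s["name"] for s in data["services"] if not s["healthy"]]
print(unhealthy)
```

No pip install, no version pinning beyond "Python 3", and it handles quoting and nesting that get painful in bash.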
What do you think about editing the title to include “for scripting”? I thought you were asking for recommendations like zsh or fish when I first clicked
> shelling out when needed is not pleasant,
Handled properly, because the language was built from the ground up for ops tasks. There is even a syntax for "run a command and parse the output". Rationale and examples: https://ngs-lang.org/doc/latest/man/ngswhy.1.html
Currently NGS supports Linux and MacOS.
> smallish binary
Unfortunately there is no static build yet, so I can't check this checkbox.
Next Generation Shell - https://github.com/ngs-lang/ngs
Alternative Shells list by @chubot - https://github.com/oilshell/oil/wiki/Alternative-Shells
1) There aren't two disjoint versions to juggle
2) Dependency installation is clean and contained
3) Dependencies can easily be installed locally or globally, depending on your preference for isolation vs reuse
4) Lots of packages are available for doing cross-platform (read: Windows compatible) operations
5) JS makes it really easy to do efficient async tasks, for example high-IO scripting involving reading and writing lots of files and making network requests (without any dependencies, I might add)
At my last company I converted all our shell scripts to Node scripts when the pandemic hit, so they'd work predictably on Windows machines, and it was great. These days I'd do it that way to begin with.
If you want strong typing (are you scripting in the large?), then, almost by definition, no scripting language provides that. If you need performance, switch to a compiled language (or a jit-compiled interpreted language) that most of your co-workers understand.
If Python brings too many dependencies, I guess you're trying to do something that is way out of the realm of possibility for bash, so you're not looking for an "alternative" to bash, you're looking for the best language for whatever project you're undertaking. And you may not be able to get away from the dependencies if you're trying something specific/sophisticated.
However, I agree that Nim or Zig could also be a good direction.
Maybe even some library with stuff for parsing outputs of common commands?
It is cross-platform so you can use your scripts anywhere.
Don't use python or go or ruby or whatever as those are not scripting languages.
IMO Bash is horror and should die already.
It's really easy to write dangerous shell scripts ("${@}" vs ${@}, for example) and also easy to write dangerous Python scripts (e.g., building a command string with cmd = "{}; {}".format(...)).
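A minimal sketch of the Python failure mode and the two usual fixes; the filename here is a made-up attacker-controlled value:

```python
import shlex
import subprocess

# Attacker-controlled input: note the embedded ';'
filename = "notes.txt; rm -rf ~"

# DANGEROUS: interpolating into a shell string lets ';' start a second command
# subprocess.run("cat {}".format(filename), shell=True)

# Safe: pass an argument list, so no shell ever parses the input
safe = subprocess.run(["echo", filename], capture_output=True, text=True)

# Safe when a shell is genuinely needed: quote the untrusted part first
quoted = subprocess.run("echo {}".format(shlex.quote(filename)),
                        shell=True, capture_output=True, text=True)
```

In both safe variants the whole string, semicolon included, arrives as a single argument instead of being executed.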
Sarge is one way to use subprocess in Python. https://sarge.readthedocs.io/en/latest/
If you're doing installation and configuration, the most team-maintainable thing is to avoid custom code and work with a configuration management system test runner.
When you think "a shell script will be fine, all I have to do is [...]" - and then you realize that you need a portable POSIX shell script, that to be merged it must have actual automated tests of things that are supposed to run as root - now in a fresh VM/container for testing - and that manual verification of `set -xev` output isn't an automated assertion.
Shell is likely to remain the best way to connect together other pieces of software.
The tipping point, in my experience, is folks trying to use arrays or other buffering of data. At that point they're writing in shell, but wishing for a procedural language.
Whereas shell works excellently (and incredibly efficiently) if you can compose your task strictly into streams of data (a more 'functional' feel).
So the real answer I think is specific to the types of script you are writing!
But also it could be worth revisiting the style in which you're using shell; especially if you can call upon your own helper programs (eg. in Python) where you require buffering or non-streamed access to data.
Whilst there's a lot of documentation of sh/bash functionality, I don't think there's anything like the same momentum behind 'best practices' as other languages like Python or even C.
Can recommend!
You could have assumed Python 2.7 for 10 years or so.
Now you could safely assume Python 3 (find minimum version applicable for your case).
> installing dependencies is not fun,
Pure Python dependencies are easy (e.g., pipenv). C extensions are no more difficult than the corresponding dependencies for your bash script (likely installed via system packages).
> shelling out when needed is not pleasant, and the size of program always seemingly doubles.
Have you tried using shell commands for shell one-liners in Python? E.g., directly via `proc = subprocess.run(shell_pipeline, shell=True)`, or via dedicated libraries such as pyinvoke/fabric?
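That subprocess.run pattern, sketched with a made-up pipeline (the one-liner itself is hypothetical, just a typical sort|uniq counting idiom):

```python
import subprocess

# A bash one-liner run verbatim from Python: count lines, keep the most frequent
pipeline = "printf 'b\\na\\nb\\n' | sort | uniq -c | sort -rn | head -1"

proc = subprocess.run(pipeline, shell=True, capture_output=True,
                      text=True, check=True)
top = proc.stdout.strip()  # e.g. "2 b"
```

You keep the terseness of the shell pipeline where it shines, and do everything around it (argument handling, error paths, data structures) in Python.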
As the song says: "But I still... haven't found... what I'm looking for."
I tend to use Common Lisp, but I'm learning Guile Scheme and will probably move to that instead.
It's not the best fit for everyone but it has served me well.
When binary size and startup speed matters, then I use Go instead.
Finally, sometimes I just write in Bash anyway, even though it is such a difficult language to write in.
If your colleagues can be expected to have Docker, you can wrap your Python script in an image that contains specific versions of Python and the dependencies.
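As a sketch of that approach (the script name, base image tag, and requirements file are all placeholders):

```dockerfile
# Pin the interpreter version so every machine runs the same Python
FROM python:3.12-slim

WORKDIR /app

# Install pinned dependencies first so this layer caches well
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY myscript.py .
ENTRYPOINT ["python", "myscript.py"]
```

Colleagues then run it with something like `docker run --rm myimage <args>`, without touching whatever Python happens to be on their machine.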
There are many other scripting languages to choose from, such as JavaScript, Ruby, Kotlin and Groovy.
Of course, you can also use Java, Scala, Clojure and so on.
Most OSes come with relatively modern versions of either. Pretty much everyone is or should be familiar with either. It’s not like it’s some Lispy thing.
Consider docker for some situations. Consider kubernetes if the need is more devops/ running a production system.
Consider powershell if on windows.
Travis CI / Jenkins / AppVeyor etc. for CI.
Be lazy - can I get out of writing this code should be in the forefront of your mind (for any code!). The answer might be no but consider it.
No-code tools can be considered: Google Sheets, WordPress, a plethora of SaaS offerings. Or PaaS, e.g. AWS Lambda etc.
So many options. Solve the problem, rather than transliterate bash.
You might not adopt it, but it can give perspective on the pros and cons that comes with Bash.
https://amoffat.github.io/sh/sections/faq.html
Its Perl-ish roots just make it a fantastic tool for the job.