Now every time I try to learn something new, there's always a build system in the way. With MSTOICAL it was Autoconf; when I switched to Linux, I couldn't get WikidPad to build; and trying to build a lifeboat for Twitter, Python itself works, but then modules require builds that break.
Is it possible to have a system without Make or the like?
Alternatively, any good resources for the above?
You can do this today even with newfangled languages: just avoid third-party libraries, or at least use them as a last resort, and choose options that are stable.
Autoconf is just a complicated system for finding and detecting third party libraries. If an application just uses the standard library, the build system can be very simple and reliable.
Most of your pain is actually due to 'package managers', not build systems per se.
Much the same can be said of simple Makefiles. If you are targeting your own machine, you can create a couple of variables to track libraries and such then tweak them much as you would tweak the configuration of a traditional IDE. The trouble comes with creating software that other people will build or while building software created by other people. Linux (actually, Unix in general) is notorious for its inconsistencies: libraries and headers aren't always in the same place, sometimes libraries are missing, sometimes one library is used in the place of another. Tools like autoconf help developers to work around that. Of course, modern build systems go a step further by pulling in dependencies.
It probably wouldn't be much of an issue, except for one thing: everyone seems to have their pet build system.
Linking is done by describing it as part of the code with its `foreign` system [1][2]. The benefit of this approach is that your code itself describes what needs to be linked against what. It also means that if you don't use a foreign library, it doesn't get linked at all, thanks to the minimal dependency system.
If you want to see huge examples of this, I recommend checking out the vendor library collection[3].
n.b. I am the creator of Odin, and one of the original goals was to create a language that didn't necessitate a complex build system, so you could just get to programming. Requiring a complex build system and package manager are anti-features in my opinion and make programming a worse experience. Odin borrows heavily from (in order of philosophy and impact): Pascal, C, Go, Oberon-2, Newsqueak, GLSL.[4]
Niklaus Wirth and Rob Pike have been the programming language design idols throughout this project.
[1] https://odin-lang.org/docs/overview/#foreign-system
[2] https://odin-lang.org/news/binding-to-c/
[3] https://github.com/odin-lang/Odin/tree/master/vendor
[4] https://odin-lang.org/docs/faq/#what-have-been-the-major-inf...
#if 0
set -e; [ "$0" -nt "$0.bin" ] &&
gcc -Wall -Wextra -pedantic -std=c99 "$0" -o "$0.bin"
exec "$0.bin" "$@"
#endif
/* ...the rest of your program follows, e.g.: */
#include <stdio.h>
int main(void) { printf("hello\n"); return 0; }
Then make the file executable and you can use it like a script. All the flags are right there and visible. However, it would be preferable to have language tooling smart enough to do this without such tricks.
#run compile_to_binary("name_of_binary");
main :: () {
// ...
}
#import "base"; // my own library that has 'compile_to_binary'
I'll go into depth about what this does, but if you're not interested, skip to the final code snippet. The above code '#run's any procedure at compile-time (in this case 'compile_to_binary', which is defined in the library I imported). That procedure (really a hygienic macro) configures my 'workspace' to compile down to a 'name_of_binary' executable in the current directory. Any third-party libraries I've imported will be linked against automatically as well.
To do this without a dedicated procedure, just put this in the same file as 'main':
#run {
options: Build_Options_During_Compile;
options.do_output = true;
options.output_executable_name = "name_of_binary";
set_build_options_dc(options);
}
main :: () {
print("This is all I need to build a project\n");
}
#import "Basic";
#import "SDL"; // or whatever...
Then compile: jai above_file.jai
I've done this for projects with hundreds of files that link in SDL, BearSSL, etc. The best part is that neither of these systems is a requirement to build your project. Running the compiler against a Jai file will do everything I do with my own system (I just like having the ability to configure it in code).
Jai has been a breath of fresh air in terms of "just let me write code," so I highly recommend it if you can get in the beta.
But! I think the "can I get by without needing $big_build_toolchain?" question is excellent, and would recommend that you start by building automations yourself. That means writing small programs (often shell/batch scripts) to smooth over repetitive parts of your build/packaging/deploy process.
That approach yields a ton of understanding about how build/transformation/delivery software work with your platform of choice, and that understanding in turn makes you a way better programmer in a ton of different ways.
Now, this doesn't mean that you should stick with home-rolled automations forever; past a point, they become more or less equivalent to others' build systems and you can switch if you like. But switching to something from a position of full understanding of what it provides is a much easier and friendlier process than "I have to use tools A, B, C, and D just to get 'hello world' working, and don't know what any of those do".
I'd recommend this approach to anyone with the time; it really confers a lot of understanding and confidence. And if you don't have the time/you need a prototype deployed yesterday, don't worry about it; copy the Medium article snippets and learn what the tools actually do some other time--just try to make sure "some other time" isn't "never".
But the reality is you'll be giving up the features Make or another build system provides -- things like recompiling only the files that need recompiling, or, more fancily, automatic build configuration and dependency discovery.
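That "recompile only what changed" feature is, at its core, just timestamp comparison; a minimal sketch in Python (the `needs_rebuild` helper and file names are my own, for illustration -- real Make also tracks a dependency graph):

```python
import os

def needs_rebuild(target: str, sources: list[str]) -> bool:
    """True if the target is missing or older than any of its sources."""
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(src) > target_mtime for src in sources)

# Usage: only invoke the compiler when something actually changed, e.g.
#   if needs_rebuild("app", ["main.c", "util.c"]):
#       subprocess.run(["cc", "-o", "app", "main.c", "util.c"], check=True)
```

This is roughly what a Make rule's `target: prerequisites` check boils down to, so you can see both what you'd be reimplementing and why it's not magic.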
When you were building on DOS or Windows, your Borland IDE or whatever handled these details for you. In the Unix world, everybody relies on a constellation of small tools to handle different aspects of the build process rather than just entrusting that to their IDEs.
I bet you can get CLion (C, C++) or Lazarus (Pascal) to work the way you remember Turbo Pascal working. But those only work for their respective languages.
If you really want to go whole hog, vendor any dependencies you have. That is, incorporate them into your project and build them all at once. That way you don't have to worry about builds breaking on systems that don't have them.
But... we have clouds now, so even if you find something, development will probably require a CI/CD pipeline, GitHub Actions, and so on.
Plugging my own repo: https://github.com/jsebrech/create-react-app-zero
It is a version of create react app that works in that way, no build tools needed, only a static web server for local development.
Ultimately, the answer to your question is going to depend on what you want to make. For instance, if you want to make an iOS app, it's going to be difficult to do so without using Xcode's build system.
Go is a very good language in terms of having a build system that stays out of your way. On Linux, you'd do something like `sudo apt-get install golang`, and then be on your way.
Start with bash script.
Translate that to a tiny ninja script when it gets to be more than a page or two. (Ninja is technically a build system, but it is tiny and simple).
Translate that to a tiny python script that emits build.ninja when it gets to be more than a page or two. Not much more to it than globs and f-strings.
Overall that scales well, has no large dependencies (you can just dump a Ninja binary in the repo), and is fast.
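The third step really can stay tiny; a sketch of such a generator (the file names, `app` output, and flags are placeholders -- adapt to your project):

```python
import glob

def emit_ninja(cc: str = "cc", cflags: str = "-O2 -Wall") -> str:
    """Generate build.ninja text: compile every .c file here, link the objects."""
    lines = [
        f"cc = {cc}",
        f"cflags = {cflags}",
        "rule compile",
        "  command = $cc $cflags -c $in -o $out",
        "rule link",
        "  command = $cc $in -o $out",
    ]
    objs = []
    for src in sorted(glob.glob("*.c")):
        obj = src[:-2] + ".o"          # foo.c -> foo.o
        objs.append(obj)
        lines.append(f"build {obj}: compile {src}")
    lines.append(f"build app: link {' '.join(objs)}")
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    with open("build.ninja", "w") as f:
        f.write(emit_ninja())
```

Run it, then run `ninja`; Ninja handles the incremental/parallel parts, and the generator stays readable.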
Very different case today. We need various database/cryptography/imaging/network/etc. libraries fetched from GitHub or other servers, and so far the most reliable way to make the build process work regardless of your OS is to use a build system.
> gave it to your customers
You could start with an HTML page and vanilla JS. Use Bootstrap, Tailwind, whatever, with a `<link>` or `<script>` tag. But then instead of a build system, you're wrestling with home-grown infra if it's not cloud.
The need for a build system should become apparent, but only once you're comfortable with the idea.
Without make, you were running `cc` or `gcc`. Even your previous languages may have had a big "Play" button in the IDE. Usually even those invocations are--behind the scenes--basically one super long command.
Classic ASP is another option. It's on Windows, IIS is free, and Azure (Microsoft cloud) supports it.
By the way, it appears Delphi is alive and well via Embarcadero [1].
If you pick Delphi and a web framework, you can put it in a Docker container and run it in the cloud. The "build" mechanism in that case is whatever you need from the container image's OS to bootstrap Delphi runtime, but that's well worth learning.
A number of languages have compilers that are smart enough to find and build all your code, if you don't have any dependencies. It's, as others noted, the dependencies that create most of the difficulties.
The alternative is live systems where the source code is compiled and interned for immediate use in the running system. Examples include: Lisp machines, Smalltalk, and Forth.
There might be other paradigms as well.
However, you might also want to try some kind of PaaS. That can simplify those things, even for a Python application.
Fly.io is very nice[0]. AWS App Runner could be another interesting option.
Hope you’ll find something to your liking. Godspeed!
This is how we end up with tools on top of make like configure, autoconf, cmake etc.
If it's any consolation, the front end has gotten even worse. Try understanding a react project if you haven't internalized the architecture... typescript.tsx --> javascript.jsx --> javascript.js --> webpack --> minify --> god knows what.
The irony is, js is an interpreted language! Didn't we (boomer coders) learn that interpreters act directly on code, and only compiled languages needed to be built? Forget about that.
Aaand.. wait for it -- webgl, websockets, webasm! Our woes are just beginning.
Otherwise vanilla JS on the web is a good option to avoid build systems. There are a lot of bundler services that provide URLs to import dependencies right there in the file. For example if you need d3 or similar you still don’t need to build.
I believe you can make chrome extensions without a build too. Not sure about electron, I don’t recall but probably you do although maybe not in the development/iteration process.
What I find frustrating is how every language has its own build system. I don’t see why (in principle) we can’t have one build system with rule sets for each language. Language designers could avoid reinventing the wheel and just write a set of rules.
Cross language stuff, such as generating an API client from a server application, would be much simpler under one system.
You can use a simple build script and a C/C++ compiler like Clang or Zig or Tcc.
You can also use the small libc by Justine Tunney to build once and run your code anywhere: https://justine.lol/cosmopolitan/
We have fast computers, if your project is not huge with millions of lines of code you can easily skip the build system.
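A "simple build script" here can be one long compiler invocation; a sketch in Python (the `src/` layout, output name, and flags are assumptions -- recompiling everything each time is the whole point):

```python
import shutil, subprocess, sys
from glob import glob

def build_command(compiler: str, out: str) -> list[str]:
    """One full-rebuild compiler invocation over every .c file under src/."""
    return [compiler, "-O2", "-Wall", "-o", out, *sorted(glob("src/*.c"))]

def main() -> None:
    # use whichever compiler happens to be installed
    compiler = shutil.which("clang") or shutil.which("gcc") or shutil.which("tcc")
    if compiler is None:
        sys.exit("no C compiler found")
    subprocess.run(build_command(compiler, "app"), check=True)

# Run with: python build.py   (uncomment)
# main()
```

On a fast machine a full rebuild of a modest project finishes before an incremental build system has even parsed its own configuration.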
i've used turbo C in the '80s - afaik the system was very similar to the turbo Pascal environment:
it was an early form of an IDE ... imho this was THE outstanding feature of all these turbo $LANG environments by borland.
* https://en.wikipedia.org/wiki/Borland_Turbo_C
just as an example: in the late '80s i used it on an atari 1040STF - as the name suggests, it had 1 MB (!) of RAM w/o a HDD (!) ... i was a teenie back then & HDDs were prohibitively expensive.
i copied the whole turbo C environment during the system-startup from a floppy-disc into a (dynamic) RAM disc - about 700 kB - and worked with the rest of the available RAM :)
You write PHP, put it behind php-fpm and httpd/nginx and you're done
It's worth having some skepticism of tools. By making some operations easy, tools encourage them. Build systems make it easy to bloat software. Package managers make it easy to bloat dependencies. This dynamic explains why Python in particular has such a terrible package management story. It's been around longer than Node or Rust, so if they seem better -- wait 10 years!
For many of my side projects I try to minimize moving parts for anyone (usually the '1' is literally true) who tries them out. I work in Unix, and one thing I built is a portable shell script that acts like a build system while being much more transparent about what it does: https://codeberg.org/akkartik/basic-build
When I use this script my build instructions are more verbose, but I think that's a good thing. They're more explicit for newcomers, and they also impose costs that nudge me to keep my programs minimalist.
You can see this build system evolve to add partial builds and parallel builds in one of my projects:
https://github.com/akkartik/mu1/blob/master/build0
https://github.com/akkartik/mu1/blob/master/build1
https://github.com/akkartik/mu1/blob/master/build2
https://github.com/akkartik/mu1/blob/master/build3
https://github.com/akkartik/mu1/blob/master/build4
Each of these does the same thing for this one repo -- build it -- but adding successively more bells and whistles.
I think providing just the most advanced version, build4, would do my users a disservice. It's also the most likely to break, where build0 is rock solid. If my builds do break for someone, they can poke around and downgrade to a simpler version.
- not need any dependencies, or
- use almost any more modern systems language like Rust, Zig, or Go
> Alternatively, any good resources for the above?
There are many, unbelievably many writeups and tools for Python building and packaging. Some of them are really neat! But paralysis of choice is real. So is the reality that many of the new/all-in-one/cutting edge tools, however superior they may be, just won't get long term support needed to catch on and stay relevant.
When getting started with Python, I very personally like to choose from a few simple options (others are likely to pipe up with their own, and that's great; mine aren't The One Right Way, just some fairly cold/mainstream takes).
1. First pick what stack you'll be using to develop and test software. In Python this is sadly often going to be different from the stack you'll use to deploy/run it in production, but here we are. There are two sub-choices to be made here:
1.a. How will you be running the Python interpreter itself in dev/test (in other words, what Python language version will you use)? "I just want to use the Python that came with my laptop" is fine to a point, but breaks down a lot sooner than folks expect (again, the reasons for this are variously reasonable and stupid, but here we are). Personally, I like pyenv (https://github.com/pyenv/pyenv) here. It's a simple tool that builds interpreters on your system and provides shell aliases to adjust pathing so they can optionally be used. At the opposite extreme from pyenv, some folks choose Python-in-Docker here (pros: reproducible, makes deployment environments very consistent with dev; cons: IDE/quick build-and-run automations get trickier). There are some other tools that wrap/automate the same stuff that pyenv does.
1.b. How will you be isolating your project's dependencies? "I want to install dependencies globally" breaks down (or worse, breaks your laptop!) pretty quickly, yes it's a bummer. There are three options here: if you really eschew automations/wrappers/thick tools in general, you can do this yourself (i.e. via "pip install --user", optionally in a dedicated development workstation user account); you can use venv (https://docs.python.org/3/library/venv.html -- the stdlib version of virtualenv; yes, the names are confusing, here we are etc. etc.), which is widely standardized upon, manually running "pip install" while inside your virtualenv, and optionally integrating with pyenv so that "inside your virtualenv" is easy to achieve via pyenv-virtualenv (https://github.com/pyenv/pyenv-virtualenv); or you can say "hell with this, I want maximum convenience via a wrapper that manages my whole project" and use Poetry or an equivalent all-in-one project management tool (https://python-poetry.org/). There's no right point on that spectrum; it's up to you to decide where you fall between "I want an integrated experience and to start prototyping quickly" and "I want to reduce customizations/wrappers/tooling layers".
2. Then, pick how you'll be developing said software: what frameworks or tools you'll be using. A Twitter lifeboat sounds like a webapp, so you'll likely want a web framework. Python has a spectrum of those of varying "thickness"/batteries-included-ness. At the minimum of thickness are tools like Flask (https://flask.palletsprojects.com/en/2.2.x/) and Sanic (like Flask, but with a bias towards performance at the cost of using async and some newer Python programming techniques which tend, in Python, to be harder than the traditional Flask approach: https://sanic.dev). At the maximum of thickness are things like Django/Pyramid (https://www.djangoproject.com/). With the minimally-thick frameworks you'll end up plugging together other libraries for things like e.g. database access or web content serving/templating, with the maximally-thick approach that is included but opinionated. Same as before: no right answers, but be clear on the axis (or axes) along which you're choosing.
3. Choose how you'll be deploying/running the software, maybe after prototyping for a while. This isn't "lock yourself into a cloud provider/hosting platform", but rather a choice about what tools you use with the hosting environment. Docker is pretty uncontentious here, if you want a generic way to run your Python app on many environments. So is "configure Linux instances to run equivalent Python/package versions to your dev/test environment". If you choose the latter, be aware that (and this is very important/often not discussed) many tools that the Python community suggests for local development or testing are very unsuitable for managing production environments (e.g. a tool based around shell state mutation is going to be extremely inconvenient to productionize). I guess nowadays there's an opinionated extreme in the form of serverless environments that eat a zipfile of Python code and ensure it runs somewhere; if you choose to deploy on those, then your decisions about points 1 and 2 above will likely be guided/reassessed based on the opinions of the deployment platform re: what Python versions and packaging techniques are supported.
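For the isolation choice in 1.b, the stdlib route really is just a couple of lines; a sketch using the `venv` module directly (the `.venv` directory name and `make_env` helper are my own):

```python
import venv
from pathlib import Path

def make_env(path: str = ".venv", with_pip: bool = True) -> Path:
    """Create an isolated environment; activate it via the activate script in
    its bin/ directory (Scripts/ on Windows) and pip installs stay contained."""
    venv.create(path, with_pip=with_pip)
    return Path(path)

# make_env()  # then: source .venv/bin/activate && pip install flask
```

This is all `python -m venv .venv` does under the hood, which is why it's the least surprising of the three options.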
Yeah, that's a lot of choices, but in general there are some pretty obvious/uncontentious paths there. Pyenv-for-interpreters/Poetry-for-packaging-and-project-management/Flask-for-web-serving/Docker-for-production is not going to surprise anyone or break any assumptions. Docker/raw-venv/Django is going to be just as easy to Google your way through. And if you really don't want to think about things right now and just want to get hello-world in your browser right away, there's always cookiecutters (e.g. https://github.com/cookiecutter/cookiecutter-django).
Again, no one obvious right way (ha!) but plenty of valid options!
Not sure if that's what you were after. If you want a "just show me how to get started"-type writeup rather than an overview on the choices involved, I'm sure folks here or some quick googling will turn up many!
Perl has very flexible syntax, but I stick to the readable bits.
ES modules need no build system.
(actually using Next.js)
It's a bit of a ramble, sorry about that.
MSTOICAL[0] is a fork of an old C based Forth variant, it took some help from the HN community[1] to get it to compile in a modern 64 bit environment, for which I am very thankful. However, it uses Autoconf to configure, build, install, etc... and I can't for the life of me figure out how to remove all of that logic. (C isn't my primary language, I'm willing to learn that, but adding Autoconf on top of it was too much)
I was hoping to strip out Autoconf and pack it all into one HUGE C source file, and be done with everything but a single shell script and/or windows batch file to compile it, so it could be cross-platform.
In order to start work on that, I was willing to switch to Linux (Ubuntu)... I got everything up and running for the most part, but then I couldn't access WikidPad[2], my local Wiki with my appointments, etc. I missed a doctors appointment because of that, so went back to Windows.
WikidPad is written in Python, so you would think cross-platform would be a slam dunk. The issue is that the wxPython library it uses, for reasons I'm sure they think are sufficient, changed the names and nature of variables in some calls. Because you have to build from source, that means someone has to update it to fix those breaking references. I don't have the skill to do so. The community seems to have moved on from WikidPad despite its once wide popularity.
On Windows, you just download an old EXE installer and you're good to go.
So finally... I'm back in Windows 10, and decided to try to craft together a twitter clone in Python with a bunch of weird ideas that I tossed out at 3:30 am in a twitter thread, and put into a more coherent manifesto.[3]
There are a ton of Python libraries that seemed like they'd be trivial to hook together, but when I tried to use Flask, the build system issues broke things, so again, faced with the same issues... I wrote the above "Ask HN"
[0] https://github.com/mikewarot/mstoical
[1] https://news.ycombinator.com/item?id=30957273
[2] https://github.com/WikidPad/WikidPad
[3] https://github.com/mikewarot/iceberg/blob/main/MANIFESTO.md
Other languages followed suit and monstrosities like npm took over in the land of Javascript. Meanwhile Python went totally off the rails and became the absolute worst developer experience of any popular platform today.
Sadly, we're not likely to see the olden days come back. Some people grab a bunch of random shit and put it in docker thinking they found the solution. Unfortunately that strategy comes with its own issues and it's really just caking more complexity on top of what is already much too complex.