Who is still programming directly on a server?
So after a decade plus of doing it the right way inside companies, I felt the need for a break from all the processes and pipelines involved in shipping code. Now, on personal projects, I dump a GitHub repo on a server, run a Go server, and front it with nginx. Then I commit directly on the server. I will say that parts of me want to create a whole framework around doing it the right way, but the hacker in me is really happy to just write and ship code without all the stuff in between.
Anyone else doing this?
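For anyone curious, the nginx half of a setup like that is only a few lines. A minimal sketch, assuming the Go server listens on 127.0.0.1:8080, with example.com and the Debian-style paths as placeholders:

    cat > /etc/nginx/sites-available/myapp <<'EOF'
    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }
    EOF
    ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/myapp
    nginx -t && systemctl reload nginx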
You've misunderstood the "right" way's purpose. It's right because it solves a problem in a particular context, not because it's the best way in all contexts. You're not coordinating with other humans, so why have a mechanism to coordinate your efforts on a personal project? You're not risking people's incomes with minor downtime, so why build a resilient system? The right way comes at a cost of time and effort - why pay for things you don't want?
For my personal blog, I have a poor man's version of a CMS written in Go. A Makefile builds the binary locally, scps it straight to the server, configures Caddy as a reverse proxy in front of it (for goodies like SSL), and uses supervisor to run the binary. I don't even use Git for the personal blog (not yet :)). Here is the deployment snippet from the Makefile:
.PHONY: deploy/web
deploy/web:
	ssh -i ~/.ssh/id_rsa2 -t root@${production_host_ip} '\
		sudo supervisorctl stop ycblog \
	'
	scp ./build/ycblog-linux root@${production_host_ip}:/home/projects/ycblog
	scp ./remote/production/Caddyfile root@${production_host_ip}:/home/projects/ycblog
	ssh -t root@${production_host_ip} '\
		sudo mv /home/projects/ycblog/Caddyfile /etc/caddy/ \
		&& sudo systemctl reload caddy \
		&& sudo supervisorctl start ycblog \
	'
Whenever I change the code, I just do:
make deploy/web
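For completeness, the Caddy and supervisor side of a setup like that might look roughly like this (Caddy provisions TLS for the domain automatically); the domain, port, and paths are placeholders, not the actual config:

    cat > /etc/caddy/Caddyfile <<'EOF'
    blog.example.com {
        reverse_proxy 127.0.0.1:8080
    }
    EOF

    cat > /etc/supervisor/conf.d/ycblog.conf <<'EOF'
    [program:ycblog]
    command=/home/projects/ycblog/ycblog-linux
    autorestart=true
    EOF

    systemctl reload caddy
    supervisorctl reread && supervisorctl update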
I do and I know other developers who do it. On multiple projects I've worked on we had a dev server and production server, cloud instances on AWS or GCP. Developers work over ssh on the dev server (in their own copy of the web app), either with ssh+vim or VSCode over ssh. Deployment to production amounts to a git pull and maybe updating the database schema.
Advantages: Controlled dev environment. No more days-long getting the app to work on a new developer's laptop. Can change dev environment and tools in one place. All developers can see each other's work and collaborate at any time. Dev environment identical to production and easy to refresh as needed.
Disadvantages: Network latency, though less of a problem than I expected. Must have an internet connection to work (at least to see the changes and test). A more subtle disadvantage is that a lot of developers can't use the Linux command line at all, or not effectively, and some sulk about having to learn "the terminal."
If I know I'll be traveling or without internet, I can git pull the source and work on my laptop with no internet connection, then rsync to my dev instance later.
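That offline round trip can stay simple; a sketch, with the branch, host, and paths made up:

    git pull origin main                  # grab the latest before going offline
    # ...work on the laptop without a connection...
    rsync -avz --exclude '.git' ./myapp/ dev-host:/home/me/myapp/   # sync back later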
One of our clients handles non-trivial usage with a setup that is not much more complicated.
They use a hosted DB (like AWS RDS or MongoDB Atlas), an external service for some user data (sorta like Firebase, to handle user accounts, paid/free entitlements, etc.), and I believe also a server image on AWS so they can quickly recreate a clean server if something goes wrong.
Deployment is done with a webhook on git push and a local script that does some folder swapping.
Beyond that, the application (500k+ active users) is a single machine running Apache+PHP for prod, with clones of it for testing.
Thanks to PHP's request-response model there is little to no server-side state outside the DB, so if load were to become excessive they could slap a load balancer in front and manually spin up a second server.
If your business model is different from "must get a ton of views" or "let's monopolize this market" this should be quite enough.
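One plausible shape for that webhook-triggered "folder swapping" script, with the repo URL and paths as stand-ins:

    #!/bin/sh
    set -e
    RELEASE=/var/www/releases/$(date +%Y%m%d%H%M%S)
    git clone --depth 1 https://git.example.com/app.git "$RELEASE"
    # repoint the live symlink; Apache's DocumentRoot targets /var/www/current
    ln -sfn "$RELEASE" /var/www/current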
You can simplify even further and edit live PHP files right in your editor.
Enterprise stuff is designed around managing people's mistakes.
For personal stuff, I go straight to the most efficient way:
- Spin up a Docker image from a base on my home server that has all the stuff set up (SSH server, Python/Node, open ports, etc.)
- Use VSCode with Remote SSH to develop directly in the Docker container. Don't even bother with git; I just keep all the code around in different files and never delete it.
- I have a cron job running on the Docker host that snapshots containers every day and keeps snapshots for a week. Sometimes useful for backups (see the sketch after this list).
- Once a server is up and running and I want public access, I set up a Cloudflare Tunnel to it with a subdomain.
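A rough sketch of what that daily snapshot job can look like; container selection, tag scheme, and the cleanup policy are assumptions, not the commenter's actual script:

    #!/bin/sh
    # snapshot every running container as a dated image
    for c in $(docker ps --format '{{.Names}}'); do
        docker commit "$c" "snapshot-$c:$(date +%F)"
    done
    # a separate step can then `docker rmi` snapshot tags older than a week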
I'm kind of ashamed to admit this but I have multiple servers that are just Go binaries running in tmux...
I've been meaning to configure systemd but haven't gotten around to it.
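For whenever that day comes, the whole job is about ten lines; the unit name, binary path, and user below are placeholders:

    cat > /etc/systemd/system/myapp.service <<'EOF'
    [Unit]
    Description=myapp Go server
    After=network.target

    [Service]
    ExecStart=/home/me/bin/myapp
    Restart=on-failure
    User=me

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable --now myapp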
Back in the 90s, we did it a bit like that... only we didn't have git, and most of us didn't even know about the version control systems that did exist. Before doing anything serious, I'd make a manual backup, then just edit files directly on prod through SSH. Alarmed? At least I said SSH, and not telnet...
I work with a client that has us ssh in. We commit on branches, and the branches are linked to testing subdomains, so there is some safety. In theory you can spin up a local copy; almost no one does. Between the microservices and needing to copy multiple SQL schemas and seed them with test data, there are a lot of hurdles for little reward.
For my personal sites, one is just an scp into a subdir. If everything looks good, I just mv it over the live directory. For the other, I just restart the process. There's nothing "mission critical" on either.
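The scp-then-mv version, roughly, with the host and paths made up:

    scp -r ./public me@myhost:/var/www/staging
    # check it looks right at the staging path, then cut over
    ssh me@myhost 'mv /var/www/live /var/www/old && mv /var/www/staging /var/www/live'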
I do this with Pharo [1] (a Smalltalk descendant). Connect my computer to the internet on some non-standard port and use Chrome Remote Desktop to log in.
Using ssh and Pharo is possible but the language comes alive when using the GUI.
[1] pharo.org
The "right way" isn't always the same way - in fact I'd go as far as saying that you should beware of anyone who tells you there is only one "right way".
There's no point committing the effort and resources of a full CI/CD pipeline to a simple admin script that'll only ever run on the one VM you've got - but if that script is going to run on 100 servers, you don't want to be running git pull on every one of them manually.
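Even the crude version of "run it on 100 servers" stops being a manual git pull very quickly; a sketch, with the hosts file and path assumed:

    while read -r host; do
        ssh "$host" 'cd /opt/myscript && git pull --ff-only'
    done < hosts.txt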
I don't see using the proper process as any slower. It reduces the activation energy needed to make changes when you know you can revert them no matter how messed up, as well as deploy it live by simply pushing master to origin.
The main threat, after not building something useful, is having the code and design go off the rails; a standard git process prevents that. You can handle almost any complexity without getting confused about the state of things.
Not on prod, but I did a startup in Go and used a droplet running NixOS as my dev machine.
I also have a headless 64GB mini computer I ssh into at work currently, because my 32GB machine isn't enough for our codebase. It being headless gives me a few free gigs of RAM to boot.
I always programmed on a server via ssh in college. And then partway through my time at Amazon, they gave us all cloud desktops. It just feels like home.
I do indeed have some projects where my dev environment and my prod environment are the same environment.
I also have things that run from a systemd unit that just points at a (clean) checkout.
And I have something running under a local k3s from a local container registry. Which I haven't gotten around to setting up a proper automatic pipeline for yet, although `make publish` is pretty close.
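`make publish` in a setup like that presumably wraps something along these lines; the registry address, image name, and manifest path are guesses:

    docker build -t localhost:5000/myapp:latest .
    docker push localhost:5000/myapp:latest
    kubectl apply -f k8s/myapp.yaml
    kubectl rollout restart deployment/myapp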
I'm the lone coder on all my projects these days. I develop on my local machine, offline, test locally (against local apache/nginx/nodejs and local database copies, then against live remote databases), keep local git archives backed up to private ones on github, and diff and deploy updates solely over SFTP. Works for me.
PHP
PHP makes this incredibly easy to do, and that's a big reason why it got so popular decades ago.
I’ll be the first to admit, I still do this occasionally even today.
The right way is … the right way. I’ve done both and as soon as you’re doing something meaningful you’ll want the right way.
parts of me want to create a whole framework around doing it the right way
That’s the reason everyone does this when alone. If the right way were seamless, it would look similar to an ssh window. But it is not seamless, and it's overkill for a personal server. Same for “local” development in a container.
Imo, this is absolutely fine as long as you are using git and your users are fine with some potential downtime.
I stood up an Airflow instance on a VM which is regularly updated by a cron job that does a git pull and build. It may need to be replaced eventually, but it's been running great for years, so why change it now?
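The cron entry for that kind of setup is typically a one-liner; the schedule, paths, and build script here are assumptions:

    # crontab: pull and rebuild every night at 02:00
    0 2 * * * cd /opt/airflow-project && git pull --ff-only && ./build.sh >> /var/log/airflow-deploy.log 2>&1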
Nearly. My personal projects are committed to git, and a shell script "deploys" them on a VM.
I know a friend who runs a nearly identical approach, but with PHP.
He creates a repo locally, works on a project of his liking, buys a VPS, inits an empty repo there and syncs it with his local one, pushes it live, puts NGINX in front as a reverse proxy, and goes live. lol, that's it!
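The classic way to wire that up is a bare repo on the VPS with a post-receive hook; every name and path below is a stand-in:

    # on the VPS
    git init --bare /home/me/site.git
    cat > /home/me/site.git/hooks/post-receive <<'EOF'
    #!/bin/sh
    git --work-tree=/var/www/site --git-dir=/home/me/site.git checkout -f
    EOF
    chmod +x /home/me/site.git/hooks/post-receive

    # locally
    git remote add live me@vps.example.com:/home/me/site.git
    git push live main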
Do the simplest thing that works for you. You know the tradeoffs that apply to your situation and if you're happy with it then great.
I've seen companies cargo-cult the "right" way far more often than I've seen them actually do it the right way.
I’ve been programming directly on a server since 2014, through remote ssh configured in my IDE (Atom —> Pycharm —> VSCode). That’s pretty much the only way I can do it because I’m writing multi-GPU PyTorch code.
I do this for my personal site and email server. Just SSH into some instance, pull master, and run the server. Anything that gets modified or maintained more than once in a blue moon greatly benefits from some kind of process though.
For Christmas I bought a Raspberry Pi, stuck Ubuntu Server on it, and have been basically treating it as a cron/systemd-timer machine ever since. Not really the same idea, but pretty similar.
On some solo projects I just run Dropbox on the server and work locally, either letting it auto-deploy as I save/build or having a "copy to ./release/" deploy script.
I do for some personal things that I don’t care if it breaks. It’s a lot easier than trying to do it the “right” way especially with the vscode ssh integration.
This is my local development workflow. Then when I want to 'ship to prod' each component gets wrapped in a container and glued together appropriately.
Voila.
I have a project where I copy a folder, edit files with nano, and then switch the symlink that the webserver uses as its base directory.
Every day. Git update + build + run tests + launch.
Having a really good test suite is key. I launch straight into production with zero fear.
Develop locally, git push, ssh into VM, git pull.
Works fine for small scale FastAPI RAG setup with proprietary, mostly static data. Gateway handles the retrieval and routes to one of a handful of LLM servers. Nimble and kind of fun, honestly.
Add the Remote SSH plugin in VSCode and do it.
Yes. I've always done it this way.
Doing this with home automation.