I'm curious if anyone here currently works (or has very recently worked) somewhere where proprietary Unix is still used for production. If so, can you tell me what they're used for and why those deployments haven't been moved to an appropriate Linux distribution?
Not suggesting Linux is necessarily better for all use cases, just wondering what keeps this small number of entities clinging to closed-source Unix, with its presumably pricey license costs.
This used to run on a Compaq ProLiant server (a huge, noisy Intel 486 tower) until around the end of the millennium, when it was converted into a VM: first on VMware, then on Hyper-V, where it has been running comfortably on various hardware (Intel Dell PowerEdge, AMD SuperServer) ever since.
Access is the biggest issue, as the OS only supports telnet and serial access. So ever since it was converted to a VM, it runs on a dedicated VLAN (666, just to make sure nobody ever misunderstands the true evil underneath...), with an AD-authenticating-HTTPS-to-Telnet bridge (coded up in Visual Basic .NET using some long-long-deprecated libraries) connecting it to the outside world.
That VB.NET kludge was recently upgraded to .NET 6, in order to get TLS 1.2 support. This was surprisingly uneventful, and I'm pretty sure this abomination gets to live another decade or so.
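For the curious, the whole bridge boils down to something like this; a rough Python sketch of the idea, not the actual VB.NET code, and the host address, port, and auth check are made up:

    # HTTPS front end on the corporate side, telnet back end on the isolated VLAN.
    # Python's telnetlib is itself deprecated, which feels fitting here.
    import ssl, telnetlib
    from http.server import HTTPServer, BaseHTTPRequestHandler

    SCO_HOST = "10.66.66.10"   # the VM on VLAN 666 (made-up address)

    def user_is_allowed(auth_header):
        # Stand-in for the AD check: the real thing validates credentials
        # against the directory and a short whitelist of internal users.
        return auth_header is not None

    class Bridge(BaseHTTPRequestHandler):
        def do_POST(self):
            if not user_is_allowed(self.headers.get("Authorization")):
                self.send_error(401)
                return
            cmd = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            with telnetlib.Telnet(SCO_HOST, 23, timeout=10) as tn:
                tn.write(cmd + b"\n")
                output = tn.read_until(b"$ ", timeout=10)  # crude prompt detection
            self.send_response(200)
            self.end_headers()
            self.wfile.write(output)

    httpd = HTTPServer(("0.0.0.0", 8443), Bridge)
    # TLS termination; getting TLS 1.2 here was the point of the .NET 6 upgrade.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("bridge.pem")
    httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    httpd.serve_forever()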
Ah, yes, a career in IT... Always on the forefront of cutting-edge tech...
(Later edit to, like, actually answer the question: licensing costs are nonexistent; SCO is gone anyway, and we don't require any support or updates. Migrating to Linux might be an option, but it would most likely be hugely painful, and the existing VM scenario Just Works for everyone involved. Security and such is not a real issue: only a handful of internal users have highly restricted access via a proxy.)
They released a major update in 2020 that allowed you to move windows around the screen. It was groundbreaking.
But let me tell you, this system was absolutely terrible. All the machines were full x86 desktops with no hard drives; they netbooted from the manager's computer. Why not a thin client? A mystery.
The system stored a local cache of the database, which was only superficially useful: the cache was always several days, weeks, or months out of date, depending on which data you needed. Most functions required querying the database hosted at corporate HQ in Cleveland. That link was up about 90% of the time, and when it went down, every store in the country was crippled.
It crashed frequently and was fundamentally incapable of concurrent access: if an order was open on the mixing station, you couldn't access that order to bill the customer, and you couldn't access the customer's account at all. Frequently, the system lost track of which records were open, requiring the manager to manually override the DB lock just to bill an order.
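As far as I could tell, the locking worked something like this (my own made-up sketch of the pattern, in Python, not the vendor's actual code):

    import sqlite3, time

    db = sqlite3.connect("store.db")
    db.execute("""CREATE TABLE IF NOT EXISTS order_locks
                  (order_id INTEGER PRIMARY KEY, station TEXT, taken_at REAL)""")

    def open_order(order_id, station):
        # The whole order is locked for one station; billing, account lookup,
        # everything else gets refused until the lock row goes away.
        try:
            db.execute("INSERT INTO order_locks VALUES (?, ?, ?)",
                       (order_id, station, time.time()))
            db.commit()
        except sqlite3.IntegrityError:
            raise RuntimeError(f"order {order_id} is already open on another station")

    def close_order(order_id):
        db.execute("DELETE FROM order_locks WHERE order_id = ?", (order_id,))
        db.commit()

    # Nothing ever expires stale locks, so when the mixing station crashes
    # mid-order, the manager has to run the override by hand.
    def manager_override(order_id):
        close_order(order_id)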
If a store had been operating for more than a couple of years, the DB got bloated or fragmented or something, and the entire system slowed to a crawl. It could take minutes to open an order.
Which is all to say it's a bad system that cannot support their current scale of business.
There’s a bunch of enterprisey technology that is not yet dead, but dying very slowly.
https://www.usa.philips.com/healthcare/solutions/radiation-o...
Last I heard they were desperately trying to get it onto Linux. The reason it isn't there yet is that it's a huge legacy application with some very horrible hacks specific to the OS.
The software being run is SAP R/3 and Oracle, and there are plans to replace it, but that is not happening anytime soon due to the usual delays associated with ERP migrations.
License cost is a red herring here, especially when dealing with enterprise applications from the likes of SAP, Oracle, and IBM. Heck, we're probably paying as much for our SAP-on-SUSE subscriptions as we do for our HP-UX licenses, and the real license cost is in the applications and databases.
And it's not that long ago (say, 2014) that there were niches where the only really cost-effective way to get enough single-box I/O performance was to buy a non-x86 box that came with its own Unix. So there are a lot of systems out there where the hardware isn't actually old enough for a 1:1 migration to make commercial sense, and rewrites/redesigns of ERP software are risky, with most projects overrunning both time and budget by an order of magnitude, if they don't outright fail to deliver a new system.
Or so they thought. Now it looks like most of the engineers moved on to the slate green pastures of Wintel, but occasionally an old format or workflow still tends to pop up. I know of some software that was still being updated for those old V4 machines five years ago, but I've been out of the loop since.
The same software is used in aerospace, too, where you're not switching to totally new models as fast, so I wonder how legacy-laden their software infrastructure is. "Who here knows both French and early-2000s SGI admin?"
When I worked for $(LargeDefenseContractor) we used Solaris for a defense system we were developing. Over time the older units (based on older hardware) would be passed down to the National Guard. I would not be surprised if Solaris was still being used in obscure places in the military.
Solaris 7 was a pretty awesome OS as I recall, but before long Intel and AMD started supporting Linux as a workable OS option for their server chips. Then Linux on the cloud took off, and the rest is history.
I can't say bad things about the whole Sun/Solaris combo though. It's rock solid and requires practically no maintenance whatsoever.
Also, since it's completely off the internet, it's not like it could be compromised in any way.
The stepper is a tool that is too expensive to retire, and if routinely maintained, can remain operational and happily expose wafers for more than twenty years[1]. No surprise to see it contains some ancient (by consumer electronics standards) software.
[1] https://www.asml.com/en/news/stories/2021/three-decades-of-p...
Also, I used an HP 86142B optical spectrum analyzer last year, which runs HP-UX.
Most customers were on Ubuntu/RHEL/Windows. There was very little on FreeBSD, AIX, or Solaris. We saw zero interest in HP-UX; I think that one is dead enough to be ignorable. Banks and financial services have a tendency towards AIX, while Solaris was, I think, primarily one customer that had a lot of legacy. AIX and Windows were the biggest pain in the ass, but every time we tried to kill support for them, people discovered sizable contracts that had been signed with us (yeah, our tracking in Salesforce was bad).
My background is that I learned C and Unix on a Tandy 6000 back in 1989/1990, then in college used and worked on a wide variety of O/Sen (Dynix, BSD4.3, Digital Unix 4.0, SunOS 4.1.4, Solaris 2.4+, Irix 5.x/6.x (i think), something that ran on a VAX, NetBSD and later Linux). I ported NMAP to a bunch of those and did the original GNU autoconf work on it. I've been mostly Linux since 2001 (Amazon from 2001-2006).
You are kind of combining two things though: legacy systems, and proprietary systems.
There are modern proprietary systems as well. RHEL is a good example.
I'd argue it's not a "small number of entities" though. You'd be shocked by what legacy systems are running in the most important places on the planet...maybe scared. Unfortunately/Fortunately, nuclear facilities aren't running Linux Kernel 6.X
The fact is, a lot of (probably most) problems solved with a computer don't need further updates. As long as the hardware continues to function, all is well.
Something you may not have considered is that when the time does come and the hardware does fail, I'd guess most organizations will opt -- and even go out of their way -- to source the same legacy components they had before to keep things running exactly the same, instead of upgrading to a more modern solution.
I've had to do this a number of times for clients. Not long ago, I had to source an old mainboard for a system that was 20+ years old... in doing so, I realized there is some good money to be made if you can source parts for systems about 20 years in the past, because the board was like $300 (this was 2017, and the board had a 33 MHz processor and something like 8 MB of RAM).
If you don't have to touch these systems, count yourself lucky.
If you do touch these systems, thank you for your service.
In regards to modern proprietary systems, there are many, but if you consider RHEL for instance, there is a lot of value for large organizations. They can reduce the number of on-hand personnel, who would probably be less efficient at solving an OS issue than a RHEL engineer. As an example, the Federal Reserve runs modern RHEL... but I'd guess if you dig deep, they have some really old stuff too...
They're usually just running some closed-source service that is too expensive/impractical to replace, and aren't causing enough pain and suffering to anyone, so there is no business case for replacing them.
I like having them around. Sure, projects could be created to replace them with some modern webshit on Linux, but it would probably run into the tens of millions of euros, take years, and work less reliably than the shit that's been chugging along just fine for longer than I've worked in tech.
Prior to that (late '90s, early 2000s) it was mainly Solaris. One place was fairly heterogeneous, and had Solaris, HP-UX, Digital Unix (aka OSF/1, Tru64), and a couple of others.
The reason is maintenance contracts for very long-term systems that never get upgraded (they are ultimately replaced outright by something else).
They are similar to that old bulb in a fire station[1]: it is strictly forbidden to breathe around these systems; if you sneeze you are fired on the spot :)
We had to move them twice between data centers. I had some popcorn with me when they were powered off, transported as if they were the Mona Lisa, and then restarted, with the sysadmins not watching and asking about the flashing numbers. Good memories.
Proprietary Unixes are common in my world and have probably made up 75% of my career, including my current massive project. Note these are modern and up to date, usually purchased brand new (hardware and software) for an implementation project, not inherited legacy stuff (which seems to set me apart from most respondents here).
There are several reasons, but note that I have a very bottom-up perspective.
1. Support. I think this is the major thing. Having a reliable long-term vendor with a pricey, well-written, steady support model is important to companies who run ERP.
2. Related is the perception, and the reality, of stability. AIX on POWER is as proprietary as it gets. These things get rebooted once or twice a decade. A hardware upgrade to another frame is a live migration through firmware. It is not fancy or pretty, but God dammit baby, it works. The perception is there too - that Linux scales well out but not up, that its vendor support is not at the same level, that it moves too fast and breaks things, etc.
3. Deals and contracts. There may be a legacy hardware footprint, or the client may get a package deal with application, middleware, database, and hardware.
My personal perspective? Proprietary Unix is ahead on internals, behind on shiny; boring and reliable. There's a lot to be said for distributed cheap boxes over proprietary big boxes. But I don't think modern SREs fully grok how rarely I ever had to deal with a hardware or OS issue or outage on these things. Anecdata, sample size = 1 plus gossip, but it's just a very different mindset and, here's the trick, there's nothing inherently wrong with that mindset, even if it's not currently in vogue.
I prefer to work with Linux for a few reasons, including shiny and resume-helpful, but honestly, from a business/management perspective, a grouchy, experienced AIX sysadmin on a POWER stack makes my job a lot easier.
Edit / p.s. Again, these are modern OSes and support GUIs and tunnelling... but I don't think anybody ever uses them. The application stack running on top is certainly modern and GUI/web, but installing and supporting the OS, database, middleware, and apps is all CLI: very obscure, very efficient, very powerful.
My take on each of the OSes was:
AIX and the associated IBM stuff is kind of a mess. I encountered a bug where /etc/filesystems (the fstab equivalent) was parsed differently during boot than when using the mount command manually. The focus seems to be on the menu-driven smit utility as the primary admin tool, with automation of admin tasks an afterthought. The built-in commands are often not very practical, requiring multiple steps to do things that you're used to doing in one on Linux. Installing some open-source tools is essential to sanity. Some of IBM's own tools use expect on their own software (looking at you, lpar_netboot).
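For anyone who hasn't seen it, /etc/filesystems is stanza-based rather than one-line-per-mount like fstab; from memory an entry looks roughly like this (the device and log names here are just examples):

    /data:
            dev       = /dev/fslv00
            vfs       = jfs2
            log       = /dev/loglv00
            mount     = true
            check     = false
            options   = rw
            account   = false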
SCO is clearly unmaintained stuff that looks like it dates from 30 years ago. At least it's simple to use.
Solaris had some nice features, like Zones or ZFS, but much to my dismay I couldn't play with them, as I was made to install an old version of the OS because the newer version wasn't listed as supported by the version of Sybase that was to be installed on it.
I worked at an ISP in 2007 which was running mostly on Sun hardware and Solaris. This was because of huge discounts provided by Sun. Most devs ran Linux on their workstations. In 2014 I got to work with some guy whose previous project had been at that ISP, who was at that point desperately trying to move off Solaris because they had to start paying list price for the OS and it was much too expensive.
Correction: It has been pointed out to me that I'm currently using macOS which, Darwin notwithstanding, is technically a proprietary UNIX.
I recently rewrote the system we use to push user accounts and passwords to systems that don't support LDAP. It was amusing to write an app using a current-day stack on RHEL 8 that purely exists to handle these very legacy systems.
One of my favourite systems I've had to work on is running Solaris 2.5.1. Users are added to the program by editing the source code and recompiling it. How times have changed.
and it was kind of great. kept things interesting, at a minimum.
once linux and red hat started gaining real traction in industry, i felt like losing all these high-priced unix distributions was kind of... lame.
i always had this idea, for instance, that working on/with Solaris -- i was driving this high performance _machine_ that was capable of doing almost anything, as long as i was up to the task - the Mercedes of OSs.
losing all those -- i would kind of compare it to how the English language - like the Linux OS - is taking over the world. at the same time that is happening, either as a direct result or something less than that, we're losing all these other languages. ditto biodiversity loss. ditto city gentrification / sameness / sterility. it feels wrong / unhealthy.
¯\_(ツ)_/¯
https://www.theguardian.com/news/2018/jul/27/english-languag...
what i'm saying is i miss the days of BeOS. :)
Back in the 00s it was common to have to work on multiple proprietary platforms. I did a lot of platform engineering work for one product that ran across Solaris (Sparc and x86), AIX (POWER), HPUX (PA-RISC and Itanium), Linux (x86 and System Z) and Windows.
Now ... if I'm lucky I don't have to care about the platform at all, I just write lambdas in my language of choice and throw them at AWS. It's a very different world!
There was a common set of cshell-based tools used between those two environments, among other '90s-style Unix tools, like software written against the SunOS Open Look widgets, and tools written in Tcl/Tk.
I wonder if those machines are still in use!
We also apparently still have some AIX, but I'm not sure what it supports. AIX is still somewhat popular in financial services; probably others as well.
I know of a couple of large HP-UX shops left.
Anyone running VMS? RSTS/E? Or, on rare hardware, OS-32 on a PE 8/32, or MPX on any SEL 32 family? MPE on Harris?
Later on, in, eh, 2015 or so, I worked at a company providing backup software that was tested and worked with every piece of niche Unix hardware under the sun. Usually this was to support large legacy industrial companies (think defense, materials, etc.) who still used the hardware.
Banks will also use these things mostly for legacy reasons. The software got written once and has been working and validated for decades, no reason to rewrite it for a different OS just because.
Price is usually not a major factor when compared to the size of the business and number of employees.
I’m pretty certain most if not all are still running.
The reason I'm chiming in is to give my advice to those still dealing with these: learn vi, at least enough to edit config files. It's the only editor you can find on all of these systems, and you often can't install your own software.
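The survival subset is tiny; roughly:

    i          enter insert mode
    Esc        back to command mode
    x          delete the character under the cursor
    dd         delete the current line
    u          undo
    /pattern   search forward
    :wq        write the file and quit
    :q!        quit without saving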
Several times we looked at moving the DB to Linux/x86 but Oracle's pricing made it non-economical, or so I was told. All the app-tier servers (Java) ran Linux.
Haven't used a non-Linux system in production since leaving that company.
But later Oracle got rid of the JVM/Solaris integration/QA team, and shortly after that discontinued Solaris for new hardware completely.
Linux it is now.
Luckily it had bash, so it felt like a Linux system for my scripts. There's a command to enumerate all the hardware on it; I remember running it to see what was assigned to the LPAR the AIX instance was running on. It took 3 minutes to complete :)
So yeah, a mainframe. Telco. IBM, obviously. It was used as a massive Oracle database.
Most servers were already Linux/x86 based at the time (2012), but I vividly remember SSHing to that one machine where things felt just... different.