But the migration to ARM is proving to be quite a pain point. Not being able to just do things as I would on x86-64 is damaging my productivity and creating a necessity for horrible workarounds.
As far as I know none of our pipelines yet do multi-arch Docker builds, so everything we have is heavily x86-64 oriented. VirtualBox is out of the picture because it doesn't support ARM. That means other tools that rely on it are also out of the picture, like Molecule. My colleague wrote a sort of wrapper script that uses Multipass instead but Multipass can't do x86-on-ARM emulation.
I've been using Lima to create virtual machines which works quite well because it can do multiple architectures. I haven't tested it on Linux though, and since it claims to be geared towards macOS that worries me. We are a company using a mix of MacBooks and Linux machines so we need a tool that will work for everyone.
The virtualisation situation on MacBooks in general isn't great. I think Apple introduced Virtualization.framework to try to improve things, but the performance is actually worse than QEMU's. You can try enabling it in the Docker Desktop experimental options and you'll notice it gets more sluggish. Then there are other annoyances, like having to run a VM in the background for Docker all the time because 'real' Docker is not possible on macOS. Sometimes I'll have three or more VMs going and everything except my browser is paying that virtualisation penalty.
Ugh. Again, I love the performance and battery life, but the fragmentation this has created is a nightmare.
How is your experience so far? Any tips/tricks?
In reality I have hardly turned on the Intel MBP at all since I got it. At all.
Docker and VMware Fusion both have Apple Silicon support, and even in "tech preview" status they are both rock solid. Docker gets kudos for supporting emulated x86 containers, though I rarely use them.
I was able to easily rebuild almost all of my virtual machines; thanks to the Raspberry Pi, almost all of the packages I use were already available for arm64, though Ubuntu 16.04 was a little challenging to get running.
I also had to spend an afternoon updating my CI scripts to cross-compile my Docker containers, but this mostly involved switching to `docker buildx build`.
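For anyone who hasn't made that switch yet, the change is mostly this (the builder name and image path are placeholders):

```shell
# Create (once) a builder instance that can target multiple platforms
docker buildx create --name multi --use

# Build for both x86-64 and arm64 and push the manifest list in one step
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:latest \
  --push .
```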
Rosetta is flawless, including for userland drivers for USB and Bluetooth devices, but virtually all of my apps were rebuilt native very quickly. (Curious to see what, if anything, is running under translation, I just discovered that WhatsApp, the #1 Social Networking app in the App Store, still ships Intel-only.)
https://developer.apple.com/documentation/virtualization/run...
https://developer.apple.com/videos/play/wwdc2022/10002/
If you are not familiar, Rosetta is how Apple Silicon Macs run existing x86 Mac binaries, and it is highly performant. It does binary pre-compilation and caching, and it also works with JIT systems. Apple is now making it available within Linux VMs running on Macs.
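A handy way to check whether a given process is actually being translated (macOS only; the sysctl key doesn't exist on Intel Macs):

```shell
# Prints 1 when the calling shell runs under Rosetta translation,
# 0 when it runs natively on Apple Silicon
sysctl -n sysctl.proc_translated
```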
My solution was to give up using my M1 mac for development work. It sits on a desk as my email and music machine, and I moved all my dev work to an x86 Linux laptop. I'll probably drift back to my mac if the tools I need start to properly support Apple Silicon without hacky workarounds, but until GitHub Actions supports it and people start doing official releases through that mechanism, I'm kinda stuck.
It is interesting how much impact GitHub has had by not having Apple Silicon support. Just look at the ticket for this issue to see the surprisingly long list of projects that are affected. (See: https://github.com/actions/virtual-environments/issues/2187)
As a primarily Linux user these feel like very familiar stories.
It's kinda refreshing to hear those stories from mac users. Maybe we are not so different after all.
2. Everything you install will be ARM-based. Docker will pull ARM-based images locally. Almost every project (that we use) now has ARM support via Docker manifests.
3. Use binfmt to cross-build x86 images inside the Parallels VM (via `prlctl`), or have CI auto-build images on x86 machines.
That pretty much does it.
I'm only half joking. I'm in the group of people who know that Docker is a security nightmare unless you're generating your Docker images yourself, so wherever I've had to support it, I insist on that. If you don't use software that's either processor-centric (and therefore buggy, IMHO) or binary-only, then this is straightforward and a win for everyone.
Run x86 and amd64 VMs on real x86 and amd64 servers, and access them remotely, like we've done since the beginning of time (teletypes predate stored program electronic computers).
Since Docker is x86 / amd64 centric, treat it like the snowflake it is, and run it on x86 / amd64.
I work on scientific software, so the biggest technical issue I face day-to-day is that OpenMP-based threading seems almost fundamentally incompatible with the M1.
https://developer.apple.com/forums/thread/674456
The summary of the issue is that OpenMP threaded code typically assumes that a) processors are symmetric and b) there isn't a penalty for threads yielding.
On M1 / macOS, what happens is that during the first OpenMP for loop, the performance cores finish much faster, their threads yield, and then they are forever scheduled on the efficiency cores which is doubly bad since they're not as fast and now have too many threads trying to run on them. As far as I can tell (from the linked thread and similar) there is not an API for pinning threads to a certain core type.
Assuming your base images are themselves already multi-arch, most of the tooling we needed was already built into the `docker buildx` tool, which is awesome - check it out if you haven't (2). Docker has bundled all the tooling and emulation packages (qemu) needed into a single Docker image that can publish multi-arch Docker images for you! You run Docker to emulate Docker to publish Docker... There are some interesting things you'll need to do if you publish multi-stage builds, like publishing a tmp tag and deleting it when you're done, but it's not /too/ terrible. Since Airbyte is OSS, you can check out our connector publish script here (3) to see some examples.
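Once a tag is published, you can verify which architectures it actually carries without pulling anything (shown here against a public image; substitute your own tag):

```shell
# Print the manifest list for a published tag, including each
# platform (linux/amd64, linux/arm64, ...) it was built for
docker buildx imagetools inspect ubuntu:22.04
```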
I'd recommend spending the time to get your multi-arch tooling working - not only does it make the local dev experience faster/better, it also:
1. unlocks ARM cloud compute, which can be faster/cheaper in many cases (AWS)
2. removes a class of emulation bugs when running AMD images on ARM - mostly around networking & timing (in the Java stack, anyway)
Links:
1. https://github.com/airbytehq/airbyte/issues/2017
2. https://docs.docker.com/buildx/working-with-buildx
3. https://github.com/airbytehq/airbyte/blob/master/tools/integ...
Then tried again early this year with an M1 Max MBP, and it has been the biggest step-change productivity boost of my life. Definitely still some pain points, but the way this thing handles anything I throw at it is incredible.
I'm mostly doing front-end dev (React Native). I have a minimum of 2 IDEs, 1 iOS simulator, 1 Android simulator, a Windows (ARM) VM, and 2 browsers open at all times. Then add a mix of Docker, Xcode, Android Studio, Sketch, Affinity apps, Slack, Zoom, etc. I haven't ever heard the fan spin up. I was carefully managing what I had open on the 2018 MBP, and now I don't even think about it.
The only things I'm still running in Rosetta are Apple software: Xcode and the iOS simulator, but they run smoothly, so I don't even think about it.
The MBA setup I was just flailing my way through. For the M1Max setup, I found this guide very helpful in my initial setup (mostly focused on a RN Dev): https://amanhimself.dev/blog/setup-macbook-m1/
We are completing a project to upgrade most of our microservices to Java 11. This will also mean we do multi-arch container builds for our entire pipeline. Once that is complete, we will begin developing against ARM as the primary target for devs. This is needed because we are in the middle of a hardware refresh, so by EOY something like 80% of devs will be running on M1 Pros.
This should end up saving us money long term as we move all the cloud workloads to Graviton2/3 and Ampere A1 hosts.
We'll be multi-arch for many years to come. I also don't see a timeline for us to sunset x86 support, considering we also do on-prem installs and ARM rack-mount servers are nearly impossible to source.
Early on there were major issues if you were targeting x86-64 systems code in C++. A lot of common tooling was broken for months on end. Time has solved some of these problems, but ultimately I wrote code to support native ARM targets as first-class citizens. There was a significant learning curve (I was not fluent in the ARM64 ISA), but now that everything is ported and running natively, and the tooling has finally started to catch up, it works pretty smoothly and I don't have much to complain about. The x86 emulation has limitations, so native is the only way to go for many things, and you'll still want to test some code on real x86 server hardware.
That said, the latest OS on (at least) the M1 is noticeably buggy. Lots of behavioral artifacts that I wouldn't expect. Not sure if it is the hardware or the software, or both. Nothing catastrophic, just annoying.
The amount of trouble that some seem to be having with backend dev on M1 makes me wonder if maybe it wasn't the best idea for the industry to put its collective eggs in the single basket of trying to perfectly match dev and prod environments. If nothing else, it feels weird for progress and innovation in the world of end-user/developer-facing computing to be held back by that of the server world.
The only problem we've had is slow Docker performance with our databases. So much so that we've moved those out of Docker and back to running natively. Performance is easily 6x faster. MySQL was also a headache because no official MySQL 5.7 Docker image exists for ARM, so we needed to use the slow emulation through qemu.
We also have a CLI dev tool that is written in Python and distributed in Docker (x86), which has also been slow. We haven't had enough time to build an ARM-based Docker image.
Here's a tip for anyone with docker compatibility problems: If you add `platform: "linux/amd64"` to your docker-compose (there's also a similar command for Dockerfile iirc), it just gets the x64 images and emulates those.
There is emulation overhead of course, but in my experience it's not noticeable compared to running native images.
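For reference, the compose-file version of that tip looks like this (the service name and image tag are just examples); in a Dockerfile the equivalent is `FROM --platform=linux/amd64 ...`:

```yaml
services:
  db:
    image: mysql:5.7          # example of a tag with no arm64 build
    platform: "linux/amd64"   # pull the x86-64 image and run it under emulation
```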
Currently running a bunch of Ubuntu (ARM) virtual machines and my MBP M1 handles it really nicely.
Running x64 and ARM together on one machine will work through tricks like Rosetta but I don't believe that stuff will ever work well in virtual machines, not until Apple open sources Rosetta anyway.
I'd take a good, hard look at your tech stack and find out what's actually blocking ARM builds. Linux runs on ARM fine, so I'm surprised to hear people have so many issues.
What you could try for running Docker is running your virtual machines on ARM and using the native qemu-user-static/binfmt infrastructure Linux has supported for years to get (less efficient than Rosetta) x64 translation for the parts of your build process that really need it. QEMU is more than just a virtualisation system; it also allows executing ELF files from other instruction sets if you set it up right. Such a setup has been very useful for me when I needed to run some RPi ARM binaries on my x64 machine, and I'm sure it'll work just as well in reverse.
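On a Debian/Ubuntu ARM host, that setup is roughly the following (the binary name is hypothetical):

```shell
# Install qemu user-mode emulators and register binfmt handlers
sudo apt-get install -y qemu-user-static binfmt-support

# Foreign-architecture ELF binaries now run transparently, e.g.:
./some-amd64-binary       # hypothetical x86-64 executable

# The same mechanism lets Docker run x86-64 containers on ARM:
docker run --rm --platform linux/amd64 alpine uname -m   # prints x86_64
```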
For my 9-5 employer, biggest drawback we've come across is that SQL Server can't be installed on Windows 11 ARM, which is preventing us from having a truly local development environment.
We've gotten everything else working via Azure SQL Edge running via Docker for Mac, but it lacks several features that we require (e.g. full-text search, spatial data types).
Despite a recent announcement (https://blogs.windows.com/windowsdeveloper/2022/05/24/create...) that Visual Studio will soon support ARM, there are no signs that SQL Server 2022 will.
My employer is still moving forward with provisioning M1 MBPs for developers.
But apart from that it’s been incredibly smooth.
Maybe this will help:
https://arstechnica.com/gadgets/2022/06/macos-ventura-will-e...
I don't expect many teams to volunteer to suffer this sort of slowdown and complexity in the near term.
In one case the devs were given the condition of getting their containers working on ARM to get the new MacBooks they wanted - and the cost savings of moving to ARM in cloud even subsidised the cost of them a bit too...
The prior goal was to mock production as closely as possible.
The realization is that macOS as a host machine for orchestration is close enough for builds. Stricter validation can be done in CI and a staging env.
So for this project, the forced transition away from VirtualBox has actually led to simplification, and to asking why it was “required” previously.
It is a bit of a pain only because some team members will need more support than others so the entire setup kind of needs to be clean and carefully documented when there is other stuff to do.
Our application is GStreamer-based, which means it uses highly optimized codecs that eventually render to OpenGL. I was very worried it wouldn't work on the M1.
It works flawlessly. Rosetta is amazing. I'm not an Apple fanboy at all but Apple has done an amazing job with M1 and this is true even though many applications are just running x86 code via Rosetta.
I like how ARM is progressing (I owned a second-batch RPi!), and M1 would probably be right for me if I wasn't a technical user, but it's simply too exhausting to fight the machine, architecture, package manager and product all at the same time. Docker is (and has been for a while) loathsome on Mac. Virtualization is usually pretty bad too, which makes regression-testing/experimentation much slower. I might give it another go if Asahi figures out GPU acceleration, but I'm not very hopeful regardless. The M series of CPUs doesn't really make sense to me as a dev machine unless you have a significant stake in the Apple ecosystem as a developer. Otherwise, it's a lovely little machine that I have next to no use-cases for.
> Any tips/tricks?
Here's one (slightly controversial) tip: next time you're setting up a new Mac, ditch Homebrew and use Nix. This is really only feasible if you've got a spacious boot drive (Nix stores routinely grow to 70-80 GB), but the return is a top-notch developer environment. The ARM uptake on Nix is still hit-or-miss, but a lot of my package management woes are solved instantly with it. It's got a great package repository, fantastic reproducibility, hermetic builds and even ephemeral dev environments. The workflow is lovely, and it lets me mostly ignore all of the Mac-isms of macOS.
The two pain points:
1. No support for running older virtualized macOS. I like to test back to 10.9 and need an Intel Mac to do that.
2. One Python wheel which doesn't have Apple Silicon builds and doesn't build cleanly: https://github.com/CoolProp/CoolProp/issues/2003
I mean in general, but they have also not released an ARM Instant Client or even an ARM version of Java. I think it's crazy that I'm using Microsoft's build of ARM Java.
I'm also using Windows 11 ARM in Parallels, which does seamless emulation of Oracle instantclient / Java / PL/SQL Developer. So most of my workflow has not been interrupted.
Still, just another excuse to move to a better database. Now all I have to do is convince our heavily bureaucratic IT department to move away from Oracle. It'll be easy, right?
My philosophy on most things is: if nobody else has done it, I’ll go first. I started compiling and bundling my Go applications as multi-platform universal binaries for macOS.
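A sketch of that build, assuming a Go module in the current directory (the output names are placeholders):

```shell
# Cross-compile the same module for both macOS architectures
GOOS=darwin GOARCH=amd64 go build -o myapp-amd64 .
GOOS=darwin GOARCH=arm64 go build -o myapp-arm64 .

# Stitch them into a single universal ("fat") binary with Apple's lipo tool
lipo -create -output myapp myapp-amd64 myapp-arm64
lipo -archs myapp    # lists the embedded architectures
```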
Last week, I spent a few hours learning how to build multi-architecture Docker images, and push them into Artifactory. That knowledge came in handy yesterday when one of the developers on another team got a new M1 Mac and could no longer build his Docker images.
Over winter break, I started putting together a build matrix for compiling RPMs, DEBs, and Alpine APKs for some software that some developers were building as part of their CI pipeline. We’ve been curious about the ARM-based EC2 Graviton instances for a while, and I only had to update a handful of lines of code to begin building arm64 versions of those same packages.
In short, necessity is the mother of invention. I enjoy inventing things. If nobody else has started adding support for arm64 to your internal pipelines, then you should go first.
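A build matrix like the one described above might be sketched as a GitHub Actions fragment like this (the job name, script path, and flags are hypothetical):

```yaml
jobs:
  package:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        arch: [amd64, arm64]
        format: [rpm, deb, apk]
    steps:
      - uses: actions/checkout@v3
      # Hypothetical build script taking arch/format flags
      - run: ./build-package.sh --arch ${{ matrix.arch }} --format ${{ matrix.format }}
```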
Even the x64 Java 8 SDK for macOS runs without a glitch. I mean, how impressive is that, with JIT and everything? Mind blown.
I didn't even understand the point of the new macOS 13 ventura linux rosetta thing until I realized some people are still running x64 docker containers. (why, though?)
For testing container builds on developer machines most people are either using Docker on macOS which already handles multi-arch cleanly, or using buildah on Linux which also handles multi-arch automatically if you set up qemu binfmt support. So that has been pretty painless too.
I would say if you are doing a lot of horrible workarounds, it's probably time to step back and look at improving the processes (like your pipelines).
The only issues I have ever really ran into were:
RKE had issues on arm early-on. Random containers didn't have arm image support. This went away quickly as an issue for me.
No nested virt. This one was painful for a few reasons, particularly when I was attempting to use the Canonical tooling to create preinstalled Ubuntu images, which I was doing in a vm via Multipass. Maybe M2/M3?
That's about it, really. I had to buy two Safari extensions when moving from Windows, but they were cheap and worth it (dark reader and some other one I can't remember rn)
I currently run Rancher Desktop every day as a replacement for Docker Desktop. Works spectacularly for me, and I can just not care about the environment. Just works.
I use Multipass when I need linux environments, and it's been spectacular.
Universal control has been the greatest enhancement in my workflow (and general daily use)
For developers using VMs, Docker, Multipass, etc., I think it is more trouble than it is worth to jump onto the new shiny thing and invest time in workarounds that break on a new update. At least you weren't part of the November 2020 launch-day chaos; otherwise you would have been waiting 6 months to do any work if you went all in on the M1.
Looks like Intel is (still) the way to go for VMs until Apple Silicon gets better (eventually).
https://wiki.debian.org/CrossGrading https://www.qemu.org/docs/master/user/ https://asahilinux.org/
Most of the problems were foreseen because we had AIX and PowerPC systems in the past, where we had to have multi-arch pipelines already. I suppose most of the problems with the M1 were around the monoculture setups we see much more often around the world: same architecture, same OS, everywhere. But that's actually much less 'normal' over the history of computers than people think.
-occasional Postgres failures (i/o errors, especially with parallelization)
-kernel panics when connecting an external Sandisk ssd (known issue according to Apple forums)
It’s a shame because the machine is so much faster and more energy efficient than my 16” Intel MacBook Pro.
Glad that the world is catching up a little but it’ll take time like anything else.
Biggest issue for me early on was android emulator, once an M1 version was released it was all easy going.
Not being able to run amd64 containers hit me hard. I fought it until I just gave in and made sure that everything we built could be built for both amd64 and arm64. For builds that need a specific architecture, a GitHub Actions runner on a cloud box (or pick your flavor of CI/CD).
Once I looked past my machine into an ecosystem and embraced the arm as just another build artifact it was easier.
I also reject testing locally as a measure of working software. So that eliminates some pain. If your coverage is high then this is an easy shift. Have a dev environment that you can test that matches your assumed architecture, toolchain wise.
I have noticed that some apps can get "hangy," including Xcode, SourceTree, and Slack. I sometimes need to force-quit the system (force-quitting apps seems to now have about a 25% success rate). SourceTree also crashes a lot. A lot of this happened after I got my MBPMax14. I don't know if it would happen with any other machine.
These are not showstoppers (I've been having to force-quit for years. Has to do with the kind of code I write), but it is quite annoying. I have faith that these issues will get addressed.
And I never had any problems with it up until now. I use Chrome (Vivaldi) with tons of tabs and VS Code for Node.js and Java development, and it was all snappy all the way.
The main problem I had was at that time whether there will be application support for ARM and almost all the apps I use started supporting ARM as soon as the M1 came out
I only had to use Rosetta a few times, and Node.js started supporting the ARM architecture as well.
But next time I’ll be going for more than 8 GB of RAM for longevity.
But I think I’ll be using this for couple more years and I think I’ll be skipping M2 since my M1 is good for now
The majority of Docker images that I use are available for ARM and the few that aren't perform fine under Docker for Mac emulation (although the big performance boost that I saw ultimately came from enabling VirtioFS accelerated directory sharing).
Just about all of the tools that I use are now available as universal binaries, but before that, Rosetta was utterly seamless.
I really can't complain.
Works great and I can move between a local and cloud servers depending on requirements
With good habits, it's rarely an issue anymore (though there is the occasional project when it turns out to be a hassle, usually something with an obscure node-gyp build).
If you rely on closed-source software it's a different story, I guess.
For actually creating multi-arch images, I recommend staying as far away as possible from Docker and using Podman and Buildah. The latter unbundles some of the Docker manifest commands, giving you far more control over how you create multi-arch images. I wasted 4 months on Docker tooling and got it right in half a week with Podman. This meant switching from DCT (Podman doesn't support it at all) to Cosign, but Cosign is far more sensible than DCT.
There are a rare few containers that you can get away with running on x86.
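For what it's worth, the Podman flow for a multi-arch image is compact (the registry path is a placeholder):

```shell
# Build both architectures into one manifest list in a single invocation,
# then push the whole list to the registry
podman build --platform linux/amd64,linux/arm64 \
  --manifest registry.example.com/myapp:latest .
podman manifest push registry.example.com/myapp:latest
```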
You can run Molecule against an EC2 instance or Docker containers. Since you can run x86_64 Docker containers on Docker for Mac, you can continue to use Molecule. I run Molecule tests against Docker containers or LXD in the cloud, though, just because of how much faster they run on large EC2 instances.
As for everything else, I haven't really noticed many issues. Most of the work I do is built through CI/CD pipelines so what I use locally to build doesn't affect what is deployed to production.
I occasionally have to do exploratory processes where I have to figure out how to setup the same environment locally that my teammates are using. It can be time consuming, but overall I’m able to replicate it just fine. We’re far enough into the transition that most stuff is supported out of the box.
I admit I don’t know much about what’s going on under the hood. I used podman for containers so far.
Virtualization Framework’s VLAN support is not mature and getting more than 100 machines per rack has proven difficult. The need for additional switches, patch panels, uplinks and cooling makes multi-thousand machine installations slow due to the recent logistics unpleasantness.
Using Studios is hard because of massive delays to orders. Especially the ‘big’ machines in 1,000 unit quantities.
x86 and x86/GPU still seems to be the best approach for prod datacenter use.
Otherwise I am a fan
Almost perfect, but I did switch development tools from Xcode/Swift/SwiftUI -> VSCode/Dart/Flutter at the same time. So I am having a lot of problems! But nothing much unexpected.
I copied my system over with Time Machine. I think a lot of binaries got copied that I should reinstall.
> none of our pipelines yet do multi-arch Docker builds, so everything we have is heavily x86-64 oriented
Another data point for the fundamental principle: portability matters.
Pros:
- Setup is very simple.
- It can run dozens of containers without overloading the local machine.
- It's stable.
- SSL is working.

Cons:
- I got issues with WebSocket support.
- Sometimes I get file conflicts.
I actually have a ready-made solution for running remote Docker with a Mac, but it needs a bit of work. If someone would like to support the project with some front-end work and a bit of docker/nginx work, please get in touch.
Everything was far easier than I expected it to be. The only issues I had were with installing Python (a few CLI utils required it), but everything else has been smooth sailing and a much better experience than running things on my 2019 MBP.
I’m not a huge docker user, but I run it for a few things and again, it was all smooth sailing.
In short it was painful but once you get over the attrition of compiling (mainly C) deps it's smooth sailing from there on out.
As far as I know, this has nothing to do with the M1 or ARM. This has always been the case. How else would you run Linux containers on a non-Linux OS?
0) still converting some stuff in Lineform. Shame there wasn't a 64-bit version before they stopped selling it.
So so glad, running plasma.
I also semi-professionaly use Photoshop, Sketch, DaVinci Resolve, XD, After Effects.
All butter smooth.
I had three-up before, and now I’m back to the laptop and one central display.
I could get a cheap DisplayLink hub but the performance is poor and I’m not happy with granting their driver screen recording permission. Or trusting it at all tbh.
I got an m1 right when they came out because I started a new gig right around that time, literally happened the same week. Trying to get all my dev tools installed became a rat's nest of issues. I work as a backend / dist sys / systems engineer for my day job and so I have to write and use things that are fairly close to bare-metal. Brew hadn't been forked yet, so that added a whole new layer of issues.
Docker still doesn't work, Rust libs compile in weird ways... just all kinds of stuff that I'm not smart enough or paid well enough to figure out. My title is "Developer", not "M1 developer advocate" so after about a month of running into issue after issue, I went back and found a used MBP with an intel chip. I'm excited about the future of Apple silicon, and ARM as a whole, but it needs another couple years of refinement.
I will say that I've been using an m1 mac mini for general office work as apart of my side business and it's quite good.
In my personal hobby stuff I do miss being able to virtualize x86 machines but have been able to get by with arm versions of Windows 11 and Linux running in parallels, and qemu for operating systems that don't have arm builds.
* VMs don't solve my problem every time. There's software that still requires x86, and a VM isn't going to solve that problem in a few cases. I wish I could get into more detail here, but I'm kind of a noob in this realm. (TL;DR: I need to use something called UAExpert, and to work around this I have both a separate Linux machine and a Windows machine in case I need them)
* I had to install Homebrew plus a separate x86-64 version of Homebrew to run the right software. Homebrew does not document this, so the solution was based off Stack Overflow posts.
* While Docker states that it supports multiple architectures, I don't find that to be fully true. For our codebase, I need to push up x86 Docker images but accidentally pushed up arm64 ones instead. There's a solution for it, but it's definitely not out-of-the-box at the moment.
Overall, still pretty happy with it. My older macbook pro had gotten sluggish so the tradeoff for me was worth it.
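For anyone hitting the same dual-Homebrew situation described above, the usual Stack Overflow recipe is roughly this (the alias name is a convention and the formula name is a placeholder):

```shell
# Install a second, x86-64 Homebrew under /usr/local via Rosetta
arch -x86_64 /bin/bash -c \
  "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Native brew lives in /opt/homebrew, the Intel copy in /usr/local;
# a common alias to target the Intel one explicitly:
alias ibrew='arch -x86_64 /usr/local/bin/brew'
ibrew install some-x86-only-formula   # hypothetical formula name
```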
Installed Asahi Linux; it made it possible to keep the OS running all the time, keep HexChat IRC running, and not shut down when away from the keyboard like on macOS.
But that M1 only lasted 4 days; then it would not boot anymore. So I returned it for warranty repair and cancelled the purchase.