Even when people finally put in the effort to get things right on both architectures, something changes as versions are bumped or new libraries are added, and suddenly it's broken again.
Has anyone got a good way around this? At this point I just want to spin up an EC2 instance for my dev environment, but that's a total waste of the brand new machine and all its processing power.
1) Use the TARGETARCH variable in your Dockerfile to replace the arm64 or amd64 string in whichever wget URLs you have for packages that aren't in a registry. They tend to follow a pretty consistent naming scheme, and simply doing that replacement made a big difference (see the sketch after this list).
2) As a frontend person, it took me a little while to figure out that certain backend packages like mqtt were pinned to exact versions that didn't yet have an arm64 build, so I just bumped the version on my local machine.
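For anyone curious, here's a minimal sketch of the TARGETARCH trick. The "sometool" package and its download URL are made-up placeholders, but the layout matches the usual name-os-arch naming convention a lot of projects use:

    # Minimal sketch -- "sometool" and its URL are hypothetical placeholders
    FROM ubuntu:22.04
    # BuildKit sets TARGETARCH automatically: amd64, arm64, etc.
    ARG TARGETARCH
    # Fetch the binary whose filename embeds the architecture
    RUN apt-get update && apt-get install -y wget && \
        wget -O /usr/local/bin/sometool \
          "https://example.com/sometool/v1.2.3/sometool-linux-${TARGETARCH}" && \
        chmod +x /usr/local/bin/sometool

Build it with docker buildx (or a recent Docker Desktop) and the right binary gets pulled for whichever platform you're targeting, with no per-machine edits.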
These changes should work on any machine, because TARGETARCH simply expands to the build's target architecture; it's up to the package maintainers to produce builds for the different architectures and keep the naming consistent.
I've had no issues with this setup working between my M1 Mac and Intel Mac, but ideally we'd bump the versions of the packages so it's less cumbersome.
With Ubuntu and Red Hat, you can run private mirrors of external repos and only update them once everything passes in continuous (or at least regular) automated testing. Corporate installs of Windows can likewise postpone individual updates until they're certified. I'm sure Apple has something similar for macOS fleets.
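For the Ubuntu case, a sketch of what the client side of that looks like; the mirror hostname is hypothetical, and the mirror itself would be populated with something like apt-mirror or aptly once your test environments sign off:

    # /etc/apt/sources.list -- point apt at the internal, tested mirror
    # (mirror.internal.example.com is a placeholder hostname)
    deb https://mirror.internal.example.com/ubuntu jammy main restricted universe multiverse
    deb https://mirror.internal.example.com/ubuntu jammy-updates main restricted universe multiverse
    deb https://mirror.internal.example.com/ubuntu jammy-security main restricted universe multiverse

Clients then only ever see package versions that have already been vetted, regardless of what upstream ships in the meantime.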