What a lossy compression algorithm does is decide how unexpected a given bit of data is, assuming a certain model of image interpretation. Decades of work were thrown at this back when 512x512 pixels was high resolution, building mathematical models of human vision that most closely match how people actually see differences between images. This lets a computer decide how much changing a given pixel would affect whether a person notices the difference. Doing this in an efficient and effective manner was not easy.
The least surprising bits get thrown away, in such a way that the process can be reversed to recreate the original image as closely as possible given the error budget, the constraints of the encoding method, and the vision model used.
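To make that concrete, here is a minimal Python sketch of the JPEG-style version of the idea: transform an 8x8 block into frequencies, then quantize each frequency according to a table that reflects how noticeable it is. The choice of scipy and the standard luminance quantization table are my own assumptions for illustration; a real encoder also scales the table by the quality setting, handles color, and entropy-codes the result.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Standard JPEG luminance quantization table. Larger values mean coarser
# quantization, i.e. that frequency is judged less noticeable to the eye.
QTABLE = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def compress_block(block):
    """Quantize one 8x8 block of grayscale pixels, JPEG-style."""
    coeffs = dctn(block - 128.0, norm="ortho")   # go to the frequency domain
    return np.round(coeffs / QTABLE)             # discard "unsurprising" detail

def decompress_block(quantized):
    """Reverse the process as closely as the quantization allows."""
    coeffs = quantized * QTABLE
    return np.clip(idctn(coeffs, norm="ortho") + 128.0, 0, 255)

# Example: a soft gradient plus faint noise. After the round trip the
# gradient survives but most of the fine noise is gone.
rng = np.random.default_rng(0)
block = np.tile(np.linspace(100, 160, 8), (8, 1)) + rng.normal(0, 2, (8, 8))
restored = decompress_block(compress_block(block))
print("max pixel error:", np.abs(block - restored).max())
```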
The camera sensor itself is optimized for doing one job... capturing light and converting it to a voltage. There are dedicated processors elsewhere that handle JPEG and other compression methods.
There is always a quality adjustment available with lossy compression. Everyone has their own preference for how much picture quality they are willing to lose, which is why there is usually a slider or preset somewhere you can adjust.
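In code, that slider is usually just a quality parameter. A tiny sketch using Pillow (my choice here, and photo.jpg is a hypothetical input file):

```python
import os
from PIL import Image

img = Image.open("photo.jpg")            # hypothetical input file
for quality in (95, 75, 40):             # the "slider": lower = smaller file, more artifacts
    out = f"photo_q{quality}.jpg"
    img.save(out, quality=quality)
    print(out, os.path.getsize(out), "bytes")
```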
Professional photographers on high-stakes shoots record everything the sensor sees, without loss, in a RAW file. The incremental cost of the extra storage is far less than risking quality in those situations. Often a JPEG is made at the same time, with the same file numbering, to make the first sorting pass through the photos quicker.
Steve Jobs could have picked some arbitrary value of compression, but that wouldn't have made it right.
JPEG in particular has the problem that a constant quality setting doesn't give a constant-quality result. You might look at one photo and decide that quality 40 is good enough for a particular use, while some other image might require quality 55 to be good enough.
Thus you can't automatically compress images with JPEG on a large scale and know you'll be happy with the results. I compressed a million images years ago and regretted it because many of them were overcompressed.
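A rough way to see this for yourself, assuming Pillow and scikit-image and some hypothetical sample files: re-encode each image at the same quality setting and measure how close the result is to the original with SSIM. The scores differ noticeably between, say, a smooth portrait and a detailed foliage shot.

```python
import numpy as np
from io import BytesIO
from PIL import Image
from skimage.metrics import structural_similarity

def ssim_at_quality(path, quality):
    """Re-encode an image as JPEG at a fixed quality and measure how similar
    the result is to the original (1.0 means identical)."""
    original = Image.open(path).convert("L")
    buf = BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf)
    return structural_similarity(np.asarray(original),
                                 np.asarray(recompressed),
                                 data_range=255)

# Hypothetical files: the same quality=40 gives noticeably different scores
# depending on content (smooth skies survive, fine texture does not).
for path in ("portrait.jpg", "foliage.jpg", "cityscape.jpg"):
    print(path, round(ssim_at_quality(path, 40), 3))
```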
Newer image formats have ways to specify perceptual quality that come much closer to "set and forget."
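As a crude stand-in for that with plain JPEG, you can binary-search the quality setting that just reaches a perceptual target per image, assuming SSIM rises roughly monotonically with quality. This reuses the hypothetical ssim_at_quality helper from the sketch above; newer encoders such as JPEG XL expose a perceptual distance setting directly, so you don't have to do this yourself.

```python
def quality_for_target(path, target_ssim=0.95, lo=10, hi=95):
    """Binary-search the lowest JPEG quality whose SSIM meets the target --
    a crude per-image stand-in for a perceptual-quality knob."""
    while lo < hi:
        mid = (lo + hi) // 2
        if ssim_at_quality(path, mid) >= target_ssim:
            hi = mid          # good enough: try a lower quality / smaller file
        else:
            lo = mid + 1      # not good enough: need a higher quality
    return lo

print(quality_for_target("foliage.jpg"))   # hypothetical file from the sketch above
```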
To examine the "true" resolution of an image, one could look at its autocorrelation or Fourier spectrum [2].
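A quick sketch of the Fourier-spectrum check, assuming numpy and Pillow and a hypothetical photo.jpg: radially average the power spectrum and see whether any energy is left near the Nyquist frequency. An image that was upscaled (or heavily blurred) shows the spectrum collapsing well before that point.

```python
import numpy as np
from PIL import Image

def radial_power_spectrum(path):
    """Radially averaged Fourier power spectrum, up to the Nyquist radius.
    If an image was upscaled, the high-frequency end collapses well before
    the end of this curve."""
    img = np.asarray(Image.open(path).convert("L"), dtype=float)
    power2d = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = power2d.shape
    y, x = np.indices((h, w))
    radius = np.hypot(y - h / 2, x - w / 2).astype(int)
    counts = np.bincount(radius.ravel())
    profile = np.bincount(radius.ravel(), power2d.ravel()) / np.maximum(counts, 1)
    return profile[: min(h, w) // 2]

profile = radial_power_spectrum("photo.jpg")          # hypothetical input
print("high-frequency vs. low-frequency power:",
      profile[len(profile) * 3 // 4:].mean() / profile[1:10].mean())
```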
[1] https://en.wikipedia.org/wiki/Kell_factor
[2] https://photo.stackexchange.com/questions/107911/how-to-dete...
What do you mean by "scaled down?" Scaled down implies reduced size. Are you reducing the size (fewer pixels) or increasing the compression with the same effective size (same number of pixels)?
How do you judge that it is indistinguishable from the original? If you zoom in on the original vs. the scaled-down image, I expect you will see a difference.
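One concrete way to check, assuming Pillow and numpy and a hypothetical photo.jpg: downscale, upscale back to the original size, and look at the pixel differences. If they are non-trivial, the two images are distinguishable under zoom even if they look alike at normal viewing size.

```python
import numpy as np
from PIL import Image

def downscale_error(path, factor=2):
    """Downscale by `factor`, upscale back, and report how much pixel-level
    detail was lost; it is near zero only if the image had no real detail
    finer than the reduced grid."""
    original = Image.open(path).convert("L")
    w, h = original.size
    small = original.resize((w // factor, h // factor), Image.LANCZOS)
    restored = small.resize((w, h), Image.LANCZOS)
    diff = np.abs(np.asarray(original, dtype=float) - np.asarray(restored, dtype=float))
    return diff.mean(), diff.max()

print(downscale_error("photo.jpg", factor=2))   # hypothetical input; try factor=4 too
```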