[0] https://arstechnica.com/tech-policy/2021/11/rittenhouse-trial-judge-disallows-ipad-pinch-to-zoom-read-the-bizarre-transcript/
[1] https://apnews.com/article/kyle-rittenhouse-technology-wisconsin-kenosha-homicide-b561bef68dc6aadaadc9b45a1bd93a19
(I don't think pinch-to-zoom adds sharpening, but it's something that could use an expert to double check.)
Now we zoom. We're displaying a smaller part of the source image, but putting it on the same number of destination pixels. So where do the extra pixels come from? There are (at least) two possible answers:
1. We repeat source pixels across multiple display pixels. This leads to aliasing - to blocky stair-steps in the displayed image.
2. We make up values for the extra display pixels that were not in the source image. This is done by interpolation, not just by random guessing. Bicubic interpolation is pretty good. But still, the program is in fact "making up" values for the new pixels.
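To make the difference concrete, here's a minimal sketch (assuming Pillow is installed; "frame.png" and the crop coordinates are just placeholders) that zooms the same region both ways:

```python
# Minimal sketch: "zoom" the same 64x64 region onto a 512x512 canvas two ways.
# Assumes Pillow is installed; "frame.png" and the crop box are placeholders.
from PIL import Image

src = Image.open("frame.png")
crop = src.crop((100, 100, 164, 164))   # the small region we want to zoom into
target = (512, 512)                     # same number of destination pixels either way

# 1. Repeat source pixels: blocky stair-steps, but no invented values.
blocky = crop.resize(target, resample=Image.NEAREST)

# 2. Interpolate: smoother, but the new pixel values are computed, not captured.
smooth = crop.resize(target, resample=Image.BICUBIC)

blocky.save("zoom_nearest.png")
smooth.save("zoom_bicubic.png")
```

Side by side, the nearest-neighbor version looks blocky while the bicubic one looks smoother, even though the bicubic pixel values were never recorded by the camera.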
To try to answer the question: yes, it does, or at least it could. It depends a great deal on the interpolation algorithm used to enlarge the image, and on whether some other process that could significantly alter it, such as AI-assisted upscaling, was involved.
If you use interpolation to "make it clearer", you're adding information to it, and it stops being an accurate representation of the original. It might look clearer to you, and it could help you gain more insight, but after the enhancement it is no longer faithful to what the camera captured. The very meaning of the word interpolation is "the insertion of something of a different nature into something else."
As an example, imagine there's a far-away video of someone handing a phone to another person, and you take the best frame you can for analysis. In the original, it looks like the blurry person is handing something to the other person, but the object is just a blackish line.
After processing with different algorithms, one result now looks like the person is pointing a gun, another shows an empty hand, the AI-assisted one for some reason shows a bag, and so on.
More specifically about the trial: the expert witness declared that he does not actually know how the algorithms used work and that he didn't compare the result with the original. I personally find that baffling, because this is a laboratory that analyzes evidence that may decide the future of a lot of people's lives. The technician needs to know how the tool works in order to reach an objective conclusion that what they're presenting is accurate.
This could easily be used to digitally zoom in on an image, for example.
Where things went sideways here is the prosecution expecting to be able to use pinch-to-zoom as testimony in court, when they should have gone through the standard CSI process of producing a digitally zoomed image.
Now, to be honest, I think they should have been able to use pinch-to-zoom. If they lose the case because of this problem, I hope they can get a mistrial declared and then go back with the proper procedure.
Otherwise, the prosecution just plain screwed themselves over.
Some people can make paintings that look an awful lot like a photograph, that is, they have a mental model of what scenes look like and can construct an image from that model.
Computers can create photorealistic images using raytracing techniques, and also with neural networks:
https://deepai.org/machine-learning-model/text2img
It's very possible a scaling algorithm could guess at what is missing in the picture and fill something in. That doesn't mean that is going on with Apple products in 2021.
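For what it's worth, learned super-resolution tools already do exactly that kind of filling in. A rough sketch using OpenCV's dnn_superres module (assuming opencv-contrib-python is installed and an EDSR model file has been downloaded separately; file names here are placeholders):

```python
# Rough sketch: ML-based upscaling that "fills in" detail the camera never recorded.
# Assumes opencv-contrib-python is installed and EDSR_x4.pb was downloaded separately.
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")     # pretrained super-resolution network
sr.setModel("edsr", 4)         # model name and upscaling factor

img = cv2.imread("frame.png")
result = sr.upsample(img)      # output pixels are predictions, not measurements
cv2.imwrite("frame_x4.png", result)
```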
The answer is: bicubic, yes; bilinear, yes; nearest neighbor, no.
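You can check this yourself with a small numpy/Pillow sketch (the 2x2 test image is made up):

```python
# Does upscaling invent pixel values that were never in the source?
# Tiny made-up 2x2 grayscale image; assumes numpy and Pillow are installed.
import numpy as np
from PIL import Image

src = Image.fromarray(np.array([[0, 100], [100, 200]], dtype=np.uint8))

for name, method in [("nearest", Image.NEAREST),
                     ("bilinear", Image.BILINEAR),
                     ("bicubic", Image.BICUBIC)]:
    up = np.array(src.resize((8, 8), resample=method))
    new_values = set(np.unique(up).tolist()) - {0, 100, 200}
    print(name, "invented values:", sorted(new_values))

# Typically: nearest prints an empty list, while bilinear and bicubic
# print values that never appeared in the original image.
```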
In what context did this show up in the Rittenhouse trial?