HACKER Q&A
📣 beervirus

Is there any way that using pinch-to-zoom could change/add pixels?


Context is here, talking about using an iPad to zoom in on video. [0] My initial thought is no, it just makes existing pixels bigger and blurrier. But then I saw this expert’s testimony[1] and I’m wondering if I’m missing something: “Wisconsin crime lab employee James Armstrong testified, under questioning from defense attorney Corey Chirafisi, that the software program adds pixels to the image and he cannot say with certainty what color the added pixels are.”

[0] https://arstechnica.com/tech-policy/2021/11/rittenhouse-trial-judge-disallows-ipad-pinch-to-zoom-read-the-bizarre-transcript/

[1] https://apnews.com/article/kyle-rittenhouse-technology-wisconsin-kenosha-homicide-b561bef68dc6aadaadc9b45a1bd93a19


  👤 sp332 Accepted Answer ✓
Yeah, they could be running a sharpening algorithm. Normally you have something between nearest-neighbor resampling and a smooth gradient. But you can plug any function in there if you think it looks good. Bicubic makes nicer smooth areas, sinc can preserve sharp edges. But you can't add information to an image. It's all basically guessing which transitions between pixel values should be smooth and which should be sharp. It's possible to oversharpen an image and generate extra lines out of thin air.
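To make the "extra lines out of thin air" point concrete, here's a small sketch (not Apple's actual pipeline, just a generic cubic resampler) showing that Catmull-Rom cubic interpolation overshoots at a hard edge, producing brightness values that exist nowhere in the source row:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Cubic (Catmull-Rom) interpolation between p1 and p2, t in [0, 1)."""
    return 0.5 * (2 * p1 + (p2 - p0) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (3 * p1 - 3 * p2 + p3 - p0) * t * t * t)

# A hard dark-to-bright edge in a 1-D row of pixel values (0..255)
row = [0, 0, 0, 255, 255, 255]

# Upscale 4x: sample each interior segment at t = 0, 0.25, 0.5, 0.75
upscaled = []
for i in range(1, len(row) - 2):
    p0, p1, p2, p3 = row[i - 1], row[i], row[i + 1], row[i + 2]
    for t in (0.0, 0.25, 0.5, 0.75):
        upscaled.append(catmull_rom(p0, p1, p2, p3, t))

# The source only contains 0 and 255, but the upscaled row dips below 0
# and shoots above 255 near the edge (ringing/overshoot).
print(min(upscaled), max(upscaled))
```

After clamping back to the displayable range, that over/undershoot shows up as a halo line along the edge, which is exactly the "extra lines" effect described above.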

(I don't think pinch-to-zoom adds sharpening, but it's something that could use an expert to double check.)


👤 AnimalMuppet
Here's an image, displayed at "normal" resolution. One pixel of source image goes to one pixel on the display.

Now we zoom. We're displaying a smaller part of the source image, but putting it on the same number of destination pixels. So where do the extra pixels come from? There are (at least) two possible answers:

1. We repeat source pixels across multiple display pixels. This leads to aliasing: blocky stair-steps in the displayed image.

2. We make up values for the extra display pixels that were not in the source image. This is done by interpolation, not just by random guessing. Bicubic interpolation is pretty good. But still, the program is in fact "making up" values for the new pixels.
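The two answers above can be sketched on a 1-D row of pixels (linear interpolation stands in here for the simplest "make up values" option; bicubic follows the same idea with a wider neighborhood):

```python
def upscale_nearest(row, factor):
    # Answer 1: repeat each source pixel `factor` times (blocky)
    return [p for p in row for _ in range(factor)]

def upscale_linear(row, factor):
    # Answer 2: invent in-between values by interpolating neighbors
    out = []
    for i in range(len(row) - 1):
        for k in range(factor):
            t = k / factor
            out.append(row[i] * (1 - t) + row[i + 1] * t)
    out.append(row[-1])
    return out

row = [10, 200, 50]
print(upscale_nearest(row, 4))  # only the values 10, 200, 50 ever appear
print(upscale_linear(row, 4))   # contains in-between values not in the source
```

Nearest-neighbor never introduces a value that wasn't in the source; the interpolating version fills the new display pixels with computed values the camera never recorded.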


👤 h2odragon
If you scale the image to add more pixels, you have to decide what the value of those new pixels are in some way. I do not know the methods used by the tools in question. However, the relevant docs for GIMP are here: https://docs.gimp.org/en/gimp-tools-transform.html#gimp-tool...

👤 cocoricamo
The real issue here is that this is evidence in a homicide trial, where you need accurate evidence to prove beyond reasonable doubt that events really happened the way the parties argue they did, and in some cases that's just not possible.

To try to answer the question: yes, it does (or at least could). Much depends on the interpolation algorithm used to enlarge the image, and on whether some other process was involved that could produce significant alteration, such as an AI-assisted one.

If you use interpolation to "make it clearer," you're adding information to the image, and it stops being an accurate representation of the original. It might look clearer to you, and it could help you gain more insight, but what you're looking at is an enhancement, not the original. The very meaning of the word interpolation is "the insertion of something of a different nature into something else."

As an example, imagine there's a far-away video of someone handing a phone to another person, and you take the best frame you can for analysis. In the original, the person is just a blur, and the thing being handed over is just a blackish line.

After processing with different algorithms, one version now looks like the person is pointing a gun, another shows an empty hand, the AI-assisted one for some reason shows a bag, and so on.

More specifically about the trial: the expert witness declared that he does not actually know how the algorithms used work, and that he didn't compare the result with the original. I personally find that baffling, because this is a laboratory that analyzes evidence that may decide the future of a lot of people's lives. The technician needs to know how the process works in order to reach an objective conclusion that what they're presenting is accurate.


👤 bradknowles
Under the Federal Rules of Evidence (referenced elsewhere), photo manipulation by CSI technicians is allowed, so long as those technicians can be there to attest to the methods and algorithms they used, and they can show the before and after steps at each stage and explain exactly what happened during that stage. This retains the digital chain of custody for the image.

This could easily be used to digitally zoom in on an image, for example.

Where things went sideways here is that the prosecution expected to use pinch-to-zoom live in court, when they should have gone through the standard CSI process of producing a digitally zoomed image beforehand.

Now, to be honest, I think they should have been able to use pinch-to-zoom. If they lose the case because of this problem, I hope they can get a mistrial declared and then go back with the proper procedure.

Otherwise, the prosecution just plain screwed themselves over.


👤 PaulHoule
It really depends on what scaling algorithms you use.

Some people can make paintings that look an awful lot like a photograph, that is, they have a mental model of what scenes look like and can construct an image from that model.

Computers can create photorealistic images using raytracing techniques, and also with neural networks:

https://deepai.org/machine-learning-model/text2img

It's very possible a scaling algorithm could guess at what is missing in the picture and fill something in. That doesn't mean that is going on with Apple products in 2021.


👤 AS37
A defense lawyer gave an example - if you have red pixels and blue pixels, would the added pixels between be purple, a color not present in the original image?

The answer is: bicubic, yes; bilinear, yes; nearest neighbor, no.
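The red/blue/purple example is easy to check directly. Bilinear (and bicubic) produce a new pixel as a weighted mix of its neighbors' channels, while nearest neighbor just copies one of them:

```python
red, blue = (255, 0, 0), (0, 0, 255)

# bilinear-style mix: halfway between red and blue, per channel
midpoint = tuple((a + b) // 2 for a, b in zip(red, blue))
print(midpoint)  # (127, 0, 127) -- purple, present in neither source pixel

# nearest neighbor: the inserted pixel copies the closest source pixel,
# so only colors already in the image can appear
print(red)
```

So yes: any averaging interpolator will happily emit a purple that the camera never captured, which is exactly what the defense lawyer was getting at.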


👤 DarknessFalls
The visual cortex does image interpolation, for crying out loud! Are we supposed to rely on a teleprompter for clarification of what we're looking at? This is a ridiculous argument. Yes, bicubic interpolation crudely blends between pixels. The human brain does this in much more dramatic ways.

👤 brezelgoring
What he is referring to _could_ mean aliasing, where n^2 pixels that can't be accurately represented are merged into m^2 pixels of average colors and values, where m < n. Still a stretch, and quite a big one at that, so it's probably not it.

In what context did this show up in the Rittenhouse trial?