The confirmed details of the R1 chip are scant:
- It does a lot of image processing and sensor data integration: "the brand-new R1 chip processes input from 12 cameras, five sensors, and six microphones to ensure that content feels like it is appearing right in front of the user's eyes, in real time. R1 streams new images to the displays within 12 milliseconds"[0]
- It has substantial memory bandwidth: "256GB/s memory bandwidth"[1]
- It uses special, high bandwidth memory: "To support R1’s high-speed processing, SK hynix has developed the custom 1-gigabit DRAM. The new DRAM is known to have increased the number of input and output pins by eightfold to minimize delays. Such chips are also called Low Latency Wide IO. According to experts, the new chip also appears to have been designed using a special packaging method – Fan-Out Wafer Level Packaging – to be attached to the R1 chipset as a single unit […]"[2]
- This is subjective, but: Apple shows it as being roughly the same size as the M2 processor in the Vision Pro marketing[3], indicating it's a peer to the M2. Now, that may mean nothing, but based on my experience with Apple kremlinology they are not arbitrary about stuff like that.
So I see all this and get major GPU vibes. But I'm just some guy on the internet, so what do I know?
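As a rough sanity check on those figures, here's a back-of-envelope sketch in Python. The per-camera resolution, frame rate, and bit depth are my own guesses (Apple hasn't published them), so the output is illustrative only:

```python
# Back-of-envelope: how much raw sensor data might flow through R1,
# and how that compares to the quoted 256 GB/s memory bandwidth.
# All per-camera figures below are assumptions, not published specs.

NUM_CAMERAS = 12                       # from Apple's announcement
ASSUMED_PIXELS_PER_CAMERA = 2_000_000  # guess: ~2 MP per sensor
ASSUMED_FPS = 90                       # guess: matches a ~90 Hz display pipeline
ASSUMED_BYTES_PER_PIXEL = 2            # guess: ~10-12 bit raw, padded to 16 bits

raw_bytes_per_sec = (NUM_CAMERAS * ASSUMED_PIXELS_PER_CAMERA
                     * ASSUMED_FPS * ASSUMED_BYTES_PER_PIXEL)
print(f"raw camera input: ~{raw_bytes_per_sec / 1e9:.1f} GB/s")   # ~4.3 GB/s

QUOTED_BANDWIDTH_GBPS = 256            # from Apple's spec page [1]
print(f"headroom vs. 256 GB/s: ~{QUOTED_BANDWIDTH_GBPS / (raw_bytes_per_sec / 1e9):.0f}x")

# Apple's photon-to-display budget is quoted as 12 ms; at a ~90 Hz display
# that is roughly one frame of latency.
FRAME_TIME_MS = 1000 / ASSUMED_FPS
print(f"one frame at {ASSUMED_FPS} Hz: {FRAME_TIME_MS:.1f} ms vs. quoted 12 ms budget")
```

The takeaway for me: raw camera ingest alone is nowhere near 256 GB/s, so that bandwidth figure looks like it's there to support many passes of intermediate processing over the frames, not just moving pixels in and out.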
0: https://www.macrumors.com/2023/06/05/apple-reveals-vision-pro-headset/
1: https://www.apple.com/apple-vision-pro/specs/
2: https://9to5mac.com/2023/07/11/vision-pro-performance/
3: https://www.apple.com/apple-vision-pro/
https://techcrunch.com/2023/06/05/apple-r1-chip-apple-vision...: "The specialized chip was designed specifically for the challenging task of real-time sensor processing, taking the input from 12 cameras, five sensors (including a lidar sensor!) and six microphones. The company claims it can process the sensor data within 12 milliseconds — eight times faster than the blink of an eye — and says this will dramatically reduce the motion sickness plaguing many other AR/VR systems."
Here's an obviously incomplete list of source files that are part of the R1 firmware, probably referenced from asserts or other logging messages and thus present as strings in the firmware binary:
https://transfer.archivete.am/inline/Ydfxb/bora.txt
It seems it's handling data from cameras (CImageSensor*), LIDAR (SensorMgr/Tof = time of flight?), and display (DCP). I also see mentions of accel, gyro, bmi284 (IMU from Bosch?).
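If anyone wants to reproduce that kind of listing, the usual approach is just to scan the image for printable runs and keep the path-looking ones, roughly `strings firmware.bin | grep '\.c'`. A minimal Python sketch along those lines (the filename argument is a placeholder; if the firmware is compressed or encrypted this will find nothing):

```python
import re
import sys

# Pull printable ASCII runs out of a firmware image and keep the ones that
# look like source-file paths (the kind of strings left behind by asserts
# and log macros).
PRINTABLE_RUN = re.compile(rb"[\x20-\x7e]{6,}")
PATH_LIKE = re.compile(rb"[\w./-]+\.(?:c|cc|cpp|h|hpp|m|mm)\b")

def source_path_strings(path: str):
    with open(path, "rb") as f:
        blob = f.read()
    for run in PRINTABLE_RUN.finditer(blob):
        s = run.group()
        if PATH_LIKE.search(s):
            yield s.decode("ascii")

if __name__ == "__main__":
    for s in sorted(set(source_path_strings(sys.argv[1]))):
        print(s)
```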
Edit: many are saying that this is just an Image Signal Processor (https://en.wikipedia.org/wiki/Image_processor). I don't think that's quite the case, because 1) the M-series chips are already known to have ISPs packaged into them, and 2) my understanding is that the R1's job is to provide continuity of passthrough even in the event of a kernel panic on the M-series chip. To my thinking, this means the R1 must have a level of independence beyond that of a traditional coprocessor. I think it is an entire SoC.
https://en.wikipedia.org/wiki/Digital_signal_processor
(Of which image processors, https://en.wikipedia.org/wiki/Image_processor, mentioned by someone else in this thread, are a subset.)
DSPs are similar to GPUs in many respects, and similar to CPUs in many respects as well, but that doesn't make them the same thing.
You can do a lot with not much if it's all specialised hardware. Some of the wider features of the chip are down to the huge data bandwidth. But general-purpose processing is a little slow for this task, certainly at this power envelope.
Errr, that’s not how this works. The two dies being similar in size means absolutely nothing.
The R1 is a coprocessor. I mean, it’s basically a big signal processor with a decent amount of RAM to handle all the camera inputs; it does some GPU-like stuff, but it isn’t a GPU.
And it's entirely possible that one of the components Apple took from their grab-bag of SoC components is the GPU compute cores. Probably not the rasterizer or texture samplers, but I could see the compute cores being useful for running tracking algorithms. The massive amount of memory bandwidth does kind of suggest GPU compute cores.
But even if it does, that doesn't make it a discrete GPU. Just a dedicated SoC with some GPU components, and a bunch of other things.
But it simply can't be used as an indicator of discrete GPUs in Macs, because all those other SoC components would be a waste of silicon, and it's unlikely Apple would reuse the R1 die as a dedicated GPU.
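On the bandwidth-suggests-compute-cores point: a crude roofline-style estimate shows why a per-pixel reprojection/warp or tracking pass tends to be limited by memory bandwidth rather than arithmetic, which is exactly the regime wide parallel engines (GPU compute cores, big DSPs) are built for. The per-pixel figures below are guesses, not anything Apple has published:

```python
# Roofline-style sanity check: for a simple per-pixel reprojection/warp pass,
# is the limit compute or memory bandwidth? All workload numbers are guesses.

BANDWIDTH_BYTES_PER_S = 256e9    # quoted R1 memory bandwidth
ASSUMED_FLOPS_PER_PIXEL = 50     # guess: a few interpolations plus a matrix transform
ASSUMED_BYTES_PER_PIXEL = 12     # guess: read source pixel(s) plus depth, write output

arithmetic_intensity = ASSUMED_FLOPS_PER_PIXEL / ASSUMED_BYTES_PER_PIXEL
pixels_per_second_bw_limit = BANDWIDTH_BYTES_PER_S / ASSUMED_BYTES_PER_PIXEL

print(f"arithmetic intensity: ~{arithmetic_intensity:.1f} FLOP/byte")
print(f"bandwidth-limited throughput: ~{pixels_per_second_bw_limit / 1e9:.0f} Gpixel/s")

# At ~4 FLOP/byte, saturating 256 GB/s only takes on the order of 1 TFLOP/s of
# compute, which is modest by GPU standards, so the memory system rather than
# the ALUs is the thing you have to build big.
```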