HACKER Q&A
📣 guilamu

Has Intel not made any architectural improvements in 3 years?


So I recently decided to upgrade my home media server, essentially because remote desktop (and the machine in general) was laggy, and because I thought I could get something more energy efficient.

The old one is an Intel® NUC Kit DN2820FYKH powered by an actively cooled Intel® Celeron® Processor N2830 (1M Cache, up to 2.41 GHz). 4GB DDR3 RAM, SSD.

The new one is a Chinese NUC powered by a passively cooled Intel® Core™ i7-7500U Processor (4M Cache, up to 3.50 GHz). 8GB DDR3 RAM, SSD.

I thought one of the best tests would be to use HandBrake to transcode a chapter of a 1080p x264 movie to x265.
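
A run along these lines can be scripted; here is a minimal sketch in Python, assuming HandBrakeCLI is installed. The file names are placeholders, and the exact quality settings I used aren't shown:

    import subprocess
    import time

    # Minimal benchmark sketch: transcode chapter 1 of an H.264 source
    # to H.265 with HandBrakeCLI and time the run.
    start = time.time()
    subprocess.run(
        [
            "HandBrakeCLI",
            "-i", "movie.mkv",        # 1080p x264 source (placeholder)
            "-o", "movie_x265.mkv",   # re-encoded output (placeholder)
            "-e", "x265",             # use the x265 encoder
            "--chapters", "1",        # encode only the first chapter
        ],
        check=True,
    )
    elapsed = time.time() - start

    frames = 4660  # frame count HandBrake reports for this chapter
    print(f"{frames} frames in {elapsed:.2f}s -> {frames / elapsed:.2f} fps")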

Here are the results, with the power used by the whole server measured directly at the power outlet:

Core™ i7-7500U (Q3 2016) | Idle: 8 W | Max CPU: 35 W | encoded 4660 frames in 586.04 s | 7.95 fps / 35 W = 0.22 fps/W

Celeron® N2830 (Q1 2014) | Idle: 10 W | Max CPU: 12 W | encoded 4660 frames in 2400.00 s | 1.94 fps / 12 W = 0.16 fps/W

So I got what I wanted. At idle, which is where the server spends 98% of its time, power use is 20% lower. I also now have about 4 times more compute at my disposal should I need it.

What also seems interesting is the ratio of frames encoded per second to watts drawn, i.e. energy efficiency. Going from 0.16 to 0.22 fps/W is an energy-efficiency increase of +37.5%. Over the same period, the CPU lithography went from 22 nm to 14 nm: -36.36%. About the same percentage!
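
For anyone who wants to check the arithmetic, here is the whole comparison as a quick Python sketch. All numbers are taken verbatim from the measurements above; the +37.5% uses the rounded fps/W values as in the text (the unrounded figures give closer to +40%):

    # Throughput from the measurements above
    i7_fps = 4660 / 586.04        # ~7.95 fps
    n2830_fps = 4660 / 2400.00    # ~1.94 fps
    print(f"throughput ratio: {i7_fps / n2830_fps:.1f}x")    # ~4.1x

    print(f"idle power: {(8 - 10) / 10:+.0%}")               # -20%

    # Efficiency gain, using the rounded fps/W figures from the post
    print(f"efficiency: {(0.22 - 0.16) / 0.16:+.1%}")        # +37.5%

    # Lithography shrink over the same period
    print(f"lithography: {(14 - 22) / 22:+.2%}")             # -36.36%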

I may be jumping to the wrong conclusions here, but it seems to me that Intel spent the 3 years separating 14 nm from 22 nm doing nothing other than shrinking the lithography, with zero architectural improvements. Am I wrong?


  👤 mcbits Accepted Answer ✓
I'm really not sure what changes they've made, but I think it's just a coincidence that -36% and +37% are similar magnitudes. E.g. imagine instead that the lithography had shrunk by 99.999%. Would there be any reason to expect 0.32 fps/W (i.e. double the Celeron's 0.16)?
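
To spell out where 0.32 comes from (my reading of the argument): if efficiency gains tracked the shrink percentage one-for-one, a 99.999% shrink would imply roughly doubling the efficiency, which nothing about transistor size actually predicts:

    base = 0.16                    # Celeron N2830 efficiency, fps/W
    shrink = 0.99999               # the hypothetical shrink from the answer
    implied = base * (1 + shrink)  # ~0.32 fps/W if gains tracked the shrink
    print(f"implied efficiency: {implied:.2f} fps/W")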