The intel con job, confirmed | INFJ Forum

The intel con job, confirmed

jkxx
Feb 20, 2018
So yesterday, quite unexpectedly, I happened upon two reviews - one from Steve of Hardware Unboxed, one from Linus of tech tips fame - showing the same problem with the new i9-9900K CPU from Intel. (The second review actually covers a different processor, but the same problem is evident there.)

The gist of the problem is this - Intel specced the i9-9900K at a 95W TDP, yet the motherboards designed to mate with it are all loaded with firmware and defaults that run this processor at closer to 140W. The initial reviews all used such motherboards, artificially inflating the 9900K's performance scores. When a user actually plugs this into a board that enforces the 95W limit, they will see noticeably lower performance. And cooling a 140W processor is no joke - as AMD Bulldozer users can attest.
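To see why the board's power limit matters so much, here is a toy model of the sustained-clock effect. Intel's actual turbo mechanism tracks package power against a sustained limit (PL1) with a burst limit (PL2) on top; this sketch is only a crude linear interpolation, and all the numbers in it (power at base clock, power at full boost, the clock speeds themselves) are illustrative assumptions, not measured values.

```python
# Toy model: what all-core clock fits inside a board's sustained power
# limit (PL1)? The real firmware behavior is more complex (PL2 bursts,
# tau windows, per-workload power draw); every number below is an
# assumption chosen to illustrate the 95W-vs-raised-limit gap.

def sustained_clock(pl1_watts, power_at_base=95.0, power_at_max_boost=140.0,
                    base_ghz=3.6, max_boost_ghz=5.0):
    """Crude linear interpolation between base and boost clocks."""
    if pl1_watts >= power_at_max_boost:
        return max_boost_ghz
    if pl1_watts <= power_at_base:
        return base_ghz
    frac = (pl1_watts - power_at_base) / (power_at_max_boost - power_at_base)
    return base_ghz + frac * (max_boost_ghz - base_ghz)

# A board enforcing the official 95 W spec vs. one quietly raising it:
print(sustained_clock(95.0))    # pinned near base clock under all-core load
print(sustained_clock(140.0))   # sustains full boost -> better review numbers
```

The point is only that two "identical" CPUs in two boards with different power limits produce very different benchmark results, which is exactly what the reviews above found.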

Underlying this story is a bigger one - having run out of legitimate ways to improve what they have on their 14nm++ process node, Intel has resorted to underhanded media manipulation.


AdoredTV's analysis from last year, predicting Intel would do this, based on a different but similar case with the 8th-gen Coffee Lake CPUs:

 
And adding a bit on AMD regarding the slew of Microarchitectural Data Sampling (MDS) vulnerabilities being discovered in Intel processors. (Although, amusingly, Intel tried to bribe one of the organizations that discovered the vulns into keeping quiet. Good job, Intel.)

AMD describes the state of their processors with regard to the different families of vulnerabilities here. While AMD chips were vulnerable to several Spectre variants (as of Zen 1), they are not vulnerable to the remaining exploits.

The whitepaper is worth reading as well but here are the highlights:

TLB ARCHITECTURE

The x86 architecture uses virtual addressing and hierarchical page tables to map the virtual address to the physical memory address used to reference caches and memory. This mapping allows privileged system software, whether the operating system or a hypervisor, to isolate different software environments by only allowing certain areas of the memory system to be accessed by each respective environment. This isolation is achieved by creating unique page tables for each environment. These page tables are isolated by either marking the page tables as not-present in the page table entry or using the protection attribute fields in the page table entry to restrict access.

For performance reasons, processors store a copy of these virtual to physical translations in a Translation Lookaside Buffer (TLB). AMD processors store translations in the TLB with a valid bit and all the protection bits from the page table which include user/supervisor, read/write bits along with other information. On each instruction that uses virtual addresses to access memory, AMD processors access the TLB and use the valid bit and the protection attributes to decide whether to access the caches. If the protection check fails, AMD processors operate as if the memory address is invalid and no data is accessed from either the cache or memory. This occurs whether the access is speculative or non-speculative. When the instruction becomes the oldest in the machine, a page fault exception will occur. A validated address is required for AMD processors to access data from both the caches and memory. The result is AMD processors are designed to not speculate into memory that is not valid in the current virtual address memory range defined by the software defined page tables.


In other words, AMD processors do not execute speculative memory loads until the instruction leaves the transient state and becomes an actionable instruction. This makes the vast majority of attacks affecting Intel processors impossible to execute on AMD. This behavior is consistent across these processors and will also stop the latest crop of MDS attacks affecting Intel processors:

ARCHITECTURAL EXCEPTION HANDLING

When exceptions happen within the processor this provides a window for speculation. The most common exception in the processor is a page fault due to a memory reference that is either to an unmapped page or a page that is being protected from access. AMD processors do not speculate on data from accesses that will result in page faults. Therefore, AMD processors are designed not to forward data to other speculative operations when the data is not allowed to be accessed by the current processor context.


In conclusion, this post is more of a musing on how exactly Intel implemented their speculative execution than on AMD apparently building theirs "right" - and on the real-world impact of so many Intel processors out there carrying over half a dozen data exfiltration vulnerabilities.
 
That is some turnaround. For so long Intel was the golden child.
But more and more, AMD has stepped up its game, going from outdated and unreliable to providing serious competition for Intel.

I suppose the difference (and it was the same just a few years ago with AMD) is that Intel wants to appear to be making a major jump in processor technology, whereas AMD has adopted a focus on steady, constant improvement. Nothing groundbreaking, but impressive all the same.

I've been using AMD primarily for a while now, mainly because for a small sacrifice in power you get cheap(ish) and reliable CPUs and GPUs. Although I still have a soft spot for the i5 8400. I might try a Ryzen 5 2600 next time around. Or a Ryzen 7 if I feel like splurging.
 
Pretty much, though that image was carefully crafted by Intel - they did hold the tech lead for years, but they also did some stuff to make sure no one else would. Likewise, I am rather fond of the i5-8300H in my laptop, even if it has become an affordable part thanks to AMD.

Forget the 2600 - noticeably better parts will be announced in, I'd guess, the next 7 days or so, depending on which leak is to be believed. Those will push the 2600 price even lower to help clear out inventory. You are looking at a minimum of 12 cores on AM4 clocked at ~5 GHz boost, though exact pricing is still unknown.

Things will get really interesting between now and August, I think.
 
Those at the top don't like to lose. I've heard that Intel is looking to buy AMD again. Not exactly surprising.
Hopefully those are just rumors, as it would probably lead to a monopoly in the market. And in my experience, monopolies only lead to complacency and greed. Google is a prime example.

Thanks for the advice. I'm only musing, though - it'll be a while before I buy a new CPU. I picked up a 7th-gen i7 for next to nothing recently, so I'll be waiting a while.
I rarely buy newer parts anyway. I usually wait for a price drop or sale, and for the majority of bugs to be ironed out. I only use my PC to stream media and play older/indie games. Nothing particularly intense.
 
Indeed, I wonder if they (Intel) would try this given the rather lax monopoly policing of late. I think it would cause some uproar even if regulators gave it the green light. Not just Google, either - we have several monopolies causing problems across the board. On that note, it's refreshing to see the possibility of Facebook crumbling in the foreseeable future.

And yes, I think buying used is the way to go - very decent tech at reasonable or even discounted prices. Even for those doing a fair amount with their systems, anything from the last few generations is likely to be "good enough."
 
Well, they can always give the impression that they're not a monopoly. Do what Cisco does: buy out the competition and keep the recognizable name.
While Facebook is a complete mess and needs to die, I'd be happier if YouTube got some actual competition. The lack of moderation makes Steam look practically totalitarian. All I have to do is watch an episode of John Oliver to get my feed filled with alt-right videos.

Yup, unless you need to maintain a large virtualized environment, or you want every extra FPS you can get and have money to burn, it's a bit of a waste.
 
They can, for sure, and that would be a really slick and sly way of going about it. I think by now it might be too late for this strategy, though, as they would need to act more aggressively, and that is unlikely to go unnoticed. Either way, I am curious about whatever they end up doing.

Good examples with the other incumbents too (YouTube and the rest) - maybe we'll see some changes there, though I can't say I have heard of anything on YouTube beyond some iffy complaints here and there.
 
Yeah, so Moore's law has come to an end.

For a few generations now, the improvements have been nowhere near a doubling factor across performance, size, and power.

I first saw this with Haswell on an HPC cluster: when the AVX2 instruction set was used, the clock frequency was HALVED.
And this is another problem with HPC - there is an efficiency metric where you count instructions per clock (this is basically Linpack).
Unless we are talking Linpack, you are doing extremely well with, let's say, more than 25% utilisation of the possible instructions per clock.
But here we are also talking microcode, not really x86 instructions, and what not.

Now, did they have to halve the clock frequency when AVX2 was used? There has been this thing with Turbo, and other mechanisms besides, for thermally regulating this.
That said, we are at a point where using the whole die efficiently is almost impossible, and there must be some thermal balancing.
But not in an HPC environment, where you prefer fixed clock rates and reliable operation. So by reducing the clock frequency by half when AVX2 was used, the temperature stayed the same.

I left that business before the Broadwell and Skylake systems, so I didn't check whether the clock was halved there too or other measures were taken, like just letting it heat up.
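The AVX2 downclocking trade-off above can be put in rough numbers. Assuming Haswell-era figures for illustration (2 FMA units per core, 256-bit vectors holding 4 doubles, 2 FLOPs per FMA - and taking the "halved clock" from the observation above, not from any official spec):

```python
# Back-of-envelope peak throughput: even a halved AVX2 clock beats
# scalar FMA code on paper, because the vector width more than makes
# up for it. All figures are illustrative Haswell-era assumptions.

def peak_gflops_per_core(clock_ghz, vector_doubles, fma_units=2, flops_per_fma=2):
    return clock_ghz * vector_doubles * fma_units * flops_per_fma

scalar = peak_gflops_per_core(3.0, vector_doubles=1)   # full clock, scalar FMA
avx2   = peak_gflops_per_core(1.5, vector_doubles=4)   # halved clock, 256-bit

print(scalar, avx2)  # 12.0 vs. 24.0 GFLOP/s per core
```

But that is peak - at the ~25% utilisation mentioned above, real throughput for anything that isn't Linpack lands far below either figure, while the power and scheduling headaches of the variable clock remain.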