Good. It’s bloat anyway. When was the last time you used a 32-bit OS?
What are you risc fags going to bitch about now that Intel is doing away with bloat and use as an excuse as to why your niche processor is somehow better?
Removing legacy OS support is hardly getting rid of much bloat. It's a trimming at most. The x86 instruction set and encodings are a complete mess, and none of that is being changed.
Ah, so you're going to be moving the goal posts and claiming something ELSE is bloat now (it isn't). Got it.
x86 was bloated back when it was 16-bit only. Its entire design ethos is fundamentally bloat.
The only thing that's bloated here is your belly.
Yet is still the fastest and most efficient architecture, keep seething ARMtard
I'm not an ARMfag. I use a TALOS II workstation which shits on any of your consoomer garbage.
>powerpc
Meme arch, any 5 year old threadripper destroys that shit
>Summit, the fifth fastest supercomputer in the world (based on the Top500 list as of November 2022[6]), is based on POWER9, while also using Nvidia Tesla GPUs as accelerators.[7]
Tell me again, which arch does the fastest computer in the world use? That's right, x86-64
>fifth
And if I were to put several trillion 486DX machines together I could also make the fastest supercomputer in the world, who cares.
The POWER part of it is just to provide enough bandwidth to the GPUs.
Nowadays x86 designs are superior in this regard and NOVIDEO rolled their own ASICs for NVLINK switching anyway.
I wonder how much IBM paid for this privilege.
If it goes 64-bit only then yes, it is being changed, because the instruction lengths could all be made the same, witch would vastly simplify encoding, decoding, and even writing an assembler, from a software perspective. Variable-length instructions require extra silicon and cause decoder latency.
>witch*
which
>variable length instructions BAD!
>why?
>because my brainlet ass couldn't handle writing an assembler
fucking retard
enjoy your 50MB bloated binaries with fuckhuge xbox hueg instructions that take up 4-8 bytes each.
Brainlets always seize upon the weakest argument so they can pretend they've rebutted the whole thing, homosexual.
I explained why they're bad... they're slow. The decoder has to do a lot of extra work, more latency.
>fuckhuge xbox hueg instructions that take up 4-8 bytes each.
Time/space tradeoff. Equal-sized instructions use more memory, but they decode faster and make it easier to keep memory aligned. Or you could go completely the other way and have a processor that works on raw bitstreams, with all its attendant problems and a complex decoder, but very space-efficient.
Or you could, instead of going with 64-bit instructions, go with 48 or something else.
>Time/space tradeoff.
>decode
Since sandybridge we have a uop cache and it minimizes the decoding problem to near irrelevance. Imagine trying to claim that x86 is bad and slow because XYZ, but in the real world, the fastest processors you can buy are x86. Lol.
>make it easier to have aligned memory
x86 barely gives a shit about "alignment" anymore. Only interlocked operations and certain SIMD operations give a fuck about alignment these days. Instruction alignment in the modern day is just a micro-optimization that is barely worth doing for 99% of code.
>Since sandybridge we have a uop cache and it minimizes the decoding problem to near irrelevance.
Not really, no.
>Imagine trying to claim that x86 is bad and slow because XYZ, but in the real world, the fastest processors you can buy are x86. Lol.
Being a dominant player doesn't make you incapable of inferior engineering. Intel has tried to address bad architecture more than once; so they're not really on your side. OP's post is not exactly a glowing appraisal of your views from the very people who produce these chips.
>x86 barely gives a shit about "alignment" anymore. Only interlocked operations and certain SIMD operations give a fuck about alignment these days. Instruction alignment in the modern day is just a micro-optimization that is barely worth doing for 99% of code.
Abject nonsense. Alignment does still matter.
>Abject nonsense. Alignment does still matter.
Alignment doesn't fucking matter, retard. People who still worry about alignment for general purpose data structures and mundane shit like that are literally stuck in the 2000s.
Alignment makes no difference unless you are working with large, fixed size arrays, variables that are accessed with interlocked operations, or SIMD. Padding your data structures for "alignment" does nothing except waste memory and more importantly waste cache. With padding, instead of being able to fit 37 structures in a single cache line, you can only fit 32. And when you have 1000 of those structures your program slows down, uses more memory, and doesn't end up running faster. Why don't you go read some optimization manuals that aren't from the 90s and 2000s.
You're too stupid to continue arguing with. You pretend trade-offs don't exist and you rationalize every legitimate argument put before you because you're a dumb fanboy. You're flat earth tier stupid and I'm done reading your nonsense.
Actually 37 structures in a cache line was just random numbers I made up.
Let's say you have a 3-byte data structure consisting of a word and a byte. With alignment, each structure is now 4 bytes and contains a wasted byte, and you can fit 64/4 = 16 of them in a cache line. When you pack your structures (which is what I always do in my programs unless there's a reason not to) each structure is 3 bytes and you can fit 21 of them in a cache line.
Idiots who screech about alignment are literally retarded.
>I'm a retard and how does false sharing happen?
>pass user
>post one of the worst take of the day
every fucking time...
Variable-length instructions suck because they vastly increase the amount of silicon needed by a wide decoder, by introducing dependencies between decoding successive instructions. x86 is particularly bad because of the huge variation in instruction width, rather than the simple 2/4-byte variability of something like T32.
They are not changing the instruction set. It will still be x86 and it will still run the same code as it did before. The only thing that is being changed is how the system boots.
As far as your OS is concerned even that is the same. UEFI already switches to 64 bit before the OS even starts. Only firmware vendors will see a difference in the boot process.
Yeah, I've never understood why software is even released for 32-bit anymore. The last time I used it was like 2009
There's plenty of 32 bit userland software but they wouldn't be impacted by this change.
horseshit, Windows already has an emulator for 32-bit software on 64-bit Windows, and I'm sure you weren't aware of it because it works flawlessly most of the time. It's already a solved issue, and hardware support is the last piece that needs to go.
https://en.wikipedia.org/wiki/WoW64
if it isn't already an issue for those who rely on 32-bit software on a 64-bit OS, it won't be when Intel drops 32-bit support (yes, will, not if, because it WILL happen; the only real question is when)
WoW64 is an API wrapper not emulation retard.
32-bit software on a 64-bit OS still runs in long mode, but in 32-bit compatibility mode, which is not the same thing as 32-bit protected mode, and not what Intel proposed to remove
>WoW64 is an API wrapper not emulation retard.
not feeding this infinite semantic discussion, you know very well what I meant
>which is not the same thing as 32 bit protected mode and not what intel proposed to remove
no fucking shit, native 32bit cpu mode has been worthless for a very long time
If you can't read or write that's your problem, not mine.
often, but mostly ARM boards. I can't recall the last time I used 32-bit mode on a 64-bit Intel machine, except for toying with Windows XP in VirtualBox
IA64-reborn*
unless it supports 64-bit memory addressing instead of 48-bit, then sure thing pal
It's about time.
You clearly don't know the first thing about IA64 or why it wasn't adopted.
> reddit tier post about something being "over"
> nothing is actually happening
when will the pedophiles of reddit just go home and stay there?
hello, reddit.
you'd need a 128-bit CPU, but there will never be a 128-bit Intel CPU. We can't even populate memory to the full range of what a 64-bit address space could access.
reddit spacing
not how it works, reddit. go back to your home with the rest of the computer illiterate failures that don't know how a CPU works.
Can AMD64 work without i386?
>Intel invented the 64-bit set instructions
What a fucking joke HAHAHAHA
Also, fuck 32-bit!
Intel tried hard to take away AMD's credit for AMD64.
Good.
https://www.intel.com/content/www/us/en/developer/articles/technical/envisioning-future-simplified-architecture.html
32-bit programs will still work btw, this just affects the CPU's ability to boot a 32-bit OS.
personally I don't like it because I'm an old-OS enthusiast, but I dunno, maybe by then our emulation meme chips will be good enough that it doesn't matter
When will we start using a 128bit OS?
we do already, hell, we even have 256-bit (and 512-bit) SIMD instructions in use
BIG NUMBERS BOIIII
Never.
Maybe around the time when 18.4 exabytes of memory isn't enough.
two more weeks, guys!
that is simply an extension of their 8-bit 8088 CPU.
> that is simply an extension of their 8-bit 8088
The 8088 and the 8086 are the same CPU; the only difference is that the 8088 has an 8-bit external bus.
thank you for explaining what we already knew, gpt chatbot nagger retard.
Dumb /misc/tard.
i'm sorry about your severe brain damage, dunning kruger nagger retard. shut the fuck up, thanks. straight white men that know how CPUs work (and their history) are speaking.
I'm not that homosexual. Take your racist fat ass leave
> everyone is /misc/ that i disagree with
go home, reddit. your computer illiteracy isn't wanted here.
The transistors required for some operations (e.g. multiplication) scale superlinearly with the width, so it's not trivial to increase width arbitrarily.
>"transistors"
>read that as trans sisters
please get out of my head
Just get a Playstation 2.
x86 is a 16-bit architecture
by reducing 16 bit crap in their processors and switching to tsmc, intel is going to btfo ARM in energy efficiency. apple and amd will go bankrupt overnight. x86 is king. fuck ARMshit and fuck RISCfags
> removing "legacy" magically makes things perform better
> compares CISC with RISC
i knew passfags were literally the dumbest and most mongoloid retards in existence but this just proves it once and for all that you people are amazingly fucking stupid.
>removing "legacy" magically makes things perform better
it doesn't make anything perform better, but it will decrease power consumption for very low-power devices like laptops and tablets.
why dont you stop being a fucking contrarian and listen to what i'm saying. I never mentioned performance you fucking nagger. Kill yourself
I doubt it will have any measurable impact on power consumption.
This is just cleaning up the legacy boot stuff.
You won't be able to boot to MS-DOS anymore, but running old software is still possible under a modern 64-bit OS.
It's not like the 32-bit stuff is completely separate - it all runs on the same ALUs, using the same decoders, etc.
>and switching to tsmc
never going to happen, not for the CPU cores themselves anyway
passfags aren't going to know this. i'd be shocked if it actually understands english.
>intlel switching to tsmc
lol no
the U.S. government is currently dumping boatloads of cash at Intel's feet to build more fabs
>inb4 b-but arc
low volume product, it's a beta for their datacenter AI accelerators (which have significant national security value, from the perspective of the government)
Keep dreaming, Pajeet. AMD will always find a way to fuck 'em up.
AMD has not been a driving force behind the x86 architecture since AMD64. Everything since then has been an Intel development: VEX/AVX, AVX2, AVX-512, and now x86-S. AMD hasn't invented anything new except for their fancy exploding V-Cache CPUs.
AMD have invented chiplets which everyone uses, retard. And ReBAR. And APUs that don't suck and are used everywhere.
how does this affect pre 64 bit games and programs?
It doesn't, see
This is just another step after making the CSM not mandatory in 2020.
Does this mean that an x86S computer will start in long mode?
Practically, this has already been the case for a long time with UEFI. Only very early implementations were 32-bit. With the deprecation of the CSM in 2020, the CPU stays in legacy mode for only a very short time.
I understand that, but I'm wanting to know if it's there from the very start.
Yes.
sweet
this comes off to me like some sort of move to get away from using AMD's 64-bit architecture
Intel's architecture is already incompatible with AMD64, and has been since the beginning. The differences are minuscule, but every OS has to support them both.
>Simplified Intel Architecture.
Isn't it AMD IP?
Yes, Intel's own 64-bit architecture failed in the market.
They had no choice but to license the superior AMD64. They of course modified it very slightly.
To grossly oversimplify: back in the day AMD was just one of several licensees of Intel's IP, since back then everyone had a (and usually just a) fab, so Intel needed the help to meet the demand from IBM et al. Then, to get ahead, AMD added 64-bit extensions without asking Intel first. This was a big no-no, but they were selling like hotcakes and threatening Intel's market. There were also allegations of corporate espionage, since Intel had been working on its own 64-bit extensions to x86 that had nothing to do with IA64. There was a very long lawyer orgy, and the end result was the mutual patent cross-licensing agreement that Intel and AMD currently share. They're now so intertwined that neither of them can legally operate without it (for example, most of the AVX-512 extensions are Intel's work but are now a selling point for AMD). It conveniently serves the double purpose of guaranteeing neither of them can be hit with an antitrust case, not that the US gubment would ever do that given the current geopolitical silicon climate anyway.
The end result is the same. Intel is using a near identical architecture to AMD64 in their processors.
Their own efforts failed to appease the market.
Not to mention that Intel is not the only licensee of AMD64. VIA made compatible CPUs that were sold in the West, so they were legally licensed. The Zhaoxin deal is a grey area: on the one hand VIA is involved, and on the other it's not being sold in the West.
Centaur (originally an x86 design house, later bought by VIA) also made an AMD64 design, but they ended up being bought by Intel in the end.
>Intel is using a near identical architecture to AMD64 in their processors
That's a dumb distinction to draw. Either amd64 isn't a distinct architecture from x86, or it is and what's in today's processors isn't amd64 because it's been 20 years since then. Also if AMD thought Intel 64 was merely amd64 they'd have sued by now.
>That's a dumb distinction to draw.
Yes it is, welcome to corporate politics.
>Either amd64 isn't a distinct architecture from x86
It is.
>or it is and what's in today's processors isn't amd64
If you look at, for example, the Linux kernel source, you'll see that Intel's and AMD's support code is almost identical, "almost" being the crucial part. This was done deliberately by Intel to distinguish their implementation from AMD's.
>Also if AMD thought Intel 64 was merely amd64 they'd have sued by now.
They have a cross licensing agreement. That's why Zen 4 has SSSE3, AVX, AVX2, AVX-512, and so on. The base AMD64 only mandated SSE2.
>say bye bye to all the goat games before 2010 and xp in vm
god i hate israelites
I haven't natively run a 32-bit OS in decades. This is a nothingburger.
I thought what we commonly used now was basically AMD64. But who is even using 32-bit x86 anymore?