Why do we need a 128bit CPU architecture?

With some ‘leaked’ information about Microsoft’s plan to include a brand new IA-128 processor architecture in their next versions of Windows (8 and 9), it got me thinking about the need for 128-bit CPUs. What’s the point?

Memory Addressing

This is often cited as the reason for needing to increase the number of bits in a CPU. With a 32bit register you can address 2^32 bytes of RAM, or 4GiB system wide. Windows itself imposes a limit of 2GiB of address space for user processes and somewhere around 3.12GiB of usable total RAM, which is why there is such a push to 64bit architecture. 64bit versions of Windows allow you to address far more memory. Except, this isn’t really true.

Physical Address Extension

PAE is an old technique for addressing more memory than the register width should allow, up to 64GiB on 32bit CPUs. It works by extending physical addresses from 32 to 36 bits through an extra level of page tables: each process still sees a 32bit virtual address space, but the OS can map those pages anywhere in the larger physical memory. In spirit it resembles bank switching, splitting your total memory into different banks which can be switched into a smaller addressing window as needed. This can be done efficiently and safely, and in fact Windows already does it. This is why 32bit versions of Windows Server can address far more than 4GiB of RAM even though the 32bit consumer versions are capped at 4GiB. Well actually, you can force the consumer version to do it as well.

How much memory do you really need?

Let’s assume that 64bit architecture, like 32bit without PAE, only lets you access half of what it theoretically should. That means the maximum amount of RAM you could access would be 2^64 / 2, which equates to 8,388,608 TiB of RAM that you would still be able to address. Most computers being sold today come equipped with 2GiB of RAM total, or about 2.3*(10^-8) % of just _half_ of the total addressing space allowed for with 64bit architecture.

Speed improvements

The next argument for increasing the bit size of the architecture is to get speed improvements. By increasing the length of every register you no longer have to straddle registers when you are dealing with large numbers. For example, if you are doing math using a 64bit number on a 32bit CPU you will need to use two registers to fit the whole number. On a 64bit CPU you just need one, thus freeing up the second register for something else.

Surely moving to 128bit CPUs will also improve speeds, then? Well… sort of. You see, a lot of the large-number math instructions a CPU can execute already make use of specialized 128bit registers inside existing 32bit CPUs. I highly doubt there will be a large need for 256bit data types moving forward (super big long unsigned int?), so most of the real speed improvements you will see on a 128bit CPU will be when you are using 128bit numbers.

Another issue is existing software. The vast majority of software currently available is 32bit, meaning it will see very little speed increase on 64bit or 128bit CPUs. In fact, 64bit software is only now starting to become commonplace, with many applications still lacking true 64bit support.

Yes, 128bit registers will be beneficial for some computations, such as encryption (128–256bit keys) and hash algorithms (some of the SHA-3 candidates keep an internal state of 512–1024 bits), but so will the addition of specific instruction sets that make use of the existing hardware.

Progress moves forward

I don’t mean to rain on the 128bit architecture parade; I merely mean to point out that what has been said so far about it really isn’t that different from what we already have. One day I do expect 128bit CPUs to replace 64bit ones, just as 64bit CPUs are now slowly replacing 32bit ones. In the meantime I would much rather have additional registers or more hardware functionality, because those will actually be taken advantage of.