CHERI is a much more interesting case, because it expands the definition of what a "pointer" is. Most low-level programmers think of pointers as just an address, but CHERI turns it into a sort of tuple of (address, bounds, permissions) -- every pointer is bounds-checked. The CHERI folks did some cleverness to pack that all into 128 bits, and I believe their demo platform uses 128-bit registers.
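Roughly, and glossing over the clever compressed encoding, a capability is conceptually something like the struct below (field names are mine and purely illustrative; the real thing packs all of this plus an out-of-band validity tag into 128 bits):

```c
#include <stdbool.h>
#include <stdint.h>

/* Conceptual view of a CHERI-style capability; NOT the real bit layout. */
typedef struct {
    uint64_t address;      /* the address the program actually dereferences */
    uint64_t base;         /* lowest address this capability may touch      */
    uint64_t length;       /* size of the region it covers                  */
    uint32_t permissions;  /* load / store / execute / ... bits             */
    bool     valid;        /* tag bit, cleared if the capability is forged  */
} capability;

/* Every load/store is conceptually checked against the bounds. */
static bool capability_allows(const capability *c, uint64_t addr, uint64_t size) {
    return c->valid
        && addr >= c->base
        && size <= c->length
        && addr - c->base <= c->length - size;
}
```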
The article also touches on the UNIX-y assumption that `long` is pointer-sized. This is well known (and well hated) by anyone who has had to port software from UNIX to Windows, where `long` and `int` are the same size and `long long` is pointer-sized. I'm firmly in the camp of using fixed-size integers, but the Linux kernel uses `long` all over the place, and unless they plan to do a mass migration to `intptr_t` it's difficult to imagine a solution that would let the same C code support 32-, 64-, and 128-bit platforms.
(comedy option: 32-bit int, 128-bit long, and 64-bit `unsigned middle`)
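More seriously, a toy illustration of the trap (nothing platform-specific in the source; what changes is only whether the ABI is LP64 like Unix or LLP64 like 64-bit Windows):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int x = 0;
    void *p = &x;

    /* LP64: long is 64 bits, so this round-trips.
       LLP64: long is 32 bits, so the top half of the pointer is lost. */
    long via_long = (long)(uintptr_t)p;

    /* uintptr_t is defined to be wide enough for any object pointer,
       so this round-trips on both ABIs (and on a hypothetical 128-bit
       one, provided the implementation defines the type at all). */
    uintptr_t via_uintptr = (uintptr_t)p;

    printf("sizeof(long)=%zu, sizeof(void*)=%zu\n", sizeof(long), sizeof(void *));
    printf("via long:      %s\n", (void *)(uintptr_t)via_long == p ? "ok" : "truncated");
    printf("via uintptr_t: %s\n", (void *)via_uintptr == p ? "ok" : "truncated");
    return 0;
}
```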
The article also mentions Rust types as helpful, but Rust has its own problems with big pointers because they inadvisably merged `size_t`, `ptrdiff_t`, and `intptr_t` into the same type. They're working on adding equivalent symbols to the FFI module, but untangling `usize` might not be possible at this point.
On the other hand, filling out a 64-bit address space looks tough. I struggled to find something of the same order of magnitude as 2^64, and the best I got was "number of iron atoms in an iron filing". From a nanotechnological point of view a memory bank that size is feasible (it fits in a rack at 10,000 atoms per bit), but progress in semiconductors is slowing down. Features are still getting smaller, but they aren't getting cheaper anymore.
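Back-of-the-envelope check (assuming ~56 g/mol for iron and ignoring all the non-storage atoms a real device would need):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    const double count    = ldexp(1.0, 64);  /* 2^64 */
    const double avogadro = 6.022e23;        /* atoms per mole         */
    const double molar_fe = 55.85;           /* grams per mole of iron */

    /* 2^64 iron atoms as a lump of metal: about a filing's worth. */
    printf("2^64 iron atoms  ~ %.1f mg\n", count / avogadro * molar_fe * 1e3);

    /* A 2^64-byte memory at 10,000 atoms per bit. */
    double bits = count * 8.0;
    printf("2^64-byte memory ~ %.0f g of iron\n", bits * 1e4 / avogadro * molar_fe);
    return 0;
}
```

That comes out to roughly 1.7 mg for the filing and a bit over a hundred grams for the memory bank, so "fits in a rack" checks out with plenty of room to spare.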
Let's think critically for a moment. I grew up in the 1980s and 1990s, when we all craved more and more powerful computers. I even remember the years when each generation of video games was marketed as 8-bit, 16-bit, 32-bit, etc.
BUT: We're hitting a point where, for what we use computers for, they're powerful enough. I don't think I'll ever need to carry a 128-bit phone in my pocket, nor do I think I'll need a 128-bit web browser or a 128-bit web server. (See other posts about how 64 bits can address massive amounts of memory.)
Will we need 128-bit computing? I'm sure someone will find a need. But let's not assume they'll need an operating system designed in the 1990s for use cases that we can't imagine today.
It's an unlikely hypothetical, but imagine if fiber ran everywhere and all computers seamlessly worked together, sharing computing power as needed. Even 256 bits wouldn't be out of the question then. And before you say something like that will never happen, consider trying to convince somebody from 2009 that in 13 years people would be buying internet money backed by nothing.
Can anyone explain the rationale for not simply naming types after their size? In many programming languages, rather than this arcane terminology, `i16`, `i32`, `i64`, and `i128` simply exist.
I'm sure I got something wrong here (probably an off-by-one), but roughly it looks like it would need 1209-bit floats (2048-bit rounded up!). IDK, mildly interesting. :>
    import math

    # pi written out to a few hundred decimal places as one big integer
    pi = 3141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273724587006606315588174881520920962829254091715364367892590360

    sign_bits = 1
    sig_bits = math.ceil(math.log2(pi))         # bits needed to hold that many digits
    exp_bits = math.floor(math.log2(sig_bits))  # rough size of the exponent field
    assert sign_bits + sig_bits + exp_bits == 1209
"A 64 bit memory space is large enough that if a process allocated 1MB every second, it could continue doing this until significantly past the expected lifetime of the sun before it ran into problems"
Fast-forward 18 years, and it's fascinating to me to see people now seriously floating the proposal to support 256-bit pointers.
Or better yet, design a new abstraction that doesn't hard-code the pointer size but instead allows it to be extended as more addressable space becomes a reality, instead of having to transition over and over. Is this even possible? If it is, shouldn't we head in that direction?
First, we should decide whether to have a microkernel or a monolithic kernel.
I think the answer is obvious: microkernel. This is much safer, and seL4 has shown that performance need not suffer too much.
Next, we should start by acknowledging the chicken-and-egg problem, especially with drivers. We will need drivers.
So let's reuse Linux drivers by implementing a library for them to run in userspace. This would be difficult, but not impossible, and the rewards would be massive, basically deleting the chicken-and-egg problem for drivers.
To solve the userspace chicken-and-egg problem (having applications that run on the OS), implement a POSIX API on top of the OS. Yes, this will mean that some bad legacy like `fork()` will exist, but it will solve that chicken-and-egg problem.
From there, it's a simple matter of deciding what the best design is.
I believe it would be four things:
1. Acknowledging hardware as in .
2. A copy-on-write filesystem with a transactional API (maybe a modified ZFS or Btrfs).
3. A uniform event API like Windows' handles and Wait() functions or Plan 9's file descriptors.
For number 3, note that not everything has to be a file, but things like signals and child-process exits should be waitable, as with Windows handles or Linux's signalfd and pidfd.
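For what it's worth, Linux can already express that uniformly today; a minimal sketch (error handling omitted, and pidfd_open is invoked via syscall() since older glibc has no wrapper):

```c
#define _GNU_SOURCE
#include <poll.h>
#include <signal.h>
#include <stdio.h>
#include <sys/signalfd.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    /* Turn SIGINT into a pollable file descriptor. */
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGINT);
    sigprocmask(SIG_BLOCK, &mask, NULL);
    int sfd = signalfd(-1, &mask, 0);

    /* Turn a child process into a pollable file descriptor. */
    pid_t child = fork();
    if (child == 0) { sleep(2); _exit(0); }
    int pidfd = (int)syscall(SYS_pidfd_open, child, 0);

    /* One uniform wait covering both kinds of events. */
    struct pollfd fds[2] = {
        { .fd = sfd,   .events = POLLIN },
        { .fd = pidfd, .events = POLLIN },
    };
    poll(fds, 2, -1);
    if (fds[0].revents & POLLIN) printf("signal arrived\n");
    if (fds[1].revents & POLLIN) printf("child exited\n");
    return 0;
}
```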
For number 2, this would make programming so much easier for everybody, including kernel and filesystem devs. And I may be wrong, but it seems like it would not be hard to implement: when doing copy-on-write, just copy as usual and update the root B-tree node; the transaction commits when the new root B-tree node is flushed to disk and the flush succeeds.
(Of course, this would also require disks that don't lie, but that's another problem.)
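A very rough sketch of that commit rule, with made-up helper names (write_new_tree_nodes, flush_device, write_superblock are placeholders, not any real filesystem's API), just to show where the atomicity comes from:

```c
#include <stdio.h>

/* Hypothetical stand-ins: real code would write CoW copies of the dirty
 * B-tree nodes and issue cache-flush/FUA commands to the device. */
static int write_new_tree_nodes(void)               { return 0; } /* never overwrite in place */
static int flush_device(void)                       { return 0; } /* barrier: make it durable */
static int write_superblock(unsigned long new_root) { (void)new_root; return 0; }

/* Nothing the transaction wrote is reachable until the single superblock
 * update that points at the new root, and that update only happens after
 * the flush of the new nodes has succeeded. Crash before step 3: the old
 * tree is still intact. Crash after: the new tree is complete. */
static int commit_transaction(unsigned long new_root_block) {
    if (write_new_tree_nodes() != 0)           return -1; /* 1. write the copies         */
    if (flush_device() != 0)                   return -1; /* 2. ensure they hit the disk */
    if (write_superblock(new_root_block) != 0) return -1; /* 3. the commit point         */
    return flush_device();                                /* 4. make the commit durable  */
}

int main(void) {
    printf("commit %s\n", commit_transaction(42) == 0 ? "succeeded" : "failed");
    return 0;
}
```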
Since then, it only roughly halved. What happened?
I know it's not process geometry, since we went from 45 nm to 5 nm in that time, a roughly 9x linear shrink (about 81x by area).
Is it realistic to assume scaling will resume?
Does anybody know why they don't use the existing fixed-size integer types from C99, i.e. `uint64_t` etc., and define a 128-bit-wide type on top of that (which will also be there in C23, IIRC)?
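Presumably something along these lines; a sketch of the two-halves approach built on C99 `uint64_t` (GCC/Clang's `unsigned __int128` already does this for you on 64-bit targets, and C23's `_BitInt(128)` is the standardized spelling where the implementation supports widths that large):

```c
#include <stdint.h>
#include <stdio.h>

/* A 128-bit unsigned integer as two C99 uint64_t halves. */
typedef struct { uint64_t lo, hi; } u128;

static u128 u128_add(u128 a, u128 b) {
    u128 r;
    r.lo = a.lo + b.lo;
    r.hi = a.hi + b.hi + (r.lo < a.lo);  /* carry out of the low half */
    return r;
}

int main(void) {
    u128 a = { UINT64_MAX, 0 };   /* 2^64 - 1 */
    u128 b = { 1, 0 };
    u128 c = u128_add(a, b);      /* expect 2^64, i.e. hi=1 lo=0 */
    printf("hi=%llu lo=%llu\n", (unsigned long long)c.hi, (unsigned long long)c.lo);
    return 0;
}
```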
My own kernel dev experience is pretty rusty at this point (pun intended), but in the last decade of writing cross-platform (desktop, mobile) userland C++ code I advocated exclusively for using fixed-width types (`std::uint32_t` etc.) as well as constants (`UINT32_MAX` etc.).