Always Bump Downwards (2019)

FITZGERALDNICK.COM

Comments

@benmanns

Somewhat aside, I love the graphical display of the benchmark results: https://fitzgeraldnick.com/media/bumpalo-switch-to-downwards... --I think I'll start doing that when I run benchmarks in the future.

@thethirdone

Notably, that graph starts at 25 which seems misleading.

@wrsh07

Eh it was clearly labeled, and I pretty quickly gleaned that it was like 5 microseconds faster

@karmakaze

Given that the chart goes to 80, it was misleading labeling.

@gjm11

Since the title is less than perfectly perspicuous:

It's about "bump allocators", where you allocate memory just by incrementing/decrementing a pointer (and don't free it until you're ready to free up all the memory all at once).

These can either allocate "upwards", starting at low addresses and giving you memory at higher addresses for successive allocations, or "downwards", going the other way.

The claim being made is that "downwards" is better, because the bounds checks you need to do turn out to be more efficient that way; allocation ends up using fewer instructions, fewer registers, and fewer conditional jumps.
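
Concretely, a minimal sketch of the two variants (illustrative only, not the article's actual code; `start`/`end` bound the arena and `align` is a power of two):

    // Bumping up: round the pointer up to `align`, then add `size`.
    // Both steps can overflow, so both need checks, plus the end-of-arena test.
    fn bump_up(ptr: usize, end: usize, size: usize, align: usize) -> Option<usize> {
        let aligned = ptr.checked_add(align - 1)? & !(align - 1);
        let new_ptr = aligned.checked_add(size)?;
        if new_ptr > end { None } else { Some(aligned) }
    }

    // Bumping down: subtract `size`, then round down by masking off low bits.
    // Rounding down can never overflow, so only the subtraction and the
    // start-of-arena test are needed.
    fn bump_down(ptr: usize, start: usize, size: usize, align: usize) -> Option<usize> {
        let new_ptr = ptr.checked_sub(size)? & !(align - 1);
        if new_ptr < start { None } else { Some(new_ptr) }
    }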

@chrchang523

Incidentally, you can choose to bump in both directions. It's more complicated (you need to keep track of which end you allocated each data structure on), but in exchange, the allocator becomes sufficient for many more use cases.

Given a choice, the OP implies that you should position small-but-numerous allocations next to the top, and larger infrequent allocations next to the bottom.

@dev_dwarf

This sounds like it would make the alloc logic much more complicated and branch-y, defeating the purpose of bumping down anyway, unless you're implying some compile-time way to do this.

@chrchang523

No, the idea is that you manually make some allocations downward from the top and some allocations upward from the bottom. The bumping code is as simple as in the unidirectional case.

The tricky part is choosing in a way that puts you noticeably ahead of the unidirectional allocator re: what problems you can solve, without putting excessive mental load on yourself. I've found a pattern of "long-lived allocations on one end, short-lived allocations on the other" to work well here (which, yes, doesn't always coincide with the numerous vs. infrequent axis mentioned in my previous comment).
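
Roughly, as a sketch (field names made up, alignment handled the same way as in the one-directional case):

    // Illustrative two-ended arena: long-lived data bumps up from the bottom,
    // short-lived data bumps down from the top; both share one collision check.
    struct TwoEndedArena {
        low: usize,  // next free address for upward allocations
        high: usize, // one past the last free address for downward allocations
    }

    impl TwoEndedArena {
        fn alloc_up(&mut self, size: usize, align: usize) -> Option<usize> {
            let aligned = self.low.checked_add(align - 1)? & !(align - 1);
            let new_low = aligned.checked_add(size)?;
            if new_low > self.high { return None; } // would collide with the other end
            self.low = new_low;
            Some(aligned)
        }

        fn alloc_down(&mut self, size: usize, align: usize) -> Option<usize> {
            let new_high = self.high.checked_sub(size)? & !(align - 1);
            if new_high < self.low { return None; } // would collide with the other end
            self.high = new_high;
            Some(new_high)
        }
    }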

@dev_dwarf

Ok, I get it now. It would add an extra ptr to the struct, but wouldn't be significant overhead.

I do wonder what benefit there is for you over just having two separate allocators, one for long term and one for short term. I imagine there could be benefits in very memory constrained scenarios.

@toast0

BEAM (Erlang) uses something similar for process memory; a process gets a chunk of memory, the heap grows up, the stack grows down; when they meet, trigger GC, and if that doesn't reclaim enough, allocate a larger chunk of memory. More details if you're interested [1]. In general, anywhere the combined size of two allocators is a consideration, I'd think having one grow up and the other grow down would make a lot of sense.

[1] https://www.erlang.org/doc/apps/erts/garbagecollection

@dev_dwarf

That's an interesting idea. I'm not sure I'm sold on it vs. just having two separate allocators and growing them separately. The arena allocators I use take advantage of virtual memory to grow, which might change my perception of the tradeoffs involved, as I wouldn't typically need to resize one of my allocators (you can just aggressively over-reserve memory and then only commit what is actually used).

@Conscat

A double ended stack allocator is not an uncommon primitive in video games.

@svat

> manually make some allocations downward from the top and some allocations upward from the bottom

Incidentally, this is what Knuth does in TeX, if I understand correctly: http://mirrors.ctan.org/info/knuth-pdf/tex/tex.pdf#page=43 (section 116):

> The mem array is divided into two regions that are allocated separately, but the dividing line between these two regions is not fixed; they grow together until finding their “natural” size in a particular job. Locations less than or equal to lo_mem_max are used for storing variable-length records consisting of two or more words each. […] Locations greater than or equal to hi_mem_min are used for storing one-word records…

(Different allocators are used for the two regions and neither seems to be a bump allocator, so it's probably not very relevant to this thread, but I was reminded of it so just sharing…)

@justin_oaks

I only recently learned about bump allocators. I was very confused as to how such an allocator could be useful. Reading the author's bumpalo crate [1] documentation cleared up my confusion.

For one thing, this particular allocator is separate from the general allocator so the calling code can choose when to use the bump allocator.

Quoting the crate doc:

> [B]ump allocation [is] well-suited for phase-oriented allocations. That is, a group of objects that will all be allocated during the same program phase, used, and then can all be deallocated together as a group.

I can see this being useful as a way to optimize allocation/deallocation speed for specific use-cases.

[1] https://docs.rs/bumpalo/latest/bumpalo/

@loeg

It's great when you don't need destructors and have a bunch of small objects to allocate with similar lifetimes.

@ph4evers

It is also used in WASM, I guess to increase memory safety.

@saagarjha

Bump allocators are very simple in an environment where everything you want must be brought with you.

@lowbloodsugar

So the lesson is don’t use safe Rust code when writing something like a memory allocator, where an overflow check is deemed by the author to be too slow?

Update:

>To handle both these cases, we will use checked addition and return a null pointer if either addition overflows. Here is the new Rust source code:

I’m still filing this under “author made an arbitrary decision and the resulting arbitrary solution is slower as a result of that arbitrary decision”.

@pavon

I don't think Rust is the issue here. The same integer overflow can occur in any language and should be checked. Integer overflow and underflow is one of the most common security bugs after memory access errors. It is more that bump allocators are so efficient that a relatively inexpensive overflow check can be a significant fraction of their runtime.

@kazinator

It's not integer overflow but pointer overflow.

If you're doing small bump allocations and you're anywhere near pointer overflow, it means you're way out of bounds already; the bump allocations you already made were wrong.

You need to check whether the allocation increment is out of the zone from which you're allocating, and you need that no matter which direction you go.

If the requests are small, you will hit the end of your arena long before you worry about pointer overflow at the zero address or at address 0xFF..FF!

That said, aligning down is a shade faster than up. To align down to a boundary divisible by ALIGN_MASK + 1 we just truncate some low order bits to zero:

  addr &= ~ALIGN_MASK;
If the bits are already zero, the address doesn't move; all is cool.

but aligning up, where we don't care about overflow, requires handling the case where the address is already aligned and doesn't have to move:

  if (addr & ALIGN_MASK) {
    addr |= ALIGN_MASK;
    addr++;
  }
Or a trick like this where we bias the address with an offset of -1 during the masking calculation:

  addr = ((addr - 1) | ALIGN_MASK) + 1;
(We could get underflow here if addr is the zero address (null pointer on most systems), but it's reversible if the pointer arithmetic is done as unsigned. Zero aligns to zero: it goes to 0xFFF..FFF which stays the same after | 7, and then increments back to zero.)

Either of these is worse than just addr &= ~7.

@lowbloodsugar

Or you just add ALIGNMENT-1 and then mask. So if alignment is 16 then you add 15.
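
For reference, that's the usual round-up idiom (sketch, with `align` a power of two):

    // Adding align - 1 pushes any unaligned address past the next boundary;
    // the mask then truncates back down to it. Already-aligned addresses stay put.
    let aligned = (addr + align - 1) & !(align - 1);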

@kazinator

Oh right; that's how I've always done it, just forgot. Right. Still, it's an extra addition compared to just masking down.

@tel

I would definitely be in favor of an even faster `SmallBump` variant which assumes that the allocations are small w.r.t. usize::MAX for some additional speed.

I also wouldn't mind a default "fast" bump allocator library that does all the tricks it can without sacrificing safety.

@lowbloodsugar

And maybe the heap has fixed alignment so you have one that only needs 4 and one that needs 16. Indeed maybe you grow from top and bottom depending. And maybe this is all a giant premature optimization - including the panic about up vs down. My previous career was console video games and maybe this kind of handwringing isn’t required for whatever the fuck this is.

@wrsh07

I think the lesson is "safety comes at a cost. If you bump downwards you can avoid those safety checks"

I don't think the lesson is "let's just write unsafe code"

@kazinator

Caller asks for a 500 megabyte downward bump allocation in a 32 bit system. Do you check for underflow or not?

@bitwize

You check to make sure the size requested is less than or equal to current pointer - chunk start and fail if it is bigger.

Let's face facts, the allocator, even with this additional check, is still going to be blazing fast compared to malloc() or whatever. If you're allocating so much that the handful of extra instructions is going to be a significant slowdown, maybe structure your allocations differently?

@saagarjha

Most allocators will do this, yes.

@cratermoon

No, the lesson is to work with the safety mechanisms to get a result that is both fast and safe. If I'm writing code in language A, I don't choose to throw out one of the foundational aspects of that language on the grounds that it's merely inconvenient.

@ridiculous_fish

The insight here is that, for unsigned values, rounding down to a multiple of N is cheaper than rounding up to a multiple of N.

In addition to simpler arithmetic, rounding down always succeeds - 0 is a multiple of all N - while rounding up may fail if a larger multiple does not exist. So you need to check in the round-up case.

@asalahli

The rust code for rounding down is given as

    let new_ptr = new_ptr & !(align - 1);
But I don't see rdx decremented before being negated and AND'ed with rax, in the assembly. What am I missing?

@conradludgate

It's being converted into `-align`, which is equivalent under two's complement.

@asalahli

I don't follow. Are you saying `new_ptr & -align` is equivalent to `new_ptr & !(align - 1)`?

@dev_dwarf

In the "bump up" version you could remove both the checked_add branches and replace them with a single check at the end, making the amount of branches the same.

Quick example: https://godbolt.org/z/rdv4qnrs8.

*edited to update the example, realized I messed up the comparison logic.

@sltkr

That version is unsafe: what if size == 0xfff..fff and alignment is needed? You will end up with ptr <= new_ptr < end, seemingly a valid result, but actually with not enough space.

edit: code moved to a toplevel comment

@loeg

No allocator can be expected to allocate usize::MAX, so it doesn't really matter.

@sltkr

It matters because if the allocator cannot allocate a given amount, it should reliably return NULL to inform the caller that the allocation failed, not return a random invalid pointer that's not usable, which will lead to undefined behavior.

@loeg

The API should restrict callers from providing bogus values at all.

@rictic

How would the API do that without more overhead than the check that GP is suggesting?

@loeg

For some uses it might be reasonable to restrict the size input to compile-time constants, which can be verified at compile time. Or you could have a newtype with a checked safe constructor and an unchecked unsafe constructor, which allows bypassing the overhead when you know the sizes are reasonable. On 64-bit systems, it is reasonable in many domains to restrict allocation size to u32. There are lots of possible ways to approach this without falling back to "tolerate any usize input."
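
For example, a hypothetical newtype along those lines (names invented, not from bumpalo):

    // A size type that can only hold "reasonable" values, so the allocator's
    // hot path doesn't have to re-check for overflow on every call.
    pub struct SmallSize(usize);

    impl SmallSize {
        pub const MAX: usize = u32::MAX as usize; // example cap, chosen arbitrarily

        // Checked constructor: pay for the range check once, up front.
        pub fn new(size: usize) -> Option<SmallSize> {
            if size <= Self::MAX { Some(SmallSize(size)) } else { None }
        }

        // Unchecked constructor for callers that already know the size is small.
        // Safety: caller must guarantee size <= SmallSize::MAX.
        pub unsafe fn new_unchecked(size: usize) -> SmallSize {
            SmallSize(size)
        }

        pub fn get(&self) -> usize {
            self.0
        }
    }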

@dev_dwarf

Agreed. The question really is whether you should require the user to enforce that constraint on the size they pass to you, or whether the function itself should signal an error in that case.

@loeg

I think it would be pretty reasonable to have an input type for that parameter that isn't a full usize and is instead some more restricted type that can only represent smaller values. The alignment parameter could be, like, u8, or maybe u16.

@dev_dwarf

For the alignment parameter I agree.

@loeg

For both.

@sltkr

This still doesn't solve the overflow problem.

It's also too limiting: with u8 you can't ask for page-aligned data, and with u16 not for hugepage-aligned data. Granted, those aren't exactly prime use cases for a bump allocator, but it seems like poor design to limit the API unnecessarily.

@loeg

It fully solves the overflow problem. You don't need page-aligned data in a bump allocator.

@ridiculous_fish

I think this doesn't work because `aligned + size` may wrap all the way around into the valid region again. For example if aligned == ptr + 1, and size is usize::MAX, we will end up with new_ptr == ptr and the allocation will wrongly succeed.

@loeg

The straightforward answer is: don't tolerate ridiculous alignments.

@saagarjha

You'd need to check for that, though.

@loeg

No, just restrict the domain of the alignment input to like, u8. Maybe u16. Either way, it is easy to ensure your bump allocation space is far enough away from SIZE_MAX.

@dev_dwarf

Interesting point. I modified my example to test what you described. I had to play with the compilation flags to get the allocs to not be optimized out and to not panic when the integer overflow happens, but otherwise I didn't change the logic. I'm pretty sure my implementation is correctly handling the case you mention, evidenced by it returning a null pointer.

Link: https://godbolt.org/z/f1jGW6Pa3

Update: NVM, definitely not being handled correctly. https://godbolt.org/z/cMTe1o979

@sltkr

You can also implement BumpUp() with only two conditionals and no overflow like this (in C, because I don't know Rust, but the logic is identical):

    size_t alignment_needed = align - (ptr & (align - 1));
    if (alignment_needed > SIZE_MAX - size) return nullptr;
    size_t space_needed = size + alignment_needed;
    size_t space_remaining = end - ptr;
    if (space_needed > space_remaining) return nullptr;
    char *result = ptr + alignment_needed;
    ptr += space_needed;
    return result;

@dev_dwarf

Nice. Seems to work for the case you mentioned under my comment: https://godbolt.org/z/TYorcd8b6

@loeg

Yeah, and you only need one check if you can restrict the allocation pointer to be at least max alignment away from SIZE_MAX, which is very easy to do in practice. Max alignment needed is usually no more than 64 bytes, but even 4kB isn't especially burdensome. On Linux/amd64, you don't even need to do anything special -- the high virtual address space is likely reserved for the kernel anyway.

@sam_bishop

I've heard that the Hotspot JVM uses a bump allocator but don't know the details. I'm sure it's heavily optimized though, so I'm curious about how this compares.

@dzaima

For most specific purposes, a specialized bump allocator can be significantly more optimized.

Hard-coding the alignment simplifies many questions (especially if you can guarantee the allocation size being a multiple of the alignment, at which point it becomes a complete non-issue).

And if you have an upper bound on the requested allocation size, the integer overflow checks can be dropped too (e.g. in a Java "new int[n]", "n" is an int, which has a max value of 2^31-1, which can safely be added to a bump pointer, provided it's further than that away from the address space end (which it's gonna be due to the kernel reserving the upper half of the address space)).
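
A sketch of that bounded-but-variable-size fast path (illustrative only, with an assumed cap; not anyone's production code):

    // `size` is bounded (e.g. a Java array length fits in an i32) and the arena
    // sits far enough below the top of the address space, so the addition
    // cannot wrap and a single comparison is the whole fast path.
    fn bump_bounded(ptr: &mut usize, end: usize, size: usize) -> Option<usize> {
        debug_assert!(size <= i32::MAX as usize);
        let new_ptr = *ptr + size; // cannot overflow under the assumptions above
        if new_ptr > end {
            return None; // fall back to the slow path
        }
        let result = *ptr;
        *ptr = new_ptr;
        Some(result)
    }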

And for constant-size objects, your entire bump allocator can become just

    if (ptr > precomputedEndMinusObjectSize) fallback();
    result = ptr;
    ptr += size;

@saagarjha

I believe HotSpot can move allocations out of the arena when garbage collection happens, which means fragmentation is less of an issue.

@pizlonator

It’s possible to write many other bump algorithms, including ones that bump upwards but generate much much better code than OP’s forward upward bump. In particular, no overflow checks are needed if you write it carefully enough.

My favorite is one where I only use subtraction but the direction of allocation is in the positive direction (I subtract a `remaining` counter, and the returned address is `end - remaining`).

But I have seen many others. A few of my colleagues have similarly geeked out on this and come up with splendid bumpers (ggaren wrote a great one in bmalloc, and I remember the MMTk folks put hella thought into theirs).

@titzer

In most situations, the allocation size is a constant and bumping upwards can be done without an overflow check because the region cannot be close enough to the upper part of memory to wrap around.

I'm surprised that there was no discussion of memory system performance. There are tons of OS-level and hardware prefetcher optimizations for forward-marching pointer references and zero-page allocation.

@pizlonator

You can bump upwards without an overflow check even if the size is variable.

I’ve seen multiple ways to do it. And I’ve lost days of my life to coming up with pointlessly amusing variants that get there in weird ways.

Yeah I’m also amused that they didn’t get into the fact that downward bump is just not what the HW expects you to do. You’d need a real benchmark to see that effect. And, it’s an open question whether this would matter unless your allocation rate was obscene (if it isn’t then most scans of memory will be normal loops and not the allocator).

@zeusk

> fact that downward bump is just not what the HW expects you to do.

Where do you get this fact from? ARM and x86 have a descending stack (<= ARMv7 supported stack growing in either direction).

@saagarjha

Even when size is a full 64-bit value (…in that it might actually overflow the address space)?

@pizlonator

Phil style: track end and remaining. End doesn’t change. To allocate size, check that it is less than or equal to remaining and subtract it from remaining. End minus remaining is the result (using the old value of remaining). Has two variants - one that burns an extra register but saves an instruction. Empirically, this one is fast enough that I just use it all the time.
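
As a sketch of that scheme (field names made up, alignment omitted for brevity):

    // "end and remaining" bump: `end` never changes, `remaining` counts free
    // bytes. Allocation only compares and subtracts, so nothing can overflow.
    struct RemainingBump {
        end: usize,       // one past the last byte of the arena
        remaining: usize, // bytes still free
    }

    impl RemainingBump {
        fn alloc(&mut self, size: usize) -> Option<usize> {
            if size > self.remaining {
                return None;
            }
            let result = self.end - self.remaining; // uses the old `remaining`
            self.remaining -= size;
            Some(result)
        }
    }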

Bmalloc style bump: independently subtract from remaining and add to bump. It’s more instructions but they are instruction level parallel.

Another friend’s bump style: have bump/end ptrs like usual. Fast path checks if size < end - bump. Then you add to bump like usual.

Note that instruction counts on these don’t matter so much on modern CPUs. But dependency chain length does matter, a lot. Register usage matters only if you want to inline the allocator (and it’s not always beneficial to do it).