For example, in some code I’m writing at the moment I have lists of integers all over the place. I call them lists, but usually they only have one element. Sometimes they have two elements (~10%), and very occasionally they’re empty or have more than two (<1%).
If I used a language like JavaScript, I’d use Arrays. But arrays are quite expensive performance-wise: they need to be allocated and tracked by the GC, and the array contents are stored indirectly.
Instead, I’m using an array type which stores up to 2 items inline in the container object (or the stack) without allocating. It only allocates memory on the heap when there are 3 or more items. This decreases allocations by 2 orders of magnitude, which makes a really big difference for performance in my library. And my code is just as readable.
I’m using the smallvec crate. There are plenty of libraries in C and Rust for this sort of thing, for both arrays and strings. Swift (like Obj-C before it) builds small-string optimizations into the standard library. I think that’s a great idea.
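The idea can be sketched in plain Rust without the crate. This is a toy illustration of the small-buffer pattern, not the actual smallvec implementation (which uses uninitialized storage and is far more careful): keep up to 2 items inline, spill to a heap Vec only for the third.

```rust
// Toy small-buffer list: up to 2 items stored inline, heap only for 3+.
// (smallvec itself does this with MaybeUninit storage; this sketch uses
// Option slots for simplicity.)
enum SmallList<T> {
    Inline([Option<T>; 2]),
    Heap(Vec<T>),
}

impl<T> SmallList<T> {
    fn new() -> Self {
        SmallList::Inline([None, None])
    }

    fn push(&mut self, item: T) {
        match self {
            SmallList::Inline(slots) => {
                for slot in slots.iter_mut() {
                    if slot.is_none() {
                        *slot = Some(item);
                        return;
                    }
                }
                // Both inline slots full: move everything to the heap.
                let mut v: Vec<T> = slots.iter_mut().filter_map(|s| s.take()).collect();
                v.push(item);
                *self = SmallList::Heap(v);
            }
            SmallList::Heap(v) => v.push(item),
        }
    }

    fn len(&self) -> usize {
        match self {
            SmallList::Inline(slots) => slots.iter().filter(|s| s.is_some()).count(),
            SmallList::Heap(v) => v.len(),
        }
    }

    fn is_heap(&self) -> bool {
        matches!(self, SmallList::Heap(_))
    }
}

fn main() {
    let mut xs: SmallList<i32> = SmallList::new();
    xs.push(1);
    xs.push(2);
    assert!(!xs.is_heap()); // two items still fit inline, no allocation
    xs.push(3);
    assert!(xs.is_heap()); // third item forced the one heap allocation
    println!("len = {}", xs.len());
}
```

With the distribution described above (~90% one element, ~10% two), the heap branch is the <1% case, which is where the two-orders-of-magnitude drop in allocations comes from.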
Of course, the justification can be as simple as "it has to be some number and 5000 is as good as any", but that opens the door to the discussion of whether 5000 really is as good as any other number, which is often surprisingly enlightening.
Code is generally designed to operate within certain "reasonable" performance boundaries, and when it goes outside those you need to think about whether the code should be rewritten to accommodate it.
Just a tiny example, but I regularly deal with long (800+ page) PDFs on my iPad, reading parts of them in the stock Books app. When I select text to highlight, the context menu unfortunately puts "select all" directly next to "highlight". Of course, every so often I accidentally hit "select all", and then I have to force-close the app because otherwise it just freezes for 10 minutes as it extracts and selects the text on every single page.
When really, it needs a limit to detect that, hey, if the PDF is over 20 pages long then just don't allow "select all" anymore, because this isn't the right tool for the job.
When a program limits me to 256 of something, it doesn’t seem arbitrary.
I’ve heard stories of programmers setting limits to powers of two simply so nobody asks why.
I agree we should have limits in software, since we don't have unlimited memory and processing time, but I commonly find these limits are bounded by the imagination of the programmer working on the software at the time, and you often find limits that were never considered in the system design of the product.
If there is a loop, we will hit the limit while evaluating the cycle because it’s a DFS. With a BFS we could hit the limit in a sibling of the problematic node.
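To make that concrete, here's a hedged sketch (the graph and function names are made up for illustration): a depth-limited DFS walking a graph with a cycle. The cycle keeps increasing the depth, so the limit trips inside the cycle itself, whereas a breadth-first traversal with a node budget could exhaust its budget on a sibling instead.

```rust
// Depth-limited DFS: returns Err(node) naming where the limit tripped.
// In a cyclic graph, the depth counter only grows while going around the
// cycle, so the limit is hit on a node that is part of the cycle.
fn dfs(graph: &[Vec<usize>], node: usize, depth: usize, limit: usize) -> Result<(), usize> {
    if depth > limit {
        return Err(node); // limit hit: report where we were
    }
    for &next in &graph[node] {
        dfs(graph, next, depth + 1, limit)?;
    }
    Ok(())
}

fn main() {
    // 0 -> 1 -> 2 -> 1 (cycle between nodes 1 and 2)
    let graph = vec![vec![1], vec![2], vec![1]];
    match dfs(&graph, 0, 0, 10) {
        Ok(()) => println!("traversal finished"),
        Err(node) => println!("depth limit hit at node {}", node),
    }
}
```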
Scheduling budgets are how most people avoid the halting problem. You set a fixed multiple of the expected halting time, work hard to make sure that’s your p99 time instead of your p75 time, and stop trying to violate core principles of computability theory.
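A minimal sketch of that budget idea (the helper name and the numbers are illustrative, not from any particular scheduler): rather than deciding whether the work halts, run it with a deadline set to a generous multiple of its expected running time and give up when the deadline passes.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Run `work` on a worker thread and wait at most `budget` for the result.
// Note: on timeout the worker keeps running detached; a real system would
// also need a way to cancel or contain it.
fn run_with_budget<T, F>(work: F, budget: Duration) -> Option<T>
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // If the receiver already gave up, the send just fails; that's fine.
        let _ = tx.send(work());
    });
    rx.recv_timeout(budget).ok()
}

fn main() {
    // Expected halting time is tiny; the budget is a fixed multiple of it.
    let budget = Duration::from_millis(50);

    let fast = run_with_budget(|| 2 + 2, budget);
    println!("fast work: {:?}", fast);

    let slow = run_with_budget(
        || {
            thread::sleep(Duration::from_secs(5)); // stand-in for runaway work
            42
        },
        budget,
    );
    println!("slow work: {:?}", slow); // budget exceeded, so None
}
```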
Can be solved with tests instead though.
Speed limits on roads are useful in catching unsafe driving behavior; but if every car actually had a speed governor installed that couldn’t be overridden, it should be clear that this would be a suboptimal solution.
Arbitrary limits written into code will eventually be refactored toward ZOI (zero-one-infinity), one way or another
Sure, this is why code designed by the ZOI rule has those straightforward test cases: none, one, some, many, aaaand crash (at most).
And as a safeguard for correct use (where exceeding a limit is undefined behaviour), many languages have assertions.
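For instance, a limit can be stated once and enforced at the boundary with an assertion. This is a made-up example (the limit `MAX_TAGS` and the function name are hypothetical), just showing the pattern:

```rust
// Hypothetical documented limit, enforced with an assertion at the API
// boundary rather than silently misbehaving past it.
const MAX_TAGS: usize = 8;

fn set_tags(tags: &[&str]) -> Vec<String> {
    // assert! checks in all builds; debug_assert! would check only in
    // debug builds, treating violations as undefined behaviour in release.
    assert!(
        tags.len() <= MAX_TAGS,
        "at most {} tags supported, got {}",
        MAX_TAGS,
        tags.len()
    );
    tags.iter().map(|t| t.to_string()).collect()
}

fn main() {
    let tags = set_tags(&["rust", "limits"]);
    println!("{:?}", tags);
}
```

The assertion turns "the limit exists but nobody checked it" into an immediate, diagnosable failure at the point of misuse.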