Building the fastest Lua interpreter.. automatically (sillycross.github.io)
553 points by hr0m 13d | 109 comments

Comments

gary_0 13d
I think there's a typo where the Add() example code goes `lhs.As<tDouble>() && rhs.As<tDouble>()`. I'm assuming that should be a `+`?
abecedarius 13d
There's some prior art on DSLs for efficient interpreters, like Anton Ertl's vmgen. https://www.complang.tuwien.ac.at/anton/vmgen/
JZL003 13d
wow this is a great article, so readable
kzrdude 13d
Super interesting.

I think using the name LuaJIT Remake steps on the toes of the existing LuaJIT and I would advise using a more original name.

Anything distinctive would be better. Lua DeeJIT comes to mind, since the generator is called Deegen.

gatane 13d
> More importantly, it is the world’s fastest Lua interpreter to date, outperforming LuaJIT’s interpreter by 28% and the official Lua interpreter by 171% on average on a variety of benchmarks

Wait what the hell

tromp 13d
Nice to see Haskell make an appearance in an article about Lua and C/C++:

> For example, at LLVM IR level, it is trivial to make a function use GHC calling convention (a convention with no callee-saved registers)

This refers to the following section of [1]:

> “cc 10” - GHC convention: This calling convention has been implemented specifically for use by the Glasgow Haskell Compiler (GHC). It passes everything in registers, going to extremes to achieve this by disabling callee save registers. This calling convention should not be used lightly but only for specific situations such as an alternative to the register pinning performance technique often used when implementing functional programming languages. At the moment only X86 supports this convention and it has the following limitations:
>
> - On X86-32 only supports up to 4 bit type parameters. No floating-point types are supported.
> - On X86-64 only supports up to 10 bit type parameters and 6 floating-point parameters.
>
> This calling convention supports tail call optimization but requires both the caller and callee are using it.

[1] https://llvm.org/docs/LangRef.html#calling-conventions

presheaf 13d
This is very cool.
ufo 13d
One caveat to pay attention to... Ever since LuaJIT and PUC-Lua diverged, PUC-Lua has made some significant changes to the garbage collector. I wonder if that might affect the comparison vs LuaJIT for memory-intensive benchmarks such as binarytrees and tablesort.
ufo 13d
Fascinating! I wonder if there's a way to keep this 100% inside the C ecosystem, without having to reach for an LLVM dependency...
dingdingdang 13d
Very very impressive.
flyingsky 13d
> More importantly, it is the world’s fastest Lua interpreter to date, outperforming LuaJIT’s interpreter by 28% and the official Lua interpreter by 171% on average on a variety of benchmarks

Really huge, if true!

tomcam 13d
What’s a stack spill?
fsfod 13d
The author also worked on the Copy-and-Patch Compilation paper[1]. I wonder if they are going to use the same idea for one of the tiers of the JIT. I don't know if it would beat LuaJIT's JIT for compilation speed, but the generated machine code could be faster.

[1] https://arxiv.org/abs/2011.13127

bobm_kite9 13d
This sounds a lot like the approach taken by GraalVM. Can someone better versed in this area comment?
bessiesboobies 13d
Do these changes work on Apple M1s too?
haberman 13d
> With the tail-call approach, each bytecode now gets its own function, and the pathological case for the C/C++ compiler is gone. And as shown by the experience of the Google protobuf developers, the tail-call approach can indeed be used to build very good interpreters. But can it push to the limit of hand-written assembly interpreters? Unfortunately, the answer is still no, at least at its current state.

> The main blockade to the tail-call approach is the callee-saved registers. Since each bytecode function is still a function, it is required to abide to the calling convention, specifically, every callee-saved register must retain its old value at function exit.

This is correct: wasting callee-saved registers is a shortcoming of the approach I published for protobuf parsing (linked from the first paragraph above). I have been experimenting with a new calling convention that uses no callee-saved registers to work around this, but the results so far are inconclusive. The new calling convention would use all registers for arguments, but allocate registers in the opposite order of normal functions, to reduce the chance of overlap. I have been calling this calling convention "reverse_cc".

I need to spend some time reading this article in more detail, to more fully understand this new work. I would like to know if a new calling convention in Clang would have the same performance benefits, or if Deegen is able to perform optimizations that go beyond this. Inline caching seems like a higher-level technique that operates above the level of individual opcode dispatch, and therefore somewhat orthogonal.
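
As a concrete illustration of the tail-call approach under discussion, here is a minimal sketch of a tail-call-dispatched interpreter using Clang's [[clang::musttail]]. The opcodes and signatures are invented for illustration (this is not the article's or protobuf's actual code); the point is that each opcode gets its own function and dispatch is a guaranteed tail call, yet every handler is still an ordinary function under the platform calling convention, so the callee-saved-register issue described above remains. (Recent Clang versions also expose a "preserve_none" convention aimed at exactly that problem; treat its availability and semantics as an assumption here.)

  #include <cstdint>
  #include <cstdio>

  // Toy accumulator VM dispatched via guaranteed tail calls.
  struct VM { int64_t acc; };

  using Handler = void (*)(VM &vm, const uint8_t *pc);
  extern const Handler kDispatch[];

  // Each bytecode gets its own function; dispatch is a musttail call, so no
  // stack frame accumulates and the interpreter state can stay in registers.
  #define DISPATCH(vm, pc) [[clang::musttail]] return kDispatch[*(pc)]((vm), (pc))

  static void op_halt(VM &vm, const uint8_t *pc) {
      (void)pc;
      std::printf("acc = %lld\n", (long long)vm.acc);
  }

  static void op_push(VM &vm, const uint8_t *pc) {
      vm.acc = pc[1];   // load immediate into the accumulator
      pc += 2;
      DISPATCH(vm, pc);
  }

  static void op_add(VM &vm, const uint8_t *pc) {
      vm.acc += pc[1];  // add immediate
      pc += 2;
      DISPATCH(vm, pc);
  }

  const Handler kDispatch[] = { op_halt, op_push, op_add };

  int main() {
      const uint8_t code[] = { 1, 40, 2, 2, 0 };  // push 40; add 2; halt
      VM vm{0};
      kDispatch[code[0]](vm, code);               // prints "acc = 42"
  }

Compiled with clang++ -O2, each handler typically reduces to the operation itself, a load of the next opcode, and an indirect jump to the next handler.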

srcreigh 13d
Is there a typo in the double Add byte code example?

It uses && to combine the two doubles; I’d expect to see + instead.

sigzero 13d
I wonder why the target was 5.1 since 5.4 has been out for a while now?
MuffinFlavored 13d
> The problem with the “big switch-case” approach is that C/C++ compilers simply cannot handle such code well.

Is this the case with Rust?
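
For context, the "big switch-case" structure the article is talking about looks roughly like the sketch below (hypothetical opcodes). A Rust interpreter written as one giant match over an opcode enum has the same structural shape: one huge function whose register allocation and code layout the backend (LLVM in rustc's case too) has to solve all at once, which is exactly what the article says compilers handle poorly.

  #include <cstdint>

  // A toy accumulator VM written as one big switch: the whole interpreter is a
  // single giant function, which the register allocator must handle as a whole.
  int64_t run(const uint8_t *pc) {
      int64_t acc = 0;
      for (;;) {
          switch (*pc) {
              case 0:              // halt
                  return acc;
              case 1:              // push immediate
                  acc = pc[1];
                  pc += 2;
                  break;
              case 2:              // add immediate
                  acc += pc[1];
                  pc += 2;
                  break;
              // ... a real interpreter has dozens to hundreds of cases here ...
              default:
                  return -1;       // unknown opcode
          }
      }
  }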

saagarjha 13d
I’ve been working on an early design of a high-performance dynamic binary translator that cannot JIT, and have reached a very similar conclusion as the author. We have an existing threaded interpreter but it’s a mess of hard-to-maintain assembly for two architectures, and we run into funny issues all the time where the two diverge. Plus, being handwritten by people who are not scheduling experts, there is probably some performance left on the table because of our poor choices and the design making it difficult to write complex-but-more-performant code. Nobody wants to write an efficient hash for TLB lookups in a software MMU using GAS macros.

The core point I’ve identified is that existing compilers are pretty good at converting high level descriptions of operations into architecture-specific code (at least, better than we are given the amount of instructions we have to implement) but absolutely awful at doing register selection or dealing with open control flow that is important for an interpreter. Writing everything in assembly lets you do these two but you miss out on all the nice processor stuff that LLVM has encoded into Tablegen.

Anyways, the current plan is that we’re going to generate LLVM IR for each case and run it through a custom calling convention to take that load off the compiler, similar to what the author did here. There’s a lot more that I’m handwaving over that’s still going to be work, like whether we can automate the process of translating the semantics for each instruction into code, how we plan to pin registers, and how we plan to perform further optimizations on top of what the compiler spits out, but I think this is going to be the new way that people write interpreters. Nobody needs another bespoke macro assembler for every interpreter :)
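
To make the software-MMU point concrete, here is a hedged sketch of the kind of direct-mapped TLB lookup being referred to, written in C++ rather than GAS macros; the names, page size, and table size are all hypothetical:

  #include <cstdint>

  // Hypothetical direct-mapped software TLB for a DBT's software MMU.
  constexpr uint64_t kPageBits = 12;      // 4 KiB guest pages
  constexpr uint64_t kTlbEntries = 256;   // power of two, so masking works as a hash

  struct TlbEntry {
      uint64_t  guest_page = ~0ULL;  // guest virtual page number (tag); ~0 = invalid
      uintptr_t host_delta = 0;      // host_addr = guest_addr + host_delta
  };

  struct SoftMmu {
      TlbEntry tlb[kTlbEntries] = {};
  };

  // Slow path: a real translator would walk the guest page tables, refill the
  // matching TLB entry, and retry. Stubbed out to keep the sketch compilable.
  void *SlowPathTranslate(SoftMmu &mmu, uint64_t guest_addr) {
      (void)mmu; (void)guest_addr;
      return nullptr;
  }

  // Fast path: hash the guest page number into the TLB and compare the tag.
  inline void *Translate(SoftMmu &mmu, uint64_t guest_addr) {
      uint64_t page = guest_addr >> kPageBits;
      TlbEntry &e = mmu.tlb[page & (kTlbEntries - 1)];  // trivial hash: low bits
      if (e.guest_page == page) {                       // hit: one load + compare
          return reinterpret_cast<void *>(guest_addr + e.host_delta);
      }
      return SlowPathTranslate(mmu, guest_addr);        // miss
  }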

1letterunixname 13d
PGO on a conventional executable with accumulated profiling data is likely to be more performant. JITs also have a problem in that they must transmute RW data pages into executable code pages to satisfy NX/W^X. PGO enables longer-term analysis that effectively supersedes a JIT. JITs are nice in theory, but they don't tend to save profiling data to drive optimizations in the performance-critical sections that matter.

Might try LuaToCee and then apply PGO iteratively.

cornstalks 13d
Could this be used to build a better WebAssembly interpreter? wasm3 is the fastest Wasm interpreter I’ve found, but this approach sounds very intriguing!
fwsgonzo 13d
I have been working on this too, and while I am not at all interested in embedding LLVM, the lack of control over calling conventions and the inability to make a direct jump are unfortunate.

My biggest pet peeve, though, is the inability to push unlikely stuff completely to the back of the program. I mean, just put it away where nobody can see it. I also want to align code blocks to 1 byte, not 16 bytes. Not a single compiler lets me realign the instruction handlers in a way that compresses them down.

I also want to know what BOLT does that improves my interpreter by around 30%.
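
Regarding pushing unlikely code out of the way: one partial workaround (only a sketch, and not a fix for the handler-alignment complaint) is to outline slow paths into separate noinline functions marked cold. GCC documents that cold functions are optimized for size and grouped into a separate text subsection, keeping them away from the hot handlers; Clang accepts the same attribute.

  #include <cstdint>
  #include <cstdio>

  // Hypothetical error path for an interpreter opcode, outlined and marked cold
  // so the hot handler body stays small and densely packed.
  [[gnu::cold]] [[gnu::noinline]]
  static void RaiseTypeError(unsigned opcode) {
      std::fprintf(stderr, "type error in opcode %u\n", opcode);
      // a real interpreter would unwind, longjmp, or throw here
  }

  // Hot handler: the unlikely branch costs a compare plus a call whose body the
  // compiler places out of line (and predicts as not taken).
  int64_t AddChecked(int64_t a, int64_t b, bool operands_are_numbers) {
      if (__builtin_expect(!operands_are_numbers, 0)) {
          RaiseTypeError(/*opcode=*/0x20);
          return 0;
      }
      return a + b;
  }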

krick 13d
I wonder what makes it so much worse on those 2 specific tasks. Especially the k-nucleotide one.
qikInNdOutReply 13d
I work on a game that mostly uses Lua for its logic code, sitting on top of a C/C++ engine. One of the engine developers implemented LuaJIT years ago and found that the interpreter/JIT itself was negligible for actual performance; the most expensive thing was the switch between Lua code and C code, which with a large API is a constant, ugly background performance loss.

The Lua code by itself did not run long enough to profit much from optimizations.

So, back then we discussed possible solutions, and one idea was to notice an upcoming C call in the bytecode ahead of execution and detect the stability of its arguments ahead of time. A background thread extracts the values, performs the call's argument processing, and pushes the return values onto the stack, finally setting a "valid" bit to unblock the C call (which by then is actually no longer a call). Both sides never suffer a complete cache eviction and live happily ever after. Unfortunately I have a game-dev addiction, so nothing ever came of it.

But similar minds might have pushed similar ideas.. so asking the hive for this jive. Anyone ever seen something similar in the wild?
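
For context, the boundary being described is the standard Lua C API: every call from Lua into the engine marshals its arguments and results through the Lua stack. A minimal, hypothetical binding looks like this (the engine_distance name is made up):

  // Minimal example of the Lua <-> C boundary: arguments are read from and
  // results pushed onto the Lua stack on every cross-language call.
  extern "C" {
  #include <lua.h>
  #include <lauxlib.h>
  }

  // Hypothetical engine binding: takes two numbers from Lua, returns one.
  static int l_engine_distance(lua_State *L) {
      double x = luaL_checknumber(L, 1);  // argument marshalling...
      double y = luaL_checknumber(L, 2);
      lua_pushnumber(L, x * x + y * y);   // ...and result marshalling
      return 1;                           // number of return values
  }

  // After registration, Lua code can call engine_distance(x, y); every such
  // call crosses the interpreter/C boundary described above.
  void RegisterEngineApi(lua_State *L) {
      lua_register(L, "engine_distance", l_engine_distance);
  }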

mraleph 13d
It would be interesting to see a 1-1 comparison with the LuaJIT interpreter _without corresponding runtime changes_. That would give a meaningful baseline for how much you potentially lose/gain using this approach.

The current measurement compares two wildly different implementation strategies: LuaJIT Remake uses ICs and hidden classes while LuaJIT proper does not. It's a well-known fact that ICs + hidden classes can lead to major performance improvements. That insight originated in the Smalltalk world and was brought to the dynamic-language mainstream by V8† - but unfortunately it muddles the comparison here.

† V8 was open sourced on September 2, 2008... Coincidentally (though probably not :)) JSC landed their first IC prototype on September 1, 2008.
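
For readers unfamiliar with the terms, here is an illustrative sketch (not LuaJIT Remake's or V8's actual implementation) of hidden classes plus inline caches: each object carries a shape pointer describing its layout, and each property-access site caches the last shape and slot offset it saw, so the common case is one pointer compare plus one indexed load.

  #include <cstdint>
  #include <string>
  #include <unordered_map>
  #include <vector>

  // Illustrative hidden-class / inline-cache sketch.
  struct Shape {
      // Maps property name -> slot index for every object sharing this layout.
      std::unordered_map<std::string, uint32_t> slot_of;
  };

  struct Object {
      const Shape *shape;           // the object's hidden class
      std::vector<int64_t> slots;   // property storage, indexed by slot
  };

  // One inline cache per (static) property-access site in the bytecode.
  struct PropertyIC {
      const Shape *cached_shape = nullptr;
      uint32_t cached_slot = 0;
  };

  int64_t GetProperty(Object &obj, const std::string &name, PropertyIC &ic) {
      if (obj.shape == ic.cached_shape) {    // fast path: IC hit
          return obj.slots[ic.cached_slot];  // one compare + one load
      }
      // Slow path: look the property up via the shape and repopulate the IC.
      uint32_t slot = obj.shape->slot_of.at(name);
      ic.cached_shape = obj.shape;
      ic.cached_slot = slot;
      return obj.slots[slot];
  }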

fasteo 13d
It is my understanding that the benchmarks are run against the LuaJIT interpreter (-j off), so here are some JIT numbers for reference, using the array3d.lua benchmark [1]:

  time luajit -j off array3d.lua 
  
  real 0m1,968s
  user 0m1,915s
  sys 0m0,052s
  
  time luajit -j on array3d.lua 
  
  real 0m0,128s
  user 0m0,068s
  sys 0m0,060s

[1] https://github.com/luajit-remake/luajit-remake/blob/master/l...
chubot 12d
This seems like an awesome way of writing faster interpreters – i.e. not in assembly, but in C++ snippets you stitch together with a tool.

I did peek at the deegen tool a bit, and it seems quite large? https://github.com/luajit-remake/luajit-remake/tree/master/d...

I would be interested in an overview of all the analysis it has to do, which as I understand is basically “automated Mike Pall”

FWIW I think this is the hand-written equivalent with LuaJIT’s dynasm tool: https://github.com/LuaJIT/LuaJIT/blob/v2.1/src/vm_x64.dasc (just under 5000 lines)

Also there are several of these files with no apparent sharing, as you would get with deegen:

https://github.com/LuaJIT/LuaJIT/blob/v2.1/src/vm_x86.dasc

https://github.com/LuaJIT/LuaJIT/blob/v2.1/src/vm_ppc.dasc