There are various ways of estimating π, though one cheap and cheerful way is to estimate the area of a circle: imagine a grid of 1x1 squares, draw a perfect circle of radius r, and estimate the area of the circle as being the number of squares whose midpoint is within the circle. We can implement this in a few lines of Lua:

r = 1000
for x = -r, r do
  for y = -r, r do
    if x^2 + y^2 <= r^2 then
      n = (n or 0) + 1
    end
  end
end
print(n / r^2)

With a small amount of thought, the inner for and if can be replaced with an analytical solution:

r = 1000
for x = -r, r do
  n = (n or 0) + math.floor(math.sqrt(r^2 - x^2)) * 2 + 1
end
print(n / r^2)

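As a quick aside, we can convince ourselves that the analytical replacement counts exactly the same set of squares as the original nested loop. The following sketch (using an arbitrary small radius of 50, chosen purely for illustration) compares the two:

```lua
-- Compare the brute-force count against the analytical count for a
-- small radius; the two should agree exactly, as floor(sqrt(k)) is
-- the integer square root for any exactly-representable integer k.
local function count_brute(r)
  local n = 0
  for x = -r, r do
    for y = -r, r do
      if x^2 + y^2 <= r^2 then n = n + 1 end
    end
  end
  return n
end

local function count_analytic(r)
  local n = 0
  for x = -r, r do
    n = n + math.floor(math.sqrt(r^2 - x^2)) * 2 + 1
  end
  return n
end

assert(count_brute(50) == count_analytic(50))
print(count_analytic(50) / 50^2)  -- roughly 3.14
```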
At this point, we can bump r up to 10000000, run the above through LuaJIT, and get an estimate of 3.1415926535059 in less than a tenth of a second. Not a bad estimate for a few lines of code and a tiny amount of compute time.

The estimate itself is nice, but I find it more interesting to look at what LuaJIT is doing behind the scenes with this code. To begin with, we can inspect the bytecode - if source code is an array of characters, then bytecode is an array of instructions (plus an array of constants, plus a few other bits and bobs), and the first thing LuaJIT does with any source code is turn it into bytecode. We can see the bytecode by using the -bl argument:

$ luajit -bl pi.lua 
-- BYTECODE -- pi.lua:0-6
0001    KNUM     0   0      ; 10000000
0002    GSET     0   0      ; "r"
0003    GGET     0   0      ; "r"
0004    UNM      0   0
0005    GGET     1   0      ; "r"
0006    KSHORT   2   1
0007    FORI     0 => 0029
0008 => GGET     4   1      ; "n"
0009    IST          4
0010    JMP      5 => 0012
0011    KSHORT   4   0
0012 => GGET     5   2      ; "math"
0013    TGETS    5   5   3  ; "floor"
0014    GGET     6   2      ; "math"
0015    TGETS    6   6   4  ; "sqrt"
0016    GGET     7   0      ; "r"
0017    KSHORT   8   2
0018    POW      7   7   8
0019    KSHORT   8   2
0020    POW      8   3   8
0021    SUBVV    7   7   8
0022    CALL     6   0   2
0023    CALLM    5   2   0
0024    MULVN    5   5   1  ; 2
0025    ADDVV    4   4   5
0026    ADDVN    4   4   2  ; 1
0027    GSET     4   1      ; "n"
0028    FORL     0 => 0008
0029 => GGET     0   5      ; "print"
0030    GGET     1   1      ; "n"
0031    GGET     2   0      ; "r"
0032    KSHORT   3   2
0033    POW      2   2   3
0034    DIVVV    1   1   2
0035    CALL     0   1   2
0036    RET0     0   1

Having said that bytecode is an array of instructions, the first column in the above (containing 0001 through 0036) is the array index. There is actually also an instruction at array index 0 which -bl doesn't show us, which in this case is 0000 FUNCV 9. The next column is either blank or => - it is blank for most instructions, and is => for any instruction which is the target of a jump. The next column (KNUM, GSET, ..., RET0) contains the instruction name. The next few columns contain numbers, the meaning of which depends upon the instruction name. Finally, some instructions have a comment printed after the ;.
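Incidentally, the same kind of listing can be obtained from within a running program: LuaJIT ships with a jit.bc module whose dump function writes out the bytecode of a given function (this module is essentially what the -b machinery uses). A minimal sketch:

```lua
-- Dump the bytecode of a function from within Lua, using the jit.bc
-- module bundled with LuaJIT. By default the listing goes to stdout,
-- in the same format as luajit -bl.
local bc = require("jit.bc")

local function f(a)
  return a + 1
end

bc.dump(f)
```

The second argument to bc.dump can be a file (or any object with a write method) if you want to capture the listing rather than print it.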

We can run through the various different kinds of instruction in the above (in order of first appearance):

Phew. That gets us through the bytecode listing. There's a lot of it; some of it is executed just once, and other bits of it are executed many, many times. The LuaJIT interpreter doesn't care though; it'll happily churn through bytecode all day long. At some point though, the JIT part of LuaJIT comes into play. To explore this part, we'll switch from -bl (which dumps bytecode instead of running it) to -jdump=bitmsrx (which runs bytecode and also dumps various pieces of information as JIT compilation happens and JIT-compiled code executes):

$ luajit -jdump=bitmsrx pi.lua

The first thing to be output by -jdump=bitmsrx is the following:

---- TRACE 1 start pi.lua:2
0008  GGET     4   1      ; "n"
0009  IST          4
0010  JMP      5 => 0012
0012  GGET     5   2      ; "math"
0013  TGETS    5   5   3  ; "floor"
0014  GGET     6   2      ; "math"
0015  TGETS    6   6   4  ; "sqrt"
0016  GGET     7   0      ; "r"
0017  KSHORT   8   2
0018  POW      7   7   8
0019  KSHORT   8   2
0020  POW      8   3   8
0021  SUBVV    7   7   8
0022  CALL     6   0   2
0000  . FUNCC               ; math.sqrt
0023  CALLM    5   2   0
0000  . FUNCC               ; math.floor
0024  MULVN    5   5   1  ; 2
0025  ADDVV    4   4   5
0026  ADDVN    4   4   2  ; 1
0027  GSET     4   1      ; "n"
0028  FORL     0 => 0008

This tells us that trace #1 starts at line 2 of pi.lua, and is followed by the bytecode instructions which comprise the trace. This should look similar to the -bl dump from earlier, albeit with a few differences. We start at instruction index 0008 - the JIT compiler doesn't compile everything, only the things it thinks are worth compiling, and in this case it thinks that the loop body (which starts at instruction 0008) is worth compiling. The column which contained either blank or => is gone - traces are strictly linear sequences with no branches, hence no jump targets within them, and hence no need for => jump target indicators. Instruction 0011 isn't listed in the trace, as it isn't part of the trace - in the particular flow of execution recorded by the JIT compiler, the 0 branch of (n or 0) wasn't taken. The other major difference happens at function calls: when a call happens, tracing follows the call, and starts including bytecode instructions from the called function in the trace. The function known as math.sqrt consists of the single bytecode instruction 0000 FUNCC, the effect of which is three-fold:

  1. Start a C function (c.f. the FUNCV instruction we saw earlier for starting a vararg Lua function).
  2. Go off and run the C code associated with the function.
  3. Return to the calling function (c.f. the RET0 instruction we saw earlier for returning from a Lua function).

Like math.sqrt, math.floor also consists of just a single bytecode instruction. In both cases, the bytecode instruction is included in the trace; the . marker between the array index and the instruction name denotes a call frame level (. . denotes two levels of call frame, etc.).

Actually, -jdump=bitmsrx is telling us a lie: math.sqrt does consist of just a single bytecode instruction, and that instruction does do steps 1 and 3 from the above list, but its step 2 is "Go off and run the assembly code for math.sqrt". This super-specialised bytecode instruction is only used by math.sqrt, and doesn't have a name per se, so reporting its name as FUNCC is perhaps not the worst lie in the world. Similarly, math.floor consists of a single super-specialised bytecode instruction (not all standard library functions follow this pattern - some are just plain old C functions - but most of the math library happens to be implemented in assembly rather than C).

We talk about bytecode instructions being included in a trace, but the bytecode isn't actually retained in the trace. Instead, as each bytecode instruction is executed, some so-called IR instructions are appended to the trace. After bytecode recording has finished for the trace, we get the next chunk of output from -jdump=bitmsrx, which is the full list of IR instructions:

---- TRACE 1 IR
....              SNAP   #0   [ ---- ]
0001 rbx   >  int SLOAD  #2    CRI
0002       >  int LE     0001  +2147483646
0003 rbp   >  int SLOAD  #1    CI
0004 rax      fun SLOAD  #0    R
0005 rax      tab FLOAD  0004  func.env
0006          int FLOAD  0005  tab.hmask
0007       >  int EQ     0006  +63 
0008 r14      p32 FLOAD  0005  tab.node
0009 r12   >  p32 HREFK  0008  "n"  @19
0010       >  num HLOAD  0009
0011       >  p32 HREFK  0008  "math" @54
0012 r15   >  tab HLOAD  0011
0013          int FLOAD  0012  tab.hmask
0014       >  int EQ     0013  +31 
0015 r13      p32 FLOAD  0012  tab.node
0016       >  p32 HREFK  0015  "floor" @14
0017       >  fun HLOAD  0016
0018       >  p32 HREFK  0015  "sqrt" @15
0019       >  fun HLOAD  0018
0020       >  p32 HREFK  0008  "r"  @12
0021 xmm0  >  num HLOAD  0020
0022 [8]      num MUL    0021  0021
0023 xmm1     num CONV   0003
0024 xmm1     num MUL    0023  0023
0025 xmm0     num SUB    0022  0024
0026       >  fun EQ     0019  math.sqrt
0027 xmm0     num FPMATH 0025  sqrt
0028       >  fun EQ     0017  math.floor
0029 xmm7     num FPMATH 0027  floor
0030 xmm7     num ADD    0029  0029
0031 xmm7     num ADD    0030  0010
0032 xmm7   + num ADD    0031  +1  
0033          num HSTORE 0009  0032
0034 rbp    + int ADD    0003  +1  
....              SNAP   #1   [ ---- ]
0035       >  int LE     0034  0001
....              SNAP   #2   [ ---- 0034 0001 ---- 0034 ]
0036 ------------ LOOP ------------
0037 xmm5     num CONV   0034
0038 xmm5     num MUL    0037  0037
0039 xmm6     num SUB    0022  0038
0040 xmm0     num FPMATH 0039  sqrt
0041 xmm6     num FPMATH 0040  floor
0042 xmm6     num ADD    0041  0041
0043 xmm7     num ADD    0042  0032
0044 xmm7   + num ADD    0043  +1  
0045          num HSTORE 0009  0044
0046 rbp    + int ADD    0034  +1  
....              SNAP   #3   [ ---- ]
0047       >  int LE     0046  0001
0048 rbp      int PHI    0034  0046
0049 xmm7     num PHI    0032  0044

Ignoring the .... SNAP lines for a moment, the first column (containing 0001 through 0049) is the index into the IR array. The next column contains either a register name (like rbx or xmm7) or a stack offset (like [8]) or is blank. This column isn't populated as the IR is created - instead it is populated as the IR is turned into machine code: if the result of the instruction gets assigned to a register or a stack slot, then that register or stack slot is recorded (some instructions don't have results, or don't have their result materialised, and so this column remains blank for them). The next column contains either > or blank: the > symbol indicates that the instruction is a so-called guard instruction; these instructions need to be executed even if their result is otherwise unused, and these instructions are also able to "fail" (failure, if/when it happens, causes execution to leave the JIT-compiled code and return to the interpreter). The next column contains either + or blank: the + symbol indicates that the instruction is used later on in a PHI instruction - as with the register/stack column, this column isn't populated as the IR is recorded, and instead it is populated by the LOOP optimisation pass (as it is this optimisation pass which emits PHI instructions). The next column contains the type of the IR instruction, which in our case is one of:
  num: a double-precision floating point number
  int: a 32-bit integer
  fun: a Lua function
  tab: a Lua table
  p32: a 32-bit pointer

The next columns contain the instruction name, and then the two (or occasionally one or three) operands to the instruction. We can run through the IR instructions in order of first appearance:

The effectiveness of the LOOP optimisation pass is really quite impressive: instructions 0001 through 0022 disappear completely, as do 0026 and 0028. The bytecode performs seven table lookups per iteration (n, math, floor, math, sqrt, r, n). Common-subexpression-elimination and load-forwarding during the bytecode to IR conversion cause the IR before the LOOP instruction to contain just five HREFK instructions (i.e. n and math are looked up once each rather than twice each). Despite the table store to n within the loop, these five HREFK instructions are all determined to be loop-invariant (good alias analysis is to thank here). The HLOAD instructions for math, floor, sqrt, and r are also determined to be loop-invariant. The HLOAD for n isn't loop-invariant, but forwarding saves us, so there is no HLOAD for n after the LOOP instruction (that is, the reason for the HLOAD not being loop-invariant is the HSTORE within the loop body, but LuaJIT can forward the stored value rather than having to reload it). The HSTORE instruction for n is still done after the LOOP instruction: stores to local variables are deferred to trace exits, but stores to tables (including stores to global variables) are not deferred.
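Those lookups (and the guards protecting them) only exist because the loop reads globals on every iteration. A common LuaJIT idiom - not used in the original code, purely to keep it short - is to cache globals in locals, which removes the HREFK/HLOAD instructions and their table-shape guards from the trace entirely, as locals live in stack slots rather than hash tables. A sketch of that variant:

```lua
-- Same computation as before, but with math.floor, math.sqrt, r and n
-- held in locals: local reads compile to SLOAD/register accesses, so
-- the trace no longer needs hash lookups for them. (Guards on the
-- identity of the called functions still exist, as the trace remains
-- specialised to the builtin floor and sqrt.)
local floor, sqrt = math.floor, math.sqrt
local r = 1000
local n = 0
for x = -r, r do
  n = n + floor(sqrt(r^2 - x^2)) * 2 + 1
end
print(n / r^2)
```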

On that note, we can begin to consider the .... SNAP lines in the IR dump. Each of these lines corresponds to a so-called snapshot. A snapshot is used to transition from JIT-compiled code back to the interpreter, and consists of:
  1. The bytecode instruction at which the interpreter will resume.
  2. A list of stores which were deferred, and which therefore need to be performed as part of transitioning back to the interpreter.

Sadly, for snapshots, -jdump doesn't report the bytecode instruction at which the interpreter will resume. It does report the deferred stores though: reading from left to right, each token between [ and ] represents a slot, with the leftmost token being slot #0. For example, [ ---- 0034 0001 ---- 0034 ] means that when exiting with this snapshot:
  1. Slot #0 and slot #3 are not touched.
  2. Slot #1 and slot #4 are set to the result of IR instruction 0034.
  3. Slot #2 is set to the result of IR instruction 0001.

Call frames are denoted with | symbols in snapshots, but we'll gloss over this as it doesn't occur in our example. If an IR instruction can fail, then when it fails, the nearest preceding snapshot is used to return to the interpreter. In our example, this means instructions 0001 through 0034 use snapshot #0, 0035 uses #1, 0036 through 0046 use #2, and 0047 through 0049 use #3. It is worth dwelling on snapshots for a moment, and viewing them as a form of transactional rollback. For example, (in our case) if instruction 0001 fails, then snapshot #0 is used. If instruction 0028 fails, then snapshot #0 is still used, despite various table lookups, some arithmetic, and a call to math.sqrt all happening between instructions 0001 and 0028. This means that if instruction 0028 fails, then after restoring the interpreter state, the interpreter will repeat the lookups, the arithmetic, and the math.sqrt call (presumably it wouldn't repeat the math.floor call, as a failure of instruction 0028 would mean that math.floor no longer corresponded to the builtin floor function).
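To make guard failure concrete, here is a small sketch (not from the original program) which forces a fun EQ guard like the one on math.floor to fail: the trace is specialised to the builtin floor, so swapping in a different function object part-way through the loop sends execution back to the interpreter via the nearest snapshot, after which the result is still computed correctly:

```lua
-- After enough iterations the loop gets JIT-compiled with a guard
-- asserting that math.floor is still the builtin; replacing it with a
-- behaviourally-identical wrapper fails that guard, exits the trace
-- via a snapshot, and execution continues in the interpreter.
local builtin_floor = math.floor
local n = 0
for x = 1, 200 do
  if x == 100 then
    math.floor = function(v) return builtin_floor(v) end
  end
  n = n + math.floor(x / 2)
end
math.floor = builtin_floor
print(n)  -- 10000, regardless of when (or whether) a trace exit happens
```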

With that, we can move on to the next chunk of output from -jdump=bitmsrx, which is the generated assembly (though LuaJIT actually generates machine code directly rather than generating assembly, so what is shown is really the output of -jdump's own bundled disassembler):

---- TRACE 1 mcode 507
f125fdfa  mov dword [0x00043410], 0x1
f125fe05  movsd xmm7, [rdx+0x8]
f125fe0a  cvttsd2si ebx, xmm7
f125fe0e  xorps xmm6, xmm6
f125fe11  cvtsi2sd xmm6, ebx
f125fe15  ucomisd xmm7, xmm6
f125fe19  jnz 0xf1250010  ->0
f125fe1f  jpe 0xf1250010  ->0
f125fe25  cmp ebx, 0x7ffffffe
f125fe2b  jg 0xf1250010 ->0
f125fe31  movsd xmm7, [rdx]
f125fe35  cvttsd2si ebp, xmm7
f125fe39  xorps xmm6, xmm6
f125fe3c  cvtsi2sd xmm6, ebp
f125fe40  ucomisd xmm7, xmm6
f125fe44  jnz 0xf1250010  ->0
f125fe4a  jpe 0xf1250010  ->0
f125fe50  mov eax, [rdx-0x8]
f125fe53  mov eax, [rax+0x8]
f125fe56  cmp dword [rax+0x1c], +0x3f
f125fe5a  jnz 0xf1250010  ->0
f125fe60  mov r14d, [rax+0x14]
f125fe64  mov rdi, 0xfffffffb00054868
f125fe6e  cmp rdi, [r14+0x1d0]
f125fe75  jnz 0xf1250010  ->0
f125fe7b  lea r12d, [r14+0x1c8]
f125fe82  cmp dword [r12+0x4], 0xfffeffff
f125fe8b  jnb 0xf1250010  ->0
f125fe91  mov rdi, 0xfffffffb00048d48
f125fe9b  cmp rdi, [r14+0x518]
f125fea2  jnz 0xf1250010    ->0
f125fea8  cmp dword [r14+0x514], -0x0c
f125feb0  jnz 0xf1250010  ->0
f125feb6  mov r15d, [r14+0x510]
f125febd  cmp dword [r15+0x1c], +0x1f
f125fec2  jnz 0xf1250010  ->0
f125fec8  mov r13d, [r15+0x14]
f125fecc  mov rdi, 0xfffffffb00049150
f125fed6  cmp rdi, [r13+0x158]
f125fedd  jnz 0xf1250010  ->0
f125fee3  cmp dword [r13+0x154], -0x09
f125feeb  jnz 0xf1250010  ->0
f125fef1  mov rdi, 0xfffffffb000491e0
f125fefb  cmp rdi, [r13+0x170]
f125ff02  jnz 0xf1250010  ->0
f125ff08  cmp dword [r13+0x16c], -0x09
f125ff10  jnz 0xf1250010  ->0
f125ff16  mov rdi, 0xfffffffb0004ebc8
f125ff20  cmp rdi, [r14+0x128]
f125ff27  jnz 0xf1250010  ->0
f125ff2d  cmp dword [r14+0x124], 0xfffeffff
f125ff38  jnb 0xf1250010  ->0
f125ff3e  movsd xmm0, [r14+0x120]
f125ff47  mulsd xmm0, xmm0
f125ff4b  movsd [rsp+0x8], xmm0
f125ff51  xorps xmm1, xmm1
f125ff54  cvtsi2sd xmm1, ebp
f125ff58  mulsd xmm1, xmm1
f125ff5c  subsd xmm0, xmm1
f125ff60  cmp dword [r13+0x168], 0x000491b8
f125ff6b  jnz 0xf1250010  ->0
f125ff71  sqrtsd xmm0, xmm0
f125ff75  cmp dword [r13+0x150], 0x00049128
f125ff80  jnz 0xf1250010  ->0
f125ff86  roundsd xmm7, xmm0, 0x09
f125ff8c  addsd xmm7, xmm7
f125ff90  addsd xmm7, [r12]
f125ff96  addsd xmm7, [0x00064bc8]
f125ff9f  movsd [r12], xmm7
f125ffa5  add ebp, +0x01
f125ffa8  cmp ebp, ebx
f125ffaa  jg 0xf1250014 ->1
f125ffb0  movsd xmm0, [rsp+0x8]
f125ffb6  xorps xmm5, xmm5
f125ffb9  cvtsi2sd xmm5, ebp
f125ffbd  mulsd xmm5, xmm5
f125ffc1  movaps xmm6, xmm0
f125ffc4  subsd xmm6, xmm5
f125ffc8  sqrtsd xmm0, xmm6
f125ffcc  roundsd xmm6, xmm0, 0x09
f125ffd2  addsd xmm6, xmm6
f125ffd6  addsd xmm7, xmm6
f125ffda  addsd xmm7, [0x00064bc8]
f125ffe3  movsd [r12], xmm7
f125ffe9  add ebp, +0x01
f125ffec  cmp ebp, ebx
f125ffee  jle 0xf125ffb0  ->LOOP
f125fff0  jmp 0xf125001c  ->3

The first column gives the memory address of the instruction, and the remainder of the line gives the instruction in Intel-ish syntax (which happens to be a syntax I'm fond of; AT&T syntax needs to die). I'm not going to explain the semantics of each individual instruction (there are multi-thousand page Intel manuals for that), but there are a number of interesting things to point out:

The next piece of output from -jdump=bitmsrx tells us that recording (and IR optimisation and machine code generation) of the trace has finished, and that the trace is a looping trace:

---- TRACE 1 stop -> loop

The final piece of output from -jdump=bitmsrx tells us that execution of the trace finished and snapshot #3 was used to restore the interpreter state (noting that -jdump=bitmsrx doesn't tell us when execution of a trace starts):

---- TRACE 1 exit 3

Finally, we get our estimate of π courtesy of the interpreter executing (the bytecode corresponding to) print(n / r^2):

3.1415926535059