Estimating π with LuaJIT

There are various ways of estimating π, though one cheap and cheerful way is to estimate the area of a circle: imagine a grid of 1x1 squares, draw a perfect circle of radius r, and estimate the area of the circle as being the number of squares whose midpoint is within the circle. We can implement this in a few lines of Lua:

r = 1000
for x = -r, r do
  for y = -r, r do
    if x^2 + y^2 <= r^2 then
      n = (n or 0) + 1
    end
  end
end
print(n / r^2)

With a small amount of thought, the inner for and if can be replaced with an analytical solution:

r = 1000
for x = -r, r do
  n = (n or 0) + math.floor(math.sqrt(r^2 - x^2)) * 2 + 1
end
print(n / r^2)

At this point, we can bump r up to 10000000, run the above through LuaJIT, and get an estimate of 3.1415926535059 in less than a tenth of a second. Not a bad estimate for a few lines of code and a tiny amount of compute time.
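As a cross-check (a Python re-implementation, not part of the original Lua), we can confirm that the naive midpoint count and the per-column analytic sum agree exactly, and that both converge on π:

```python
import math

def count_naive(r):
    # count lattice points (square midpoints) inside the circle of radius r
    n = 0
    for x in range(-r, r + 1):
        for y in range(-r, r + 1):
            if x*x + y*y <= r*r:
                n += 1
    return n

def count_analytic(r):
    # per column x, the y values satisfying y^2 <= r^2 - x^2 number
    # 2*floor(sqrt(r^2 - x^2)) + 1
    n = 0
    for x in range(-r, r + 1):
        n += math.floor(math.sqrt(r*r - x*x)) * 2 + 1
    return n

r = 200
assert count_naive(r) == count_analytic(r)
assert abs(count_analytic(r) / r**2 - math.pi) < 0.02
```

The analytic version is what makes bumping r up to 10000000 feasible: it does O(r) work rather than O(r²).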

The estimate itself is nice, but I find it more interesting to look at what LuaJIT is doing behind the scenes with this code. To begin with, we can inspect the bytecode - if source code is an array of characters, then bytecode is an array of instructions (plus an array of constants, plus a few other bits and bobs), and the first thing LuaJIT does with any source code is turn it into bytecode. We can see the bytecode by using the -bl argument:

$ luajit -bl pi.lua 
-- BYTECODE -- pi.lua:0-6
0001    KNUM     0   0      ; 10000000
0002    GSET     0   0      ; "r"
0003    GGET     0   0      ; "r"
0004    UNM      0   0
0005    GGET     1   0      ; "r"
0006    KSHORT   2   1
0007    FORI     0 => 0029
0008 => GGET     4   1      ; "n"
0009    IST          4
0010    JMP      5 => 0012
0011    KSHORT   4   0
0012 => GGET     5   2      ; "math"
0013    TGETS    5   5   3  ; "floor"
0014    GGET     6   2      ; "math"
0015    TGETS    6   6   4  ; "sqrt"
0016    GGET     7   0      ; "r"
0017    KSHORT   8   2
0018    POW      7   7   8
0019    KSHORT   8   2
0020    POW      8   3   8
0021    SUBVV    7   7   8
0022    CALL     6   0   2
0023    CALLM    5   2   0
0024    MULVN    5   5   1  ; 2
0025    ADDVV    4   4   5
0026    ADDVN    4   4   2  ; 1
0027    GSET     4   1      ; "n"
0028    FORL     0 => 0008
0029 => GGET     0   5      ; "print"
0030    GGET     1   1      ; "n"
0031    GGET     2   0      ; "r"
0032    KSHORT   3   2
0033    POW      2   2   3
0034    DIVVV    1   1   2
0035    CALL     0   1   2
0036    RET0     0   1

Having said bytecode was an array of instructions, the first column in the above (containing 0001 through 0036) is the array index. There is actually also an instruction at array index 0 which -bl doesn't show us, which in this case is 0000 FUNCV 9. The next column contains either a blank or => - it is blank for most instructions, and is => for any instruction which is the target of a jump. The next column (KNUM, GSET, ..., RET0) contains the instruction name. The next few columns contain numbers, the meanings of which depend upon the instruction name. Finally, some instructions have a comment printed after the ;.

We can run through the various different kinds of instruction in the above (in order of first appearance):

Phew. That gets us through the bytecode listing. There's a lot of it, some of it is executed just once, and other bits of it are executed many, many times. The LuaJIT interpreter doesn't care though; it'll happily churn through bytecode all day long. At some point though, the JIT part of LuaJIT comes into play. To explore this part, we'll switch from -bl (which dumps bytecode instead of running it) to -jdump=bitmsrx (which runs bytecode and also dumps various pieces of information as JIT compilation happens and JIT-compiled code executes):

$ luajit -jdump=bitmsrx pi.lua

The first thing to be output by -jdump=bitmsrx is the following:

---- TRACE 1 start pi.lua:2
0008  GGET     4   1      ; "n"
0009  IST          4
0010  JMP      5 => 0012
0012  GGET     5   2      ; "math"
0013  TGETS    5   5   3  ; "floor"
0014  GGET     6   2      ; "math"
0015  TGETS    6   6   4  ; "sqrt"
0016  GGET     7   0      ; "r"
0017  KSHORT   8   2
0018  POW      7   7   8
0019  KSHORT   8   2
0020  POW      8   3   8
0021  SUBVV    7   7   8
0022  CALL     6   0   2
0000  . FUNCC               ; math.sqrt
0023  CALLM    5   2   0
0000  . FUNCC               ; math.floor
0024  MULVN    5   5   1  ; 2
0025  ADDVV    4   4   5
0026  ADDVN    4   4   2  ; 1
0027  GSET     4   1      ; "n"
0028  FORL     0 => 0008

This tells us that trace #1 starts at line 2 of pi.lua, and is followed by the bytecode instructions which comprise the trace. This should look familiar from the -bl dump earlier, albeit with a few differences. We start at instruction index 0008 - the JIT compiler doesn't compile everything, only the things it thinks are worth compiling, and in this case it thinks that the loop body (which starts at instruction 0008) is worth compiling. The column which contained either a blank or => is gone - traces are strictly linear sequences with no branches, and hence no jump targets within them, and hence no need for => jump target indicators. Instruction 0011 isn't listed in the trace, as it isn't part of the trace - in the particular flow of execution recorded by the JIT compiler, the 0 branch of (n or 0) wasn't taken. The other major difference happens at function calls: when a call happens, tracing follows the call, and starts including bytecode instructions from the called function in the trace. The function known as math.sqrt consists of the single bytecode instruction 0000 FUNCC, the effect of which is three-fold:

  1. Start a C function (c.f. the FUNCV instruction we saw earlier for starting a vararg Lua function).
  2. Go off and run the C code associated with the function.
  3. Return to the calling function (c.f. the RET0 instruction we saw earlier for returning from a Lua function).

Like math.sqrt, math.floor also consists of just a single bytecode instruction. In both cases, the bytecode instruction is included in the trace; the . marker between the array index and the instruction name denotes a call frame level (. . denotes two levels of call frame, etc.).

Actually, -jdump=bitmsrx is telling us a lie: math.sqrt does consist of just a single bytecode instruction, and that instruction does do steps 1 and 3 from the above list, but its step 2 is "Go off and run the assembly code for math.sqrt". This super-specialised bytecode instruction is only used by math.sqrt, and doesn't have a name per se, so reporting its name as FUNCC is perhaps not the worst lie in the world. Similarly, math.floor consists of a single super-specialised bytecode instruction (not all standard library functions follow this pattern - some are just plain old C functions - but most of the math library happens to be implemented in assembly rather than C).

We talk about bytecode instructions being included in a trace, but the bytecode isn't actually retained in the trace. Instead, as each bytecode instruction is executed, some so-called IR instructions are appended to the trace. After bytecode recording has finished for the trace, we get the next chunk of output from -jdump=bitmsrx, which is the full list of IR instructions:

---- TRACE 1 IR
....              SNAP   #0   [ ---- ]
0001 rbx   >  int SLOAD  #2    CRI
0002       >  int LE     0001  +2147483646
0003 rbp   >  int SLOAD  #1    CI
0004 rax      fun SLOAD  #0    R
0005 rax      tab FLOAD  0004  func.env
0006          int FLOAD  0005  tab.hmask
0007       >  int EQ     0006  +63 
0008 r14      p32 FLOAD  0005  tab.node
0009 r12   >  p32 HREFK  0008  "n"  @19
0010       >  num HLOAD  0009
0011       >  p32 HREFK  0008  "math" @54
0012 r15   >  tab HLOAD  0011
0013          int FLOAD  0012  tab.hmask
0014       >  int EQ     0013  +31 
0015 r13      p32 FLOAD  0012  tab.node
0016       >  p32 HREFK  0015  "floor" @14
0017       >  fun HLOAD  0016
0018       >  p32 HREFK  0015  "sqrt" @15
0019       >  fun HLOAD  0018
0020       >  p32 HREFK  0008  "r"  @12
0021 xmm0  >  num HLOAD  0020
0022 [8]      num MUL    0021  0021
0023 xmm1     num CONV   0003
0024 xmm1     num MUL    0023  0023
0025 xmm0     num SUB    0022  0024
0026       >  fun EQ     0019  math.sqrt
0027 xmm0     num FPMATH 0025  sqrt
0028       >  fun EQ     0017  math.floor
0029 xmm7     num FPMATH 0027  floor
0030 xmm7     num ADD    0029  0029
0031 xmm7     num ADD    0030  0010
0032 xmm7   + num ADD    0031  +1  
0033          num HSTORE 0009  0032
0034 rbp    + int ADD    0003  +1  
....              SNAP   #1   [ ---- ]
0035       >  int LE     0034  0001
....              SNAP   #2   [ ---- 0034 0001 ---- 0034 ]
0036 ------------ LOOP ------------
0037 xmm5     num CONV   0034
0038 xmm5     num MUL    0037  0037
0039 xmm6     num SUB    0022  0038
0040 xmm0     num FPMATH 0039  sqrt
0041 xmm6     num FPMATH 0040  floor
0042 xmm6     num ADD    0041  0041
0043 xmm7     num ADD    0042  0032
0044 xmm7   + num ADD    0043  +1  
0045          num HSTORE 0009  0044
0046 rbp    + int ADD    0034  +1  
....              SNAP   #3   [ ---- ]
0047       >  int LE     0046  0001
0048 rbp      int PHI    0034  0046
0049 xmm7     num PHI    0032  0044

Ignoring the .... SNAP lines for a moment, the first column (containing 0001 through 0049) is the index into the IR array. The next column contains either a register name (like rbx or xmm7) or a stack offset (like [8]) or is blank. This column isn't populated as the IR is created - instead it is populated as the IR is turned into machine code: if the result of the instruction gets assigned to a register or a stack slot, then that register or stack slot is recorded (some instructions don't have results, or don't have their result materialised, and so this column remains blank for them). The next column contains either > or a blank: the > symbol indicates that the instruction is a so-called guard instruction: these instructions need to be executed even if their result is otherwise unused, and they are also able to "fail" (failure, if/when it happens, causes execution to leave the JIT-compiled code and return to the interpreter). The next column contains either + or a blank: the + symbol indicates that the instruction is used later on in a PHI instruction - as with the register/stack column, this column isn't populated as the IR is recorded, and instead it is populated by the LOOP optimisation pass (as it is this optimisation pass which emits PHI instructions). The next column contains the type of the IR instruction, which in our case is one of:

The next columns contain the instruction name, and then the two (or occasionally one or three) operands to the instruction. We can run through the IR instructions in order of first appearance:

The effectiveness of the LOOP optimisation pass is really quite impressive: instructions 0001 through 0022 disappear completely, as do 0026 and 0028. The bytecode performs seven table lookups per iteration (n, math, floor, math, sqrt, r, n). Common-subexpression-elimination and load-forwarding during the bytecode to IR conversion causes the IR before the LOOP instruction to contain just five HREFK instructions (i.e. n and math are looked up once each rather than twice each). Despite the table store to n within the loop, these five HREFK instructions are all determined to be loop-invariant (good alias analysis is to thank here). The HLOAD instructions for math, floor, sqrt, and r are also determined to be loop-invariant. The HLOAD for n isn't loop-invariant, but forwarding saves us, so there is no HLOAD for n after the LOOP instruction (that is, the reason for the HLOAD not being loop-invariant is the HSTORE within the loop body, but LuaJIT can forward the stored value rather than having to reload it). The HSTORE instruction for n is still done after the LOOP instruction: stores to local variables are deferred to trace exits, but stores to tables (including stores to global variables) are not deferred.

On that note, we can begin to consider the .... SNAP lines in the IR dump. Each of these lines corresponds to a so-called snapshot. A snapshot is used to transition from JIT-compiled code back to the interpreter, and consists of:

Sadly, for snapshots, -jdump doesn't report the bytecode instruction at which the interpreter will resume. It does report the deferred stores though: reading from left to right, each token between [ and ] represents a slot, with the leftmost token being slot #0. For example, [ ---- 0034 0001 ---- 0034 ] means that when exiting with this snapshot:

Call frames are denoted with | symbols in snapshots, but we'll gloss over this as it doesn't occur in our example. If an IR instruction can fail, then when it fails, the nearest preceding snapshot is used to return to the interpreter. In our example, this means instructions 0001 through 0034 use snapshot #0, 0035 uses #1, 0036 through 0046 use #2, and 0047 through 0049 use #3. It is worth dwelling on snapshots for a moment, and viewing them as a form of transactional rollback. For example, (in our case) if instruction 0001 fails, then snapshot #0 is used. If instruction 0028 fails, then snapshot #0 is still used, despite various table lookups, some arithmetic, and a call to math.sqrt all happening between instructions 0001 and 0028. This means that if instruction 0028 fails, then after restoring the interpreter state, the interpreter will repeat the lookups, the arithmetic, and the math.sqrt call (presumably it wouldn't repeat the math.floor call, as a failure of instruction 0028 would mean that math.floor no longer corresponded to the builtin floor function).
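The rollback behaviour can be sketched with a toy model (plain Python, and a deliberate simplification of what LuaJIT actually does): any work performed between a snapshot and a failing guard is simply discarded, and the interpreter redoes it from the snapshotted state:

```python
import math

def run_trace(state):
    """Speculatively execute one loop iteration; roll back on guard failure."""
    snapshot = dict(state)                 # snapshot #0, taken up front
    v = state["r"]**2 - state["x"]**2      # arithmetic after the snapshot...
    s = math.sqrt(v)                       # ...including the sqrt call
    if state["floor"] is not math.floor:   # guard (c.f. fun EQ ... math.floor)
        return snapshot, False             # fail: all of the above is discarded
    state["n"] += state["floor"](s) * 2 + 1
    return state, True

# With the guard holding, the iteration completes:
state, ok = run_trace({"r": 10, "x": 3, "n": 0, "floor": math.floor})
assert ok and state["n"] == 19
# With the guard failing, we are back to exactly the pre-trace state, and the
# interpreter would redo the subtraction and the sqrt itself:
state, ok = run_trace({"r": 10, "x": 3, "n": 0, "floor": round})
assert not ok and state == {"r": 10, "x": 3, "n": 0, "floor": round}
```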

With that, we can move on to the next chunk of output from -jdump=bitmsrx, which is the generated assembly (though LuaJIT actually generates machine code directly rather than generating assembly, so what is shown is really the output of -jdump's own bundled disassembler):

---- TRACE 1 mcode 507
f125fdfa  mov dword [0x00043410], 0x1
f125fe05  movsd xmm7, [rdx+0x8]
f125fe0a  cvttsd2si ebx, xmm7
f125fe0e  xorps xmm6, xmm6
f125fe11  cvtsi2sd xmm6, ebx
f125fe15  ucomisd xmm7, xmm6
f125fe19  jnz 0xf1250010  ->0
f125fe1f  jpe 0xf1250010  ->0
f125fe25  cmp ebx, 0x7ffffffe
f125fe2b  jg 0xf1250010 ->0
f125fe31  movsd xmm7, [rdx]
f125fe35  cvttsd2si ebp, xmm7
f125fe39  xorps xmm6, xmm6
f125fe3c  cvtsi2sd xmm6, ebp
f125fe40  ucomisd xmm7, xmm6
f125fe44  jnz 0xf1250010  ->0
f125fe4a  jpe 0xf1250010  ->0
f125fe50  mov eax, [rdx-0x8]
f125fe53  mov eax, [rax+0x8]
f125fe56  cmp dword [rax+0x1c], +0x3f
f125fe5a  jnz 0xf1250010  ->0
f125fe60  mov r14d, [rax+0x14]
f125fe64  mov rdi, 0xfffffffb00054868
f125fe6e  cmp rdi, [r14+0x1d0]
f125fe75  jnz 0xf1250010  ->0
f125fe7b  lea r12d, [r14+0x1c8]
f125fe82  cmp dword [r12+0x4], 0xfffeffff
f125fe8b  jnb 0xf1250010  ->0
f125fe91  mov rdi, 0xfffffffb00048d48
f125fe9b  cmp rdi, [r14+0x518]
f125fea2  jnz 0xf1250010    ->0
f125fea8  cmp dword [r14+0x514], -0x0c
f125feb0  jnz 0xf1250010  ->0
f125feb6  mov r15d, [r14+0x510]
f125febd  cmp dword [r15+0x1c], +0x1f
f125fec2  jnz 0xf1250010  ->0
f125fec8  mov r13d, [r15+0x14]
f125fecc  mov rdi, 0xfffffffb00049150
f125fed6  cmp rdi, [r13+0x158]
f125fedd  jnz 0xf1250010  ->0
f125fee3  cmp dword [r13+0x154], -0x09
f125feeb  jnz 0xf1250010  ->0
f125fef1  mov rdi, 0xfffffffb000491e0
f125fefb  cmp rdi, [r13+0x170]
f125ff02  jnz 0xf1250010  ->0
f125ff08  cmp dword [r13+0x16c], -0x09
f125ff10  jnz 0xf1250010  ->0
f125ff16  mov rdi, 0xfffffffb0004ebc8
f125ff20  cmp rdi, [r14+0x128]
f125ff27  jnz 0xf1250010  ->0
f125ff2d  cmp dword [r14+0x124], 0xfffeffff
f125ff38  jnb 0xf1250010  ->0
f125ff3e  movsd xmm0, [r14+0x120]
f125ff47  mulsd xmm0, xmm0
f125ff4b  movsd [rsp+0x8], xmm0
f125ff51  xorps xmm1, xmm1
f125ff54  cvtsi2sd xmm1, ebp
f125ff58  mulsd xmm1, xmm1
f125ff5c  subsd xmm0, xmm1
f125ff60  cmp dword [r13+0x168], 0x000491b8
f125ff6b  jnz 0xf1250010  ->0
f125ff71  sqrtsd xmm0, xmm0
f125ff75  cmp dword [r13+0x150], 0x00049128
f125ff80  jnz 0xf1250010  ->0
f125ff86  roundsd xmm7, xmm0, 0x09
f125ff8c  addsd xmm7, xmm7
f125ff90  addsd xmm7, [r12]
f125ff96  addsd xmm7, [0x00064bc8]
f125ff9f  movsd [r12], xmm7
f125ffa5  add ebp, +0x01
f125ffa8  cmp ebp, ebx
f125ffaa  jg 0xf1250014 ->1
f125ffb0  movsd xmm0, [rsp+0x8]
f125ffb6  xorps xmm5, xmm5
f125ffb9  cvtsi2sd xmm5, ebp
f125ffbd  mulsd xmm5, xmm5
f125ffc1  movaps xmm6, xmm0
f125ffc4  subsd xmm6, xmm5
f125ffc8  sqrtsd xmm0, xmm6
f125ffcc  roundsd xmm6, xmm0, 0x09
f125ffd2  addsd xmm6, xmm6
f125ffd6  addsd xmm7, xmm6
f125ffda  addsd xmm7, [0x00064bc8]
f125ffe3  movsd [r12], xmm7
f125ffe9  add ebp, +0x01
f125ffec  cmp ebp, ebx
f125ffee  jle 0xf125ffb0  ->LOOP
f125fff0  jmp 0xf125001c  ->3

The first column gives the memory address of the instruction, and the remainder of the line gives the instruction in Intel-ish syntax (which happens to be a syntax I'm fond of; AT&T syntax needs to die). I'm not going to explain the semantics of each individual instruction (there are multi-thousand page Intel manuals for that), but there are a number of interesting things to point out:

The next piece of output from -jdump=bitmsrx tells us that recording (and IR optimisation and machine code generation) of the trace has finished, and that the trace is a looping trace:

---- TRACE 1 stop -> loop

The final piece of output from -jdump=bitmsrx tells us that execution of the trace finished and snapshot #3 was used to restore the interpreter state (noting that -jdump=bitmsrx doesn't tell us when execution of a trace starts):

---- TRACE 1 exit 3

Finally, we get our estimate of π courtesy of the interpreter executing (the bytecode corresponding to) print(n / r^2):

3.1415926535059


Alternatives to short unconditional jumps on x86

When writing x86 assembly code, short unconditional jumps are sometimes needed. For example, an if statement might become:

  jz false_branch
  <true_branch>
  jmp end
false_branch:
  <false_branch>
end:

The jmp end instruction can also be expressed as jmp $+n, where n is the length (in bytes) of the machine code for <false_branch>. When n is small and positive, this is a short unconditional jump. These jumps tend to look ugly, and it can be entertaining (and sometimes beneficial) to consider ways of avoiding them. For example, jmp $+1 encodes as EB 01 ?? where ?? is the one byte to be jumped over. If burning a register is an option, then mov al, imm8 (encoded as B0 ??) might be an alternative (that is, the byte being jumped over becomes the imm8 value). If burning a register isn't an option, but burning flags is an option, then test al, imm8 (encoded as A8 ??) might be an alternative. If not even flags can be burnt, then nop [eax+imm8] (encoded as 0F 1F 40 ??) might be an alternative.

For jmp $+4, similar patterns can be used: mov eax, imm32 (B8 ?? ?? ?? ??), test eax, imm32 (A9 ?? ?? ?? ??), and nop [eax+imm32] (0F 1F 80 ?? ?? ?? ??) are all options. For jmp $+3 or jmp $+2, one easy option is to take a jmp $+4 pattern and replace the first one or two ??s with 00 (or any other value).

For jmp $+5, slightly more effort is required. On x86_64, we could use mov rax, imm64 for jmp $+8 and then only use five of the eight immediate bytes, but this feels slightly wasteful (and isn't an option for non-64-bit code). One option to make up five bytes is to combine a 32-bit immediate value with a ModRM byte or a SIB byte. For example, a nop instruction with an arbitrary SIB byte and 32-bit immediate looks like 0F 1F 84 ?? ?? ?? ?? ??. At the cost of burning a register, a shorter option is lea eax, [?] (8D 84 ?? ?? ?? ?? ??). With some knowledge of what we're jumping over, we can get shorter still - for example, jumping over a five-byte call rel32 instruction (E8 ?? ?? ?? ??) can be done with sub eax, imm32 (81 E8 ?? ?? ?? ??), albeit at the cost of burning both eax and flags.
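The patterns above can be collected into a small table (a Python sketch; the byte encodings are the ones quoted in the text, and each "filler" opcode absorbs the jumped-over bytes as its trailing immediate/displacement bytes):

```python
# Filler opcodes, keyed by how many bytes they absorb as operand bytes.
FILLERS = {
    1: {"mov al, imm8":    b"\xB0",
        "test al, imm8":   b"\xA8",
        "nop [eax+disp8]": b"\x0F\x1F\x40"},
    4: {"mov eax, imm32":   b"\xB8",
        "test eax, imm32":  b"\xA9",
        "nop [eax+disp32]": b"\x0F\x1F\x80"},
    5: {"nop [sib+disp32]":      b"\x0F\x1F\x84",  # the SIB byte is absorbed too
        "lea eax, [sib+disp32]": b"\x8D\x84"},
}

def skip_bytes(payload):
    """Encode an instruction whose operand bytes are exactly `payload`,
    so that execution falls through `payload` without a jump."""
    opcode = next(iter(FILLERS[len(payload)].values()))
    return opcode + payload

# Instead of jmp $+1 over a stray 0xCC byte:
assert skip_bytes(b"\xCC") == b"\xB0\xCC"          # mov al, 0xCC
# Instead of jmp $+5 over call rel32 (E8 ?? ?? ?? ??):
assert len(skip_bytes(b"\xE8\x00\x00\x00\x00")) == 8
```

For jmp $+2 and jmp $+3, as noted above, the 4-byte fillers work with the leading operand bytes set to 00.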

If this topic tickles your fancy, some terms to google are:

cfa == rsp on x86_64

The DWARF standard, in the area of stack unwinding, uses the term CFA quite a lot. It contains the following passage explaining what the CFA is:

The call frame is identified by an address on the stack. We refer to this address as the Canonical Frame Address or CFA. Typically, the CFA is defined to be the value of the stack pointer at the call site in the previous frame (which may be different from its value on entry to the current frame).

The word "typically" in the above leaves the definition of the CFA slightly up in the air. A subsequent example from the document seems to defer the definition to the platform's ABI committee: (emphasis mine)

The following example uses a hypothetical RISC machine in the style of the Motorola 88000.


  • There are 8 4-byte registers:
    • ...
    • R7 stack pointer.
  • The stack grows in the negative direction.
  • The architectural ABI committee specifies that the stack pointer (R7) is the same as the CFA.

We now move from the DWARF standard to the x86_64 ABI, which contains the following squirreled away in the function reference:


uint64 _Unwind_GetCFA(struct _Unwind_Context *context);

This function returns the 64-bit Canonical Frame Address which is defined as the value of %rsp at the call site in the previous frame.

With this, we can happily say that on x86_64, cfa == rsp (at the call site in the previous frame). This statement has a few consequences:

  1. The unwind rule for cfa specifies how to restore rsp. Combined with the stack growing downwards, this is why the cfa unwind rule is typically cfa = rsp + N or cfa = rbp + N.
  2. Unwind rules can be specified for rsp, but they have no effect - the implicit rule rsp = cfa is always used for restoring rsp.
  3. If you want to specify an exotic unwind rule for rsp, then you have to instead use an exotic rule to restore cfa. In turn, this might make it difficult to specify unwind rules for other registers (as said rules are typically specified relative to cfa) - one workaround is to use two call frames: one to restore rsp and one to restore everything else.

One observation related to the workaround in point 3 is that the unwind rule for rip doesn't have to tell the truth. The unwind rule for rip is typically rip = [cfa - 8] because this matches the x86_64 ret semantics of rsp += 8; rip = [rsp - 8], but even if your function returns via ret, you aren't obliged to specify rip = [cfa - 8] as the unwind rule for rip: you can specify whatever small white lie you need to get the correct unwind behaviour.
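The relationship can be made concrete with a toy single-frame unwinder (plain Python, a simplification of the real DWARF machinery; the memory contents here are hypothetical): given a frame's cfa, rsp is restored via the implicit rule rsp = cfa, and the usual rip rule reads the return address that call pushed:

```python
WORD = 8  # x86_64 word size

def unwind_frame(memory, cfa):
    """memory maps addresses to 8-byte values; returns (rip, rsp) of the caller."""
    rip = memory[cfa - WORD]   # typical rule: rip = [cfa - 8]
    rsp = cfa                  # implicit rule: rsp = cfa
    return rip, rsp

# A `call` executed while rsp was 0x1000 pushed its return address at 0x0ff8,
# so the callee's cfa is 0x1000 (rsp at the call site in the previous frame):
memory = {0x1000 - WORD: 0x401234}
rip, rsp = unwind_frame(memory, cfa=0x1000)
assert (rip, rsp) == (0x401234, 0x1000)
```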

Sometimes I talk about LuaJIT

Back in October, I gave a talk at the 2016 Lua Workshop about some of the internals of LuaJIT. Videos from the workshop are now online, so if you're able to stand 40 minutes of me talking, and have an interest in the implementation of LuaJIT, you might like the following:

On libunwind and dynamically generated code on x86-64

libunwind is a - supposedly portable - library for performing native stack unwinding. In simple scenarios, the library does its job fairly well; however, things get more interesting in the presence of dynamically-generated (vis-à-vis JIT-compiled) code. In fact, on any platform, and in any context, unwinding native stacks in the presence of dynamically-generated code is an interesting topic. It so happens that the Windows API gets this right, presenting you with two different options, either of which is sufficient on its own:

  1. Construct some RUNTIME_FUNCTION data structures up front, and then call RtlAddFunctionTable.
  2. Call RtlInstallFunctionTableCallback, and then construct RUNTIME_FUNCTION data structures lazily on demand (this is exactly the interface that a JIT compiler would want, and perhaps it was designed with one in mind).

I feel that Linux doesn't really get this right. Rather than there being a single OS-supplied interface, every tool has its own way of doing things:

All three of these interfaces require data structures to be generated up-front, which isn't ideal for JIT compilers, but I digress (though possibly in some cases, creative use of PROT_NONE pages and SIGSEGV handlers could allow some degree of lazy on-demand generation). That all three interfaces consume different data is annoying. That .eh_frame and .debug_frame are subtly different is also annoying, but I digress again.

Though libunwind presents an interface for dynamically-generated code, it happens to be poorly documented and poorly implemented for x86-64. In particular, if you read the documentation, then you'd be left with the impression that unw_dyn_info_t can refer to either a unw_dyn_proc_info_t structure (UNW_INFO_FORMAT_DYNAMIC), or a unw_dyn_table_info structure (UNW_INFO_FORMAT_REMOTE_TABLE or UNW_INFO_FORMAT_TABLE). On the former structure, the documentation has the following to say:

This is the preferred dynamic unwind-info format and it is generally the one used by full-blown runtime code-generators. In this format, the details of a procedure are described by a structure of type unw_dyn_proc_info_t.

Let me save you some time by pointing out that unwind directives for unw_dyn_proc_info_t structures plain aren't implemented on x86-64. As such, using unw_dyn_proc_info_t is a non-starter if you actually want to do any unwinding. Consequently, the only option is to use unw_dyn_table_info. The most interesting field of unw_dyn_table_info is table_data, the documentation for which states:

A pointer to the actual data encoding the unwind-info. The exact format is architecture-specific (see architecture-specific sections below).

Of course, there are no such notes below with reference to x86-64. Let me save you some time by pointing out that the table_data field should refer to an array of table_entry structures (which aren't documented, or present in any header, but can be found in the source). In turn, the fde_offset field of that structure should refer to a DWARF FDE in .debug_frame style.

After supplying unwind information via UNW_INFO_FORMAT_TABLE, libunwind is capable of unwinding over call frames for dynamically-generated code on x86-64. After getting basic unwinding working, one might like to make libunwind supply a useful function name for the call frame. The unw_dyn_table_info structure contains a name_ptr field which looks perfect for this task, but the code which should read this field instead just returns UNW_ENOINFO for UNW_INFO_FORMAT_TABLE (or, it would, but UNW_EINVAL is also likely, as the enclosing switch statement should be on pi.format rather than di->format). The observant reader will spot that this name-related logic is fully implemented for UNW_INFO_FORMAT_DYNAMIC, leaving us in a sad situation on x86-64: use UNW_INFO_FORMAT_DYNAMIC and get names but no unwinding, or use UNW_INFO_FORMAT_TABLE and get unwinding but no names.

I'd like to finish on a positive note instead of that sad note, but alas, this is a tale of woe.
