What even is a pidfd anyway?

In recent versions of the Linux kernel, a pidfd is a special type of file that holds a reference to a process. Notably, a pidfd allows for certain process-related operations to be performed in a race-free manner, and it allows poll / select / epoll to be used to detect process termination.

Before you get too excited:

There are various ways of obtaining a pidfd:

Kernel version | glibc version | Function
5.2 | 2.2.5 / 2.31 | clone with CLONE_PIDFD flag
5.3 | N/A | clone3 with CLONE_PIDFD flag
5.3 / 5.10 | 2.36 | pidfd_open
5.4 | 2.39 | pidfd_spawn / pidfd_spawnp
6.5 | 2.2.5 / N/A | getsockopt with SO_PEERPIDFD optname
6.5 | 2.2.5 / 2.39 | recvmsg with SCM_PIDFD cmsg_type

Once you have a pidfd, there are a bunch of things you can do with it:

Kernel version | glibc version | Function
5.1 | 2.36 | pidfd_send_signal
5.2 / 5.5 | 2.39 | pidfd_getpid
5.3 | 2.2.5 / 2.3.2 | poll / select / epoll
5.4 | 2.2.5 / 2.36 | waitid with P_PIDFD mode
5.6 | 2.36 | pidfd_getfd
5.8 | 2.14 | setns
5.10 / 5.12 | 2.36 | process_madvise
5.15 | 2.36 | process_mrelease
6.9 | 2.2.5 / 2.28 | fstat / statx for meaningful stx_ino

Some of the subsequent text refers to a process being alive or zombie or dead. These terms come from the usual lifecycle of a unix process: it is initially alive, then transitions to zombie when it terminates, and then transitions to dead once it is waited upon. As a quick summary of the states:

 | Alive | Zombie | Dead
Can execute code and receive signals | Yes | No | No
Has pid number | Yes | Yes | No
Exit code / status retrievable | No | Yes | No
pidfd polls as readable | No | Yes | Yes
Cleaned up by kernel | No | No | Yes

clone with CLONE_PIDFD flag

Available since: kernel 5.2, glibc 2.31 (or glibc 2.2.5 if you provide your own definition of CLONE_PIDFD; its value is 0x1000).

If the CLONE_PIDFD flag is specified, then clone returns a freshly allocated pidfd referring to the child (in addition to returning the pid number of the child). The O_CLOEXEC flag is automatically set on the returned pidfd. Note that if CLONE_PIDFD is specified, then CLONE_THREAD cannot be specified, nor can CLONE_DETACHED. Furthermore, if CLONE_PIDFD is specified, then CLONE_PARENT_SETTID cannot be specified (unless using clone3).
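As a minimal sketch (using the raw clone syscall rather than the glibc wrapper, since fork-like usage doesn't need a separate stack; fork_with_pidfd is a hypothetical helper name, and the raw syscall's argument order shown here is the x86_64 / aarch64 one):

#define _GNU_SOURCE
#include <signal.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef CLONE_PIDFD
#define CLONE_PIDFD 0x1000
#endif

/* Fork-like clone that also hands back a pidfd for the child. With
   CLONE_PIDFD, the kernel writes the new pidfd via the parent_tid
   argument. Returns the child's pid in the parent, 0 in the child,
   or -1 on error. */
static long fork_with_pidfd(int* pidfd_out) {
  return syscall(SYS_clone, CLONE_PIDFD | SIGCHLD, (void*)0, pidfd_out, (void*)0, 0);
}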

One of the arguments to clone is the signal number that the child will send to its parent when the child terminates. Setting this to anything other than SIGCHLD has several consequences:

Note that if the child calls execve (or a similar exec function), then the termination signal number is reset to SIGCHLD, and the above points stop applying.

clone3 with CLONE_PIDFD flag

Available since: kernel 5.3, no glibc wrapper.

This function is just a more extensible version of clone; everything written above about clone applies equally to clone3.

pidfd_open

Available since: kernel 5.3, glibc 2.36.

This function takes a pid number (in the pid namespace of the caller), and returns a freshly allocated pidfd referring to said process (or an error if said process does not exist). It is inherently racy, unless the pid number being passed is the result of getpid (i.e. creating a pidfd referring to your own process).

Since kernel 5.10, the PIDFD_NONBLOCK flag can be passed to pidfd_open, which affects subsequent waitid calls. No other flags are valid to pass. The O_CLOEXEC flag is automatically set on the returned pidfd.
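As a minimal sketch (going via syscall(2) so that it also works with a glibc older than 2.36; pidfd_for_self is a hypothetical helper name):

#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>

/* Obtain a pidfd for the calling process itself -- the one use of
   pidfd_open that is inherently race-free. */
static int pidfd_for_self(void) {
  return (int)syscall(SYS_pidfd_open, (long)getpid(), 0L);
}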

pidfd_spawn / pidfd_spawnp

Available since: kernel 5.4, glibc 2.39.

These functions are like posix_spawn / posix_spawnp, except that they have an int* output parameter for a freshly allocated pidfd instead of a pid_t* output parameter for a pid number. The O_CLOEXEC flag is automatically set on the returned pidfd.

In glibc 2.39, bug BZ#31695 causes these functions to leak a file descriptor in some error scenarios. This will hopefully be fixed in 2.40.

getsockopt with SO_PEERPIDFD optname

Available since: kernel 6.5, glibc 2.2.5 for getsockopt. The definition of SO_PEERPIDFD is not tied to a particular glibc version; its value is 77 should you need to provide your own definition of it.

SO_PEERPIDFD is the pidfd version of SO_PEERCRED. For a unix socket created via socketpair, SO_PEERPIDFD gives a pidfd referring to the process that called socketpair, meanwhile for a connected unix stream socket, SO_PEERPIDFD gives a pidfd referring to the process that called connect (if called on the server end of the socket) or the process that called listen (if called on the client end of the socket). The O_CLOEXEC flag is automatically set on the returned pidfd.
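A minimal sketch of the getsockopt call (peer_pidfd is a hypothetical helper name; the fallback definition of SO_PEERPIDFD uses the value given above):

#include <sys/socket.h>

#ifndef SO_PEERPIDFD
#define SO_PEERPIDFD 77 /* value on most architectures */
#endif

/* Given a connected unix-domain socket, return a pidfd referring to the
   peer process, or -1 on failure. */
static int peer_pidfd(int sock_fd) {
  int pidfd = -1;
  socklen_t len = sizeof(pidfd);
  if (getsockopt(sock_fd, SOL_SOCKET, SO_PEERPIDFD, &pidfd, &len) != 0)
    return -1;
  return pidfd;
}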

recvmsg with SCM_PIDFD cmsg_type

Available since: kernel 6.5, glibc 2.39 (or glibc 2.2.5 if you provide your own definition of SCM_PIDFD; its value is 0x04).

SCM_PIDFD is the pidfd version of (the pid part of) SCM_CREDENTIALS. If the receiver sets SO_PASSPIDFD on a unix socket (c.f. setting SO_PASSCRED), then it'll receive a SCM_PIDFD cmsg as part of receiving a message, with the associated cmsg data being a freshly allocated pidfd referring to the process of the sender of the message (or some other process if the sender has CAP_SYS_ADMIN and specifies a pid number other than itself as part of its SCM_CREDENTIALS). The O_CLOEXEC flag is automatically set on the pidfd.
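A minimal sketch of the receiving side (recv_with_pidfd is a hypothetical helper name; the fallback value of 76 for SO_PASSPIDFD is my assumption for most architectures, so check your kernel headers):

#define _GNU_SOURCE
#include <string.h>
#include <sys/socket.h>

#ifndef SO_PASSPIDFD
#define SO_PASSPIDFD 76 /* assumed value; check <asm-generic/socket.h> */
#endif
#ifndef SCM_PIDFD
#define SCM_PIDFD 0x04
#endif

/* Enable SO_PASSPIDFD (ideally done once, ahead of time), receive one
   message, and pull the sender's pidfd out of the control data.
   Returns the pidfd, or -1 if no SCM_PIDFD cmsg was attached. */
static int recv_with_pidfd(int sock_fd, void* buf, size_t len) {
  int pidfd = -1, on = 1;
  char cbuf[CMSG_SPACE(sizeof(int))];
  setsockopt(sock_fd, SOL_SOCKET, SO_PASSPIDFD, &on, sizeof(on));
  struct iovec iov = {buf, len};
  struct msghdr msg = {0};
  msg.msg_iov = &iov;
  msg.msg_iovlen = 1;
  msg.msg_control = cbuf;
  msg.msg_controllen = sizeof(cbuf);
  if (recvmsg(sock_fd, &msg, 0) < 0) return -1;
  for (struct cmsghdr* c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c)) {
    if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_PIDFD)
      memcpy(&pidfd, CMSG_DATA(c), sizeof(pidfd));
  }
  return pidfd;
}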

pidfd_send_signal

Available since: kernel 5.1, glibc 2.36.

This function is similar to kill / rt_sigqueueinfo: it sends a signal to a process. It differs from these functions in that the destination is given as a pidfd rather than as a pid number.

This function also accepts the result of open("/proc/$pid") as an fd, though it is the only function to do so: open("/proc/$pid") does not give a pidfd, and no other functions accept the result of open("/proc/$pid") in place of a pidfd.
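As a minimal sketch (going via syscall(2) so that it also works with a glibc older than 2.36; send_sigterm is a hypothetical helper name):

#define _GNU_SOURCE
#include <signal.h>
#include <stddef.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Send SIGTERM to the process referred to by pidfd. A NULL siginfo and
   flags of 0 give kill()-like semantics. */
static int send_sigterm(int pidfd) {
  return (int)syscall(SYS_pidfd_send_signal, pidfd, SIGTERM, NULL, 0);
}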

pidfd_getpid

Available since: kernel 5.2, glibc 2.39.

This function is the inverse of pidfd_open: given a pidfd, it returns the pid number associated with the underlying process. This function requires that /proc be mounted, and returns the pid number in the pid namespace associated with the mounted /proc. Note that the pid number can be reused for a different process once the underlying process is dead.

Changed in kernel 5.5: if the process referenced by the pidfd is dead, this function returns -1 (prior to 5.5, it returned whatever pid number the process had prior to its death).

Note that this is not a direct system call; instead it opens /proc/self/fdinfo/$pidfd and parses the Pid: line therein.
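A minimal sketch of doing that parsing by hand, for use with a glibc older than 2.39 (pidfd_to_pid is a hypothetical helper name):

#include <stdio.h>
#include <sys/types.h>

/* Equivalent of pidfd_getpid: read the Pid: line of /proc/self/fdinfo/$pidfd.
   Returns -1 on failure, and also -1 if the referenced process is dead
   (on kernel 5.5 or later). */
static pid_t pidfd_to_pid(int pidfd) {
  char path[64], line[128];
  int pid = -1;
  snprintf(path, sizeof(path), "/proc/self/fdinfo/%d", pidfd);
  FILE* f = fopen(path, "r");
  if (!f) return -1;
  while (fgets(line, sizeof(line), f)) {
    if (sscanf(line, "Pid: %d", &pid) == 1) break;
  }
  fclose(f);
  return (pid_t)pid;
}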

poll / select / epoll

Available since: kernel 5.3, glibc 2.2.5 (poll / select) or glibc 2.3.2 (epoll).

These functions can be used to asynchronously monitor a pidfd. They will report the pidfd as readable iff the underlying process is a zombie or is dead. Note however that read on a pidfd always fails; to get the exit code / status of the process, use waitid (possibly with WNOHANG).
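A minimal sketch of the usual pattern, blocking until termination and then reaping (wait_for_exit is a hypothetical helper name; see the next section for the P_PIDFD caveats):

#define _GNU_SOURCE
#include <errno.h>
#include <poll.h>
#include <sys/wait.h>

#ifndef P_PIDFD
#define P_PIDFD 3 /* in case <sys/wait.h> predates it */
#endif

/* Wait for the pidfd to poll as readable, then collect the exit status. */
static int wait_for_exit(int pidfd, siginfo_t* info) {
  struct pollfd pfd = {pidfd, POLLIN, 0};
  int r;
  do { r = poll(&pfd, 1, -1); } while (r < 0 && errno == EINTR);
  if (r < 0) return -1;
  return waitid((idtype_t)P_PIDFD, (id_t)pidfd, info, WEXITED);
}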

waitid with P_PIDFD mode

Available since: kernel 5.4, glibc 2.36 (or glibc 2.2.5 if you provide your own definition of P_PIDFD; its value is 3).

waitid(P_PIDFD, fd, infop, options) is identical to waitid(P_PID, pidfd_getpid(fd), infop, options), except for the following:

In particular, note that:

The above points are true for all waitid calls, including P_PIDFD calls. The first time a zombie is waited upon (by any kind of wait / waitpid / waitid call), then the exit code / status is retrieved, and subsequent attempts to wait upon it (again by any kind of wait / waitpid / waitid call) will fail.

When a process transitions from alive to zombie, if that process's parent's SIGCHLD handler is SIG_IGN or has SA_NOCLDWAIT, then the kernel does an automatic wait call on behalf of the parent and discards the result, thereby transitioning the child onward from zombie to dead. This causes all attempts to wait upon the child (including via P_PIDFD) to fail. The only exception to this is if the child was created with clone or clone3, and the termination signal was specified as something other than SIGCHLD, and the child has not called execve or similar: given this combination of circumstances, the automatic wait call will not recognise the child.

pidfd_getfd

Available since: kernel 5.6, glibc 2.36.

This function takes a pidfd, along with an fd number in the file table of the process referenced by the pidfd, creates a duplicate of that file descriptor in the file table of the calling process, and returns the new fd number. The effect is similar to what would happen if the referenced process used an SCM_RIGHTS message to send a file descriptor to the calling process. The O_CLOEXEC flag is automatically set on the new fd.

Calling this function incurs a PTRACE_MODE_ATTACH_REALCREDS security check.
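As a minimal sketch (again via syscall(2); grab_fd is a hypothetical helper name):

#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>

/* Obtain a duplicate of file descriptor remote_fd from the process
   referred to by pidfd. The final argument is a flags field, which
   currently must be 0. */
static int grab_fd(int pidfd, int remote_fd) {
  return (int)syscall(SYS_pidfd_getfd, pidfd, remote_fd, 0);
}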

setns

Available since: kernel 5.8, glibc 2.14.

Passing a pidfd to this function moves the caller into one or more of the namespaces that the process referenced by the pidfd is in. Note that this function can also be passed the result of open("/proc/$pid/ns/$name") as an fd.

process_madvise

Available since: kernel 5.10, glibc 2.36.

This function is similar to madvise, except that it operates on an arbitrary process (specified via a pidfd) rather than on the calling process.

Since 5.12, calling this function incurs PTRACE_MODE_READ_FSCREDS and CAP_SYS_NICE security checks. In 5.10 and 5.11, it incurred a PTRACE_MODE_ATTACH_FSCREDS security check.

process_mrelease

Available since: kernel 5.15, glibc 2.36.

This is a relatively niche function, which you are unlikely to ever need unless writing a userspace OOM killer. It can be called against a process which is no longer alive, but hasn't yet had its virtual memory released by the kernel, to cause the kernel to release said virtual memory faster.

fstat / statx for meaningful stx_ino

Available since: kernel 6.9, glibc 2.2.5 (fstat) or glibc 2.28 (statx).

It has always been possible to call fstat or statx on a pidfd, but prior to kernel 6.9, it was not useful to do so. Since 6.9, calling statx on a pidfd gives a meaningful stx_ino: the 64-bit inode number of a pidfd uniquely identifies a process, so two pidfds referencing the same process will have identical stx_ino values, while two pidfds referencing different processes will have different stx_ino values. The same is true for fstat, provided that st_ino is 64 bits wide. In other words, since 6.9, a process's inode number (as observed via a pidfd) is a unique 64-bit identifier for the process, which is never reused (until the system is restarted), and is unique even across different pid namespaces.
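As a minimal sketch of using this property (same_process is a hypothetical helper name; it requires kernel 6.9 or later to be meaningful):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdbool.h>
#include <sys/stat.h>

/* Decide whether two pidfds refer to the same process by comparing the
   inode numbers reported by statx. */
static bool same_process(int pidfd_a, int pidfd_b) {
  struct statx a, b;
  if (statx(pidfd_a, "", AT_EMPTY_PATH, STATX_INO, &a) != 0) return false;
  if (statx(pidfd_b, "", AT_EMPTY_PATH, STATX_INO, &b) != 0) return false;
  return a.stx_ino == b.stx_ino;
}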


It is likely that future kernel versions will add more things that can be done with (or to) a pidfd. As for the existing functionality, if you find yourself constrained by glibc version rather than kernel version, one option is to compile against a very recent glibc, then use polyfill-glibc to restore runtime compatibility with an older version of glibc.

In terms of future directions, some of the things that I'd like to see are:

C23 stdbit.h quick reference

Macros | Implementation
#define __STDC_ENDIAN_LITTLE__ | Some integer constant
#define __STDC_ENDIAN_BIG__ | Some integer constant
#define __STDC_ENDIAN_NATIVE__ | __STDC_ENDIAN_LITTLE__ or __STDC_ENDIAN_BIG__ (†)

Regular functions | Implementation
unsigned stdc_leading_zeros(T x) | lzcnt(x)
unsigned stdc_first_leading_one(T x) | x ? lzcnt(x) + 1 : 0
unsigned stdc_trailing_zeros(T x) | tzcnt(x)
unsigned stdc_first_trailing_one(T x) | x ? tzcnt(x) + 1 : 0
unsigned stdc_count_ones(T x) | popcnt(x)
bool stdc_has_single_bit(T x) | popcnt(x) == 1
unsigned stdc_bit_width(T x) | x ? floor(log2(x)) + 1 : 0
T stdc_bit_floor(T x) | x ? (T)1 << floor(log2(x)) : 0
T stdc_bit_ceil(T x) | x ? (T)1 << ceil(log2(x)) : 1 (‡)

Inverted functions | Implementation
unsigned stdc_leading_ones(T x) | lzcnt((T)~x)
unsigned stdc_first_leading_zero(T x) | (T)~x ? lzcnt((T)~x) + 1 : 0
unsigned stdc_trailing_ones(T x) | tzcnt((T)~x)
unsigned stdc_first_trailing_zero(T x) | (T)~x ? tzcnt((T)~x) + 1 : 0
unsigned stdc_count_zeros(T x) | popcnt((T)~x)

(†) Or some third value if the execution environment is neither little endian nor big endian.

(‡) Undefined if the << overflows or if the result does not fit in T.

Where:

The inverted functions can all be implemented by inverting the input and then passing it to one of the regular functions.
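For example, a minimal sketch of getting stdc_leading_ones behaviour out of stdc_leading_zeros (requires a C23 toolchain; leading_ones_u32 is a hypothetical helper name):

#include <stdbit.h>
#include <stdint.h>

/* Count leading ones by counting the leading zeros of the complement;
   the cast keeps the complement at the intended 32-bit width. */
unsigned leading_ones_u32(uint32_t x) {
  return stdc_leading_zeros((uint32_t)~x);
}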

The stdc_first_leading_ functions are slightly slippery, and require a careful reading of the standard. For example, stdc_first_leading_one is defined as:

Returns the most significant index of the first 1 bit in value, plus 1. If it is not found, this function returns 0.

In turn, most significant index has the following unintuitive definition:

The most significant index is the 0-based index counting from the most significant bit, 0, to the least significant bit, w − 1, where w is the width of the type that is having its most significant index computed.

The initial patches for these functions in musl got this wrong, instead using the more intuitive definition of most significant index.

Linux/ELF .eh_frame from the bottom up

Problem statement

Given the current register state for a thread, and read-only access to memory, what would the register state hypothetically become if the current function was to immediately return and execution was to resume in its caller?

Notably, the register state includes (amongst other things) an instruction pointer and a stack pointer, so this operation can be iterated to generate a backtrace. With some extensions to support calling destructors and catching exceptions, this operation can also be the basis of throwing exceptions.

This operation is typically called unwinding (as in "unwinding a call frame" or "unwinding the stack").

If certain registers are always undefined immediately after a function return (because they are volatile in the ABI), then we needn't care about their state.

Portability

The size and contents of the register state depends on the architecture being used. We'll paper over this slightly by refering to registers by ordinal rather than by name, and rely on DWARF register number mappings to go between ordinal and name, for example:

Ordinal | x86 name | x86_64 name | aarch64 name
0 | eax | rax | X0
1 | ecx | rdx | X1
2 | edx | rcx | X2
3 | ebx | rbx | X3
4 | esp | rsi | X4
5 | ebp | rdi | X5
6 | esi | rbp | X6
7 | edi | rsp | X7
8 | | r8 | X8
9 | | r9 | X9
10 | | r10 | X10
11 | | r11 | X11
12 | | r12 | X12
13 | | r13 | X13
14 | | r14 | X14
15 | | r15 | X15
16 | | | X16
30 | | | X30 (LR)
31 | | | SP

Because everyone loves special cases, the instruction pointer register (e.g. eip / rip / PC) isn't given an ordinal. On aarch64, it doesn't need one: the return address is always in some register (usually X30, aka. LR), and so we can use the return address register to represent the instruction pointer. In contrast, x86 and x86_64 put the return address on the stack rather than in a register. For these, we'll invent a fake register (typically ordinal 8 on x86, ordinal 16 on x86_64), pretend that it contains the return address, and use it to represent the instruction pointer.

Floating-point and vector registers are generally volatile in Linux ABIs, so we needn't worry about them too much.

Start small: unwinding just the stack pointer

In most cases, the hypothetical stack pointer after function return can be easily calculated by adding some value to some register. This can be expressed as two DWARF-style ULEB128 values: one ULEB128 to specify which register to start with (often the current stack pointer), then one ULEB128 to specify the amount to add. As foreshadowing, we'll call this DW_CFA_def_cfa.
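Since ULEB128 values keep coming up, here is a minimal sketch of decoding one (7 bits per byte, least significant group first, with the high bit of each byte indicating continuation; decode_uleb128 is a hypothetical helper name):

#include <stdint.h>

/* Decode one ULEB128 value, advancing *p past it. */
static uint64_t decode_uleb128(const uint8_t** p) {
  uint64_t result = 0;
  unsigned shift = 0;
  uint8_t byte;
  do {
    byte = *(*p)++;
    result |= (uint64_t)(byte & 0x7f) << shift;
    shift += 7;
  } while (byte & 0x80);
  return result;
}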

Unwinding other registers

In most cases, if a function modifies a non-volatile register, it'll save it to somewhere on the stack before modifying it. Upon function return, it'll re-load the value from said stack slot. The locations of these stack slots can be expressed as being relative to the hypothetical stack pointer after function return. This can again be expressed as two DWARF-style ULEB128 values: one ULEB to specify which register we're talking about, then one ULEB to specify the offset of its stack slot. There's only one problem: because the offset is against the hypothetical stack pointer after function return, the required offset is going to be negative, which a ULEB can't represent. To get around this, we'll scale the 2nd ULEB by a so-called data_align value (typically -4 or -8), which also has the convenient side-effect of making the ULEB smaller. As foreshadowing, we'll call this DW_CFA_offset_extended. When the 1st ULEB is less than 64, we might also call this DW_CFA_offset.

More ways to unwind other registers

So far we've described forming the address of a stack slot, and then loading from it. There's a natural variation that skips the 2nd part: form the address of a stack slot, then set the value of a register to be that address.

There's another common variant: unwind register X by taking the value from register Y (when X equals Y, this means that said register was unchanged).

For when these common variants aren't sufficient, we'll later describe a little expression language and associated bytecode VM capable of describing all sorts of complicated mechanisms.

This leads to the following definitions to describe how to unwind registers:

enum StackPointerUnwindMechanism {
  Undefined,
  RegOffset(unsigned RegOrdinal, unsigned Offset),
  ExprResult(uint8_t DwOpBytecode[unsigned Len]),
};

enum OtherRegUnwindMechanism {
  Undefined,
  LoadFromStackSlot(int StackSlotOffset),
  AddressOfStackSlot(int StackSlotOffset),
  CopyFromRegister(unsigned RegOrdinal),
  LoadFromExprResult(uint8_t DwOpBytecode[unsigned Len]),
  ExprResult(uint8_t DwOpBytecode[unsigned Len]),
};

struct UnwindActions {
  StackPointerUnwindMechanism sp;
  OtherRegUnwindMechanism reg[NUM_REG];
};

With the associated unwind logic being:

struct RegState {
  uintptr_t value[NUM_REG];
};

RegState UnwindOneFrame(RegState* input, UnwindActions* actions) {
  RegState output;
  uintptr_t NewSP;
  switch (actions->sp) {
  case RegOffset(unsigned RegOrdinal, unsigned Offset):
    NewSP = input->value[RegOrdinal] + Offset;
    break;
  case ExprResult(uint8_t DwOpBytecode[unsigned Len]):
    NewSP = EvalExpr(DwOpBytecode, 0, input);
    break;
  }
  output.value[SP] = NewSP;
  for (unsigned i = 0; i < NUM_REG; ++i) {
    uintptr_t NewVal;
    switch (actions->reg[i]) {
    case LoadFromStackSlot(int StackSlotOffset):
      NewVal = *(uintptr_t*)(NewSP + StackSlotOffset);
      break;
    case AddressOfStackSlot(int StackSlotOffset):
      NewVal = NewSP + StackSlotOffset;
      break;
    case CopyFromRegister(unsigned RegOrdinal):
      NewVal = input->value[RegOrdinal];
      break;
    case LoadFromExprResult(uint8_t DwOpBytecode[unsigned Len]):
      NewVal = *(uintptr_t*)EvalExpr(DwOpBytecode, NewSP, input);
      break;
    case ExprResult(uint8_t DwOpBytecode[unsigned Len]):
      NewVal = EvalExpr(DwOpBytecode, NewSP, input);
      break;
    default:
      continue;
    }
    output.value[i] = NewVal;
  }
  return output;
}

A little bytecode VM is defined to populate an instance of UnwindActions:

Opcode | Name and operands | Semantics assuming UnwindActions* ua
0x0c | DW_CFA_def_cfa(uleb128 Reg, uleb128 Off) | ua->sp = RegOffset(Reg, Off)
0x12 | DW_CFA_def_cfa_sf(uleb128 Reg, sleb128 Off) | ua->sp = RegOffset(Reg, Off * data_align)
0x0f | DW_CFA_def_cfa_expression(uleb128 Len, uint8_t DwOpBytecode[Len]) | ua->sp = ExprResult(DwOpBytecode)
0x80 + Reg | DW_CFA_offset(uleb128 Off) | ua->reg[Reg] = LoadFromStackSlot(Off * data_align)
0x05 | DW_CFA_offset_extended(uleb128 Reg, uleb128 Off) | ua->reg[Reg] = LoadFromStackSlot(Off * data_align)
0x11 | DW_CFA_offset_extended_sf(uleb128 Reg, sleb128 Off) | ua->reg[Reg] = LoadFromStackSlot(Off * data_align)
0x2f | DW_CFA_GNU_negative_offset_extended(uleb128 Reg, uleb128 Off) | ua->reg[Reg] = LoadFromStackSlot(-(Off * data_align))
0x14 | DW_CFA_val_offset(uleb128 Reg, uleb128 Off) | ua->reg[Reg] = AddressOfStackSlot(Off * data_align)
0x15 | DW_CFA_val_offset_sf(uleb128 Reg, sleb128 Off) | ua->reg[Reg] = AddressOfStackSlot(Off * data_align)
0xc0 + Reg | DW_CFA_restore | ua->reg[Reg] = CopyFromRegister(Reg) †
0x06 | DW_CFA_restore_extended(uleb128 Reg) | ua->reg[Reg] = CopyFromRegister(Reg) †
0x08 | DW_CFA_same_value(uleb128 Reg) | ua->reg[Reg] = CopyFromRegister(Reg)
0x09 | DW_CFA_register(uleb128 Reg, uleb128 Src) | ua->reg[Reg] = CopyFromRegister(Src)
0x07 | DW_CFA_undefined(uleb128 Reg) | ua->reg[Reg] = Undefined
0x10 | DW_CFA_expression(uleb128 Reg, uleb128 Len, uint8_t DwOpBytecode[Len]) | ua->reg[Reg] = LoadFromExprResult(DwOpBytecode)
0x16 | DW_CFA_val_expression(uleb128 Reg, uleb128 Len, uint8_t DwOpBytecode[Len]) | ua->reg[Reg] = ExprResult(DwOpBytecode)
0x00 | DW_CFA_nop | No-op

† This differs from the usual DWARF semantics. To avoid confusion, use DW_CFA_same_value instead of DW_CFA_restore.

If the bytecode does not initialise sp, its value is Undefined. If the bytecode does not initialise reg[i], its value is CopyFromRegister(i).

Putting it together in a CIE

The aforementioned bytecode gets wrapped up in something called a CIE, which also includes a bunch of other fields:

struct cie {
  uint32_t length;           // In bytes, of all subsequent fields
  int32_t  zero;             // Must be zero
  uint8_t  version;          // Typically 1 or 3, sometimes 4
  char     aug_string[];     // NUL terminated
  if (aug_string[0] == 'e' && aug_string[1] == 'h') {
    void* eh_ptr;            // Only used by very old g++
  }
  if (version >= 4) {
    uint8_t addr_size;       // Must be sizeof(void*)
    uint8_t segment_size;    // Must be zero
  }
  uleb128 code_align;        // Typically 1 (even on aarch64)
  sleb128 data_align;        // Typically -4 or -8
  if (version == 1) {
    uint8_t return_address_ordinal;
  } else {
    uleb128 return_address_ordinal;
  }
  uint8_t aug_operands[];    // Relates to aug_string
  uint8_t dw_cfa_bytecode[];
  uint8_t zero_padding[];    // To give 4 or 8 byte alignment
};

The aug_string field is a NUL-terminated ASCII string. Conceptually it is a bitmask of flags, except that each flag is represented by one or two characters rather than by a single bit. If the flag has associated operand(s), they are read out of the aug_operands field, in the same order as characters appear in aug_string. The recognised flags are:

Char(s) | Operand(s) | Notes
"eh" | void* eh_ptr | Operand not part of aug_operands
"z" | uleb128 length | In bytes, of subsequent aug_operands
"R" | uint8_t fde_ptr_encoding | Foreshadowing
"P" | uint8_t ptr_encoding, uint8_t personality_ptr[] | Foreshadowing
"L" | uint8_t lsda_ptr_encoding | Foreshadowing
"S" | None | Signal handler frame
"B" | None | aarch64 ptrauth uses B key
"\0" | None | End of aug_string

"eh", if it appears, must be first. In practice, it is never present, except in code compiled by very old versions of g++. "z", if it appears, must be next, and in practice is always present. The remaining characters (except NUL) can appear in any order, though if "R" is present, and "S" and/or "B" are also present, then "R" must appear before "S" and before "B" (to work around bugs in get_cie_encoding, as compared to the correct extract_cie_info). Furthermore, if "R" is present and its operand is non-zero, then "z" must be present (again due to get_cie_encoding bugs).

The whole struct cie must have a length which is a multiple of sizeof(uintptr_t) bytes. The zero_padding field at the end ensures this. Helpfully, DW_CFA_nop is zero, so zero_padding can be seen as an extension of the dw_cfa_bytecode field. The length of the whole struct cie is 4 + length. To get the length of the combined dw_cfa_bytecode / zero_padding, the lengths of all the other fields need to be subtracted off. This is straightforward (albeit fiddly) for most fields, with the only hard case being aug_operands: its length depends on the contents of aug_string, and there may be characters in aug_string which we do not recognise. This is where "z" comes in: once we've decoded its operand, it tells us the remaining length of aug_operands.

From one location to many

Obtaining the UnwindActions applicable to one instruction pointer value is all well and good, but this needs to scale to every (*) possible instruction pointer value across an entire program. One relevant observation is that the UnwindActions applicable to an instruction pointer value X are usually very similar (or even identical) to those applicable to an instruction pointer value of X+1.

(*) Assuming -fasynchronous-unwind-tables. Less coverage is required if only -fnon-call-exceptions is used, and less still is required if neither is used. Note that -fasynchronous-unwind-tables is enabled by default on some architectures.

With this observation in mind, we can borrow a trick from video compression: use a keyframe to describe the UnwindActions applicable at the start of every function, and then use delta frames to describe the differences as the instruction pointer progresses through the function.

This motivates a bunch of new VM opcodes, along with two pieces of VM state: set target to the instruction pointer that you want UnwindActions for, set fpc to the instruction pointer of the start of the function, and execute VM instructions either until you run out of VM instructions, or until fpc >= target:

Opcode | Name and operands | Semantics
0x40 + Off | DW_CFA_advance_loc | fpc += Off * code_align; if (fpc >= target) break;
0x02 | DW_CFA_advance_loc1(uint8_t Off) | fpc += Off * code_align; if (fpc >= target) break;
0x03 | DW_CFA_advance_loc2(uint16_t Off) | fpc += Off * code_align; if (fpc >= target) break;
0x04 | DW_CFA_advance_loc4(uint32_t Off) | fpc += Off * code_align; if (fpc >= target) break;
0x01 | DW_CFA_set_loc(uint8_t Ptr[]) | fpc = DecodePtr(fde_ptr_encoding from CIE, Ptr); if (fpc >= target) break;

It also motivates two pieces of VM state called spReg and spOff, along with revisions to DW_CFA_def_cfa / DW_CFA_def_cfa_sf, and new opcodes which perform only one half of DW_CFA_def_cfa / DW_CFA_def_cfa_sf:

Opcode | Name and operands | Semantics assuming UnwindActions* ua
0x0c | DW_CFA_def_cfa(uleb128 Reg, uleb128 Off) | ua->sp = RegOffset(spReg = Reg, spOff = Off)
0x12 | DW_CFA_def_cfa_sf(uleb128 Reg, sleb128 Off) | ua->sp = RegOffset(spReg = Reg, spOff = Off * data_align)
0x0d | DW_CFA_def_cfa_register(uleb128 Reg) | spReg = Reg; ua->sp = RegOffset(spReg, spOff)
0x0e | DW_CFA_def_cfa_offset(uleb128 Off) | spOff = Off; if (ua->sp is RegOffset) ua->sp = RegOffset(spReg, spOff)
0x13 | DW_CFA_def_cfa_offset_sf(sleb128 Off) | spOff = Off * data_align; if (ua->sp is RegOffset) ua->sp = RegOffset(spReg, spOff)

Finally, it motivates adding a Stack<UnwindActions> to the VM, along with some opcodes to manipulate it:

Opcode | Name and operands | Semantics assuming UnwindActions* ua
0x0a | DW_CFA_remember_state | Push(*ua) (note that ua not modified)
0x0b | DW_CFA_restore_state | *ua = Pop()

From one function to many

We could happily have one CIE per function, but there would be quite a lot of repetition between CIEs. To resolve this, we introduce the concept of an FDE, which is essentially a stripped down CIE:

struct fde {
  uint32_t length;            // In bytes, of all subsequent fields
  int32_t  negated_cie_offset;// In bytes, from this field, to a CIE
  uint8_t  func_start[];      // Using fde_ptr_encoding from CIE
  uint8_t  func_length[];     // Using (fde_ptr_encoding & 0xF)
  uint8_t  aug_operands[];    // Relates to aug_string from CIE
  uint8_t  dw_cfa_bytecode[]; // Appended to that of CIE
  uint8_t  zero_padding[];    // To give 4 or 8 byte alignment
};

The negated_cie_offset field points to a CIE, and the FDE inherits everything from the pointed-to CIE. Note that the value is negated, i.e. the value is subtracted from &fde::negated_cie_offset to form the cie*. For inheritance of aug_string, each flag therein sometimes has operand(s) in the CIE's aug_operands, sometimes in the FDE's aug_operands, and sometimes in both. The FDE aug_operands has:

Char(s) | Operand(s) | Notes
"eh" | None |
"z" | uleb128 length | In bytes, of subsequent aug_operands
"R" | None |
"P" | None |
"L" | uint8_t lsda_ptr[] | Encoded using lsda_ptr_encoding from CIE
"S" | None |
"B" | None |
"\0" | None | End of aug_string

The dw_cfa_bytecode field is inherited by concatenation: executing the bytecode for an FDE involves first executing the bytecode of the associated CIE. All VM state is carried over from the end of the CIE bytecode to the start of the FDE bytecode, though if using DW_CFA_remember_state / DW_CFA_restore_state then the stack is emptied as part of switching from the CIE to the FDE.

That leaves func_start, which is a variable length pointer, and func_length, which is a variable length integer. If aug_string does not contain "R", then these fields are void* and uintptr_t respectively. If aug_string does contain "R", then the operand of "R" (fde_ptr_encoding) describes the size and interpretation of func_start, meanwhile fde_ptr_encoding & 0xF describes the size and interpretation of func_length. That's the segue to talking about pointer encodings.

Pointer encodings

Depending on the context, it can be desirable to encode a pointer in different ways. For example, sometimes a uintptr_t is desirable, whereas other times an int32_t offset from the start of the current function is desirable. To allow flexibility, various places allow a uint8_t to describe the pointer encoding being used. These places are:

Encoding field | Associated pointer field
cie::fde_ptr_encoding | fde::func_start
cie::fde_ptr_encoding & 0xF | fde::func_length
cie::fde_ptr_encoding | DW_CFA_set_loc operand
cie::aug_string "P" ptr_encoding | cie::aug_string "P" personality_ptr
cie::aug_string "L" | fde::aug_string "L" (LSDA pointer)
eh_frame_hdr::eh_frame_ptr_enc | eh_frame_hdr::eh_frame_ptr †
eh_frame_hdr::fde_count_enc | eh_frame_hdr::fde_count †
eh_frame_hdr::table_enc | eh_frame_hdr::sorted_table †
DW_OP_GNU_encoded_addr | DW_OP_GNU_encoded_addr ‡

† Foreshadowing.
‡ More foreshadowing.

There are two special cases for this uint8_t:

Encoding | Meaning
0xff | Encoded value is absent/empty, decode as NULL.
0x50 | Encode a uintptr_t, but precede with padding to align it.

Once the two special cases are excluded, the remaining cases split the byte into a four bit field, a three bit field, and a one bit field. The low four bits denote the data type:

Encoding & 0xF | Data type | Size in bytes
0x0 | uintptr_t | Typically either 4 or 8
0x1 | uleb128 ‡ | Varies (≥ 1)
0x2 | uint16_t | 2
0x3 | uint32_t | 4
0x4 | uint64_t | 8
0x9 | sleb128 ‡ | Varies (≥ 1)
0xA | int16_t | 2
0xB | int32_t | 4
0xC | int64_t | 8

‡ Cannot be used for fde_ptr_encoding.

The next three bits denote a base value to be added to non-NULL values:

Encoding & 0x70 | Base value
0x00 | NULL
0x10 (pcrel) | Address of first byte of encoded pointer
0x20 (textrel) | Start of .text (in theory), but usually NULL in practice
0x30 (datarel) | Start of .got (x86) or NULL (most other architectures) †
0x40 (funcrel) ‡ | fde::func_start (after decoding)

† Except in eh_frame_hdr::table_enc, where it means start of .eh_frame_hdr.
‡ Cannot be used for fde_ptr_encoding, or in eh_frame_hdr.

The final top bit controls an optional dereference:

Encoding & 0x80 | Semantics
0x00 | Treat value as void*
0x80 | Treat value as void**, dereference it to get void*

If an integer is desired rather than a pointer (as is the case for func_length and fde_count_enc), then the void* is reinterpreted as uintptr_t.
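As a rough sketch of what DecodePtr might look like (omitting the leb128 data types, the 0xff and 0x50 special cases, and error handling, and taking the relevant base addresses as parameters):

#include <stdint.h>
#include <string.h>

static uintptr_t DecodePtr(uint8_t enc, const uint8_t* p,
                           uintptr_t func_start, uintptr_t data_base) {
  uintptr_t value = 0;
  switch (enc & 0x0f) {          /* data type */
  case 0x0: memcpy(&value, p, sizeof(uintptr_t)); break;
  case 0x2: { uint16_t v; memcpy(&v, p, 2); value = v; break; }
  case 0x3: { uint32_t v; memcpy(&v, p, 4); value = v; break; }
  case 0x4: { uint64_t v; memcpy(&v, p, 8); value = (uintptr_t)v; break; }
  case 0xA: { int16_t v; memcpy(&v, p, 2); value = (uintptr_t)(intptr_t)v; break; }
  case 0xB: { int32_t v; memcpy(&v, p, 4); value = (uintptr_t)(intptr_t)v; break; }
  case 0xC: { int64_t v; memcpy(&v, p, 8); value = (uintptr_t)v; break; }
  }
  if (value != 0) {
    switch (enc & 0x70) {        /* base value */
    case 0x10: value += (uintptr_t)p; break; /* pcrel */
    case 0x30: value += data_base; break;    /* datarel */
    case 0x40: value += func_start; break;   /* funcrel */
    }                                        /* 0x00 / 0x20: no base */
  }
  if (enc & 0x80) value = *(uintptr_t*)value; /* optional dereference */
  return value;
}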

Finding all the FDEs

The various CIE and FDE structures are concatenated together, and then placed in the ELF .eh_frame section.

Traditionally, the compiler would insert a call to __register_frame somewhere during startup, passing along the address of the .eh_frame section. If non-NULL values were desired for textrel / datarel pointer encodings, it would instead insert a call to __register_frame_info_bases. At shutdown, a matching call to __deregister_frame or __deregister_frame_info_bases would be made. If there are multiple .eh_frame sections, and the linker didn't merge them, then __register_frame_table / __register_frame_info_table_bases can be used instead, which take a pointer to a NULL-terminated list of pointers to .eh_frame sections.

A more modern approach is to add an ELF .eh_frame_hdr section, and use either _dl_find_object or dl_iterate_phdr to find the appropriate .eh_frame_hdr section. Once found, the section contains a sorted table of FDE pointers:

struct eh_frame_hdr {
  uint8_t version;          // Must be 1
  uint8_t eh_frame_ptr_enc; // Typically 0x1B (pcrel int32_t)
  uint8_t fde_count_enc;    // Typically 0x03 (uint32_t)
  uint8_t table_enc;        // Must be 0x3B (datarel int32_t)
  uint8_t eh_frame_ptr[];   // Encoded with eh_frame_ptr_enc
  uint8_t fde_count[];      // Encoded with fde_count_enc
  struct {
    int32_t func_start;     // In bytes, relative to eh_frame_hdr
    int32_t fde_offset;     // In bytes, relative to eh_frame_hdr
  } sorted_table[fde_count];
};

The combined size of the eh_frame_ptr and fde_count fields must be a multiple of four (which happens naturally if eh_frame_ptr_enc and fde_count_enc both use 4 or 8 byte types). The eh_frame_ptr field contains a pointer to the .eh_frame section, which is used as fallback if the .eh_frame_hdr section is unparseable for some reason. The table_enc field contains the encoding used by the contents of sorted_table, albeit datarel means relative to the start of .eh_frame_hdr, and the only supported encoding is datarel int32_t. The table is sorted in ascending func_start order, and the func_start value therein overrides the func_start from the referenced FDE (though in practice they should be identical).
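As a sketch, finding the FDE covering a given instruction pointer is then a binary search over sorted_table for the last entry whose func_start is not greater than the instruction pointer (the struct and function names here are hypothetical, and the caller still needs to check the pointer against the FDE's func_length):

#include <stdint.h>

struct eh_frame_hdr_entry { int32_t func_start, fde_offset; };

static const uint8_t* find_fde(const uint8_t* hdr_base, uintptr_t fde_count,
                               const struct eh_frame_hdr_entry* table,
                               uintptr_t pc) {
  uintptr_t lo = 0, hi = fde_count;
  while (lo < hi) {
    uintptr_t mid = lo + (hi - lo) / 2;
    uintptr_t fstart = (uintptr_t)hdr_base + (intptr_t)table[mid].func_start;
    if (fstart <= pc) lo = mid + 1; else hi = mid;
  }
  if (lo == 0) return 0; /* pc is below the first entry */
  return hdr_base + (intptr_t)table[lo - 1].fde_offset;
}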

Personality pointers and LSDA pointers

To support calling destructors during unwinding, and to support catching exceptions (and thereby stopping unwinding), a CIE can specify a pointer to a personality function. Said function contains the destructing and/or catching logic, and will get called as part of unwinding (unless only generating a backtrace). I won't get into the specifics of the interface, though the personality function can query various parts of the unwind state:

Very old versions of g++ use eh_ptr instead of personality functions and LSDA pointers. Unless you are implementing __frame_state_for, you shouldn't need to worry about eh_ptr.

Expression bytecode

The UnwindOneFrame function from earlier included calls to EvalExpr to handle complex cases. The outline of EvalExpr is:

uintptr_t EvalExpr(uint8_t DwOpBytecode[unsigned Len],
                   uintptr_t StackInitial,
                   RegState* original) {
  const uint8_t* bpc = DwOpBytecode;
  Stack<uintptr_t> stk;
  stk.Push(StackInitial);
  while (bpc < DwOpBytecode + Len) {
    uint8_t opcode = *bpc++;
    /* Decode opcode-specific operand(s) from bpc */
    ...
    /* Perform opcode */
    ...
  }
  return stk.Pop();
}

With the various supported opcodes being:

Opcode | Name and operands | Semantics (assuming stack for Push, Pop, At)
0x03 | DW_OP_addr(uintptr_t Lit) | Push(Lit)
0x08 | DW_OP_const1u(uint8_t Lit) | Push(Lit)
0x09 | DW_OP_const1s(int8_t Lit) | Push(Lit)
0x0A | DW_OP_const2u(uint16_t Lit) | Push(Lit)
0x0B | DW_OP_const2s(int16_t Lit) | Push(Lit)
0x0C | DW_OP_const4u(uint32_t Lit) | Push(Lit)
0x0D | DW_OP_const4s(int32_t Lit) | Push(Lit)
0x0E | DW_OP_const8u(uint64_t Lit) | Push(Lit)
0x0F | DW_OP_const8s(int64_t Lit) | Push(Lit)
0x10 | DW_OP_constu(uleb128 Lit) | Push(Lit)
0x11 | DW_OP_consts(sleb128 Lit) | Push(Lit)
0x12 | DW_OP_dup | Push(At(-1)) (duplicate top element)
0x14 | DW_OP_over | Push(At(-2)) (duplicate penultimate element)
0x15 | DW_OP_pick(uint8_t Idx) | Push(At(-1-Idx))
0x13 | DW_OP_drop | Pop()
0x16 | DW_OP_swap | a = Pop(); b = Pop(); Push(a); Push(b)
0x17 | DW_OP_rot | a = Pop(); b = Pop(); c = Pop(); Push(a); Push(c); Push(b)
0x06 | DW_OP_deref | Push(*(uintptr_t*)Pop())
0x94 0x01 | DW_OP_deref_size | Push(*(uint8_t*)Pop())
0x94 0x02 | DW_OP_deref_size | Push(*(uint16_t*)Pop())
0x94 0x04 | DW_OP_deref_size | Push(*(uint32_t*)Pop())
0x94 0x08 | DW_OP_deref_size | Push(*(uint64_t*)Pop())
0x19 | DW_OP_abs | a = Pop(); Push((intptr_t)a < 0 ? -a : a)
0x1f | DW_OP_neg | a = Pop(); Push(-a)
0x20 | DW_OP_not | a = Pop(); Push(~a)
0x23 | DW_OP_plus_uconst(uleb128 Lit) | a = Pop(); Push(a + Lit)
0x1a | DW_OP_and | rhs = Pop(); lhs = Pop(); Push(lhs & rhs)
0x1b | DW_OP_div | rhs = Pop(); lhs = Pop(); Push((intptr_t)lhs / (intptr_t)rhs)
0x1c | DW_OP_minus | rhs = Pop(); lhs = Pop(); Push(lhs - rhs)
0x1d | DW_OP_mod | rhs = Pop(); lhs = Pop(); Push(lhs % rhs)
0x1e | DW_OP_mul | rhs = Pop(); lhs = Pop(); Push(lhs * rhs)
0x21 | DW_OP_or | rhs = Pop(); lhs = Pop(); Push(lhs | rhs)
0x22 | DW_OP_plus | rhs = Pop(); lhs = Pop(); Push(lhs + rhs)
0x24 | DW_OP_shl | rhs = Pop(); lhs = Pop(); Push(lhs << rhs)
0x25 | DW_OP_shr | rhs = Pop(); lhs = Pop(); Push(lhs >> rhs)
0x26 | DW_OP_shra | rhs = Pop(); lhs = Pop(); Push((intptr_t)lhs >> rhs)
0x27 | DW_OP_xor | rhs = Pop(); lhs = Pop(); Push(lhs ^ rhs)
0x29 | DW_OP_eq | rhs = Pop(); lhs = Pop(); Push(lhs == rhs)
0x2e | DW_OP_ne | rhs = Pop(); lhs = Pop(); Push(lhs != rhs)
0x2a | DW_OP_ge | rhs = Pop(); lhs = Pop(); Push((intptr_t)lhs >= (intptr_t)rhs)
0x2b | DW_OP_gt | rhs = Pop(); lhs = Pop(); Push((intptr_t)lhs > (intptr_t)rhs)
0x2c | DW_OP_le | rhs = Pop(); lhs = Pop(); Push((intptr_t)lhs <= (intptr_t)rhs)
0x2d | DW_OP_lt | rhs = Pop(); lhs = Pop(); Push((intptr_t)lhs < (intptr_t)rhs)
0x2f | DW_OP_skip(int16_t Delta) | bpc += Delta
0x28 | DW_OP_bra(int16_t Delta) | if (Pop() != 0) bpc += Delta
0x30 + Lit | DW_OP_lit | Push(Lit)
0x50 + Reg | DW_OP_reg | Push(original->value[Reg])
0x70 + Reg | DW_OP_breg(sleb128 Lit) | Push(original->value[Reg] + Lit)
0x90 | DW_OP_regx(uleb128 Reg) | Push(original->value[Reg])
0x92 | DW_OP_bregx(uleb128 Reg, sleb128 Lit) | Push(original->value[Reg] + Lit)
0x96 | DW_OP_nop | No-op
0xf1 | DW_OP_GNU_encoded_addr(uint8_t PtrEncoding, uint8_t Ptr[]) | Push(DecodePtr(PtrEncoding, Ptr))

The gcc implementation of EvalExpr assumes that the stack never holds more than 64 elements. Bad things will happen if the stack exceeds this size.

By far the most common usage of this expression language is to allow a single FDE to succinctly describe an arbitrary number of PLT entries. The serialised bytecode in this case will be something like:

0x92 0x07 0x08  DW_OP_bregx(7 /* rsp */, 8)
0x90 0x10       DW_OP_regx(16 /* rip */)
0x08 0x0f       DW_OP_const1u(15)
0x1a            DW_OP_and
0x08 0x0b       DW_OP_const1u(11)
0x2a            DW_OP_ge
0x08 0x03       DW_OP_const1u(3)
0x24            DW_OP_shl
0x22            DW_OP_plus

Which encodes the expression:

rsp + 8 + ((((rip & 15) >= 11) ? 1 : 0) << 3) 

In other words, each 16-byte PLT entry is split into two pieces: the 1st being 11 bytes long, and the 2nd being 5 bytes long. If in the 1st piece, the expression gives rsp + 8, whereas in the 2nd piece it gives rsp + 16.

References

The DWARF Standard is documented, though .eh_frame diverges from it in a few areas. It also requires architecture-specific extensions (e.g. x86, x86_64, aarch64). The LSB has a pointer encoding specification, though it is incomplete. The only true reference is the gcc source code, some relevant entry points into which are:

See also

For a completely different take on the same problem, consider ARM64 unwinding on Windows. An .xdata record therein is roughly equivalent to an FDE, and it has a bytecode interpreter similar in purpose (but very different in style) to the DW_CFA_ bytecode.

Some examples of complete .eh_frame sections are available in LuaJIT:

__VA_OPT__ Minutiae

Let's say we're writing C code, and would like to define a macro for printf-with-newline. We might start with:

#define printfln(fstr, ...) \
  printf(fstr "\n", __VA_ARGS__)

So far so good; we can write printfln("1+1 is %d", 1+1) and it'll print 1+1 is 2 followed by a newline. However, simpler cases such as printfln("Hello") result in a syntax error, as this macro expands to printf("Hello" "\n",), which is invalid due to the trailing comma.

The non-standard solution

One conventional solution is to rely on a non-standard language extension whereby inserting ## between , and __VA_ARGS__ has a special effect:

#define printfln(fstr, ...) \
  printf(fstr "\n", ## __VA_ARGS__)

With this, both printfln("1+1 is %d", 1+1) and printfln("Hello") work fine. However, combining the two into something like printfln("Wrote %d chars", printfln("Hello")) results in a mysterious error. To figure out why, we need to look into exactly what special effect this non-standard language extension has. The GNU C Preprocessor documentation on Variadic Macros says:

The ## token paste operator has a special meaning when placed between a comma and a variable argument. If you write [...] and the variable argument is left out [...], then the comma before the ## will be deleted. This does not happen if you pass an empty argument, nor does it happen if the token preceding ## is anything other than a comma.

Meanwhile, the GCC documentation on Variadic Macros says:

If the variable arguments are omitted or empty, the ## operator causes the preprocessor to remove the comma before it. If you do provide some variable arguments in your macro invocation, GNU CPP does not complain about the paste operation and instead places the variable arguments after the comma. Just like any other pasted macro argument, these arguments are not macro expanded.

The last sentence of this 2nd piece of documentation tells us what is going on: macro arguments are usually expanded before being substituted, but this doesn't happen to macro arguments used as operands to # or ##. In either case, after substitution has been performed, the resultant tokens are rescanned for more expansions, albeit with the original macro deactivated during this rescan. The only major observable difference between expansion-before-substitution and expansion-during-rescan is that the original macro is active in the former but deactivated in the latter. Hence printfln("Wrote %d chars", printfln("Hello")) expands to printf("Wrote %d chars" "\n", printfln("Hello")) without expanding the inner printfln, which the compiler then tries to parse as a call to the non-existent function printfln.

The two pieces of documentation are contradictory as to what happens if the variable arguments are present but empty (as in printfln("Hello",)). In practice the 1st piece of documentation is correct; the comma is kept if the variable arguments are present but empty.

The nor does it happen if the token preceding ## is anything other than a comma phrase from the 1st piece of documentation is interesting, as it turns out this is a slightly dynamic property: comma deletion can happen for things like , ## x ## __VA_ARGS__ provided that x expands to nothing. Clang will also delete the comma in , x ## __VA_ARGS__ when x expands to nothing. Things also seem to get funky if more pasting immediately follows , ## __VA_ARGS__. As examples:

#define F(x, ...) as | F() | F(1) | F(,) | F(1,) | F(1,2)
,##__VA_ARGS__ | empty | empty | , | , | ,2
,##__VA_ARGS__ x | empty | 1 | , | ,1 | ,2 1
,##__VA_ARGS__##x | , | error | , | error | error† or ,21
,##x##__VA_ARGS__ | empty | error | , | error | error
,x##__VA_ARGS__ | ,† or empty ‡ | ,1 | , | ,1 | ,12

† According to gcc 13.2.
‡ According to clang 17.0.1.

In the cases where gcc and clang differ, who is to say which is correct? After all, there is no standard documenting the desired behaviour for non-standard language extensions.

Enter __VA_OPT__

Rather than standardise this mess, the C and C++ languages adopted a different solution, namely __VA_OPT__. With __VA_OPT__, our motivating example looks like:

#define printfln(fstr, ...) \
  printf(fstr "\n" __VA_OPT__(,) __VA_ARGS__)

In this, __VA_OPT__(,) expands to nothing if the variable arguments are absent or empty or become empty after macro expansion, whereas it expands to , otherwise. There's no token pasting going on, so printfln("Wrote %d chars", printfln("Hello")) now works. The other behavioural difference is that the odd-looking printfln("Hello",) now expands to the valid printf("Hello" "\n"), as does printfln("Hello", EMPTY) in the context of #define EMPTY /*nothing*/.

The standard didn't stop there though; there can be more than just a , within __VA_OPT__. Any token sequence with balanced parentheses is allowed, and it can even access the arguments of the macro invocation, so for example it is valid to have:

#define M(x, ...) (0 __VA_OPT__(-(x)) )

Then M(1) expands to (0) and M(1,2) expands to (0 - (1)).

What about whitespace?

Compilers seem to disagree on the behaviour of whitespace just after __VA_OPT__( or just before the matching ). Consider:

#define TILDAS(...) ~__VA_OPT__( ~ )~
#define S2(x) #x
#define S1(x) S2(x)
const char* s = S1(TILDAS());
const char* sa = S1(TILDAS(a));

The observed results are:

 | gcc 13.2 | clang 17.0.1
s | "~~" | "~ ~"
sa | "~~~" | "~ ~ ~"

One interpretation is that gcc is correct, based on this paragraph from the standard: (N3096 6.10.4.1¶7)

The preprocessing token sequence for [...] a va-opt-replacement [...] consists of the results of the expansion of the contained pp-tokens as the replacement list of the current function-like macro before removal of placemarker tokens, rescanning, and further replacement.

Combined with an earlier paragraph about replacement lists: (N3096 6.10.4¶7)

[...] Any white-space characters preceding or following the replacement list of preprocessing tokens are not considered part of the replacement list for either form of macro.

What does __VA_OPT__() expand to?

The short answer should be nothing, based on the rules for expanding it:

  1. If the variable arguments are absent or empty or become empty after macro expansion, the expansion of __VA_OPT__() is a single placemarker token.
  2. Otherwise, if used as an operand of ##, the expansion of __VA_OPT__() is a single placemarker token (because an empty expansion becomes a placemarker in this context).
  3. Otherwise, the expansion of __VA_OPT__() is empty (though a single placemarker token would be equally valid, as it'll evaporate away in due course).

#__VA_OPT__() therefore becomes an obfuscated way of writing "". Meanwhile, __VA_OPT__() as an operand of ## can be used to force the other operand to be in a ## context. For example, the undesirable behaviour of , ## __VA_ARGS__ can be reproduced via:

#define printfln(fstr, ...) \
  printf(fstr "\n" __VA_OPT__(,) __VA_OPT__() ## __VA_ARGS__)

If __VA_OPT__() expands to nothing, regardless of whether the variable arguments are absent or empty or become empty after macro expansion, then it might seem rational to optimise it away. There's a corner case though: determining whether the variable arguments become empty after macro expansion requires macro expanding them, and macro expansion of the (non-standard) __COUNTER__ macro has visible side-effects. Consider:

#define EMPTY_VA_OPT(...) __VA_OPT__()
int x = __COUNTER__;
EMPTY_VA_OPT(__COUNTER__)
int y = __COUNTER__;
return y - x;

For the above, gcc 13.2 returns 2, whereas clang 17.0.1 returns 1, indicating that clang optimised away the __VA_OPT__().

Token pasting and __VA_OPT__

The standard is careful to say that the expansion of __VA_OPT__ can contain placemarker tokens: (N3096 6.10.4.1¶7, emphasis mine)

The preprocessing token sequence for [...] a va-opt-replacement [...] consists of the results of the expansion of the contained pp-tokens as the replacement list of the current function-like macro before removal of placemarker tokens, rescanning, and further replacement.

These placemarker tokens were originally a fiction dreamt up by the standard to provide a succinct description of the semantics of ##:

Before __VA_OPT__, it was possible to ignore this fiction, and implement the preprocessing algorithm with careful rules around ## evaluation rather than producing and later removing placemarker tokens. It becomes harder to maintain this fiction with __VA_OPT__. Consider:

#define G(x, y, z, ...) 1 ## __VA_OPT__(x y ## y z) ## 5

Then G(,,) expands to the single token 15, while G(2,3,4,-) expands to the three tokens 12 33 45. If y is empty, then the inner y ## y should produce a placemarker, and then things get interesting:

Expansion of | Result | Notes
G(2,,4,-) | 12 45 | The __VA_OPT__ expands to 24.
G( ,,4,-) | 1 45 | The placemarker inhibits merging to 145.
G(2,, ,-) | 12 5 | The placemarker inhibits merging to 125.
G( ,, ,-) | 1 5† or 15‡ | The correct result is the merged 15.

† According to gcc 13.2.
‡ According to clang 17.0.1.

The other fun observation is that macro arguments within __VA_OPT__ get macro expanded, even if the __VA_OPT__ itself is an operand of ##. Consider:

#define H1(...) x ##            __VA_ARGS__
#define H2(...) x ## __VA_OPT__(__VA_ARGS__)

Then H1(__LINE__) expands to x__LINE__, whereas H2(__LINE__) expands to something like x3.

Further reading

Higher quality random floats

Much has been written about how to generate random 64-bit integers; people might argue over which particular (CS)PRNG is best suited for a given task, but there is agreement on the general shape of the solution: keep some state around, and then extract 64 bits of randomness at a time, where each bit is equally likely to be either zero or one. This gives nice uniformly-distributed integers over the range [0, 2^64).

There is less agreement on how to generate random floating-point values. It is tempting to aim for what looks like a simple transformation of an integer generator: uniformly-distributed floats over the range [0, 1). There are various ways of doing such a transformation; one exposition does it as:

double pcg64_random_double(pcg64_t *rng) {
    return (double)(pcg64_random(rng) >> 11) * 0x1.0p-53;
}

Whereas LuaJIT does it as:

uint64_t lj_prng_u64d(PRNGState *rs) {
    uint64_t z, r = 0;
    TW223_STEP(rs, z, r)
    /* Returns a double bit pattern in the range 1.0 <= d < 2.0. */
    return (r & 0x000fffffffffffffull) | 0x3ff0000000000000ull;
}
/* Then to give d in [0, 1) range: */
U64double u;
double d;
u.u64 = lj_prng_u64d(rs);
d = u.d - 1.0;

And a Go version by Lemire does it as:

// toFloat64 -> [0,1)
func toFloat64(seed *uint64) float64 {
    x := splitmix64(seed)
    x &= 0x1fffffffffffff // %2**53
    return float64(x) / float64(0x1fffffffffffff)
}

The [0, 1) output range is less useful than it might initially appear. It is often stated that [0, 1) can be transformed to [A, B) by multiplying by B-A then adding A, but this is only true for some values of A and B; for other values, a number just less than 1 can be transformed and end up equal to B due to floating-point rounding, i.e. [0, 1) is transformed to [A, B]. The possibility of a zero output is also unhelpful if the result is going to be log-transformed (for example as part of a Box-Muller transform), as log(0) explodes.

An output range of [0, 1] would be better behaved under most additive and multiplicative transforms. After such a transform, a range like [A, B] can be easily turned into [A, B) via rejection sampling, should the half-open range be desired. It will also turn out to be very easy (on either theoretical or practical grounds) to exclude 0 as a possible output, giving the log-friendly output range (0, 1].
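For example, a minimal sketch of that rejection step (rand01 is a hypothetical generator returning uniform doubles over a closed range like [0, 1]):

double rand01(void); /* assumed: uniform over the closed range [0, 1] */

/* Turn the closed range [a, b] into the half-open [a, b) by re-drawing
   whenever rounding lands the result on the excluded endpoint. */
double rand_half_open(double a, double b) {
  for (;;) {
    double d = a + (b - a) * rand01();
    if (d < b) return d;
  }
}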

Before trying to generate a random float in the range [0, 1], we should instead consider the easier problem of generating a random float in the range [0.5, 1]. There are 2^52 + 1 different IEEE-754 doubles in this range. The "rounding basin" of some double d in that range can be defined as the range of infinite-precision real numbers in the range [0.5, 1] that would round to d (assuming round-to-nearest). Taking ε=2^-54, most of these doubles have a basin of d-ε through d+ε. The exceptions are the extremes of the range; 0.5 has a basin of 0.5 through 0.5+ε, and 1 has a basin of 1-ε through 1. In other words, there are 2^52 - 1 values in the range whose basin is 2ε wide, and 2 values in the range whose basin is only ε wide. For a uniform distribution, the probability of seeing d should equal the width of the rounding basin of d. Given all this, there is a fairly simple monotonic transform from the uniform integer range [0, 2^53) to the uniform float range [0.5, 1]:

Integer(s) | Resultant floating-point double (ε=2^-54)
0 | 0.5
1, 2 | 0.5 + 2ε
3, 4 | 0.5 + 4ε
5, 6 | 0.5 + 6ε
2^53 - 7, 2^53 - 6 | 1 - 6ε
2^53 - 5, 2^53 - 4 | 1 - 4ε
2^53 - 3, 2^53 - 2 | 1 - 2ε
2^53 - 1 | 1

This transform can be expressed in code like so:

double rand_between_half_and_one() {
    double d;
    uint64_t x = rand_u64() >> 11; /* 53-bit uniform integer */
    x = ((x + 1) >> 1) + (1022ull << 52);
    memcpy(&d, &x, sizeof(d));
    return d;
}

The range [0.25, 0.5] looks a lot like the range [0.5, 1], except that ε reduces to 2^-55. That is, there is a fairly simple monotonic transform from the uniform integer range [0, 2^53) to the uniform float range [0.25, 0.5]:

Integer(s) | Resultant floating-point double (ε=2^-55)
0 | 0.25
1, 2 | 0.25 + 2ε
3, 4 | 0.25 + 4ε
5, 6 | 0.25 + 6ε
2^53 - 7, 2^53 - 6 | 0.5 - 6ε
2^53 - 5, 2^53 - 4 | 0.5 - 4ε
2^53 - 3, 2^53 - 2 | 0.5 - 2ε
2^53 - 1 | 0.5

Or in code:

double rand_between_quarter_and_half() {
    double d;
    uint64_t x = rand_u64() >> 11; /* 53-bit uniform integer */
    x = ((x + 1) >> 1) + (1021ull << 52);
    memcpy(&d, &x, sizeof(d));
    return d;
}

In turn, the range [0.125, 0.25] looks a lot like [0.25, 0.5], and [0.0625, 0.125] looks a lot like [0.125, 0.25], and so on. To generate a float in [0, 1], we need to choose one of these ranges, and then choose a value within that range. For a uniform distribution, the probability of a range should be proportional to the width of the range, so we have:

Output range | Width | Probability
[0.5, 1] | 2^-1 | 50%
[0.25, 0.5] | 2^-2 | 25%
[0.125, 0.25] | 2^-3 | 12.5%
[0.0625, 0.125] | 2^-4 | 6.25%

As the width of the range approaches zero, so does the probability of the range. Once the width is less than 2^-75, the probabilities are so small as to be effectively impossible for most practical purposes. By making them actually impossible, we gain a nice guarantee of algorithm termination, and avoid having to think about denormals, and get the log-friendly output range (0, 1]. One way of implementing this in code is:

double rand_between_zero_and_one() {
    double d;
    uint64_t x = rand_u64() >> 11; /* 53-bit uniform integer */
    uint64_t e = 1022;
    do {
      if (rand_u64() & 1) break; /* 1-bit uniform integer */
      e -= 1;
    } while (e > 1022-75);
    x = ((x + 1) >> 1) + (e << 52);
    memcpy(&d, &x, sizeof(d));
    return d;
}

The above throws away lots of random bits, whereas it would be more economical to remember and use them later:

double rand_between_zero_and_one() {
    double d;
    uint64_t x = rand_u64(); /* 53-bit and 11-bit uniform integers */
    uint64_t bits = x & 0x7ff, nbits = 11;
    uint64_t e = 1022;
    do {
      if (bits & 1) break; /* 1-bit uniform integer */
      bits >>= 1;
      if (--nbits == 0) bits = rand_u64(), nbits = 64;
      e -= 1;
    } while (e > 1022-75);
    x = (((x >> 11) + 1) >> 1) + (e << 52);
    memcpy(&d, &x, sizeof(d));
    return d;
}

Finally, the loop can be eliminated using bit-counting intrinsics:

double rand_between_zero_and_one() {
    double d;
    uint64_t x = rand_u64();
    uint64_t e = __builtin_ctzll(x) - 11ull;
    if ((int64_t)e >= 0) e = __builtin_ctzll(rand_u64());
    x = (((x >> 11) + 1) >> 1) - ((e - 1011ull) << 52);
    memcpy(&d, &x, sizeof(d));
    return d;
}

Or in Go rather than C:

// toFloat64 -> (0,1]
func toFloat64(seed *uint64) float64 {
    x := splitmix64(seed)
    e := bits.TrailingZeros64(x) - 11
    if e >= 0 {
        e = bits.TrailingZeros64(splitmix64(seed))
    }
    x = (((x >> 11) + 1) >> 1) - ((uint64(int64(e)) - 1011) << 52)
    return math.Float64frombits(x)
}

With that exposition done, I can now present my criteria for assessing the quality of functions that claim to generate uniformly-distributed IEEE-754 double-precision floating-point values over (0, 1] or [0, 1] or [0, 1):

Criteria | Most functions | Presented here | Ideal
Probability of zero | 2^-52 or 2^-53 | 0 | 2^-1075
Smallest non-zero | 2^-52 or 2^-53 | 2^-76 | 2^-1074
Largest non-seeable | 1 - 2^-53 or 0.5 - 2^-54 | 2^-76 - 2^-129 | -0
Distinct values | 2^52 or 2^53 | 2^58.2 | 2^62 ± 1

If you've reached this far, then you might be interested in similar commentary from Taylor R. Campbell or Allen B. Downey or @moon_chilled.
