Seeing the raft for the trees

The Raft distributed consensus protocol is normally seen as having an array of log entries. There is, however, a generalisation of Raft which replaces the array of log entries with a tree of nodes; (standard) Raft can then be seen as maintaining a very simple tree that contains no branches. The resultant protocol can be called Tree Raft, and is very similar in spirit to Chained Raft. This post describes Tree Raft and contrasts it with standard Raft.

In standard Raft, the log is an array, indexed by uint64_t index, with each array element being a log entry:

struct LogEntry {
  uint64_t term;
  string contents;
};

In Tree Raft, the log is a tree. The tree can be stored in a map/dictionary whose key type is NodeRef and value type is Node:

struct NodeRef {
  uint64_t index;
  uint64_t term;
};
struct Node {
  string contents;
  NodeRef parent;
};

There are two restrictions on the parent field: if Node n has key (i, t), then n.parent.index == i - 1 and n.parent.term <= t. The first restriction allows an optimisation: parent.index does not need to be stored; only parent.term needs to be stored.

There are two obvious orderings that can be defined on NodeRef: by ascending index then ascending term (lexicographic), and by ascending term then ascending index (transposed lexicographic). Both orderings turn out to be useful in different places. Note that a node comes after its parent in both orderings.
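As a minimal sketch, the two orderings can be written out as comparators (these definitions are mine; lexicographic order is given to operator< so that it can double as the std::map key order in later sketches):

bool operator<(const NodeRef& a, const NodeRef& b) { // lexicographic
  return a.index != b.index ? a.index < b.index : a.term < b.term;
}
bool operator==(const NodeRef& a, const NodeRef& b) {
  return a.index == b.index && a.term == b.term;
}
struct TransposedLess { // transposed lexicographic, as used for votes later
  bool operator()(const NodeRef& a, const NodeRef& b) const {
    return a.term != b.term ? a.term < b.term : a.index < b.index;
  }
};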

As an optimisation, a Node and (some number of) its transitive parents can be represented as an array of LogEntry plus the parent term of the oldest Node. This optimised representation is the only representation in standard Raft, and it appears twice: the per-server log takes this form (with the parent term of the oldest Node being lastIncludedTerm in the InstallSnapshot RPC), and the AppendEntries RPC takes this form (with the parent term of the oldest Node being prevLogTerm). A Tree Raft implementation could use this optimised form for committed nodes, and only use the more expensive map/dictionary representation for uncommitted nodes.
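Spelled out in the style of the structs above (the field names, and the choice to store index_of_oldest explicitly, are mine):

struct NodeChain {
  uint64_t index_of_oldest;       // often implied by context rather than stored
  uint64_t parent_term_of_oldest; // prevLogTerm / lastIncludedTerm in standard Raft
  vector<LogEntry> entries;       // entries[k] has index index_of_oldest + k
};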

Note that a NodeRef identifies a single Node, but because every node identifies its parent, a NodeRef implicitly identifies an entire chain of nodes. This is similar to a commit hash in git: a hash identifies a single commit, but a commit references its parent commit(s), and thus a single hash identifies an entire graph of commits leading to that point. Standard Raft describes this as the Log Matching property: if two logs contain an entry with the same index and term, then the logs are identical in all entries up through the given index.

A Tree Raft server maintains two cursors into the tree: NodeRef commit and NodeRef head. The NodeRef commit cursor replaces the uint64_t commitIndex from standard Raft. The NodeRef head cursor replaces "last log entry" everywhere that it comes up: the RequestVote RPC contains the candidate's NodeRef head, and other servers are willing to vote for candidates whose NodeRef head is greater than or equal to theirs (using transposed lexicographic order). The leader is able to create a new node by using the key (head.index + 1, currentTerm) and then setting its head to this new node.

For every follower, there will be a path in the tree from the follower's NodeRef head to the leader's NodeRef head, and one objective of a follower is to move its head cursor along this path toward the leader's head cursor (note that the path might involve going up the tree before going back down). Similarly, for every follower, there will be a path in the tree from the follower's NodeRef commit to the leader's NodeRef commit, and another objective of a follower is to move its commit cursor along this path toward the leader's commit cursor (crucially, this path is always downwards and never backtracks). The leader is responsible for moving its commit cursor toward its head cursor when it is safe to do so, and the safety constraint here is very similar to standard Raft: a strict majority of servers must have their head.term equal to the leader's currentTerm, and within this majority, the minimum head.index gives the safe commit point.

On every server, the commit cursor will either be equal to the head cursor, or be an ancestor of the head cursor. This permits an optimisation: only the index of the commit cursor needs to be stored, as its term can be reconstructed by finding the ancestor of head with the given index. Standard Raft contains this optimisation, as it only stores uint64_t commitIndex. It can be seen that standard Raft is just Tree Raft where a server stores the chain of nodes between the root and its head (a very simple non-branching tree) as its log, and does not store any other nodes.
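As a sketch of the leader-side commit rule above (the function is mine, it ignores how follower heads are tracked, and a real leader would also never move its commit cursor backwards):

#include <algorithm>
#include <cstdint>
#include <functional>
#include <vector>

// Given every server's latest known head (including the leader's own),
// return the highest index that is safe to commit, or 0 if none yet: a
// strict majority must have head.term == currentTerm, and the minimum
// head.index within that majority is the safe point.
uint64_t safe_commit_index(const std::vector<NodeRef>& heads, uint64_t currentTerm) {
  std::vector<uint64_t> indices;
  for (const NodeRef& h : heads) {
    if (h.term == currentTerm) indices.push_back(h.index);
  }
  if (indices.size() * 2 <= heads.size()) return 0; // no strict majority yet
  std::sort(indices.begin(), indices.end(), std::greater<uint64_t>());
  return indices[heads.size() / 2]; // minimum index within a strict majority
}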

Tree Raft replaces the AppendEntries RPC with an AddNodes RPC. The prevLogIndex, prevLogTerm, entries[], and leaderCommit arguments are replaced by the leader's NodeRef head, pair<NodeRef, Node> nodes[], and the leader's NodeRef commit. Note that these arguments can be optimised somewhat (only including commit.index and not commit.term, using the array of LogEntry representation for nodes, not including head if the last nodes entry is the head, not including the term of nodes when equal to the leader's currentTerm, and so forth). After processing the RPC, followers respond with their currentTerm and their (possibly moved) NodeRef head. The receiver implementation is:

  1. If term < currentTerm, finish processing.
  2. Add all the nodes to your map.
  3. If there is a path from your NodeRef head to the leader's NodeRef head, set your head to the leader's.
  4. If there is a path from your NodeRef commit to the leader's NodeRef commit, and the leader's commit either equals your head or is an ancestor of your head, set your commit to the leader's (this can be deferred until after the response has been sent, and the movement can be done one step at a time).
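In code, the four steps might look something like the following sketch, reusing the NodeRef and Node definitions (and lexicographic operator<) from earlier; has_path and is_ancestor are assumed helpers, with one possible implementation of has_path appearing later in this post:

#include <cstdint>
#include <map>
#include <utility>
#include <vector>

bool has_path(std::map<NodeRef, Node>& tree, NodeRef a, NodeRef b);
bool is_ancestor(std::map<NodeRef, Node>& tree, NodeRef anc, NodeRef ref);

// Mutates the follower's tree / head / commit in place. For simplicity,
// step 4's movement is done in one go rather than deferred or stepwise.
void on_add_nodes(std::map<NodeRef, Node>& tree, uint64_t currentTerm,
                  NodeRef& head, NodeRef& commit, uint64_t term,
                  NodeRef leaderHead, NodeRef leaderCommit,
                  const std::vector<std::pair<NodeRef, Node>>& nodes) {
  if (term < currentTerm) return;                                 // step 1
  for (const auto& kv : nodes) tree.emplace(kv.first, kv.second); // step 2
  if (has_path(tree, head, leaderHead)) head = leaderHead;        // step 3
  if (has_path(tree, commit, leaderCommit) &&                     // step 4
      (leaderCommit == head || is_ancestor(tree, leaderCommit, head))) {
    commit = leaderCommit;
  }
}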

Note that if a follower misses an AddNodes RPC (due to packet loss, or leader failure, or late startup), then the nodes added in step 2 might not have a path back to the root; that is, there'll be a node in the map whose parent is not in the map. In this case, steps 3 and 4 will not be able to find a path to the new nodes. In a departure from standard Raft, in Tree Raft, followers are responsible for identifying these missing parent nodes and issuing Replay RPCs (against a randomly chosen server) in an attempt to obtain them. Once obtained, steps 3 and 4 can be retried.

This has a number of nice properties: leaders no longer need to track nextIndex for each follower, the replay workload is shared between followers rather than being yet another task that the leader is responsible for, and the AddNodes RPC can be a UDP multicast to all followers. Furthermore, as nodes are not removed from the map as part of processing an AddNodes RPC, there are slightly nicer forward progress guarantees during leader changeover. As yet another nice property, if implementing the PreVote extension (which is necessary for liveness), then PreVote rejects can carry the (server's latest view of) the leader's currentTerm and NodeRef head and NodeRef commit, and then the receiver of the reject can treat this like an AddNodes and then replay the missing nodes (note that currentTerm is monotonic, and within a term, the leader's head and commit are also monotonic, hence stale views of this triplet can be arbitrated). Consequently, followers can keep up with the leader even when direct communication between the leader and the follower is not possible, avoiding the need for a long recovery process once direct communication is restored.

As nodes are not removed from the map as part of the AddNodes RPC, it is worth considering when nodes do get removed. A server cannot remove its head node, nor any ancestor of its head node (except when moving these nodes into a snapshot), but it is safe to remove any other node at any time. Standard Raft takes a very aggressive policy: when head moves backward (as part of moving it toward the common parent of the follower's head and the leader's head, i.e. moving it along the path in the tree from follower's head to leader's head), immediately remove everything that can be removed. A very lax policy is also possible: never remove nodes. A good middle ground is to remove during commitment: when committing a node, remove any other nodes which have the same index but a different term, and when removing a node, also remove its children (recursively). This fits very nicely with using the optimised array representation for committed nodes. If taking this middle ground, then it becomes attractive to store the uncommitted nodes in a sorted map, with nodes in lexicographic order, as then nodes with the same index can be found with a range scan, and the children of a node can also be found with a range scan (find all nodes with index + 1 using range scan, then filter based on their parent.term).
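The range scans can be sketched as follows (assuming uncommitted nodes live in a std::map keyed by the lexicographic operator< from earlier):

#include <map>

using Tree = std::map<NodeRef, Node>; // lexicographic NodeRef order

// Remove a node's descendants: children all have index root.index + 1, so a
// range scan finds the candidates, and parent.term identifies actual children.
void remove_subtree(Tree& tree, NodeRef root) {
  auto it = tree.lower_bound(NodeRef{root.index + 1, 0});
  while (it != tree.end() && it->first.index == root.index + 1) {
    if (it->second.parent.term == root.term) {
      NodeRef child = it->first;
      it = tree.erase(it);
      remove_subtree(tree, child); // only erases nodes at deeper indices
    } else {
      ++it;
    }
  }
}

// When committing a node, remove any siblings (same index, different term)
// along with their descendants.
void on_commit(Tree& tree, NodeRef committed) {
  auto it = tree.lower_bound(NodeRef{committed.index, 0});
  while (it != tree.end() && it->first.index == committed.index) {
    if (it->first.term != committed.term) {
      NodeRef victim = it->first;
      it = tree.erase(it);
      remove_subtree(tree, victim);
    } else {
      ++it;
    }
  }
}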

To efficiently answer "is there a path between node X and node Y" queries, it can be useful for each server to store an additional NodeRef furthest_ancestor field on every Node. Initially, furthest_ancestor is set to parent, but when it is discovered that n.furthest_ancestor exists in the map, n.furthest_ancestor is replaced by lookup(n.furthest_ancestor).furthest_ancestor (giving a union-find-like behaviour). If two nodes have the same furthest_ancestor then a path exists between them. As an optimisation, furthest_ancestor does not need to be stored for committed nodes, as it'll always refer to the root for them. Furthermore, furthest_ancestor precisely identifies the node which needs to be the target of a Replay RPC.
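In code, the lookup might be (a sketch which compresses all the way in one go, whereas the description above takes single steps; it assumes Node has been extended with the furthest_ancestor field):

#include <map>

NodeRef find_furthest(std::map<NodeRef, Node>& tree, NodeRef ref) {
  Node& n = tree.at(ref);
  if (tree.count(n.furthest_ancestor)) {
    // Union-find-like compression: jump to the ancestor's furthest ancestor.
    n.furthest_ancestor = find_furthest(tree, n.furthest_ancestor);
  }
  return n.furthest_ancestor;
}

bool has_path(std::map<NodeRef, Node>& tree, NodeRef a, NodeRef b) {
  // Two nodes are connected exactly when they share a furthest ancestor.
  return find_furthest(tree, a) == find_furthest(tree, b);
}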

Incrementally snapshotable data structures

A fixed-length array (of length N) supports two O(1) operations (list A):

  1. Retrieve the element at index i (where 0 ≤ i < N).
  2. Set the element at index i (where 0 ≤ i < N) to some new value.

An incrementally snapshotable fixed-length array (of length N) supports two more O(1) operations (list A, continued):

  3. Take a snapshot of all N elements.
  4. Drain some of the snapshot to a sink.

Given the O(1) requirement, it shouldn't be surprising that operation A4 only drains some of the snapshot. It needs to be called O(N) times to fully drain the snapshot to the sink, but this is fine, as the entire snapshot of N elements is taken atomically as operation A3, and other operations can be performed between draining steps.

The only non-obvious part is how to implement operation A3 in O(1) time, as a naïve implementation would take O(N) time. To make A3 possible in O(1) time, some of the work needs to be pushed elsewhere (to A2 and A4), and a number of requirements need to be imposed (list B):

  1. At most one snapshot can exist at any time.
  2. Once a snapshot has been taken, it must be fully drained before the next snapshot is taken.
  3. The underlying data structure must be incrementally iterable (this is trivial for arrays).
  4. Given the iterator cursor from B3, and an element being mutated by operation A2, it needs to be possible to determine on which side of the cursor the element falls (also trivial for arrays).
  5. The sink must be willing to receive the snapshot contents in any order.
  6. The sink must be willing to receive snapshot contents at any time.
  7. An additional one bit of mutable storage must be available for each element in the data structure.

Some of these requirements (B1, B2, B5, and B6) can be relaxed slightly (at a cost elsewhere), but we'll go with the full set of requirements for now, as practical use-cases exist which are compatible with all of these requirements. Given all these requirements, the iteration cursor from B3 splits the underlying data structure into two pieces, which I'll call visited and pending. The additional one bit per element from B7 I'll call already-visited, and impose the invariant that already-visited is false for elements in the visited part of the data structure (whereas it can be either true or false for elements in the pending part). The list A operations are then implemented as (list C):

  1. Retrieve the element at index i (where 0 ≤ i < N): exactly the same as a fixed-length array without snapshots.
  2. Set the element at index i (where 0 ≤ i < N) to some new value: if element i is in the pending part of the data structure, and already-visited is false, then before setting the element, send the previous value of the element to the sink and set already-visited to true. After this, proceed exactly the same as a fixed-length array without snapshots.
  3. Take a snapshot of all N elements: Set the B3 iteration cursor to one end of the data structure, such that the entire structure is pending.
  4. Drain some of the snapshot to a sink: Advance the B3 iteration cursor by one element. If that element was already-visited, then set already-visited to false. Otherwise, send the element to the sink.

There is a choice in C3 as to whether to set the cursor to the very start of the structure, or to the very end of the structure. This then impacts C4: the iteration is either forwards (if starting from the start) or backwards (if starting from the end). Forwards iteration is more natural, but for data structures whose mutations often happen at the end, backwards iteration can reduce the number of already-visited elements, which can improve efficiency. Requirement B2 (once a snapshot has been taken, it must be fully drained before the next snapshot is taken) ensures that the cursor can be set to either of these points without having any simultaneously visited and already-visited elements, as C4 ensures that already-visited is false everywhere after the previous draining is complete.

C++ happens to give the name operator[] to both operation A1 (retrieve) and A2 (set), albeit with different const specifiers. This point of confusion aside, the implementation of a fixed-length array without snapshots is trivial:

#include <cassert> // assert
#include <cstddef> // size_t

template <typename T, size_t N>
struct fixed_length_array {
  T _elems[N];

  const T& operator[](size_t i) const { // A1
    assert(("index should be within bounds", i < N));
    return _elems[i];
  }
  T& operator[](size_t i) { // A2
    assert(("index should be within bounds", i < N));
    return _elems[i];
  }
};

It can be extended to an incrementally snapshotable fixed-length array like so:

template <typename T, size_t N, typename Sink>
struct fixed_length_array_with_snapshot : fixed_length_array<T, N> {
  bool _already_visited[N] = {};
  Sink _sink = nullptr;
  size_t _cursor = 0;

  const T& operator[](size_t i) const { // A1
    return fixed_length_array<T, N>::operator[](i);
  }
  T& operator[](size_t i) { // A2
    if (i < _cursor && !_already_visited[i]) {
      _already_visited[i] = true;
      (*_sink)[i] = this->_elems[i];
    }
    return fixed_length_array<T, N>::operator[](i); 
  }

  void take_snapshot(Sink sink) { // A3
    assert(("requirement B2", _sink == nullptr));
    assert(("sink should not be null", sink != nullptr));
    _sink = sink;
    _cursor = N;
  }
  bool drain_snapshot() { // A4
    assert(("snapshot not yet taken", _sink != nullptr));
    if (_cursor == 0) {
      _sink = nullptr;
      return true; // finished draining
    } else if (_already_visited[--_cursor]) {
      _already_visited[_cursor] = false;
      return false;
    } else {
      (*_sink)[_cursor] = this->_elems[_cursor];
      return false;
    }
  }
};

An example usage is:

fixed_length_array<int, 3> sink;
fixed_length_array_with_snapshot<int, 3, decltype(&sink)> arr;
arr[0] = 10;
arr[1] = 20;
arr[2] = 30;
arr.take_snapshot(&sink);
arr[1] = 25;
do {} while (arr.drain_snapshot() == false);
std::cout << arr[0] << " " << arr[1] << " " << arr[2] << "\n";
std::cout << sink[0] << " " << sink[1] << " " << sink[2] << "\n";
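If run, the first line printed is 10 25 30 (the live array) and the second is 10 20 30 (the snapshot): index 1's old value of 20 was captured by the mutation hook in A2, while indices 0 and 2 were sent to the sink during draining.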

Requirement B6 (the sink must be willing to receive snapshot contents at any time) can be relaxed by introducing a buffer, with operation A2 pushing into this buffer when the sink is not willing to receive contents, and operation A4 popping from the buffer (if non-empty) rather than advancing the cursor. This gives a slightly different implementation:

#include <utility> // std::pair
#include <vector>  // std::vector

template <typename T, size_t N>
struct fixed_length_array_with_snapshot : fixed_length_array<T, N> {
  bool _already_visited[N] = {};
  std::vector<std::pair<size_t, T>> _buffer;
  size_t _cursor = 0;

  const T& operator[](size_t i) const { // A1
    return fixed_length_array<T, N>::operator[](i);
  }
  T& operator[](size_t i) { // A2
    if (i < _cursor && !_already_visited[i]) {
      _already_visited[i] = true;
      _buffer.emplace_back(i, this->_elems[i]);
    }
    return fixed_length_array<T, N>::operator[](i); 
  }

  void take_snapshot() { // A3
    _cursor = N;
  }
  template <typename Sink>
  bool drain_snapshot(Sink sink) { // A4
    if (!_buffer.empty()) {
      (*sink)[_buffer.back().first] = _buffer.back().second;
      _buffer.pop_back();
      return false;
    } else if (_cursor == 0) {
      return true; // finished draining
    } else if (_already_visited[--_cursor]) {
      _already_visited[_cursor] = false;
      return false;
    } else {
      (*sink)[_cursor] = this->_elems[_cursor];
      return false;
    }
  }
};

And slightly different example usage:

fixed_length_array<int, 3> sink;
fixed_length_array_with_snapshot<int, 3> arr;
arr[0] = 10;
arr[1] = 20;
arr[2] = 30;
arr.take_snapshot();
arr[1] = 25;
do {} while (!arr.drain_snapshot(&sink));
std::cout << arr[0] << " " << arr[1] << " " << arr[2] << "\n";
std::cout << sink[0] << " " << sink[1] << " " << sink[2] << "\n";

From here, requirement B5 (the sink must be willing to receive the snapshot contents in any order) can be relaxed by changing the data structure of the buffer: rather than a first-in-last-out stack, use a tree (as in binary tree or btree) data structure, and have A4 perform dual iteration (using the same cursor) of the underlying data structure and the buffer.
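One possible realization of this for the fixed-length array is sketched below (names are mine): buffer membership takes over the role of the already-visited bit, and the sink receives elements in strictly descending index order.

#include <cstddef>
#include <map>

template <typename T, size_t N>
struct fixed_length_array_with_ordered_snapshot : fixed_length_array<T, N> {
  std::map<size_t, T> _buffer; // snapshot-time values of mutated elements
  size_t _cursor = 0;

  T& operator[](size_t i) { // A2 (A1 is unchanged)
    if (i < _cursor && !_buffer.count(i)) {
      _buffer.emplace(i, this->_elems[i]);
    }
    return fixed_length_array<T, N>::operator[](i);
  }
  void take_snapshot() { // A3
    _cursor = N;
  }
  template <typename Sink>
  bool drain_snapshot(Sink sink) { // A4: dual iteration via the same cursor
    if (_cursor == 0) return true; // finished draining
    --_cursor;
    auto it = _buffer.find(_cursor);
    if (it != _buffer.end()) {
      (*sink)[_cursor] = it->second; // value captured at mutation time
      _buffer.erase(it);
    } else {
      (*sink)[_cursor] = this->_elems[_cursor];
    }
    return false;
  }
};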

To explore a different area of the implementation space, we can reinstate requirements B5 and B6, and instead relax requirement B2 (once a snapshot has been taken, it must be fully drained before the next snapshot is taken). For this, one option is to replace the one bit of mutable storage per element with a sequence number per element. The implementation looks like:

#include <cstdint> // uint64_t

template <typename T, size_t N, typename Sink>
struct fixed_length_array_with_snapshot : fixed_length_array<T, N> {
  uint64_t _sequence[N] = {};
  Sink _sink = nullptr;
  uint64_t _snap_sequence = 0;
  size_t _cursor = 0;

  const T& operator[](size_t i) const { // A1
    return fixed_length_array<T, N>::operator[](i);
  }
  T& operator[](size_t i) { // A2
    if (i < _cursor && _sequence[i] < _snap_sequence) {
      _sequence[i] = _snap_sequence;
      (*_sink)[i] = this->_elems[i];
    }
    return fixed_length_array<T, N>::operator[](i); 
  }

  void take_snapshot(Sink sink) { // A3
    _sink = sink;
    _snap_sequence += 1;
    _cursor = N;
  }
  bool drain_snapshot() { // A4
    if (_cursor == 0) {
      return true; // finished draining
    } else if (_sequence[--_cursor] < _snap_sequence) {
      (*_sink)[_cursor] = this->_elems[_cursor];
    }
    return false;
  }
};

One advantage of this implementation is that drain_snapshot no longer needs to mutate anything except the cursor, which can be useful when mutation is expensive. It also provides a path to relaxing requirement B1 (at most one snapshot can exist at any time) slightly: to support a handful of snapshots at a time, each one needs its own _cursor, its own _snap_sequence, and its own _sink, and mutating operator[] needs to check against every snapshot and possibly send the element to every sink.
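A sketch of that multi-snapshot variant (structure and names are mine; removing finished snapshots from the vector is left to the caller):

#include <cstdint>
#include <vector>

template <typename T, size_t N, typename Sink>
struct fixed_length_array_with_snapshots : fixed_length_array<T, N> {
  struct Snapshot {
    Sink sink;
    uint64_t sequence;
    size_t cursor;
  };
  uint64_t _sequence[N] = {};
  uint64_t _next_sequence = 1;
  std::vector<Snapshot> _snapshots; // ascending sequence order

  const T& operator[](size_t i) const { // A1
    return fixed_length_array<T, N>::operator[](i);
  }
  T& operator[](size_t i) { // A2
    for (Snapshot& s : _snapshots) {
      if (i < s.cursor && _sequence[i] < s.sequence) {
        (*s.sink)[i] = this->_elems[i]; // element still pending for s
      }
    }
    if (!_snapshots.empty()) {
      _sequence[i] = _snapshots.back().sequence; // newest live snapshot
    }
    return fixed_length_array<T, N>::operator[](i);
  }

  void take_snapshot(Sink sink) { // A3
    _snapshots.push_back({sink, _next_sequence++, N});
  }
  bool drain_snapshot(size_t which) { // A4, for the chosen snapshot
    Snapshot& s = _snapshots[which];
    if (s.cursor == 0) return true; // finished; caller removes it
    if (_sequence[--s.cursor] < s.sequence) {
      (*s.sink)[s.cursor] = this->_elems[s.cursor];
    }
    return false;
  }
};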

To explore yet another different area, we can reinstate requirements B1 and B2, and instead think about different data structures. A fixed-length array of T is arguably the easiest data structure to consider, but the exact same methodology can be used to add incremental snapshots to any kind of array, or any kind of tree (including binary trees and btrees), or any kind of hash table: a cursor needs to be maintained, mutating operations (including element insertion and removal) need to check against that cursor, and some extra state (either one bit or a snapshot sequence number) needs to be maintained per element. If the underlying data structure already needs to maintain some state per element, then the state for snapshots might fit in very easily. For an example of this, consider a fixed-length array of Maybe[T]. That is, an array where every element is either Nothing or Something(T). A simple implementation without snapshots might look like:

template <typename T, size_t N>
struct fixed_length_maybe {
  T _elems[N];
  bool _has[N] = {};

  bool has(size_t i) const {
    return i < N && _has[i];
  }
  void emplace(size_t i, const T& value) {
    assert(("index should be within bounds", i < N));
    assert(("element already exists", !_has[i]));
    _has[i] = true;
    _elems[i] = value;
  }
  void remove(size_t i) {
    assert(has(i));
    _has[i] = false;
  }
  const T& get(size_t i) const {
    assert(has(i));
    return _elems[i];
  }
  void set(size_t i, const T& value) {
    assert(has(i));
    _elems[i] = value;
  }
};

Note that the above implementation assumes that T is default-constructible. It should be clear how to remove this assumption (change the element type of the _elems array from T to std::aligned_storage_t<sizeof(T), alignof(T)>, use placement-new in emplace, invoke destructor in remove, write copy/move constructors/operators and destructor as appropriate), but the resultant boilerplate isn't interesting to the topic of this blog post.

The implementation can then be extended with support for incremental snapshots:

template <typename T, size_t N, typename Sink>
struct fixed_length_maybe_with_snapshot : fixed_length_maybe<T, N> {
  bool _already_visited[N] = {};
  Sink _sink = nullptr;
  size_t _cursor = 0;

  void emplace(size_t i, const T& value) {
    // A newly emplaced element was not present when the snapshot was taken,
    // so mark it already-visited to stop the drain from sending it.
    _already_visited[i] = (i < _cursor);
    return fixed_length_maybe<T, N>::emplace(i, value);
  }
  void remove(size_t i) {
    if (i < _cursor && !_already_visited[i]) {
      // The element is about to disappear, so send its snapshot-time value now.
      _sink->emplace(i, this->_elems[i]);
    }
    _already_visited[i] = false;
    return fixed_length_maybe<T, N>::remove(i);
  }
  void set(size_t i, const T& value) {
    if (i < _cursor && !_already_visited[i]) {
      _already_visited[i] = true;
      _sink->emplace(i, this->_elems[i]);
    }
    return fixed_length_maybe<T, N>::set(i, value);
  }

  void take_snapshot(Sink sink) {
    assert(("requirement B2", _sink == nullptr));
    assert(("sink should not be null", sink != nullptr));
    _sink = sink;
    _cursor = N;
  }
  bool drain_snapshot() {
    assert(("snapshot not yet taken", _sink != nullptr));
    if (_cursor == 0) {
      _sink = nullptr;
      return true; // finished draining
    } else if (_already_visited[--_cursor]) {
      _already_visited[_cursor] = false;
    } else if (this->_has[_cursor]) {
      _sink->emplace(_cursor, this->_elems[_cursor]);
    }
    return false;
  }
};

A better implementation would note that the _has array and the _already_visited array can be combined into a single array, with every element being in one of three possible states:

  1. Nothing
  2. Something(T)
  3. Something(T) already-visited

A fixed-length array of Maybe[T] might feel slightly contrived, but an open addressing hash table from K to V is basically just a fixed-length array of Maybe[Pair[K,V]]. Depending on how deletion is implemented, a fourth element state (Tombstone) might be added. It is also common for the Something states to contain a fingerprint of the element hash (in the otherwise spare bits). The only non-obvious part is how rehashing (to grow the table) interacts with the snapshot state, but this turns out to be easy: at the end of rehashing, set the snapshot cursor such that the entire new table is pending, and when moving elements from the old table to the new table, set them as already-visited if they were already-visited to start with or they were in the visited part of the old table.
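That rehashing rule is compact enough to state in code (a sketch; backwards iteration assumed, so indices at or beyond the cursor form the visited part):

bool already_visited_in_new_table(bool was_already_visited,
                                  size_t old_index, size_t old_cursor) {
  // Already-visited if it was to start with, or if it sat in the visited
  // part of the old table. At the end of rehashing, the new table's cursor
  // is then set so that the entire new table is pending.
  return was_already_visited || old_index >= old_cursor;
}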

Zstd streaming compression using ZSTD_compressBlock

Zstandard is, at its core, a block compressor. This core is usually packaged up and presented as a higher-level API (either streaming compression with ZSTD_compressStream2, or one-shot compression with ZSTD_compress), but the core remains available in the form of ZSTD_compressBlock:

ZSTDLIB_STATIC_API
size_t ZSTD_compressBlock(ZSTD_CCtx* cctx,
                          void* dst, size_t dstCapacity,
                          const void* src, size_t srcSize);

The ZSTDLIB_STATIC_API specifier means that this API isn't considered stable, and is only available when statically linking against the zstd library. It also means that #define ZSTD_STATIC_LINKING_ONLY needs to appear before #include <zstd.h>. The cctx argument contains a bunch of state which is carried over between successive blocks. The result of compression will be written into the memory range [dst, dst + dstCapacity). The dstCapacity argument exists mostly as a safety check; ZSTD_compressBlock will return an error if this value is not large enough. The block to be compressed is read from [src, src + srcSize), and the caller needs to keep this memory range unmodified for as long as it remains within the compression window. srcSize must not exceed 1<<17. The return value falls into one of a few ranges:

  1. A value for which ZSTD_isError reports true: compression failed (for example, because dstCapacity was too small).
  2. Zero: the input was not compressible, and the caller should instead emit it as a raw block.
  3. Anything else: the number of bytes of compressed data written to dst.

The compression window is formed by the concatenation of two spans of bytes, both initially empty. Diverging from the internal terminology, I'll call the two spans the old generation and the current generation. Within ZSTD_compressBlock, if the current generation ends at src, then the current generation is expanded to the right by srcSize. Otherwise, the old generation is discarded, the current generation becomes the old generation, and [src, src + srcSize) becomes the current generation. If the old generation overlaps with the current generation, then the old generation is trimmed (from the left) to eliminate the overlap. The caller is responsible for calculating the maximum window size resulting from all this.
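The bookkeeping just described can be sketched in pure pointer arithmetic (the struct and names are mine):

#include <cstddef>
#include <cstdint>

struct Window {
  const uint8_t* old_begin = nullptr; const uint8_t* old_end = nullptr;
  const uint8_t* cur_begin = nullptr; const uint8_t* cur_end = nullptr;
};

void note_block(Window& w, const uint8_t* src, size_t srcSize) {
  if (src == w.cur_end) {
    w.cur_end += srcSize; // current generation expands to the right
  } else {
    w.old_begin = w.cur_begin; // discard old; current becomes the old generation
    w.old_end = w.cur_end;
    w.cur_begin = src;
    w.cur_end = src + srcSize;
    if (w.old_begin < w.cur_end && w.cur_begin < w.old_end) {
      // Trim the old generation from the left to eliminate the overlap.
      w.old_begin = w.cur_end < w.old_end ? w.cur_end : w.old_end;
    }
  }
}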

To generate a compressed stream which is compatible with the higher-level API, two layers of framing need to be applied. Each block needs a three-byte header, and then multiple blocks are concatenated to form frames, with frames having an additional header and optional footer (and in turn, multiple frames can be concatenated).

The three-byte block header consists of three fields, which starting from the least significant bit are:

  1. is_last_block flag (1 bit)
  2. block_type field (2 bits)
  3. block_size field (21 bits, though value not allowed to exceed 1<<17)

If the is_last_block flag is not set, then another block is expected to follow in the stream. If it is set, then the current frame footer (if any) is expected to follow, and then either the end of the stream or the start of the next frame.

The block_type field has three valid values, though only two of them arise when using ZSTD_compressBlock directly. A block_type of 0 is used for blocks which could not be compressed (ZSTD_compressBlock returning 0). In this case, the "compressed data" exactly equals the decompressed data, and block_size gives the length of this data. A block_type of 2 is used for compressed blocks. In this case, block_size gives the length of the compressed data (the return value of ZSTD_compressBlock).
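Putting the pieces so far together, compressing one block and prepending its header might look like this sketch (write_block is my name, not a zstd API; the cctx is assumed to have been initialised, e.g. via ZSTD_compressBegin, and frame-level framing is left to the caller):

#define ZSTD_STATIC_LINKING_ONLY
#include <zstd.h>
#include <cassert>
#include <cstdint>
#include <cstring>

size_t write_block(ZSTD_CCtx* cctx, uint8_t* dst, size_t dstCapacity,
                   const uint8_t* src, size_t srcSize, bool is_last) {
  assert(srcSize <= (size_t)1 << 17 && dstCapacity >= 3 + srcSize);
  size_t size = ZSTD_compressBlock(cctx, dst + 3, dstCapacity - 3, src, srcSize);
  if (ZSTD_isError(size)) return size;
  uint32_t block_type = 2; // compressed block
  if (size == 0) {
    block_type = 0; // not compressible: emit the data as a raw block instead
    memcpy(dst + 3, src, srcSize);
    size = srcSize;
  }
  // Three-byte little-endian header; from the least significant bit:
  // is_last_block (1 bit), block_type (2 bits), block_size (21 bits).
  uint32_t hdr = ((uint32_t)size << 3) | (block_type << 1) | (uint32_t)is_last;
  dst[0] = (uint8_t)hdr;
  dst[1] = (uint8_t)(hdr >> 8);
  dst[2] = (uint8_t)(hdr >> 16);
  return 3 + size;
}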

A frame header consists of:

  1. Magic number (4 bytes 28 B5 2F FD)
  2. Compression parameters, notably including the window size (1-6 bytes in general, typically 2 bytes)
  3. Optional uncompressed size (if present, 1-8 bytes)

See RFC8878 for details on the frame header fields. One of the compression parameters controls whether a frame footer is present. When a footer is present, it consists of a single field:

  1. XXH64 checksum of the uncompressed data (the low 4 bytes of the 64-bit checksum)

Special skippable frames can also be present. The reference implementation of the decompressor simply ignores such frames, but other implementations are allowed to do other things with them. They consist of:

  1. Magic number (4 bytes, first byte in range 50 - 5F, then 2A 4D 18)
  2. length field (4 bytes)
  3. length bytes of data

Comparison after bit reversal

ARM processors have an rbit instruction for reversing bits. On these processors, the following function can be executed as three instructions: rbit followed by rbit followed by cmp, with the result in flags:

bool rbit_lt(uint32_t a, uint32_t b) {
  return __builtin_arm_rbit(a) < __builtin_arm_rbit(b);
}

x86 processors lack an rbit instruction, but following a derivation from the expired patent US 6347318 B1, there is a different three-instruction sequence for x86 processors which achieves the same thing. Start by pulling out the less-than comparison:

bool rbit_lt(uint32_t a, uint32_t b) {
  return lt(__builtin_arm_rbit(a), __builtin_arm_rbit(b));
}

bool lt(uint32_t a, uint32_t b) {
  return a < b;
}

Then replace the less-than comparison with a much more complicated version. This is based on the observation that the less-than comparison only cares about the most significant bit position where the two inputs differ. In turn, the most significant bit position is just the reversed least significant bit position, and standard bit tricks can be used to get the least significant set bit:

bool lt(uint32_t a, uint32_t b) {
  return (msb(a ^ b) & b) != 0;
}

uint32_t msb(uint32_t x) {
  return __builtin_arm_rbit(lsb(__builtin_arm_rbit(x)));
}

uint32_t lsb(uint32_t x) {
  return x & -x;
}

Then merge (the more complicated) lt back into rbit_lt:

bool rbit_lt(uint32_t a, uint32_t b) {
  return (msb(__builtin_arm_rbit(a ^ b)) & __builtin_arm_rbit(b)) != 0;
}

Then expand msb:

bool rbit_lt(uint32_t a, uint32_t b) {
  return (__builtin_arm_rbit(lsb(a ^ b)) & __builtin_arm_rbit(b)) != 0;
}

Then the reversals can be dropped, as rbit(x) & rbit(y) is just rbit(x & y), which is zero exactly when x & y is zero:

bool rbit_lt(uint32_t a, uint32_t b) {
  return (lsb(a ^ b) & b) != 0;
}

This is three instructions: xor followed by blsi followed by test, with the result in flags. The blsi instruction is part of the BMI1 instruction set, and thus not present on older x86 processors. If we care about older processors, then xor-lsb can be replaced with the equivalent sub-lsb (a - b and a ^ b always share the same least significant set bit):

bool rbit_lt(uint32_t a, uint32_t b) {
  return (lsb(a - b) & b) != 0;
}

Then lsb expanded:

bool rbit_lt(uint32_t a, uint32_t b) {
  return ((a - b) & (b - a) & b) != 0;
}

This comes out to four instructions on ARM: sub, sub, and, and tst. As x86 lacks relevant three-operand operations, it comes out to five on x86: mov, sub, sub, and, and test (or sub, mov, neg, and, and test), though the mov will be almost free on recent x86 processors.
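As a sanity check, the final formulation can be tested against a portable bit reversal (this harness is mine):

#include <cassert>
#include <cstdint>

uint32_t rbit_portable(uint32_t x) {
  uint32_t r = 0;
  for (int i = 0; i < 32; i++) {
    r = (r << 1) | (x & 1);
    x >>= 1;
  }
  return r;
}

bool rbit_lt_x86(uint32_t a, uint32_t b) {
  return ((a - b) & (b - a) & b) != 0;
}

int main() {
  uint32_t vals[] = {0, 1, 2, 3, 42, 0x0f0f0f0f, 0x80000000, 0xdeadbeef};
  for (uint32_t a : vals)
    for (uint32_t b : vals)
      assert(rbit_lt_x86(a, b) == (rbit_portable(a) < rbit_portable(b)));
}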

Reed-Solomon for software RAID

Software RAID is a mechanism for aggregating multiple hard drives together, with the aim of improving at least one of:

  1. Storage capacity
  2. Data throughput rate
  3. Mean time between failures (MTBF)

Optimising purely for storage capacity is easy; given N hard drives each with 1 terabyte of storage capacity, it isn't hard to imagine how to aggregate them into a system with N terabytes of storage capacity. Optimising purely for data throughput rate is similarly easy. Optimising for MTBF is the interesting part. In order to improve MTBF, the aggregation needs to ensure that data is not lost even when some hard drives fail. It isn't hard to see that some capacity needs to be sacrificed in order to improve MTBF. One simple approach would be to write the same data to every hard drive, so N hard drives each with 1 terabyte of storage capacity would yield an aggregate with just 1 terabyte of storage capacity, but any N-1 drives could fail without data loss. Another simple approach would be to arrange the hard drives in pairs and write any given piece of data to both drives in some pair, yielding an aggregate system with N/2 terabytes of storage capacity and which allows one hard drive failure within each pair. A slightly more complex approach would be to nominate one hard drive as storing the XOR of all the others, yielding N-1 terabytes of storage and allowing one hard drive failure.

Compared against these simple aggregations, a Reed-Solomon (RS) aggregation initially looks like magic. An RS(10,4) system is an aggregation of 14 hard drives which has the capacity of 10 hard drives, and allows any 4 of the 14 to fail without data being lost. The general RS(N,M) construction requires N+M hard drives, has the capacity of N hard drives, and allows any M of the N+M to fail without data being lost. The system is described at the level of hard drives, but the underlying mechanism operates at the level of small words. As hard drives store bytes, it is useful for a byte to be an integer number of words, or for a word to be an integer number of bytes. For this reason, the word size is typically one of: 1 bit, 2 bits, 4 bits, 8 bits, 16 bits, 32 bits, or 64 bits.

In the framing of matrix math, the RS(N,M) system can be viewed as taking a vector of N words and multiplying it by some N×(N+M) matrix (of words) to give a vector of N+M words. The N+M words are then stored on N+M different hard drives, and any N of the output N+M words can be used to recover all N original input words. For this system to work, "all" we need is for every N×N submatrix of the N×(N+M) matrix to be invertible. To see why, assume we have N output words, then take the N×N submatrix which gives just those outputs, invert it, and multiply it by the outputs to give the original inputs. Note that throughout this piece I'm using the most general meaning of submatrix; a submatrix is formed by deleting zero or more rows and zero or more columns - in particular, a submatrix need not be a contiguous block of the original matrix. It is convenient if some N×N submatrix of the N×(N+M) matrix is the identity matrix, as then N of the output words are identical to N of the input words, and this is actually easy to achieve: take any N×N submatrix, invert it, and then multiply this inverse with the full N×(N+M) matrix. Accordingly, we can view the N×(N+M) matrix as the concatenation of an N×N identity matrix with an N×M matrix.

The problem thus decomposes to:

  1. Coming up with an N×(N+M) matrix which happens to have every N×N submatrix be invertible.
  2. Being able to invert N×N matrices when the time comes.
  3. Efficiently doing K×N by N×M matrix multiplication for large K (K of 1 is the basic case, but in practice K will be the file system block size divided by the word size).

For step 2, it helps if all the non-zero elements of the matrix are themselves invertible. This isn't true in general for integers modulo 2^W, so the W-bit words need to be viewed as elements of GF(2^W) rather than integers modulo 2^W.

For step 1, there are a few common approaches: Vandermonde matrices, Cauchy matrices, and special case constructions. The latter two approaches tend to operate directly on the view of the concatenation of an N×N identity matrix with an N×M matrix. Every N×N submatrix of such a concatenation being invertible is equivalent to every square submatrix (of any size) of the N×M matrix being invertible. To see why, we can use matrix determinants and Laplace expansion: first note that matrix invertibility is equivalent to having a non-zero determinant, then consider some N×N submatrix of the concatenation, and do Laplace expansion of all the parts of the submatrix which came from the identity part of the concatenation. After repeated expansion, the remaining minor will be some square submatrix of the N×M matrix, and modulo signs, the determinant of this minor will equal the determinant of the N×N submatrix.

Special case constructions are worth considering first. Firstly, if the word size is just 1 bit, then the only N×M matrix meeting the invertibility requirement is the N×1 matrix with a value of one in every position: no zeroes can appear in the N×M matrix (as the 1×1 square submatrix containing that zero isn't invertible), and 1-bit words can only be zero or one, so the N×M matrix has to be entirely ones, at which point M ≥ 2 is impossible (as any 2×2 square submatrix consisting entirely of ones isn't invertible). This special case is exactly the case of having one hard drive store the XOR of all the others. One possible conclusion is that word sizes larger than 1 bit are the key component of doing better than the simple approaches outlined at the start of this piece. The second special case to consider is M of 2 with the N×M matrix being the stacking of two N×1 vectors: the first vector consisting entirely of ones, and the second vector containing N distinct non-zero elements of GF(2^W). This meets the invertibility requirements (every 1×1 submatrix is non-zero, and every 2×2 submatrix can have its determinant easily computed and seen to be non-zero). Such a matrix can be constructed provided that N < 2^W, and is a common RAID6 construction.

Next up are Cauchy matrices. A square Cauchy matrix is always invertible, and every submatrix of a Cauchy matrix is a Cauchy matrix. Taken together, this means that an N×M Cauchy matrix meets the requirement of every square submatrix being invertible. Furthermore, an N×M Cauchy matrix is easy to construct: choose N+M distinct elements of GF(2^W), call the first N of these X, and the last M of these Y, then the N×M matrix has inv(X[i] ^ Y[j]) in position i,j. Such a matrix can be constructed provided that N+M ≤ 2^W. For RS(10,4) this would mean that words need to be at least 4 bits.
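In code, the construction might look like (a sketch; function names are mine, gf_inv is written for clarity rather than speed, and the reduction polynomial matches the worked example later in this post):

#include <cstddef>
#include <cstdint>
#include <vector>

uint8_t gf_mul(uint8_t a, uint8_t b) {
  uint8_t r = 0;
  while (b) {
    if (b & 1) r ^= a;
    uint8_t hi = a & 0x80;
    a <<= 1;
    if (hi) a ^= 0x1b; // reduce modulo x^8 + x^4 + x^3 + x + 1
    b >>= 1;
  }
  return r;
}

uint8_t gf_inv(uint8_t a) {
  // The multiplicative group has order 255, so a^254 == inv(a) for a != 0.
  uint8_t r = 1;
  for (int i = 0; i < 254; i++) r = gf_mul(r, a);
  return r;
}

// N×M Cauchy matrix: element (i, j) is inv(X[i] ^ Y[j]).
std::vector<std::vector<uint8_t>> cauchy_matrix(const std::vector<uint8_t>& X,
                                                const std::vector<uint8_t>& Y) {
  std::vector<std::vector<uint8_t>> m(X.size(), std::vector<uint8_t>(Y.size()));
  for (size_t i = 0; i < X.size(); i++)
    for (size_t j = 0; j < Y.size(); j++)
      m[i][j] = gf_inv(X[i] ^ Y[j]);
  return m;
}

Feeding in X = {0x00, ..., 0x09} and Y = {0x0a, 0x0b, 0x0c, 0x0d} reproduces the 10×4 table constructed in the worked example below.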

Finally we get to Vandermonde matrices. This construction gives an N×(N+M) matrix meeting the requirement of every N×N submatrix being invertible, which then requires further processing to get to the concatenation-with-identity form. Some texts instead directly concatenate an N×M Vandermonde matrix to an identity matrix, but this does not work in general. Again, N+M distinct elements of GF(2^W) are chosen, but this time each element gives rise to an N×1 vector. For an element e, the vector is [1, e, e*e, e*e*e, ...]. These vectors are stacked to give an N×(N+M) Vandermonde matrix. Every N×N submatrix of this N×(N+M) matrix is itself a Vandermonde matrix, and as the N elements which define the submatrix are distinct, the determinant is non-zero, and thus the submatrix is invertible.

Vandermonde matrices are appealing because they contain lots of 1 values; the first element in every vector is 1, and an entire vector of 1 is obtained when e == 1. If presence of these 1 values in particular locations can be presumed, then some computationally expensive GF(2^W) multiplications can be elided. For example, in an RS(10,4) system, if all these 1 values could be presumed, then the number of GF(2^W) multiplications required to calculate 4 checksum words from 10 input words reduces from 40 to just 27. Unfortunately, the transformation of the N×(N+M) Vandermonde matrix to concatenation-with-identity form need not preserve any of these nice 1 values. All is not lost though; if we have an identity matrix concatenated with an N×M matrix (be that a transformed Vandermonde matrix, or a Cauchy matrix, or any other construction), then the N×M matrix can be transformed in certain ways without affecting the invertibility properties of (every square submatrix of) that N×M matrix. In particular, any row and/or column can be scaled by any non-zero constant without affecting invertibility. This means we can look at every row in turn, and scale that row by the inv of the row's first element, and then do the same for every column. This will give an N×M matrix where the first row and the first column are both entirely 1.
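Continuing the sketch above, the rescaling pass might be (assuming every element is non-zero, as the invertibility requirement guarantees):

void rescale_leading(std::vector<std::vector<uint8_t>>& m) {
  for (auto& row : m) {
    uint8_t s = gf_inv(row[0]); // makes the first column all 1
    for (uint8_t& e : row) e = gf_mul(e, s);
  }
  for (size_t j = 0; j < m[0].size(); j++) {
    uint8_t s = gf_inv(m[0][j]); // makes the first row all 1; column 0 is unchanged as inv(1) == 1
    for (auto& row : m) row[j] = gf_mul(row[j], s);
  }
}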

As a worked example for RS(10,4), here is a 10×14 Vandermonde matrix in GF(2^8) with x^8 == x^4 + x^3 + x + 1, elements given as their hex representation:

01 01 01 01 01 01 01 01 01 01 01 01 01 01
00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d
00 01 04 05 10 11 14 15 40 41 44 45 50 51
00 01 08 0f 40 55 78 6b 36 7f 9e d1 ed b0
00 01 10 11 1b 1a 0b 0a ab aa bb ba b0 b1
00 01 20 33 6c 72 3a 36 2f 8d c2 72 01 bc
00 01 40 55 ab a1 9c 82 63 89 d5 2b 0c ed
00 01 80 ff 9a 13 65 a3 35 ad 43 3e 50 5d
00 01 1b 1a 5e 5f 45 44 b3 b2 a8 a9 ed ec
00 01 36 2e 63 38 85 c7 ef 55 7c df b0 50

The inverse of the leftmost 10×10 submatrix is:

01 ee 01 ab 42 42 00 45 43 43
00 01 ef ee 45 07 45 45 00 43
00 ad 68 99 a5 66 8a 45 78 28
00 3f 3e dc 0c 23 cf 45 50 28
00 c5 2f 88 b6 8c 0f 45 9b 89
00 0d ed cd 70 c9 4a 45 12 89
00 95 90 ad e0 92 85 45 e8 f2
00 1d fd e8 b2 d7 c0 45 1a f2
00 ce 92 17 f1 3a 00 00 90 10
00 f3 85 17 cb 3a 00 00 80 10

Multiplying these two matrices gives concatenation-with-identity form:

01 00 00 00 00 00 00 00 00 00 81 96 b3 da
00 01 00 00 00 00 00 00 00 00 96 81 da b3
00 00 01 00 00 00 00 00 00 00 af b8 6e 06
00 00 00 01 00 00 00 00 00 00 b8 af 06 6e
00 00 00 00 01 00 00 00 00 00 d2 c4 0c 65
00 00 00 00 00 01 00 00 00 00 c4 d2 65 0c
00 00 00 00 00 00 01 00 00 00 fe e8 d5 bd
00 00 00 00 00 00 00 01 00 00 e8 fe bd d5
00 00 00 00 00 00 00 00 01 00 03 02 05 04
00 00 00 00 00 00 00 00 00 01 02 03 04 05

Then rescaling rows and columns of the rightmost 10×4 block to give 1 values along its leading row and column:

01 00 00 00 00 00 00 00 00 00 01 01 01 01
00 01 00 00 00 00 00 00 00 00 01 2c 5e 2e
00 00 01 00 00 00 00 00 00 00 01 45 4e 1e
00 00 00 01 00 00 00 00 00 00 01 2d c6 b1
00 00 00 00 01 00 00 00 00 00 01 d9 7a 94
00 00 00 00 00 01 00 00 00 00 01 fe d0 56
00 00 00 00 00 00 01 00 00 00 01 5e 53 da
00 00 00 00 00 00 00 01 00 00 01 2e da 7e
00 00 00 00 00 00 00 00 01 00 01 30 f6 85
00 00 00 00 00 00 00 00 00 01 01 3c a4 d5

It can be exhaustively verified that every 10×10 submatrix is invertible.

Alternatively, for doing a Cauchy construction, start with N distinct values vertically, and M more horizontally:

     0a 0b 0c 0d
    ------------
00 |
01 |
02 |
03 |
04 |
05 |
06 |
07 |
08 |
09 |

Then construct the N×M matrix one element at a time by doing GF(2^8) inv of the XOR of the corresponding vertical value and horizontal value:

     0a 0b 0c 0d
    ------------
00 | 29 c0 b0 e1
01 | c0 29 e1 b0
02 | e8 4f e5 c7
03 | 4f e8 c7 e5
04 | e5 c7 e8 4f
05 | c7 e5 4f e8
06 | b0 e1 29 c0
07 | e1 b0 c0 29
08 | 8d f6 cb 52
09 | f6 8d 52 cb

Then rescale each row and column to give 1 values along the leading row and column:

     0a 0b 0c 0d
    ------------
00 | 01 01 01 01
01 | 01 2c 5e 2e
02 | 01 45 4e 1e
03 | 01 2d c6 b1
04 | 01 d9 7a 94
05 | 01 fe d0 56
06 | 01 5e 53 da
07 | 01 2e da 7e
08 | 01 30 f6 85
09 | 01 3c a4 d5

Then remove the guide values and instead concatenate with an identity matrix:

01 00 00 00 00 00 00 00 00 00 01 01 01 01
00 01 00 00 00 00 00 00 00 00 01 2c 5e 2e
00 00 01 00 00 00 00 00 00 00 01 45 4e 1e
00 00 00 01 00 00 00 00 00 00 01 2d c6 b1
00 00 00 00 01 00 00 00 00 00 01 d9 7a 94
00 00 00 00 00 01 00 00 00 00 01 fe d0 56
00 00 00 00 00 00 01 00 00 00 01 5e 53 da
00 00 00 00 00 00 00 01 00 00 01 2e da 7e
00 00 00 00 00 00 00 00 01 00 01 30 f6 85
00 00 00 00 00 00 00 00 00 01 01 3c a4 d5

This happens to be the exact same 10×14 matrix as given by the Vandermonde construction. This is no coincidence; it can be shown that taking the inverse of a square Vandermonde matrix and multiplying it by a different Vandermonde matrix (which is effectively what the Vandermonde construction does for the rightmost N×M block) yields a Cauchy matrix, modulo some per-row and per-column scaling. That scaling, whatever it is, gets undone by forcing the leading row and column to 1.
