Priority Queues

Jaime Lopez

There are many cases where maintaining a priority-based queue is essential. For instance, it may be required to rebalance a financial portfolio by reallocating assets with the highest deviation first. Another example is ranking updates when processing asynchronous market data, where prioritization is critical. This note explains what a priority queue is and provides an efficient implementation in Zig.

Priority Queue

In a basic queue, elements are appended to the tail and removed from the front, following a first-in, first-out (FIFO) order. This means the first element to arrive is the first to leave. The primary operations of a queue are:

  • enqueue: append a new element to the tail of the queue
  • dequeue: remove the oldest element from the head of the queue
  • peek: query the value of the element located at the head of the queue

In a priority queue, each element has an associated priority level. The priority can be represented as an enumeration (e.g., high, medium, low), an integer (with higher values indicating higher priority), or a real number (e.g., price), among other ordering schemes.

Elements in a priority queue are ordered by priority and, within the same priority, by time of arrival. When a new element is added, it is placed behind the last element with the same (or higher) priority. The element at the head of the queue is therefore the oldest one with the highest priority.
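As a quick illustration, the following is a hypothetical walk-through that assumes the PriorityCircularQueue type implemented later in this note, with std imported and a larger u8 value meaning a higher priority:

test "dequeue order follows priority, then arrival" {
    // Hypothetical usage sketch of the queue implemented below.
    const Q = PriorityCircularQueue(u8, 0, 8);
    var q = Q{};
    try q.enqueue('a', 2); // high priority, arrives first
    try q.enqueue('b', 1); // low priority
    try q.enqueue('c', 2); // same high priority, arrives later
    try std.testing.expectEqual(@as(u8, 'a'), try q.dequeue());
    try std.testing.expectEqual(@as(u8, 'c'), try q.dequeue());
    try std.testing.expectEqual(@as(u8, 'b'), try q.dequeue());
}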

Unlike a basic queue, a priority queue requires a sorting algorithm when inserting a new element. Some implementations may also require a search algorithm to extract the highest-priority element. In the next section, a priority queue is implemented using the insertion sort algorithm for enqueueing new elements.

A Zig implementation

This section presents a priority queue implementation in Zig, a programming language well-suited for performance-critical applications. Zig's manual memory management allows fine-grained, predictable allocations, and its explicit control over data layout makes it a good fit for low-latency, cache-conscious, real-time applications.

This implementation is based on a generic data type and avoids heap memory allocations. It uses two arrays—one for storing elements and another for storing priorities. These arrays are accessed as a circular list using two indices: head and tail. The enqueue operation runs in \(O(n)\) time, while dequeue and peek execute in \(O(1)\) time.

To provide a generic implementation, a function is defined that takes the type T, an example value of type T, and the queue capacity as arguments. For example, calling PriorityCircularQueue(u32, 0, 128) defines a queue of unsigned 32-bit integers with a capacity of 128 elements. Additionally, an error set is included to indicate when the queue is empty or full.

pub const QueueError = error {
    Full,
    Empty,
};

pub fn PriorityCircularQueue(
    T: type,
    example_value: T,
    size: usize
) type {
    return struct {
        ...
    };
}
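For instance, a queue of u32 values with a capacity of 128 could be declared as follows; this is a small sketch in which every struct field takes its default value:

const Queue128 = PriorityCircularQueue(u32, 0, 128);
var queue = Queue128{}; // all fields use their defaults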

The fields of this struct are shown below. It has two arrays: items for storing elements and priorities for storing priority values. The len field holds the number of elements currently in the queue, while size is the queue's capacity. The queue's front and rear are represented by head and tail.

struct {
    items: [size]T = [_]T{example_value} ** size,
    priorities: [size]u8 = [_]u8{0} ** size,
    size: usize = size,
    len: usize = 0,
    head: usize = 0,
    tail: usize = size,
    ...
};

To indicate that the queue is empty, the condition tail == size is used as a sentinel. If the queue is not empty, tail holds a value between 0 and size - 1, inclusive.
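If desired, this convention could be wrapped in small helper methods. The following is a sketch of hypothetical isEmpty and isFull helpers; they are not part of the struct shown in this note:

// Hypothetical helpers making the sentinel convention explicit.
pub fn isEmpty(self: @This()) bool {
    return self.tail == self.size;
}

pub fn isFull(self: @This()) bool {
    // head is always in [0, size), so this only matches a full queue.
    return self.tail == self.head;
}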

The simplest operation is peek, which only needs to check that the queue is not empty before returning the value at the queue's head. The dequeue operation is similar, but before returning the value it advances head to the next position. If no elements remain in the queue, tail is set to size. As mentioned earlier, both operations run in \(O(1)\) time and work the same way in both priority and basic circular queues.

pub fn peek(self: @This()) !T {
    if (self.tail == self.size) {
        return QueueError.Empty;
    }
    return self.items[self.head];
}

pub fn dequeue(self: *@This()) !T {
    if (self.tail == self.size) {
        return QueueError.Empty;
    }
    const curr = self.head;
    self.head = (self.head + 1) % self.size;
    if (self.head == self.tail) {
        // Set the queue as empty
        self.tail = self.size;
    }
    self.len -= 1;
    return self.items[curr];
}
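A short usage sketch of these two operations, assuming std is imported and the complete queue type is in scope; both calls surface QueueError.Empty on an empty queue:

test "peek and dequeue report an empty queue" {
    // Hypothetical usage sketch.
    const Q = PriorityCircularQueue(u32, 0, 4);
    var q = Q{};
    try std.testing.expectError(QueueError.Empty, q.peek());
    try std.testing.expectError(QueueError.Empty, q.dequeue());
}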

As its first action, enqueue appends the new element at the tail of the queue, just as a basic queue does. It also stores the element's priority at the same position in the priorities array. However, for the structure to behave as a priority queue, the new element may then need to be moved away from the tail: it must end up behind the last element with the same priority. The definition of enqueue is shown below, with the sorting step indicated only by a comment for now.

pub fn enqueue(
    self: *@This(),
    item: T,
    priority: u8
) !void {
    if (self.tail == self.head) {
        return QueueError.Full;                
    }
    if (self.tail == self.size) {
        // The queue is empty, so let's
        // start filling it
        self.tail = self.head;
    }
    self.items[self.tail] = item;
    self.priorities[self.tail] = priority;

    // TODO: Sorting by priority

    self.len += 1;
    self.tail = (self.tail + 1) % self.size;
}
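Even before the sorting step is filled in, the capacity handling can be exercised. The sketch below, assuming the finished type and std imported, shows the Full error being returned once the circular buffer wraps around:

test "enqueue reports a full queue" {
    // Hypothetical usage sketch with a capacity of 2.
    const Q = PriorityCircularQueue(u32, 0, 2);
    var q = Q{};
    try q.enqueue(1, 0);
    try q.enqueue(2, 0);
    try std.testing.expectError(QueueError.Full, q.enqueue(3, 0));
}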

In this example, the insertion sort algorithm is used to maintain the correct order in the items and priorities arrays. The first step is to iterate through the queue, starting from the head, to determine the correct position for the new element based on its priority. Once that position is identified, all elements after it are shifted one slot to make space for the new element. This step runs in \(O(n)\) time. For small queues this approach is simple and cache-friendly, but for larger queues a more efficient insertion strategy should be considered.

// Find the first position, starting from the head, whose priority is
// strictly lower than the new element's. If none is found, move_to
// stays at tail and the new element is already in place.
var curr = self.head;
var move_to = self.tail;

while (curr != self.tail) {
    if (self.priorities[self.tail] > self.priorities[curr]) {
        move_to = curr;
        break;
    }
    curr = (curr + 1) % self.size;
}

// Shift the elements from move_to onwards one slot toward the tail by
// repeatedly swapping them with the slot at tail; the new element
// ends up at move_to.
while (move_to != self.tail) {
    std.mem.swap(
        T,
        &self.items[self.tail],
        &self.items[move_to]
    );
    std.mem.swap(
        u8,
        &self.priorities[self.tail],
        &self.priorities[move_to]
    );
    move_to = (move_to + 1) % self.size;
}
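Putting the pieces together, here is a sketch of a test that exercises the sorting path, again assuming std is imported and the complete type is in scope:

test "highest priority is always at the head" {
    // Hypothetical usage sketch with distinct priorities.
    const Q = PriorityCircularQueue(u32, 0, 8);
    var q = Q{};
    try q.enqueue(10, 1);
    try q.enqueue(20, 3);
    try q.enqueue(30, 2);
    try std.testing.expectEqual(@as(u32, 20), try q.peek()); // priority 3
    try std.testing.expectEqual(@as(u32, 20), try q.dequeue());
    try std.testing.expectEqual(@as(u32, 30), try q.dequeue()); // priority 2
    try std.testing.expectEqual(@as(u32, 10), try q.dequeue()); // priority 1
}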

In summary, this implementation of a priority queue in Zig demonstrates how to efficiently manage prioritized elements using a circular buffer and insertion sort. While this approach provides \(O(1)\) performance for peek and dequeue, the \(O(n)\) complexity of enqueue may become a bottleneck for larger queues. In such cases, more advanced data structures like binary heaps or balanced trees could be considered to optimize performance. Nonetheless, this implementation serves as a solid foundation for real-time applications that require fine-grained control over memory and execution speed.
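For comparison, Zig's standard library ships a heap-based std.PriorityQueue with \(O(\log n)\) insertion. The sketch below assumes the API of recent Zig releases (roughly 0.11 to 0.14) and a hypothetical Entry type; note that, unlike the circular queue above, it allocates on the heap, and a binary heap is not stable, so elements with equal priority may come out in any order:

const std = @import("std");

// Hypothetical element type pairing a value with a priority.
const Entry = struct { value: u32, priority: u8 };

// The heap pops the element that compares as "less than" first,
// so the comparison is inverted to get higher priorities out first.
fn higherPriorityFirst(_: void, a: Entry, b: Entry) std.math.Order {
    return std.math.order(b.priority, a.priority);
}

test "heap-based alternative" {
    const PQ = std.PriorityQueue(Entry, void, higherPriorityFirst);
    var pq = PQ.init(std.testing.allocator, {});
    defer pq.deinit();

    try pq.add(.{ .value = 10, .priority = 1 });
    try pq.add(.{ .value = 20, .priority = 3 });
    try pq.add(.{ .value = 30, .priority = 2 });

    try std.testing.expectEqual(@as(u32, 20), pq.remove().value);
    try std.testing.expectEqual(@as(u32, 30), pq.remove().value);
    try std.testing.expectEqual(@as(u32, 10), pq.remove().value);
}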

Source code
