The Many Mutexes of C++

When and how to use each type of mutex guard in the STL

Variety in the STL

The mutex is a ubiquitous thread synchronization primitive which is in no way exclusive to C++. Of course, it wouldn’t be C++ if we didn’t have many different ways to use mutexes in the language!

Let’s summarize the options available to us in modern C++, and present some examples.


Enter the Mutex

Mutexes have been around since the dawn of modern operating systems. They provide mutually exclusive access to resources which may be accessed concurrently, and are generally used to avoid data races and other undefined behaviour in concurrent programming.

std::mutex - the vanilla mutex

The building block of mutexes in C++. This is a pretty standard mutex primitive, with .lock() and .unlock() operations for manually locking and unlocking as needed.

You can use it as is, but if an exception is thrown while the mutex is locked, it will not automatically unlock, which can spell disaster for concurrent applications.
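
Here is a minimal sketch of manual locking; the shared counter and increment() function are made up for illustration:

#include <mutex>

std::mutex m;
int shared_counter = 0;

void increment() {
    m.lock();           // manual lock
    ++shared_counter;   // if this line threw, the unlock below would never run
    m.unlock();         // manual unlock - must be remembered on every path
}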

Using RAII to manage mutex locks

Many things in C++ can be made easier by leaning on RAII. One of the guarantees of RAII is that the destructor of an object is called when that object reaches the end of its scope. By placing certain functionality in the constructors/destructors of our classes, desired behaviour can happen automatically at the beginning and end of object scope – in this case, locking and unlocking a mutex. The added benefit is that the destructor is also called during stack unwinding if an exception propagates out of the scope. This makes an RAII-wrapped mutex (almost totally) exception safe.

By placing the mutex’s .lock()/.unlock() calls in the constructor/destructor of a wrapper class, we can make a std::mutex that never forgets to unlock itself, e.g.:

#include <mutex>

class raii_mutex {
public:
    explicit raii_mutex(std::mutex& m) : m(m) {
        m.lock();    // lock on construction
    }
    ~raii_mutex() {
        m.unlock();  // unlock on destruction, including during stack unwinding
    }

private:
    std::mutex& m;   // reference to the wrapped mutex
};

// example usage
std::mutex m;

int main() {
    // critical section - a scope is created
    {
        auto lock = raii_mutex{ m }; // automatic lock
        do_work();
        do_more_work();
    }   // lock is destroyed here and .unlock() is called on m
}

It turns out this is a super-distilled version of exactly what std::lock_guard from the C++ Standard Library does.

Mutex wrappers & helpers

On top of the primitive std::mutex, C++ provides numerous wrappers and helpers which enable composition of different locking behaviours.

std::lock_guard - RAII mutex wrapper

Likely the most commonly encountered mutex wrapper in modern C++ code. This is the STL’s RAII mutex wrapper, which provides automatic lock/unlock and exception-safe mutex locking functionality.

Usage is similar to the crude implementation given previously, e.g.:

#include <mutex>

int main() {
    std::mutex m;
    std::lock_guard<std::mutex> lock{ m }; // <- lock
    do_work();
    do_more_work();
    // lock goes out of scope; unlocks the mutex
}

std::lock - deadlock avoidance

A template function which helps avoid deadlock in simple circumstances. Calling std::lock with two or more mutexes as parameters locks all of them in a deadlock-free way, regardless of the order in which they are passed.

#include <chrono>
#include <iostream>
#include <mutex>
#include <syncstream>
#include <thread>

using namespace std::chrono_literals;

/*
 *  Ordinarily, this code would risk deadlock, since the
 *  locking order is different in each function.
 *    - std::lock acquires both mutexes without deadlocking
 *    - passing std::adopt_lock to the lock_guard says
 *      "the mutex is already locked - just take ownership"
 */

std::mutex a, b;

void a_then_b() {
    std::lock(a, b);
    auto lock_a = std::lock_guard<std::mutex>{ a, std::adopt_lock };
    std::this_thread::sleep_for(10ms);
    auto lock_b = std::lock_guard<std::mutex>{ b, std::adopt_lock };
    std::osyncstream{std::cout} << "a_then_b()\n";
}

void b_then_a() {
    std::lock(a, b);
    auto lock_b = std::lock_guard<std::mutex>{ b, std::adopt_lock };
    std::this_thread::sleep_for(10ms);
    auto lock_a = std::lock_guard<std::mutex>{ a, std::adopt_lock };
    std::osyncstream{std::cout} << "b_then_a()\n";
}

int main() {
    auto AB = std::jthread{ a_then_b };
    auto BA = std::jthread{ b_then_a };
}

std::scoped_lock - the “desert island” mutex wrapper

If you could only have one mutex wrapper in modern C++, it should be std::scoped_lock.

Available as of C++17, std::scoped_lock combines the RAII wrapper std::lock_guard and the std::lock function into a single template. Internally, the scoped_lock handles the two cases we’ve just covered –

  1. Single mutex - behaves identically to std::lock_guard (a simple RAII wrapper; sketched below)
  2. Two or more mutexes - identical to std::lock in combination with std::lock_guard (deadlock prevention)
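
For case 1, a single mutex is a drop-in replacement for the earlier std::lock_guard example; here is a minimal sketch (the function name is arbitrary), relying on class template argument deduction so no template argument is needed:

#include <mutex>

std::mutex m;

void single_mutex_case() {
    auto lock = std::scoped_lock{ m };  // behaves just like std::lock_guard
    // ... critical section ...
}   // unlocked automatically at end of scope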

Let the template do the heavy lifting

There is no low-level performance difference when using std::scoped_lock in place of the “manual way”.

Internally, the template selects and executes the same lock/unlock sequence you would be writing yourself. There’s no point in using the “old” way unless you need granular control for some reason.

Improved readability

Perhaps you noticed how disgusting the syntax for std::lock is in the previous example (along with the required use of std::adopt_lock). Thankfully, the addition of std::scoped_lock fixes this in C++17, reducing boilerplate code and increasing code readability.

Reworking the previous deadlock example –

Improved readability with std::scoped_lock

#include <chrono>
#include <iostream>
#include <mutex>
#include <syncstream>
#include <thread>

using namespace std::chrono_literals;

/*
 *  std::scoped_lock and class template argument
 *    deduction in C++17 greatly improve
 *    the readability of the example
 */
std::mutex a, b;

void a_then_b() {
    auto lock_ab = std::scoped_lock{ a, b };
    std::this_thread::sleep_for(10ms);
    std::osyncstream{std::cout} << "a_then_b()\n";
}

void b_then_a() {
    auto lock_ba = std::scoped_lock{ b, a };
    std::this_thread::sleep_for(10ms);
    std::osyncstream{std::cout} << "b_then_a()\n";
}

int main() {
    auto AB = std::jthread{ a_then_b };
    auto BA = std::jthread{ b_then_a };
}

std::unique_lock - an ultra-flexible lock guard

A superset of std::lock_guard with more functionality. It is marginally heavier in terms of resources, so only use it when the extra features are necessary.

Additional mechanisms include –

  1. Deferred locking - construct the lock now with std::defer_lock and call .lock() later
  2. Timed locking attempts - try_lock(), plus try_lock_for()/try_lock_until() with timed mutexes
  3. Manual unlock() and re-lock() during the lock’s lifetime
  4. Transferable ownership - std::unique_lock is movable

A common use case is to combine a mutex with a predicate (via std::condition_variable) for synchronization. A good example can be found at CppReference, and a sketch follows below. It’s also useful in conjunction with std::shared_mutex, when locking may be deferred.
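
As a sketch of that pattern (the jobs queue and the producer/consumer functions are invented for illustration), note that the waiting side needs std::unique_lock because std::condition_variable::wait() must unlock and relock the mutex while blocking:

#include <condition_variable>
#include <mutex>
#include <queue>

std::mutex m;
std::condition_variable cv;
std::queue<int> jobs;

void producer(int job) {
    {
        auto lock = std::lock_guard{ m };
        jobs.push(job);
    }                   // unlock before notifying
    cv.notify_one();
}

void consumer() {
    auto lock = std::unique_lock{ m };            // wait() requires a unique_lock
    cv.wait(lock, [] { return !jobs.empty(); });  // atomically unlocks while waiting
    const auto job = jobs.front();
    jobs.pop();
    lock.unlock();                                // unique_lock permits early unlock
    // ... process `job` outside the lock ...
}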

std::shared_mutex - a readers-writers mutex

Used in conjunction with std::shared_lock, the shared mutex is effectively a classic readers-writers lock.

Use cases

Practical use cases for readers-writers mutexes include any shared data that is updated (written) rarely but read frequently. Generally speaking, reading is thread safe as long as a write is not occurring at the same time. A readers-writers mutex leverages this fact to allow many readers concurrent (read-only) access, and only permits a single writer to modify the data when no readers are present. That is, writing and reading cannot occur at the same time.

Practical example - DNS table

A real-world application of std::shared_mutex is a table of cached DNS entries - perhaps on a DNS server, or in the local DNS cache of a client system.

In a server situation, the IP address of a DNS entry probably changes very seldom. Perhaps it’s written to only once in a matter of days, months or possibly even years. On the other hand, a production DNS server could serve thousands of concurrent requests to read the entry every second.

(Example inspired by - C++ Concurrency in Action (2nd ed.) - Anthony Williams)

Concurrent DNS cache and std::shared_mutex

#include <chrono>
#include <map>
#include <mutex>
#include <shared_mutex>
#include <string>

/** @brief DNS A Record entry (IPv4) */
struct ARecord {
    std::string host{ };
    std::string ip{ };
    std::chrono::seconds ttl{ 0 };
};

/**
 * @brief Table of cached DNS A Records providing
 *  concurrent access which is read-prioritized.
 */
class DNSCacheTable {
public:
    DNSCacheTable() = default;

    ARecord find_record(const std::string& name) const {
        // locking for read-only concurrent access
        auto lock = std::shared_lock{ mutex };
        const auto it = a_records.find(name);
        return (it == a_records.end()) 
                    ? ARecord{ } : it->second;
    }

    void add_or_update_record(const std::string& name,
                              const ARecord& record) {
        // lock for write access
        auto lock = std::lock_guard{ mutex };
        a_records[name] = record;
    }

private:
    std::map<std::string, ARecord> a_records;
    mutable std::shared_mutex mutex;
};

Write starvation and priority

Note that the Standard makes no provision for preventing writer starvation. This means that if the system is swarmed with threads constantly reading, it’s possible that a thread which wants to write will never be allowed to .lock() the mutex and do its work.

Readers-writers mutexes will sometimes be enhanced with a mechanism to offer write priority. If you need this functionality, you’ll have to add it yourself (perhaps with a binary semaphore acting as a turnstile).
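
As a sketch of one possible approach, assuming C++20’s std::binary_semaphore (the wrapper class below is my own, not a standard facility): a writer closes the gate before waiting for exclusive access, so newly arriving readers queue behind it instead of starving it.

#include <semaphore>
#include <shared_mutex>

// Hypothetical write-prioritized wrapper around std::shared_mutex.
class write_prioritized_mutex {
public:
    void lock() {              // exclusive (writer) lock
        gate.acquire();        // close the turnstile - new readers queue here
        rw.lock();             // wait for in-flight readers to drain
    }
    void unlock() {
        rw.unlock();
        gate.release();        // reopen the turnstile
    }
    void lock_shared() {       // shared (reader) lock
        gate.acquire();        // pass through the turnstile...
        gate.release();        // ...immediately, unless a writer is holding it
        rw.lock_shared();
    }
    void unlock_shared() {
        rw.unlock_shared();
    }

private:
    std::binary_semaphore gate{ 1 };
    std::shared_mutex rw;
};

Only the members shown are provided, so guards that need the try_lock() family won’t work with this wrapper as-is.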

std::shared_timed_mutex - adds timeout mechanisms

The same as std::shared_mutex, but with additional facilities for timed locking attempts: try_lock_for() and try_lock_until() accept a timeout duration or absolute deadline (with _shared counterparts for readers). Unless the extra functions are needed, the plain std::shared_mutex may offer better performance.
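
A brief sketch of a timed exclusive-lock attempt (the 50ms budget and function name are arbitrary); try_lock_shared_for() works the same way for the reader side:

#include <chrono>
#include <iostream>
#include <shared_mutex>

using namespace std::chrono_literals;

std::shared_timed_mutex stm;

void try_exclusive_update() {
    // Wait up to 50ms for exclusive access rather than blocking indefinitely.
    if (stm.try_lock_for(50ms)) {
        // ... modify the shared state ...
        stm.unlock();
    } else {
        std::cout << "timed out waiting for exclusive access\n";
    }
}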
