
DISCLAIMER: This article was migrated from the legacy personal technical blog originally hosted here, and thus may contain formatting and content differences compared to the original post. Additionally, it likely contains technical inaccuracies, opinions that the author may no longer align with, and most certainly poor use of English. This article remains public for those who may find it useful despite its flaws.

Previously I talked about how one can easily take advantage of multiprocessing using OpenMP. Even though the C pragmas introduced by the parallel programming API standard are very straightforward for simple programs, they simply don’t fit nicely into a complex C++ application built from the ground up with OOP in mind. To smoothly introduce OpenMP into such projects, one needs higher-level constructs that hide the actual implementation details. This is the first article of a series that will try to provide reference implementations of such abstractions. First, we will start with synchronization primitives that try to reflect the functionality provided by the “synchronized” statement of Java.

This article is highly inspired by an article written by Achilleas Margaritis and largely follows his line of thought. My article tries to provide a portable reference implementation of a slightly modified version of the trick presented by Margaritis, using OpenMP as the multiprocessing API back-end.


According to the OO paradigm, classes, and consequently objects, provide an abstract interface to the underlying internal data or services of the modeled entity. When it comes to parallel programming, we should provide facilities that enable concurrent access to shared resources, which in this case are objects. Using plain OpenMP can be satisfactory; however, when used extensively, the OpenMP pragmas and API function calls can greatly hurt the readability and maintainability of the code. Furthermore, there may be platforms that use other APIs for protecting against race conditions. It is obvious that we need to encapsulate these facilities and provide an abstract tool-set instead.


The very first building block of such a framework can be a mutex class that provides mutually exclusive access to certain resources. In the world of OpenMP this should look something like the following:

class Mutex {
public:
    Mutex() { omp_init_lock(&_mutex); }
    ~Mutex() { omp_destroy_lock(&_mutex); }
    void lock() { omp_set_lock(&_mutex); }
    void unlock() { omp_unset_lock(&_mutex); }

private:
    omp_lock_t _mutex;
};

This already seems enough to build our Java-like “synchronized” statement; however, we would like to create a framework that makes usage as easy and safe as possible. In order to get closer to this goal, we apply the RAII (Resource Acquisition Is Initialization) idiom to create our lock class:

class Lock {
public:
    Lock(Mutex& mutex) : _mutex(mutex), _release(false) { _mutex.lock(); }
    ~Lock() { _mutex.unlock(); }
    operator bool() const { return !_release; }
    void release() { _release = true; }

private:
    Mutex& _mutex;
    bool _release;
};

Our goal is to provide an inheritable interface for objects that need synchronization. This step, however, requires careful consideration of the provided interface, as we explicitly need to conform to the following requirements:

  • The interface shall not expose the interface of the underlying synchronization primitive, in our case the mutex class methods.
  • The interface shall be available only to the synchronizable objects themselves, not to the external world: we would like not only to hide the implementation details of our abstract entity, but also to prevent users from synchronizing our objects, as that should be the responsibility of the object itself.
  • The interface shall expose methods which are less prone to name collision, for convenience.

If we take care of the presented conventions we end up with an interface similar to the following:

class Synchronizable : protected Mutex {
protected:
    void enterSyncBlock() { this->lock(); }
    void exitSyncBlock() { this->unlock(); }
};

Now we are almost at the finish line. We just need to inherit from this class to equip an object with the facilities it needs for synchronization. However, using this interface directly is neither the most comfortable nor the safest option. If we would like to have a Java-like “synchronized” statement, we have to call in additional help. Fortunately, the not so well respected C macro language comes to the rescue, as we can use it to create pseudo-language extensions. The simplest way to define our new statement is the following line:

#define synchronized(obj)  for(Lock obj##_lock = *obj; obj##_lock; obj##_lock.release())

From now on, we can use object synchronization in C++ as easily as in Java; we just need the following syntax in the methods of our shared objects:

synchronized(this) {
    // some code that needs synchronization
}

Now it is clearly visible how handy the RAII idiom turned out to be in our case. Besides making the statement very straightforward to use, it provides additional benefits:

  • It makes the code more readable and as a result it is easier to maintain.
  • No need to call inconveniently named methods or manage lock variables manually.
  • The synchronized code has its own scope.
  • It is exception-safe as the mutex is unlocked upon destruction.

Additionally, we can take advantage of an otherwise problematic C++ feature: multiple inheritance. If our object inherits from two other synchronizable objects, a simple type cast lets us explicitly specify which ancestor we would like to synchronize in a particular block. To ease this, we can define our synchronization statement as follows instead of the Java-like one:

#define synchronized(cls)  for(Lock cls##_lock = *static_cast<cls*>(this); cls##_lock; cls##_lock.release())

In this case we pass the class name instead of the object pointer this. Using this latter construct we can easily specify which ancestor we would like to synchronize when dealing with multiple inheritance. Personally, I prefer the latter syntax as it is much better tailored to C++ use cases.

As we no longer need a direct interface for entering and exiting our synchronization block, we can simplify our synchronizable interface to the following:

class Synchronizable : protected Mutex {};

This is enough to provide the facilities needed for a synchronization block, while still complying with the requirement of hiding the details of the underlying synchronization primitive.

Besides this, Jörg today came up with the idea of replacing the for loop in our macro with a single if statement. This seems reasonable, as we don’t have to sacrifice any of the scoping and safety related benefits of our framework. It simplifies our lock class to the following:

class Lock {
public:
    Lock(Mutex& mutex) : _mutex(mutex) { _mutex.lock(); }
    ~Lock() { _mutex.unlock(); }
    operator bool() const { return true; }

private:
    Mutex& _mutex;
};

This definition of the lock class is satisfactory once we redefine our synchronized macros to use an if statement instead:

/* Java-like synchronized statement */
#define synchronized(obj)  if (Lock obj##_lock = *obj)

/* alternative synchronized statement to support multiple inheritance;
   note that only one of the two definitions may be active at a time */
#define synchronized(cls)  if (Lock cls##_lock = *static_cast<cls*>(this))

Thanks to the useful comments we even managed to further optimize and minimize the support code needed for our new pseudo-language extension.


We have seen an example of how one can implement an easy-to-use synchronizable interface for C++, along with a concrete implementation based on OpenMP. This library is still far from an API that provides all the constructs one needs for parallel programming in C++ projects; however, we have made our first step, and I will revisit the subject in subsequent articles to further extend this framework.

Credits go to Achilleas Margaritis whose article inspired me to write mine and to Jörg for the useful improvement ideas.

Links: source code

Post Author: Daniel Rákos