Multithreading

Introduction

Universal Extensions run in a "worker process" that manages execution and facilitates integration services between the extension and the Universal Controller.  The worker process utilizes Python threading capabilities to perform its required actions in a cooperative manner alongside the extension specific actions.  In general, this is transparent to the extension developer and requires no special action on the developer's part.  However, there are some concepts that would be helpful for the extension developer to keep in mind.

User Code Entry Points

The Universal Extension API provides four entry points where user code is executed:

  • Dynamic Choice Command: @dynamic_choice_command
  • Dynamic Command: @dynamic_command
  • Extension Start: extension_start()
  • Extension Cancel: extension_cancel()

Each of the entry points listed above runs in its own thread.  In most cases, only one user entry point is executed during the lifespan of a worker process instance.  For example, when an end user executes a Dynamic Choice command, a new worker process is started to process that command.  It is impossible for two Dynamic Choice commands to run in the same process.  It is also impossible for a Dynamic Choice command to run in the same process as any other user entry point.

The only scenario where multiple user entry points will be active in the same process is when in-process Dynamic Commands are run against an active extension instance.  In this case, extension_start() and the executed in-process Dynamic Command(s) are running in the same process.  Additionally, asynchronous in-process Dynamic Commands may run in parallel, so it is possible to have multiple Dynamic Commands executing simultaneously along with extension_start().  This scenario alone does not require any special action by the extension developer.  However, the point of in-process Dynamic Commands is their ability to interact with a running extension, and it is these interactions that potentially cross thread boundaries and may require access to shared resources in a thread-safe way.
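As an illustration of the kind of coordination involved, the following minimal sketch shows a main loop updating shared state that a command running in another thread reads under the same lock.  The names used here (status, status_lock, get_status_command) and the simplified entry points are assumptions for illustration only; they are not part of the Universal Extension API.

```python
import threading
import time

# Hypothetical shared state: the main loop produces status updates
# that an in-process Dynamic Command reads from another thread.
status = {"processed": 0}
status_lock = threading.Lock()
stop_event = threading.Event()

def extension_start():
    # Simplified stand-in for the extension's main loop.
    while not stop_event.is_set():
        with status_lock:
            status["processed"] += 1
        time.sleep(0.01)

def get_status_command():
    # Simplified stand-in for an in-process Dynamic Command; it takes
    # the same lock so it always sees a consistent snapshot of the state.
    with status_lock:
        return dict(status)

main_thread = threading.Thread(target=extension_start)
main_thread.start()
time.sleep(0.05)
snapshot = get_status_command()
stop_event.set()
main_thread.join()
print(snapshot)
```

Because both threads guard every access to the shared dictionary with the same lock, the command thread can never observe a partially updated state.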

Python Multithreading

Multithreading in Python can be useful for concurrent execution of tasks, but it also introduces some potential problems.  Below are some of the most common issues that may arise in a multithreaded environment, along with techniques to prevent them.

Race Conditions

One of the most common problems is a race condition, which occurs when two or more threads access and modify a shared resource without proper synchronization. For example, consider the following code, which increments a global variable by one in each of two threads:

import threading

counter = 0

def increment():
    global counter
    counter += 1

t1 = threading.Thread(target=increment)
t2 = threading.Thread(target=increment)

t1.start()
t2.start()

t1.join()
t2.join()

print(counter)

The expected output of this code is 2, but it may sometimes print 1. This is because the counter += 1 operation is not atomic: it involves three steps, reading the current value of counter, adding one to it, and writing the new value back to counter. If two threads perform this operation at the same time, they may both read the same value of counter, increment it by one, and overwrite each other's result.

One way to solve this problem is to use a lock object, which prevents multiple threads from accessing a resource at the same time. A lock can be acquired and released by a thread using the acquire() and release() methods. Only one thread can hold a lock at a time, and other threads have to wait until the lock is released. For example, the following code uses a lock to protect the counter variable:

import threading

counter = 0
lock = threading.Lock()

def increment():
    global counter
    lock.acquire()
    counter += 1
    lock.release()

t1 = threading.Thread(target=increment)
t2 = threading.Thread(target=increment)

t1.start()
t2.start()

t1.join()
t2.join()

print(counter)

This code will always print 2, as each thread will increment the counter atomically.
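The explicit acquire()/release() pattern above works, but the lock is never released if an exception occurs between the two calls.  Lock objects also support the context manager protocol, so the same protection can be written with a with statement, which releases the lock even on error.  A minimal sketch of that idiom, scaled up to many increments per thread:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # The with statement acquires the lock on entry and guarantees
        # it is released on exit, even if an exception is raised.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000
```

With the lock in place, every one of the 200,000 increments is applied, whereas the unprotected version could lose updates.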

Deadlocks

Using locks is a best practice for accessing shared resources in a multithreaded environment. However, using locks also introduces some overhead and complexity, and it may cause a deadlock if not used carefully. A deadlock occurs when two or more threads are each waiting for the other to release a lock that it holds. For example, consider the following code that tries to swap the values of two global variables using two locks:

import threading

a = 1
b = 2
lock_a = threading.Lock()
lock_b = threading.Lock()

def swap_ab():
    global a, b
    lock_a.acquire()
    lock_b.acquire()
    a, b = b, a
    lock_b.release()
    lock_a.release()

def swap_ba():
    global a, b
    lock_b.acquire()
    lock_a.acquire()
    a, b = b, a
    lock_a.release()
    lock_b.release()

t1 = threading.Thread(target=swap_ab)
t2 = threading.Thread(target=swap_ba)

t1.start()
t2.start()

t1.join()
t2.join()

print(a, b)

This code may never terminate: t1 may acquire lock_a at the same time t2 acquires lock_b, after which each thread waits forever for the lock held by the other. To avoid deadlock, one possible solution is to use a single lock for both variables, or to acquire the locks in the same order in both threads.
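Of the two remedies, acquiring the locks in a single consistent order is often the simpler one to apply. The following sketch repeats the swap example with both threads taking lock_a before lock_b, so a circular wait cannot occur:

```python
import threading

a = 1
b = 2
lock_a = threading.Lock()
lock_b = threading.Lock()

def swap():
    global a, b
    # Every thread acquires the locks in the same fixed order
    # (lock_a first, then lock_b), so a circular wait is impossible.
    with lock_a:
        with lock_b:
            a, b = b, a

t1 = threading.Thread(target=swap)
t2 = threading.Thread(target=swap)

t1.start()
t2.start()

t1.join()
t2.join()

# Two swaps restore the original values.
print(a, b)  # 1 2
```

This version always terminates, and running swap twice leaves a and b at their original values.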

Conclusion

In conclusion, Universal Extensions run in a multithreaded environment.  In most cases, the complexity of multithreading is transparent to the extension developer and no special action is required to deal with it.  However, under certain scenarios, such as when using in-process Dynamic Commands, the extension developer may have to take special precautions to ensure their code performs in a thread-safe manner.  The sections above discuss the most common issues that may be encountered in such a scenario and ways to deal with them. This is by no means a complete reference on the topic, and the developer is encouraged to seek additional sources for a more complete understanding of multithreaded programming in Python.