The Thread-Per-Message design is the simplest concurrency technique: a new thread is created to handle each incoming request. It is worthwhile only when the benefits of concurrent handling outweigh the corresponding thread creation overhead. This design is generally preferred over sequential execution for time-consuming, I/O-bound, session-based, or isolated tasks.
On the other hand, this design has several disadvantages: thread creation overhead for frequent or recurring requests, significant processing overhead, resource exhaustion from too many threads (leading to an OutOfMemoryError), and thread scheduling and context-switching overhead [[Lea 00]].
Thread pools overcome these disadvantages because the maximum number of worker threads that can be started and executed simultaneously can be controlled. Each incoming request is wrapped in a Runnable and held in a temporary Channel, such as a buffer or a queue, until a worker thread becomes available to accept it. Because threads are reused and tasks can be added to the Channel cheaply, most of the thread creation overhead is eliminated.
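As a rough sketch of this arrangement (the class name WorkerPoolSketch, the pool size, and the request loop are illustrative assumptions, not part of the examples that follow), a ThreadPoolExecutor can be constructed with an explicit work queue that plays the role of the Channel:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class WorkerPoolSketch {
  public static void main(String[] args) {
    // The work queue acts as the Channel: tasks wait here until a worker is free
    BlockingQueue<Runnable> channel = new LinkedBlockingQueue<Runnable>();

    // At most 10 reusable worker threads service the queued tasks
    ThreadPoolExecutor pool =
        new ThreadPoolExecutor(10, 10, 60L, TimeUnit.SECONDS, channel);

    for (int i = 0; i < 1000; i++) {
      final int requestId = i;
      pool.execute(new Runnable() {
        public void run() {
          System.out.println("Handling request " + requestId);
        }
      });
    }
    pool.shutdown();
  }
}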
Noncompliant Code Example
This noncompliant code example demonstrates the Thread-Per-Message design, which fails to provide graceful degradation of service under load.
class Helper {
  public void handle(String request) {
    //...
  }
}

class GetRequest {
  protected final Helper h = new Helper();
  String request;

  public synchronized String accept() {
    String data = "Read data from pipe";
    // Read the request data, else block
    return data;
  }

  public void request() {
    while (true) {
      request = accept();
      new Thread(new Runnable() {
        public void run() {
          h.handle(request);
        }
      }).start();
    }
  }
}
Compliant Solution
This compliant solution uses a fixed thread pool that places an upper bound on the number of simultaneously executing threads. Tasks submitted to the pool are stored in an internal queue. This prevents the system from being overwhelmed when trying to respond to all incoming requests and allows it to degrade gracefully by serving a fixed maximum number of clients at a given time [[Tutorials 08]].
According to the Java API [[API 06]] documentation for the java.util.concurrent interface Executor:
[The interface Executor is] An object that executes submitted Runnable tasks. This interface provides a way of decoupling task submission from the mechanics of how each task will be run, including details of thread use, scheduling, etc. An Executor is normally used instead of explicitly creating threads.
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

class GetRequest {
  protected final Helper h = new Helper();
  String request;

  public synchronized String accept() {
    String data = "Read data from pipe";
    // Read the request data, else block
    return data;
  }

  public void request() {
    final int numberOfThreads = 200;
    Executor exec = Executors.newFixedThreadPool(numberOfThreads);
    while (true) {
      request = accept();
      exec.execute(new Runnable() {
        public void run() {
          h.handle(request);
        }
      });
    }
  }
}
Noncompliant Code Example
In practice, there are some pitfalls associated with the use of the Executor interface. For one, tasks that depend on other tasks should not execute in the same thread pool. A task that submits a second task to a single-threaded Executor and waits for its result blocks until that result is received, while the second task waits in the queue until the first one completes. This constitutes a deadlock.
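The following minimal sketch (not taken from the noncompliant example below; the class name StarvationSketch is made up for illustration) shows this blocking pattern on a single-threaded Executor. The call outer.get() never returns, because the inner task waits in the queue behind the very task that is waiting for it:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class StarvationSketch {
  public static void main(String[] args) throws Exception {
    final ExecutorService single = Executors.newSingleThreadExecutor();
    Future<String> outer = single.submit(new Callable<String>() {
      public String call() throws Exception {
        // The inner task can never start: the only pool thread is running this task
        Future<String> inner = single.submit(new Callable<String>() {
          public String call() {
            return "inner result";
          }
        });
        return inner.get(); // Blocks forever -- thread starvation deadlock
      }
    });
    System.out.println(outer.get()); // Never completes
  }
}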
This noncompliant code example can lead to thread starvation deadlock. This situation occurs not only with single-threaded Executors but also with those backed by large thread pools: it can happen whenever all the threads executing in the pool are blocked on tasks that are still waiting in the queue. A blocking operation within a subtask can also lead to unbounded queue growth [[Goetz 06]].
import java.io.IOException;
import java.net.ServerSocket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class NetworkServer extends InitialHandshake implements Runnable {
  private final ServerSocket serverSocket;
  private final ExecutorService pool;

  public NetworkServer(int port, int poolSize) throws IOException {
    serverSocket = new ServerSocket(port);
    pool = Executors.newFixedThreadPool(poolSize);
  }

  public void run() {
    try {
      // Interdependent tasks
      pool.submit(new SanitizeInput(password));   // password is defined in class InitialHandshake
      pool.submit(new CustomHandshake(password)); // e.g., client puzzles
      pool.execute(new Handle(serverSocket.accept())); // Handle the connection
    } catch (IOException ex) {
      pool.shutdown();
    }
  }
}
Compliant Solution
Always try to submit independent tasks to the Executor. Choosing a large pool size can also help reduce thread starvation problems. Note that any operation with further constraints, such as the maximum number of database connections or the number of ResultSets open at a particular time, imposes an upper bound on the useful thread pool size, because each additional thread simply blocks until the resource becomes available. The other rules of fair concurrency, such as not running response-sensitive tasks in the pool, also apply.
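For instance, if the application may hold at most 20 database connections (an assumed figure used only for illustration), a pool larger than 20 threads gains nothing, because each additional thread would simply block waiting for a connection:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class PoolSizing {
  // Illustrative figure: the database allows at most 20 concurrent connections
  private static final int MAX_DB_CONNECTIONS = 20;

  // Threads beyond the connection limit would only block, so cap the pool at that limit
  private static final ExecutorService pool =
      Executors.newFixedThreadPool(MAX_DB_CONNECTIONS);
}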
Sometimes a private static ThreadLocal variable is used per thread to maintain local state. With thread pools, ThreadLocal variables should be employed only if their lifetime is shorter than that of the corresponding task [[Goetz 06]]. Moreover, such variables should not be used as a communication mechanism between tasks. Finally, newFixedThreadPool, which uses an unbounded work queue, is not always the best choice. Refer to the API documentation when choosing among newFixedThreadPool, newCachedThreadPool, newSingleThreadExecutor, and newScheduledThreadPool to suit the design requirements.
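For reference, the factory methods mentioned above are obtained as follows (the pool sizes shown are arbitrary):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

class ExecutorChoices {
  // Fixed number of threads, unbounded work queue
  ExecutorService fixed = Executors.newFixedThreadPool(10);

  // Creates threads on demand, reuses idle ones, shrinks when idle
  ExecutorService cached = Executors.newCachedThreadPool();

  // A single worker thread processing tasks sequentially
  ExecutorService single = Executors.newSingleThreadExecutor();

  // Supports delayed and periodic task execution
  ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(5);
}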
This compliant solution executes the interdependent tasks as a single combined task within the Executor. In other cases, where the subtasks do not require concurrency safeguards, the subtasks can be moved outside the threaded region that is executed by the Executor.
import java.io.IOException;
import java.net.ServerSocket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class NetworkServer extends InitialHandshake implements Runnable {
  private final ServerSocket serverSocket;
  private final ExecutorService pool;

  public NetworkServer(int port, int poolSize) throws IOException {
    serverSocket = new ServerSocket(port);
    pool = Executors.newFixedThreadPool(poolSize);
  }

  public void run() {
    try {
      // Execute the interdependent subtasks as a single combined task (Handle)
      pool.execute(new Handle(serverSocket.accept())); // Handle the connection
    } catch (IOException ex) {
      pool.shutdown();
    }
  }
}
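One possible shape of the combined Handle task is sketched below. The private helper methods stand in for the SanitizeInput and CustomHandshake subtasks of the noncompliant example and are hypothetical, as is the way the handshake data reaches the class:

import java.net.Socket;

class Handle implements Runnable {
  private final Socket socket;

  public Handle(Socket socket) {
    this.socket = socket;
  }

  public void run() {
    // The formerly separate subtasks now run sequentially inside this single
    // task, so no pool thread ever blocks waiting for another queued task.
    sanitizeInput();
    performHandshake();
    serviceRequest(socket);
  }

  // Hypothetical helpers standing in for SanitizeInput and CustomHandshake;
  // how the handshake password reaches this class is not shown in the
  // original example and is elided here.
  private void sanitizeInput() { /* ... */ }
  private void performHandshake() { /* ... */ }
  private void serviceRequest(Socket socket) { /* ... */ }
}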
Risk Assessment
Using simplistic concurrency primitives (and often using them incorrectly) can lead to severe performance degradation, deadlock and starvation, or exhaustion of system resources.
Rule | Severity | Likelihood | Remediation Cost | Priority | Level
---|---|---|---|---|---
CON02-J | low | probable | high | P2 | L3
Automated Detection
TODO
Related Vulnerabilities
Search for vulnerabilities resulting from the violation of this rule on the CERT website.
References
[[API 06]] Interface Executor
[[Lea 00]] Section 4.1.3 Thread-Per-Message and 4.1.4 Worker Threads
[[Tutorials 08]] Thread Pools
[[Goetz 06]] Chapter 8, Applying Thread Pools
[[MITRE 09]] CWE ID 405 "Asymmetric Resource Consumption (Amplification)", CWE ID 410 "Insufficient Resource Pool"