Many programs must address the problem of handling a series of incoming requests. One simple concurrency strategy is the thread-per-message design pattern, which uses a new thread for each request [Lea 2000a]. This pattern is generally preferred over sequential executions of time-consuming, I/O-bound, session-based, or isolated tasks.
However, the pattern also introduces overheads not seen in sequential execution, including the time and resources required for thread creation and scheduling, for task processing, for resource allocation and deallocation, and for frequent context switching [Lea 2000a]. Furthermore, an attacker can cause a denial of service (DoS) by overwhelming the system with too many requests at once, causing the system to become unresponsive rather than degrading gracefully. From a safety perspective, one component can exhaust all resources because of an intermittent error, consequently starving all other components.
Thread pools allow a system to limit the maximum number of simultaneous requests that it processes to a number that it can comfortably serve rather than terminating all services when presented with a deluge of requests. Thread pools overcome these issues by controlling the maximum number of worker threads that can execute concurrently. Each object that supports thread pools accepts a Runnable or Callable<T> task and stores it in a temporary channel, such as a buffer or a queue, until resources become available. Additionally, thread life-cycle management overhead is minimized because the threads in a thread pool can be reused and can be efficiently added to or removed from the pool.
Programs that use multiple threads to service requests should—and programs that may be subjected to DoS attacks must—ensure graceful degradation of service during traffic bursts. Use of thread pools is one acceptable approach to meeting this requirement.
Noncompliant Code Example (Thread-Per-Message)
This noncompliant code example demonstrates the thread-per-message design pattern. The RequestHandler class provides a public static factory method so that callers can obtain a RequestHandler instance. The handleRequest() method is subsequently invoked to handle each request in its own thread.
class Helper {
  public void handle(Socket socket) {
    // ...
  }
}

final class RequestHandler {
  private final Helper helper = new Helper();
  private final ServerSocket server;

  private RequestHandler(int port) throws IOException {
    server = new ServerSocket(port);
  }

  public static RequestHandler newInstance() throws IOException {
    return new RequestHandler(0); // Selects next available port
  }

  public void handleRequest() {
    new Thread(new Runnable() {
      public void run() {
        try {
          helper.handle(server.accept());
        } catch (IOException e) {
          // Forward to handler
        }
      }
    }).start();
  }
}
The thread-per-message strategy fails to provide graceful degradation of service. As threads are created, processing continues normally until some scarce resource is exhausted. For example, a system may allow only a limited number of open file descriptors even though additional threads can be created to serve requests. When the scarce resource is memory, the system may fail abruptly, resulting in a DoS.
Compliant Solution (Thread Pool)
This compliant solution uses a fixed thread pool that places a strict limit on the number of concurrently executing threads. Tasks submitted to the pool are stored in an internal queue. Storing tasks in a queue prevents the system from being overwhelmed when attempting to respond to all incoming requests and allows it to degrade gracefully by serving a fixed maximum number of simultaneous clients [Java Tutorials].
// class Helper remains unchanged

final class RequestHandler {
  private final Helper helper = new Helper();
  private final ServerSocket server;
  private final ExecutorService exec;

  private RequestHandler(int port, int poolSize) throws IOException {
    server = new ServerSocket(port);
    exec = Executors.newFixedThreadPool(poolSize);
  }

  public static RequestHandler newInstance(int poolSize) throws IOException {
    return new RequestHandler(0, poolSize);
  }

  public void handleRequest() {
    Future<?> future = exec.submit(new Runnable() {
      @Override public void run() {
        try {
          helper.handle(server.accept());
        } catch (IOException e) {
          // Forward to handler
        }
      }
    });
  }

  // ... Other methods such as shutting down the thread pool
  // and task cancellation ...
}
Noncompliant Code Example
There are, however, some pitfalls associated with use of the Executor interface. In particular, a task that depends on other tasks should not execute in the same thread pool. A task that submits another task to a single-threaded Executor and then blocks waiting for the result can never proceed, because the second task cannot run until the first has concluded. This constitutes a deadlock.
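A minimal, self-contained sketch of this single-threaded case follows. It is illustrative only and is not one of this rule's examples; the class name StarvationDeadlockDemo and its structure are assumptions.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch: a task submitted to a single-threaded executor submits
// a subtask to the same executor and waits on its result; the subtask can
// never run because the only worker thread is blocked.
public class StarvationDeadlockDemo {
  public static void main(String[] args) throws Exception {
    final ExecutorService exec = Executors.newSingleThreadExecutor();
    Future<String> outer = exec.submit(new Callable<String>() {
      @Override public String call() throws Exception {
        // The subtask is queued behind this task in the same single-threaded pool
        Future<String> inner = exec.submit(new Callable<String>() {
          @Override public String call() {
            return "inner result";
          }
        });
        return inner.get(); // Blocks forever: the only worker thread is running this task
      }
    });
    System.out.println(outer.get()); // Never returns; thread starvation deadlock
  }
}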
The noncompliant code example that follows shows a thread starvation deadlock. Such deadlocks occur not only with single-threaded executors but also with large thread pools: they can arise whenever all the threads executing in the pool are blocked on tasks that are still waiting in the queue. A blocking operation within a subtask can also lead to unbounded queue growth [Goetz 2006].
class NetworkServer implements Runnable {
  private final ServerSocket serverSocket;
  private final ExecutorService pool;

  public NetworkServer(int port, int poolSize) throws IOException {
    serverSocket = new ServerSocket(port);
    pool = Executors.newFixedThreadPool(poolSize);
  }

  public void run() {
    try {
      // Interdependent tasks (SanitizeInput, CustomHandshake, and Handle
      // are illustrative task classes)
      pool.submit(new SanitizeInput(password));
      pool.submit(new CustomHandshake(password)); // For example, client puzzles
      pool.execute(new Handle(serverSocket.accept())); // Handle connection
    } catch (IOException e) {
      pool.shutdown();
    }
  }
}
Compliant Solution
Always try to submit independent tasks to the Executor. Choosing a large pool size can also help reduce thread starvation problems. Note that any operation with additional constraints, such as a limit on the total number of database connections or on the number of ResultSets open at a particular time, imposes an upper bound on the useful thread pool size because each thread continues to block until the constrained resource becomes available. The other rules of fair concurrency, such as not running response-time-sensitive tasks in the pool, also apply.
This compliant solution recommends executing the interdependent tasks as a single task within the Executor. In other cases, where the subtasks do not require concurrency safeguards, the subtasks can be moved outside the threaded region that is to be executed by the Executor.
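A minimal sketch of this approach follows. It is illustrative only; the class CombinedTaskServer and its helper methods are assumptions standing in for the subtasks of the noncompliant example. The interdependent steps run sequentially inside one submitted task, so no pooled thread ever blocks waiting on another queued task.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: interdependent steps combined into a single Runnable
final class CombinedTaskServer implements Runnable {
  private final ServerSocket serverSocket;
  private final ExecutorService pool;

  public CombinedTaskServer(int port, int poolSize) throws IOException {
    serverSocket = new ServerSocket(port);
    pool = Executors.newFixedThreadPool(poolSize);
  }

  @Override public void run() {
    try {
      final Socket socket = serverSocket.accept();
      pool.execute(new Runnable() {
        @Override public void run() {
          sanitizeInput(socket);     // First subtask
          handshake(socket);         // Second subtask, depends on the first
          handleConnection(socket);  // Final subtask
        }
      });
    } catch (IOException e) {
      pool.shutdown();
    }
  }

  // Hypothetical helper methods standing in for the subtask classes
  private void sanitizeInput(Socket s) { /* ... */ }
  private void handshake(Socket s) { /* ... */ }
  private void handleConnection(Socket s) { /* ... */ }
}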
According to the Java API documentation for the Executor interface [API 2014]:

[The interface Executor is] an object that executes submitted Runnable tasks. This interface provides a way of decoupling task submission from the mechanics of how each task will be run, including details of thread use, scheduling, etc. An Executor is normally used instead of explicitly creating threads.
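The following short sketch, which is not part of this rule's examples (the class name ExecutorVsThread is an assumption), contrasts explicit thread creation with submitting the same task to an Executor:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch of the decoupling described above: the code that
// submits work does not decide how or on which thread it runs.
public class ExecutorVsThread {
  public static void main(String[] args) {
    Runnable task = new Runnable() {
      @Override public void run() {
        System.out.println("Handled by " + Thread.currentThread().getName());
      }
    };

    new Thread(task).start();           // Explicit thread creation: caller owns the mechanics

    ExecutorService pool = Executors.newFixedThreadPool(2); // ExecutorService extends Executor
    pool.execute(task);                 // Submission decoupled from execution mechanics
    pool.shutdown();                    // Allow the JVM to exit once the task completes
  }
}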
The ExecutorService interface used in the thread pool compliant solution derives from the java.util.concurrent.Executor interface. The ExecutorService.submit() method allows callers to obtain a Future<V> object. This object both encapsulates the as-yet unknown result of an asynchronous computation and enables callers to perform additional functions such as task cancellation.
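For example, a caller might use the returned Future to bound how long it waits for a result and to cancel the task on timeout. This is a hedged sketch only; the class name FutureCancellationDemo and the timeout values are assumptions.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Illustrative sketch: bounding and cancelling a submitted task via its Future
public class FutureCancellationDemo {
  public static void main(String[] args) throws Exception {
    ExecutorService exec = Executors.newFixedThreadPool(2);
    Future<String> future = exec.submit(new Callable<String>() {
      @Override public String call() throws Exception {
        Thread.sleep(5000); // Simulate a slow task
        return "done";
      }
    });
    try {
      System.out.println(future.get(1, TimeUnit.SECONDS)); // Wait at most one second
    } catch (TimeoutException e) {
      future.cancel(true); // Attempt to interrupt the still-running task
      System.out.println("Task cancelled after timeout");
    } finally {
      exec.shutdown();
    }
  }
}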
The choice of newFixedThreadPool is not always appropriate. Refer to the Java API documentation [API 2014] for guidance on choosing among the following methods to meet specific design requirements:
newFixedThreadPool()
newCachedThreadPool()
newSingleThreadExecutor()
newScheduledThreadPool()
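For reference, a brief sketch (illustrative only; the class name ExecutorFactoryChoices is an assumption) shows how each of these factory methods is invoked:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

// Illustrative sketch of the Executors factory methods listed above
public class ExecutorFactoryChoices {
  public static void main(String[] args) {
    ExecutorService fixed = Executors.newFixedThreadPool(10);      // Bounded pool, bounded concurrency
    ExecutorService cached = Executors.newCachedThreadPool();      // Grows and shrinks with demand
    ExecutorService single = Executors.newSingleThreadExecutor();  // Serializes all submitted tasks
    ScheduledExecutorService scheduled =
        Executors.newScheduledThreadPool(2);                       // Delayed and periodic tasks

    fixed.shutdown();
    cached.shutdown();
    single.shutdown();
    scheduled.shutdown();
  }
}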
Risk Assessment
Using simplistic concurrency primitives to process an unbounded number of requests could result in severe performance degradation, deadlock, or system resource exhaustion and denial of service (DoS).
Rule | Severity | Likelihood | Remediation Cost | Priority | Level
---|---|---|---|---|---
TPS00-J | Low | Probable | High | P2 | L3
Automated Detection
Sound automated detection is infeasible; heuristic checks could be useful.

Tool | Version | Checker | Description
---|---|---|---
Parasoft Jtest | | CERT.TPS00.ISTART | Do not call the 'start()' method directly on Thread class instances
Related Vulnerabilities
Search for vulnerabilities resulting from the violation of this rule on the CERT website.
Related Guidelines
Bibliography
[API 2014] | java.util.concurrent, Interface Executor
[Goetz 2006] | Chapter 8, "Applying Thread Pools"
[Java Tutorials] | Thread Pools
[Lea 2000a] | Section 4.1.3, "Thread-Per-Message," and Section 4.1.4, "Worker Threads"