Many programs must address the problem of handling a series of incoming requests. One simple concurrency strategy is the Thread-Per-Message design pattern, which uses a new thread for each request [Lea 2000a]. This pattern is generally preferred over sequential executions of time-consuming, I/O-bound, session-based, or isolated tasks.
However, the pattern also introduces overheads not seen in sequential execution, including the time and resources required for thread creation and scheduling, for task processing, for resource allocation and deallocation, and for frequent context switching [Lea 2000a]. Furthermore, an attacker can cause a denial of service (DoS) by overwhelming the system with too many requests all at once, causing the system to become unresponsive rather than degrading gracefully. From a safety perspective, one component can exhaust all resources because of an intermittent error, consequently starving all other components.
...
Programs that use multiple threads to service requests should, and programs that may be subjected to DoS attacks must, ensure graceful degradation of service during traffic bursts. Use of thread pools is one acceptable approach to meeting this requirement.
...
This noncompliant code example demonstrates the Thread-Per-Message design pattern. The RequestHandler
class provides a public static factory method so that callers can obtain a RequestHandler
instance. The handleRequest()
method is subsequently invoked to handle each request in its own thread.
```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

class Helper {
  public void handle(Socket socket) {
    // ...
  }
}

final class RequestHandler {
  private final Helper helper = new Helper();
  private final ServerSocket server;

  private RequestHandler(int port) throws IOException {
    server = new ServerSocket(port);
  }

  public static RequestHandler newInstance() throws IOException {
    return new RequestHandler(0); // Selects next available port
  }

  public void handleRequest() {
    new Thread(new Runnable() {
      public void run() {
        try {
          helper.handle(server.accept());
        } catch (IOException e) {
          // Forward to handler
        }
      }
    }).start();
  }
}
```
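To see why this design fails to degrade gracefully, consider a minimal driver sketch. The class name and the request count below are hypothetical and are not part of the original example; they only simulate a traffic burst against the noncompliant RequestHandler.

```java
// Hypothetical driver (illustration only): every call to handleRequest()
// creates and starts a brand-new thread, so a burst of N requests produces
// N threads with no upper bound on the number of live threads.
public class UnboundedDriver {
  public static void main(String[] args) throws java.io.IOException {
    RequestHandler handler = RequestHandler.newInstance();
    for (int i = 0; i < 100_000; i++) { // Simulated traffic burst (arbitrary size)
      handler.handleRequest();          // One new Thread per request
    }
    // Under a DoS-scale burst, thread creation can exhaust memory or
    // operating-system thread limits (for example, an OutOfMemoryError)
    // instead of allowing the service to degrade gracefully.
  }
}
```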
...
This compliant solution uses a fixed thread pool that places a strict limit on the number of concurrently executing threads. Tasks submitted to the pool are stored in an internal queue. Storing tasks in a queue prevents the system from being overwhelmed when attempting to respond to all incoming requests and allows it to degrade gracefully by serving a fixed maximum number of simultaneous clients [Java Tutorials 2008].
```java
import java.io.IOException;
import java.net.ServerSocket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// class Helper remains unchanged

final class RequestHandler {
  private final Helper helper = new Helper();
  private final ServerSocket server;
  private final ExecutorService exec;

  private RequestHandler(int port, int poolSize) throws IOException {
    server = new ServerSocket(port);
    exec = Executors.newFixedThreadPool(poolSize);
  }

  public static RequestHandler newInstance(int poolSize) throws IOException {
    return new RequestHandler(0, poolSize);
  }

  public void handleRequest() {
    Future<?> future = exec.submit(new Runnable() {
      @Override public void run() {
        try {
          helper.handle(server.accept());
        } catch (IOException e) {
          // Forward to handler
        }
      }
    });
  }

  // ... Other methods, such as shutting down the thread pool
  // and task cancellation ...
}
```
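For comparison with the noncompliant driver above, here is a minimal usage sketch. The class name, pool size, and request count are assumptions chosen for illustration; they are not part of the original example.

```java
// Hypothetical driver (illustration only): the same burst of requests is now
// served by at most poolSize worker threads, and excess tasks simply wait in
// the executor's internal queue rather than spawning new threads.
public class BoundedDriver {
  public static void main(String[] args) throws java.io.IOException {
    RequestHandler handler = RequestHandler.newInstance(10); // At most 10 workers
    for (int i = 0; i < 100_000; i++) { // Same simulated traffic burst
      handler.handleRequest();          // Queued when all 10 workers are busy
    }
    // A shutdown method on RequestHandler (alluded to in the example) would
    // typically delegate to ExecutorService.shutdown() when the server stops.
  }
}
```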
According to the Java API documentation for the Executor interface [API 2014]:

[The interface Executor is] an object that executes submitted Runnable tasks. This interface provides a way of decoupling task submission from the mechanics of how each task will be run, including details of thread use, scheduling, etc. An Executor is normally used instead of explicitly creating threads.
The ExecutorService
interface used in this compliant solution derives from the java.util.concurrent.Executor
interface. The ExecutorService.submit()
method allows callers to obtain a Future<V>
object. This object encapsulates the as-yet-unknown result of an asynchronous computation and enables callers to perform additional functions such as task cancellation.
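The following sketch illustrates this use of Future. The pool size, the placeholder task, and the timeout are assumptions for illustration only; only the ExecutorService, Future, and cancel() usage reflect the API behavior described above.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class FutureCancellationSketch {
  public static void main(String[] args) throws InterruptedException {
    ExecutorService exec = Executors.newFixedThreadPool(2); // Arbitrary pool size

    // submit() returns a Future representing the pending result of the task
    Future<?> future = exec.submit(new Runnable() {
      @Override public void run() {
        // Long-running or blocking task (placeholder)
      }
    });

    // Callers can cancel the asynchronous computation via the Future;
    // passing true permits interruption of the task if it is already running.
    future.cancel(true);

    exec.shutdown();
    exec.awaitTermination(1, TimeUnit.SECONDS);
  }
}
```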
The choice of newFixedThreadPool
is not always appropriate. Refer to the Java API documentation [API 2014] for guidance on choosing among the following methods to meet specific design requirements (a brief usage sketch follows the list):
newFixedThreadPool()
newCachedThreadPool()
newSingleThreadExecutor()
newScheduledThreadPool()
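The sketch below shows how each of these Executors factory methods is invoked. The pool sizes are arbitrary values chosen purely for illustration.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

public class ExecutorChoices {
  public static void main(String[] args) {
    // Fixed number of threads; excess tasks wait in an internal queue
    ExecutorService fixed = Executors.newFixedThreadPool(4);

    // Grows and shrinks on demand; suited to many short-lived tasks
    ExecutorService cached = Executors.newCachedThreadPool();

    // A single worker thread; tasks execute sequentially
    ExecutorService single = Executors.newSingleThreadExecutor();

    // Supports delayed and periodic task execution
    ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2);

    // Shut down immediately in this sketch
    fixed.shutdown();
    cached.shutdown();
    single.shutdown();
    scheduled.shutdown();
  }
}
```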
...
Using simplistic concurrency primitives to process an unbounded number of requests could result in severe performance degradation, deadlock, or system resource exhaustion and DoS.
Rule | Severity | Likelihood | Remediation Cost | Priority | Level |
---|---|---|---|---|---|
TPS00-J | Low | Probable | High | P2 | L3 |
Related Guidelines
CWE-405, Asymmetric Resource Consumption (Amplification)
Bibliography
[Goetz 2006] | Chapter 8, "Applying Thread Pools"
[Lea 2000a] | Section 4.1.3, "Thread-Per-Message"
...