
Many programs must address the problem of handling a series of incoming requests. One simple concurrency strategy is the thread-per-message design pattern, which uses a new thread for each request [Lea 2000a]. This pattern is generally preferred over sequential execution of time-consuming, I/O-bound, session-based, or isolated tasks.

However, the pattern also introduces overheads not seen in sequential execution, including the time and resources required for thread creation and scheduling, for task processing, for resource allocation and deallocation, and for frequent context switching [Lea 2000a]. Furthermore, an attacker can cause a denial of service (DoS) by overwhelming the system with too many requests all at once, causing the system to become unresponsive rather than degrading gracefully. From a safety perspective, one component can exhaust all resources because of an intermittent error, consequently starving all other components.

Thread pools allow a system to limit the maximum number of simultaneous requests that it processes to a number that it can comfortably serve rather than terminating all services when presented with a deluge of requests. Thread pools overcome these issues by controlling the maximum number of worker threads that can execute concurrently. A thread pool accepts a Runnable or Callable<T> task and stores it in a temporary queue until resources become available. Additionally, thread life-cycle management overhead is minimized because the threads in a thread pool can be reused and can be efficiently added to or removed from the pool.
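The following sketch is illustrative only and is not part of the compliant solution below (the PoolSketch class name and its tasks are hypothetical). It shows how a fixed-size pool accepts both Runnable and Callable<T> tasks, queues work that exceeds the pool size, and reuses its worker threads:

Code Block
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public final class PoolSketch {
  public static void main(String[] args) throws Exception {
    // Four worker threads; additional tasks wait in the pool's internal queue
    ExecutorService pool = Executors.newFixedThreadPool(4);

    // A Runnable task returns no result
    pool.submit(new Runnable() {
      @Override public void run() {
        System.out.println("Handled by " + Thread.currentThread().getName());
      }
    });

    // A Callable<T> task returns its result through a Future
    Future<Integer> sum = pool.submit(new Callable<Integer>() {
      @Override public Integer call() {
        return 1 + 2;
      }
    });
    System.out.println("Result: " + sum.get());

    pool.shutdown(); // let queued tasks finish, then release the worker threads
  }
}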

...

Compliant Solution (Thread Pool)

...

This compliant solution uses a fixed thread pool that places a strict limit on the number of concurrently executing threads. Tasks submitted to the pool are stored in an internal queue. This prevents the system from being overwhelmed when attempting to respond to all incoming requests and allows it to degrade gracefully by serving a fixed maximum number of simultaneous clients [Tutorials 2008].

Code Block
// class Helper remains unchanged

final class RequestHandler {
  private final Helper helper = new Helper();
  private final ServerSocket server;
  private final ExecutorService exec;

  private RequestHandler(int port, int poolSize) throws IOException {
    server = new ServerSocket(port);
    exec = Executors.newFixedThreadPool(poolSize);
  }

  public static RequestHandler newInstance(int poolSize) 
                                           throws IOException {
    return new RequestHandler(0, poolSize);
  }

  public void handleRequest() {
    Future<?> future = exec.submit(new Runnable() {
        @Override public void run() {
          try {
            helper.handle(server.accept());
          } catch (IOException e) {
            // Forward to handler
          }
        }
    });
  }
  // ... other methods such as shutting down the thread pool 
  // and task cancellation ...
}

...

According to the Java API documentation for the {{Executor}} interface [API 2006]:

...

[The interface {{Executor}} is] an object that executes submitted {{Runnable}} tasks. This interface provides a way of decoupling task submission from the mechanics of how each task will be run, including details of thread use, scheduling, etc. An {{Executor}} is normally used instead of explicitly creating threads.
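As an illustration of that decoupling (the Dispatcher class below is hypothetical and not part of this rule), a caller can depend only on the Executor interface and leave the threading policy to whoever supplies the executor:

Code Block
import java.util.concurrent.Executor;

final class Dispatcher {
  private final Executor executor;

  Dispatcher(Executor executor) {
    this.executor = executor;
  }

  void dispatch(Runnable task) {
    // Whether the task runs on a pooled thread, a single background thread,
    // or even the calling thread is decided by the supplied Executor
    executor.execute(task);
  }
}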

The ExecutorService interface used in this compliant solution derives from the java.util.concurrent.Executor interface. The ExecutorService.submit() method allows callers to obtain a Future<V> object. This object encapsulates the as-yet-unknown result of an asynchronous computation and also enables callers to perform additional functions such as task cancellation.
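A brief, illustrative sketch follows (the FutureSketch class and its slow task are hypothetical) of how a caller might use the returned Future<V> to wait for a result with a timeout and cancel the task if it takes too long:

Code Block
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public final class FutureSketch {
  public static void main(String[] args) throws Exception {
    ExecutorService exec = Executors.newFixedThreadPool(2);
    Future<String> pending = exec.submit(new Callable<String>() {
      @Override public String call() throws Exception {
        Thread.sleep(5000); // simulate a slow computation
        return "done";
      }
    });

    try {
      // Block for at most one second waiting for the asynchronous result
      String value = pending.get(1, TimeUnit.SECONDS);
      System.out.println(value);
    } catch (TimeoutException e) {
      // Give up on the task; true = interrupt it if it is already running
      pending.cancel(true);
    } finally {
      exec.shutdown();
    }
  }
}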

The choice of {{newFixedThreadPool}} is not always appropriate. Refer to the Java API documentation for guidance on choosing among the following methods to meet specific design requirements [API 2006]; an illustrative comparison appears after this list:

  • newFixedThreadPool()
  • newCachedThreadPool()
  • newSingleThreadExecutor()
  • newScheduledThreadPool()
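The following sketch (the FactorySketch class name and the pool sizes are arbitrary) contrasts these factory methods; consult the Java API documentation for their exact semantics:

Code Block
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public final class FactorySketch {
  public static void main(String[] args) {
    // Bounded pool: at most 10 tasks run concurrently; the rest are queued
    ExecutorService fixed = Executors.newFixedThreadPool(10);

    // Unbounded pool: creates threads on demand and reuses idle ones;
    // suitable for many short-lived tasks, not for limiting concurrency
    ExecutorService cached = Executors.newCachedThreadPool();

    // Single worker: tasks execute sequentially in submission order
    ExecutorService single = Executors.newSingleThreadExecutor();

    // Scheduled pool: runs tasks after a delay or periodically
    ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(4);
    scheduled.schedule(new Runnable() {
      @Override public void run() { System.out.println("delayed task"); }
    }, 1, TimeUnit.SECONDS);

    fixed.shutdown();
    cached.shutdown();
    single.shutdown();
    scheduled.shutdown(); // by default, already-scheduled delayed tasks still run
  }
}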

...

MITRE CWE

  • CWE-405. Asymmetric resource consumption (amplification)
  • CWE-410. Insufficient resource pool

Bibliography

  • [API 2006] Interface Executor, http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/Executor.html
  • [Lea 2000a] 4.1.3, Thread-Per-Message; 4.1.4, Worker Threads
  • [Tutorials 2008] Thread Pools, http://java.sun.com/docs/books/tutorial/essential/concurrency/pools.html
  • [Goetz 2006a] Chapter 8, Applying Thread Pools

...
