
Many programs must address the problem of handling a series of incoming requests. One simple concurrency strategy is the Thread-Per-Message design pattern, which uses a new thread for each request [[Lea 2000]]. This pattern is generally preferred over sequential execution for time-consuming, I/O-bound, session-based, or isolated tasks.

However, the pattern also introduces overhead not seen in sequential execution: the time and resources required for thread creation and scheduling, for task processing, for resource allocation and deallocation, and for frequent context switching [[Lea 2000]]. Furthermore, an attacker can cause a denial of service by overwhelming the system with too many requests at once, causing it to become unresponsive rather than degrading gracefully. From a safety perspective, one component can exhaust all resources because of an intermittent error, consequently starving all other components.

Thread pools allow a system to limit the maximum number of simultaneous requests that it processes to a number it can comfortably serve, rather than terminating all service when presented with a deluge of requests. Thread pools overcome these issues by controlling the maximum number of worker threads that execute concurrently. An object that supports thread pools accepts a Runnable or Callable<T> task and stores it in a temporary queue until a thread becomes available. Additionally, thread life-cycle management overhead is minimized because the threads in a thread pool can be reused and can be efficiently added to or removed from the pool.

Programs that use multiple threads to serve requests should — and security-sensitive programs must — ensure graceful degradation of service during traffic bursts. Use of thread pools is one acceptable approach to meeting this requirement.
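As a minimal sketch of the queueing behavior described above, the following example submits a Callable<T> task to a fixed-size pool and retrieves its result through a Future. The class name PoolSketch and the pool size of 4 are illustrative choices, not part of this rule:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolSketch {
  public static void main(String[] args) throws Exception {
    // Pool size of 4 is an arbitrary illustration value
    ExecutorService pool = Executors.newFixedThreadPool(4);
    // Unlike Runnable, a Callable<T> task produces a result
    Future<Integer> f = pool.submit(() -> 40 + 2);
    System.out.println(f.get()); // get() blocks until the task completes
    pool.shutdown();
  }
}
```

Tasks submitted beyond the pool size simply wait in the pool's internal queue until a worker thread frees up.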

Noncompliant Code Example

This noncompliant code example demonstrates the Thread-Per-Message design pattern. The RequestHandler class provides a public static factory method so that callers can obtain an instance. The handleRequest() method is subsequently invoked to handle each request in its own thread.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

class Helper {
  public void handle(Socket socket) {
    //...
  }
}

final class RequestHandler {
  private final Helper helper = new Helper();
  private final ServerSocket server;

  private RequestHandler(int port) throws IOException {
    server = new ServerSocket(port);
  }

  public static RequestHandler newInstance() throws IOException {
    return new RequestHandler(0); // Selects next available port
  }

  public void handleRequest() {
    new Thread(new Runnable() {
      public void run() {
        try {
          helper.handle(server.accept());
        } catch (IOException e) {
          // Forward to handler
        }
      }
    }).start();
  }
}

The Thread-Per-Message strategy fails to provide graceful degradation of service. As threads are created, processing continues normally until some scarce resource is exhausted. For example, a system may allow only a limited number of open file descriptors, even though additional threads can be created to serve requests. When the scarce resource is memory, the system may fail abruptly, resulting in a denial of service.

Compliant Solution

This compliant solution uses a fixed-thread pool that places an upper bound on the number of concurrently executing threads. Tasks submitted to the pool are stored in an internal queue. This prevents the system from being overwhelmed when attempting to respond to all incoming requests and allows it to degrade gracefully by serving a fixed maximum number of simultaneous clients [[Tutorials 2008]].

// class Helper remains unchanged

import java.io.IOException;
import java.net.ServerSocket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

final class RequestHandler {
  private final Helper helper = new Helper();
  private final ServerSocket server;
  private final ExecutorService exec;

  private RequestHandler(int port, int poolSize) throws IOException {
    server = new ServerSocket(port);
    exec = Executors.newFixedThreadPool(poolSize);
  }

  public static RequestHandler newInstance(int poolSize) throws IOException {
    return new RequestHandler(0, poolSize);
  }

  public void handleRequest() {
    Future<?> future = exec.submit(new Runnable() {
      @Override public void run() {
        try {
          helper.handle(server.accept());
        } catch (IOException e) {
          // Forward to handler
        }
      }
    });
  }
  // ... other methods such as shutting down the thread pool and task cancellation ...
}
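The shutdown logic elided above could be sketched as follows, using the shutdown()/awaitTermination()/shutdownNow() pattern recommended by the ExecutorService API documentation. The class and method names here are illustrative assumptions, as is the 5-second timeout:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownSketch {
  public static void shutdownAndAwait(ExecutorService exec) {
    exec.shutdown();               // reject new tasks; let queued tasks finish
    try {
      if (!exec.awaitTermination(5, TimeUnit.SECONDS)) {
        exec.shutdownNow();        // interrupt tasks that are still running
      }
    } catch (InterruptedException ie) {
      exec.shutdownNow();
      Thread.currentThread().interrupt(); // preserve the interrupt status
    }
  }

  public static void main(String[] args) {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    pool.submit(() -> System.out.println("task ran"));
    shutdownAndAwait(pool);
  }
}
```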

According to the Java API documentation for the Executor interface [[API 2006]]:

[The Interface Executor is] An object that executes submitted Runnable tasks. This interface provides a way of decoupling task submission from the mechanics of how each task will be run, including details of thread use, scheduling, etc. An Executor is normally used instead of explicitly creating threads.

The ExecutorService interface used in this compliant solution derives from the java.util.concurrent.Executor interface. The ExecutorService.submit() method allows callers to obtain a Future<V> object, which encapsulates the as-yet-unknown result of an asynchronous computation and enables callers to perform additional operations such as task cancellation.
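A short sketch of the cancellation capability just mentioned: cancel(true) interrupts a running task, and the Future then reports itself as cancelled. The class name FutureSketch and the 60-second sleep are illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureSketch {
  public static void main(String[] args) throws Exception {
    ExecutorService exec = Executors.newSingleThreadExecutor();
    // A long-running task that responds to interruption
    Future<?> slow = exec.submit(() -> {
      try {
        Thread.sleep(60_000);
      } catch (InterruptedException e) {
        // Task was cancelled; exit promptly
      }
    });
    slow.cancel(true);                      // interrupt the running task
    System.out.println(slow.isCancelled()); // prints "true"
    exec.shutdown();
  }
}
```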

The choice of newFixedThreadPool may be inappropriate for some designs: although it bounds the number of threads, by default it uses an unbounded work queue. Refer to the Java API documentation for guidance on choosing among the following factory methods to meet specific design requirements [[API 2006]]:

  • newFixedThreadPool()
  • newCachedThreadPool()
  • newSingleThreadExecutor()
  • newScheduledThreadPool()
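When the unbounded queue behind newFixedThreadPool is itself a concern, one alternative sketch is to construct a ThreadPoolExecutor directly with a bounded work queue and an explicit saturation policy. The pool size, queue capacity, and class name below are illustrative assumptions, not prescribed by this rule:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolSketch {
  public static void main(String[] args) {
    ThreadPoolExecutor exec = new ThreadPoolExecutor(
        4, 4,                          // fixed pool of 4 threads
        0L, TimeUnit.MILLISECONDS,     // no keep-alive needed for a fixed pool
        new ArrayBlockingQueue<>(100), // at most 100 queued tasks
        new ThreadPoolExecutor.CallerRunsPolicy()); // on saturation, run in the
                                                    // submitting thread (throttles it)
    exec.submit(() -> System.out.println("task ran"));
    exec.shutdown();
  }
}
```

With a bounded queue, a traffic burst causes the saturation policy to engage instead of letting queued tasks consume memory without limit.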

Risk Assessment

Using simplistic concurrency primitives to process an unbounded number of requests could result in severe performance degradation, deadlock, or system resource exhaustion and denial of service.

Rule    | Severity | Likelihood | Remediation Cost | Priority | Level
TPS00-J | Low      | Probable   | High             | P2       | L3

Related Vulnerabilities

Apache Geronimo 3838

Related Guidelines

MITRE CWE:
CWE-405, "Asymmetric Resource Consumption (Amplification)"
CWE-410, "Insufficient Resource Pool"

Bibliography

[API 2006] Interface Executor
http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/Executor.html

[Lea 2000] Sections 4.1.3, "Thread-Per-Message," and 4.1.4, "Worker Threads"

[Tutorials 2008] Thread Pools
http://java.sun.com/docs/books/tutorial/essential/concurrency/pools.html

[Goetz 2006] Chapter 8, "Applying Thread Pools"

