
The Thread-Per-Message design is the simplest concurrency technique: a new thread is created to handle each incoming request. It is appropriate only when the benefit of handling each request in its own thread outweighs the corresponding thread-creation overhead. This design is generally preferable to sequential execution for time-consuming, I/O-bound, session-based, or isolated tasks.

This design has several disadvantages, however: thread-creation overhead when requests are frequent or recurring, significant per-thread processing overhead, resource exhaustion when too many threads are created (leading to an OutOfMemoryError), and thread-scheduling and context-switching overhead [[Lea 00]].

An attacker can cause a denial of service by flooding the system with requests all at once. Instead of degrading gracefully, such a system fails abruptly, resulting in an availability problem. Thread pools allow the system to service as many requests as it can comfortably sustain rather than halting all services when faced with a deluge of requests. From a safety point of view, an unbounded design also allows a single component that misbehaves because of an intermittent error to exhaust all thread resources and starve every other component.

Thread pools overcome these problems because the maximum number of worker threads that can execute simultaneously is bounded. Each incoming request is wrapped in a Runnable task and placed in a Channel, such as a buffer or a queue, until a worker thread becomes available to run it. Because worker threads are reused across tasks, most of the thread-creation overhead is also eliminated.
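As a minimal illustration of this mechanism (the pool and queue sizes below are arbitrary, and the class name is invented for the sketch), a java.util.concurrent.ThreadPoolExecutor can be constructed directly with a bounded work queue that plays the role of the Channel:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolSketch {
  public static void main(String[] args) {
    // A fixed pool of 10 worker threads; the ArrayBlockingQueue is the bounded
    // "Channel" that holds tasks until a worker becomes available.
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        10, 10, 0L, TimeUnit.MILLISECONDS,
        new ArrayBlockingQueue<Runnable>(100),
        new ThreadPoolExecutor.CallerRunsPolicy()); // Throttles submitters when saturated

    for (int i = 0; i < 1000; i++) {
      final int requestId = i;
      pool.execute(new Runnable() {
        public void run() {
          System.out.println("Handling request " + requestId);
        }
      });
    }
    pool.shutdown();
  }
}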

Noncompliant Code Example

This noncompliant code example demonstrates the Thread-Per-Message design that fails to provide graceful degradation of service.

class Helper {
  public void handle(String request) {
    //... 		
  }	
}

class GetRequest {
  final Helper h = new Helper();

  public synchronized String accept() {
    String data = "Read data from pipe";
    // Read the request data, else block
    return data;
  }

  public void handleRequest() {
    while (true) {
      final String request = accept(); // Local so each thread sees its own request
      // A new thread is created for every incoming request
      new Thread(new Runnable() {
        public void run() {
          h.handle(request);
        }
      }).start();
    }
  }
}

Compliant Solution

This compliant solution uses a fixed thread pool that places an upper bound on the number of simultaneously executing threads. Tasks submitted to the pool are stored in an internal queue until a worker thread becomes available. This prevents the system from being overwhelmed by incoming requests and allows it to degrade gracefully by serving a bounded number of clients at any given time [[Tutorials 08]].

class GetRequest {
  final Helper h = new Helper();
  final int numberOfThreads = 200; // Maximum number of threads allowed in the pool
  final Executor exec;

  GetRequest() {
    exec = Executors.newFixedThreadPool(numberOfThreads);
  }

  public synchronized String accept() {
    String data = "Read data from pipe";
    // Read the request data, else block
    return data;
  }

  public void handleRequest() {
    while (true) {
      final String request = accept(); // Local so each task sees its own request
      exec.execute(new Runnable() {
        public void run() {
          h.handle(request);
        }
      });
    }
  }
}

According to the Java API [[API 06]] documentation for the Executor interface:

[The Interface Executor is] An object that executes submitted Runnable tasks. This interface provides a way of decoupling task submission from the mechanics of how each task will be run, including details of thread use, scheduling, etc. An Executor is normally used instead of explicitly creating threads.

Noncompliant Code Example

Incorrect use of the Executor interface can also cause problems. For one, interdependent tasks should not execute in the same thread pool. A task that submits a second task to a single-threaded Executor and then blocks waiting for its result can never complete, because the second task cannot start until the first one finishes and releases the only pool thread. This constitutes a deadlock.
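Reduced to its essence, such a starvation deadlock might look like the following sketch (a hypothetical, self-contained example; the class name is invented). The outer task blocks on Future.get() for a subtask that can never run because the only pool thread is occupied by the outer task itself:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class StarvationDeadlockSketch {
  private static final ExecutorService exec = Executors.newSingleThreadExecutor();

  public static void main(String[] args) {
    exec.submit(new Runnable() {
      public void run() {
        // The subtask is queued behind this task and can never start
        Future<String> subtask = exec.submit(new Callable<String>() {
          public String call() {
            return "subtask result";
          }
        });
        try {
          subtask.get(); // Blocks forever: the only pool thread is running this task
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
        } catch (ExecutionException ee) {
          ee.printStackTrace();
        }
      }
    });
  }
}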

The noncompliant code example below is subject to this thread-starvation deadlock. The problem is not limited to single-threaded Executors; it can also occur in large thread pools whenever every thread in the pool is blocked waiting for a task that is still sitting in the queue. A blocking operation within a subtask can also lead to unbounded queue growth [[Goetz 06]].

// Field password is defined in class InitialHandshake
class NetworkServer extends InitialHandshake implements Runnable {
  private final ServerSocket serverSocket;
  private final ExecutorService pool;

  public NetworkServer(int port, int poolSize) throws IOException {
    serverSocket = new ServerSocket(port);
    pool = Executors.newFixedThreadPool(poolSize);
  }
 
  public void run() {
    try { 
      // Interdependent tasks: both operate on the inherited password field
      pool.submit(new SanitizeInput(password));
      pool.submit(new CustomHandshake(password)); // e.g., client puzzles
      pool.execute(new Handle(serverSocket.accept())); // Handle the connection
    } catch (IOException ex) { 
      pool.shutdown();
    }	 
  }
}

In this noncompliant code example, the SanitizeInput task depends on the CustomHandshake task to obtain the value of password, while CustomHandshake in turn depends on SanitizeInput to produce a correctly sanitized password.

Compliant Solution

This compliant solution executes the interdependent operations as a single combined task within the Executor. Alternatively, when the subtasks do not require any concurrency safeguards, they can be moved outside the region of code that the Executor runs.

class NetworkServer extends InitialHandshake implements Runnable {
  private final ServerSocket serverSocket;
  private final ExecutorService pool;

  public NetworkServer(int port, int poolSize) throws IOException {
    serverSocket = new ServerSocket(port);
    pool = Executors.newFixedThreadPool(poolSize);
  }
 
  public void run() {
    try {
      // Execute interdependent subtasks as a single combined task within this block
      // Tasks SanitizeInput() and CustomHandshake() are performed together in Handle()
      pool.execute(new Handle(serverSocket.accept())); // Handle connection
    } catch (IOException ex) { 
      pool.shutdown();
    }	 
  }
}
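One possible shape for such a combined Handle task is sketched below. It is purely illustrative: it assumes the password is passed to the task directly (the submission above would then read new Handle(serverSocket.accept(), password)), and the method bodies are placeholders.

import java.net.Socket;

// Hypothetical sketch of the combined task: the formerly interdependent steps
// run sequentially on one pool thread, so neither can starve the other.
class Handle implements Runnable {
  private final Socket socket;
  private final String password;

  Handle(Socket socket, String password) {
    this.socket = socket;
    this.password = password;
  }

  public void run() {
    String sanitized = sanitizeInput(password); // formerly the SanitizeInput task
    performHandshake(sanitized);                // formerly the CustomHandshake task
    // ... service the connection on socket ...
  }

  private String sanitizeInput(String input) { return input; /* placeholder */ }
  private void performHandshake(String handshakeData) { /* placeholder */ }
}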

Always try to submit independent tasks to the Executor. Thread-starvation issues can be mitigated by choosing a larger pool size, but note that external constraints, such as the total number of database connections or of ResultSets that may be open at any one time, impose an upper bound on a useful pool size, because each additional thread simply blocks until the constrained resource becomes available. The usual rules of fair concurrency, such as avoiding long-running tasks, also apply; when long-running tasks cannot be avoided, real-time guarantees on task completion are usually unattainable.
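As a rough sketch of such a sizing constraint (the maxDbConnections value and the class name are assumed for illustration):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizingSketch {
  public static void main(String[] args) {
    // Hypothetical constraint: creating more worker threads than available
    // database connections gains nothing, because the extra threads block.
    int maxDbConnections = 50; // assumed external limit
    int cpuBased = Runtime.getRuntime().availableProcessors() * 2;
    int poolSize = Math.min(cpuBased, maxDbConnections);
    ExecutorService pool = Executors.newFixedThreadPool(poolSize);
    // ... submit tasks ...
    pool.shutdown();
  }
}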

Sometimes a private static ThreadLocal variable is used to maintain per-thread state. When thread pools are used, ThreadLocal variables should be employed only if their lifetime is shorter than that of the corresponding task [[Goetz 06]]. Moreover, such variables should not be used as a communication mechanism between tasks.
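A minimal sketch of the acceptable pattern follows: the ThreadLocal value is set and removed within a single task, so its lifetime never exceeds that of the task even though the pool thread is reused (the class and field names are invented for illustration).

class RequestTask implements Runnable {
  // Per-thread state whose lifetime is confined to a single task
  private static final ThreadLocal<String> requestContext = new ThreadLocal<String>();
  private final String request;

  RequestTask(String request) {
    this.request = request;
  }

  public void run() {
    requestContext.set(request);
    try {
      process();
    } finally {
      requestContext.remove(); // Always clear before the thread returns to the pool
    }
  }

  private void process() {
    // ... code running on this thread may read requestContext.get() ...
  }
}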

Finally, newFixedThreadPool, whose internal work queue is unbounded, may not always be the best choice. Refer to the API documentation when choosing between newFixedThreadPool, newCachedThreadPool, newSingleThreadExecutor, and newScheduledThreadPool to meet the design requirements.
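For reference, the following sketch (class name and sizes are arbitrary) summarizes what each factory method provides; consult the Executors API documentation for the details:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

public class ExecutorChoicesSketch {
  public static void main(String[] args) {
    // Fixed number of reusable threads; unbounded internal work queue
    ExecutorService fixed = Executors.newFixedThreadPool(10);

    // Threads created on demand and reused; no fixed upper bound on threads
    ExecutorService cached = Executors.newCachedThreadPool();

    // A single worker thread; tasks execute sequentially
    ExecutorService single = Executors.newSingleThreadExecutor();

    // Supports delayed and periodic task execution
    ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(4);

    fixed.shutdown();
    cached.shutdown();
    single.shutdown();
    scheduled.shutdown();
  }
}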

Risk Assessment

Using simplistic concurrency primitives to process an unbounded number of requests may result in severe performance degradation, deadlocks and starvation, or exhaustion of system resources (denial-of-service).

Rule      Severity  Likelihood  Remediation Cost  Priority  Level
CON21-J   low       probable    high              P2        L3

Automated Detection

TODO

Related Vulnerabilities

Apache Geronimo 3838

References

[[API 06]] Interface Executor
[[Lea 00]] Section 4.1.3 Thread-Per-Message and 4.1.4 Worker Threads
[[Tutorials 08]] Thread Pools
[[Goetz 06]] Chapter 8, Applying Thread Pools
[[MITRE 09]] CWE ID 405 "Asymmetric Resource Consumption (Amplification)", CWE ID 410 "Insufficient Resource Pool"

