TPS00-J. Use thread pools to enable graceful degradation of service during traffic bursts

Many programs must address the problem of handling a series of incoming requests. One simple concurrency strategy is the Thread-Per-Message design pattern, which uses a new thread for each incoming request [Lea 2000a]. This pattern is productive only when the benefits of creating a new thread outweigh the corresponding thread-creation overhead; it is generally preferred over sequential execution of time-consuming, I/O-bound, session-based, or isolated tasks.

However, the pattern also introduces overheads not seen in sequential execution, including the time and resources required for thread creation and scheduling, for task processing, for resource allocation and deallocation, and for frequent context switching [Lea 2000a]. Furthermore, an attacker can cause a denial of service (DoS) by overwhelming the system with too many requests at once, causing the system to become unresponsive rather than degrading gracefully. From a safety perspective, one component can exhaust all resources because of an intermittent error, consequently starving all other components.

Thread pools allow a system to limit the maximum number of simultaneous requests that it processes to a number it can comfortably serve rather than terminating all services when presented with a deluge of requests. Thread pools overcome these issues by controlling the maximum number of worker threads that can execute concurrently. Each object that supports thread pools accepts a Runnable or Callable<T> task and stores it in a temporary queue until resources become available. Additionally, thread life-cycle management overhead is minimized because the threads in a thread pool can be reused and can be efficiently added to or removed from the pool.
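
As a brief illustration (a minimal sketch; PoolSketch is a hypothetical class name and the task bodies are placeholders), a bounded pool accepts both Runnable and Callable<T> tasks and queues submissions until a worker thread becomes free:

import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public final class PoolSketch {
  public static void main(String[] args)
      throws InterruptedException, ExecutionException {
    // A pool with at most 4 worker threads; extra tasks wait in the pool's queue
    ExecutorService pool = Executors.newFixedThreadPool(4);

    // A Runnable task: no result is produced
    pool.submit(() -> System.out.println("Handling a request"));

    // A Callable<T> task: the returned Future holds the eventual result
    Future<Integer> result = pool.submit(() -> 21 * 2);
    System.out.println(result.get()); // Blocks until the task completes; prints 42

    pool.shutdown(); // Previously submitted tasks still run to completion
  }
}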

Programs that use multiple threads to service requests should—and programs that may be subjected to DoS attacks must—ensure graceful degradation of service during traffic bursts. Use of thread pools is one acceptable approach to meeting this requirement.

Noncompliant Code Example (Thread-Per-Message)

This noncompliant code example demonstrates the Thread-Per-Message design pattern which fails to provide graceful degradation of service. The RequestHandler class RequestHandler provides a public static factory method so that callers can obtain its a RequestHandler instance. Subsequently, the The handleRequest() method is used subsequently invoked to handle each request in its own thread.


import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

class Helper {
  public void handle(Socket socket) {
    // ...
  }
}

final class RequestHandler {
  private final Helper helper = new Helper();
  private final ServerSocket server;

  private RequestHandler(int port) throws IOException {
    server = new ServerSocket(port);
  }
  
  public static RequestHandler newInstance() throws IOException {
    return new RequestHandler(0); // Selects next available port
  }

  
  public void handleRequest() {
    new Thread(new Runnable() {
        public void run() {
          try {
            helper.handle(server.accept());
          } catch (IOException e) {
            // Forward to handler
          }
        }
    }).start();
  }
  // Other methods, such as those for shutdown and task cancellation ...
}


The thread-per-message strategy fails to provide graceful degradation of service. As threads are created, processing continues normally until some scarce resource is exhausted. For example, a system may allow only a limited number of open file descriptors even though additional threads can be created to serve requests. When the scarce resource is memory, the system may fail abruptly, resulting in a DoS.

Compliant Solution (Thread Pool)

This compliant solution uses a fixed thread pool that places a strict limit on the number of concurrently executing threads. Tasks submitted to the pool are stored in an internal queue. Storing tasks in a queue prevents the system from being overwhelmed when attempting to respond to all incoming requests and allows it to degrade gracefully by serving a fixed maximum number of simultaneous clients [Java Tutorials].


import java.io.IOException;
import java.net.ServerSocket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// class Helper remains unchanged

final class RequestHandler {
  private final Helper helper = new Helper();
  private final ServerSocket server;
  private final ExecutorService exec;
	 
  private RequestHandler(int port, int poolSize) throws IOException {
    server = new ServerSocket(port);
    exec = Executors.newFixedThreadPool(poolSize);
  }
	  
  public static RequestHandler newInstance(int poolSize) throws IOException {
    return new RequestHandler(0, poolSize); // Selects next available port
  }
	  
  public void handleRequest() {
    Future<?> future = exec.submit(new Runnable() {
        @Override public void run() {
          try {
            helper.handle(server.accept());
          } catch (IOException e) {
            // Forward to handler
          }
        }
    });
  }

  // ... Other methods such as shutting down the thread pool
  // and task cancellation ...
}


According to the Java API documentation for the Executor interface [API 2014]:

[The interface Executor is] an object that executes submitted Runnable tasks. This interface provides a way of decoupling task submission from the mechanics of how each task will be run, including details of thread use, scheduling, etc. An Executor is normally used instead of explicitly creating threads.
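
As a brief illustration of this decoupling (a minimal sketch; ExecutorSketch and submitWork() are hypothetical names, not part of the rule's examples), the code that submits a task need not change when the execution policy does:

import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class ExecutorSketch {
  // The caller depends only on the Executor interface; it neither creates
  // nor manages threads itself.
  static void submitWork(Executor executor) {
    executor.execute(() -> System.out.println(
        "Request handled by " + Thread.currentThread().getName()));
  }

  public static void main(String[] args) {
    // Policy 1: run the task directly in the calling thread
    submitWork(Runnable::run);

    // Policy 2: run the task in a pooled worker thread;
    // the submission code above is unchanged
    ExecutorService pool = Executors.newFixedThreadPool(2);
    submitWork(pool);
    pool.shutdown(); // Allow the JVM to exit once the task completes
  }
}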

The ExecutorService interface used in this compliant solution derives from the java.util.concurrent.Executor interface. The ExecutorService.submit() method allows callers to obtain a Future<V> object that both encapsulates the as-yet-unknown result of an asynchronous computation and enables callers to perform additional functions such as task cancellation.
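
For example (a minimal sketch; FutureSketch is a hypothetical class name, and the slow task and one-second timeout are illustrative only), a caller can use the returned Future to bound how long it waits for a result and to cancel the task if it runs too long:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public final class FutureSketch {
  public static void main(String[] args) throws Exception {
    ExecutorService exec = Executors.newFixedThreadPool(2);
    Future<String> future = exec.submit(() -> {
      Thread.sleep(5000); // Simulate a slow request
      return "response";
    });

    try {
      // Wait at most one second for the result
      String response = future.get(1, TimeUnit.SECONDS);
      System.out.println(response);
    } catch (TimeoutException e) {
      future.cancel(true); // Interrupt the still-running task
      System.out.println("Request cancelled");
    } finally {
      exec.shutdown();
    }
  }
}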

The choice of newFixedThreadPool() is not always appropriate. Refer to the Java API documentation [API 2014] for guidance on choosing among the following methods to meet specific design requirements (a brief sketch contrasting them follows the list):

  • newFixedThreadPool()
  • newCachedThreadPool()
  • newSingleThreadExecutor()
  • newScheduledThreadPool()
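
As a rough illustration (a minimal sketch; ExecutorChoices is a hypothetical class name, and the pool sizes and tasks are arbitrary), each factory method configures a different execution policy:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public final class ExecutorChoices {
  public static void main(String[] args) {
    // Fixed pool: at most 10 threads; excess tasks wait in an unbounded queue
    ExecutorService fixed = Executors.newFixedThreadPool(10);

    // Cached pool: creates threads on demand and reuses idle ones;
    // no upper bound on the number of threads
    ExecutorService cached = Executors.newCachedThreadPool();

    // Single-threaded executor: tasks run sequentially, one at a time
    ExecutorService single = Executors.newSingleThreadExecutor();

    // Scheduled pool: supports delayed and periodic execution
    ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2);
    scheduled.schedule(() -> System.out.println("delayed task"),
                       1, TimeUnit.SECONDS);

    // Shut down the executors so the JVM can exit
    fixed.shutdown();
    cached.shutdown();
    single.shutdown();
    scheduled.shutdown();
  }
}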

Risk Assessment

Using simplistic concurrency primitives to process an unbounded number of requests could result in severe performance degradation, deadlock, or system resource exhaustion and denial of service (DoS).

Rule     Severity  Likelihood  Remediation Cost  Priority  Level
TPS00-J  Low       Probable    High              P2        L3

Automated Detection

Sound automated detection is infeasible; heuristic checks could be useful.

Tool            Version  Checker            Description
Parasoft Jtest  2021.1   CERT.TPS00.ISTART  Do not call the 'start()' method directly on Thread class instances

Related Vulnerabilities

Apache Geronimo 3838



Related Guidelines

MITRE CWE

CWE-405, Asymmetric Resource Consumption (Amplification)
CWE-410, Insufficient Resource Pool

Bibliography

[API 2014]        Interface Executor
[Goetz 2006a]     Chapter 8, "Applying Thread Pools"
[Java Tutorials]  Thread Pools
[Lea 2000a]       Section 4.1.3, "Thread-Per-Message"; Section 4.1.4, "Worker Threads"
