
Many programs must address the problem of handling a series of incoming requests.

One simple concurrency strategy is the Thread-Per-Message design pattern, which uses a new thread for each request [Lea 2000a]. This pattern is generally preferred over sequential executions of time-consuming, I/O-bound, session-based, or isolated tasks.

However, the pattern also introduces overheads not seen in sequential execution, including the time and resources required for thread creation and scheduling, for task processing, for resource allocation and deallocation, and for frequent context switching [Lea 2000a]. Furthermore, an attacker can cause a denial of service (DoS) by overwhelming the system with too many requests at once, causing the system to become unresponsive rather than degrading gracefully. From a safety perspective, one component can exhaust all resources because of an intermittent error, consequently starving all other components.

Thread pools allow a system to limit the maximum number of simultaneous requests that it processes to a number that it can comfortably serve rather than terminating all services when presented with a deluge of requests. Thread pools overcome these issues by controlling the maximum number of worker threads that can execute concurrently. Each object that supports thread pools accepts a Runnable or Callable<T> task and stores it in a temporary queue until resources become available. Additionally, thread life-cycle management overhead is minimized because the threads in a thread pool can be reused and can be efficiently added to or removed from the pool.

Programs that use multiple threads to service requests should—and programs that may be subjected to DoS attacks must—ensure graceful degradation of service during traffic bursts. Use of thread pools is one acceptable approach to meeting this requirement.
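One way to sketch this approach with the standard library is a `ThreadPoolExecutor` with a bounded work queue. The pool size, queue bound, and saturation policy below are illustrative choices, not part of this rule; `CallerRunsPolicy` throttles submitters when the queue fills instead of dropping requests outright:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public final class BoundedPoolDemo {
  public static void main(String[] args) throws InterruptedException {
    AtomicInteger handled = new AtomicInteger();
    // Bounded pool and bounded queue; when both are full, CallerRunsPolicy
    // runs the task in the submitting thread, slowing the producer down
    // instead of exhausting resources during a traffic burst.
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        2, 2,                       // core and maximum pool size
        0L, TimeUnit.MILLISECONDS,  // keep-alive for excess threads
        new ArrayBlockingQueue<>(10),
        new ThreadPoolExecutor.CallerRunsPolicy());

    for (int i = 0; i < 100; i++) {
      pool.execute(handled::incrementAndGet); // stands in for real request handling
    }
    pool.shutdown();
    pool.awaitTermination(10, TimeUnit.SECONDS);
    System.out.println("handled " + handled.get() + " requests"); // prints: handled 100 requests
  }
}
```

Every request is eventually served, but never by more than two worker threads at once.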

Noncompliant Code Example (Thread-Per-Message)

This noncompliant code example demonstrates the Thread-Per-Message design pattern. The RequestHandler class provides a public static factory method so that callers can obtain a RequestHandler instance. The handleRequest() method is subsequently invoked to handle each request in its own thread.


```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

class Helper {
  public void handle(Socket socket) {
    // ...
  }
}

final class RequestHandler {
  private final Helper helper = new Helper();
  private final ServerSocket server;

  private RequestHandler(int port) throws IOException {
    server = new ServerSocket(port);
  }

  public static RequestHandler newInstance() throws IOException {
    return new RequestHandler(0); // Selects next available port
  }

  public void handleRequest() {
    new Thread(new Runnable() {
      public void run() {
        try {
          helper.handle(server.accept());
        } catch (IOException e) {
          // Forward to handler
        }
      }
    }).start();
  }

  // ... Other methods such as shutting down the thread pool
  // and task cancellation ...
}
```

The thread-per-message strategy fails to provide graceful degradation of service. As threads are created, processing continues normally until some scarce resource is exhausted. For example, a system may allow only a limited number of open file descriptors even though additional threads can be created to serve requests. When the scarce resource is memory, the system may fail abruptly, resulting in a DoS.

Compliant Solution (Thread Pool)

This compliant solution uses a fixed thread pool that places a strict limit on the number of concurrently executing threads. Tasks submitted to the pool are stored in an internal queue. Storing tasks in a queue prevents the system from being overwhelmed when attempting to respond to all incoming requests and allows it to degrade gracefully by serving a fixed maximum number of simultaneous clients [Java Tutorials].

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// class Helper remains unchanged

final class RequestHandler {
  private final Helper helper = new Helper();
  private final ServerSocket server;
  private final ExecutorService exec;

  private RequestHandler(int port, int poolSize) throws IOException {
    server = new ServerSocket(port);
    exec = Executors.newFixedThreadPool(poolSize);
  }

  public static RequestHandler newInstance(int poolSize)
      throws IOException {
    return new RequestHandler(0, poolSize);
  }

  public void handleRequest() {
    Future<?> future = exec.submit(new Runnable() {
      @Override public void run() {
        try {
          helper.handle(server.accept());
        } catch (IOException e) {
          // Forward to handler
        }
      }
    });
  }

  // ... Other methods such as shutting down the thread pool
  // and task cancellation ...
}
```


According to the Java API documentation for the Executor interface [API 2014]:

[The interface Executor is] an object that executes submitted Runnable tasks. This interface provides a way of decoupling task submission from the mechanics of how each task will be run, including details of thread use, scheduling, etc. An Executor is normally used instead of explicitly creating threads.

The ExecutorService interface used in this compliant solution derives from the java.util.concurrent.Executor interface. The ExecutorService.submit() method allows callers to obtain a Future<V> object. This object both encapsulates the as-yet-unknown result of an asynchronous computation and enables callers to perform additional functions such as task cancellation.
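As a brief illustration of these capabilities (the class name and task bodies below are placeholders, not part of the rule's code), a caller can use the returned Future both to retrieve a result and to cancel a task:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public final class FutureDemo {
  public static void main(String[] args) throws Exception {
    ExecutorService exec = Executors.newFixedThreadPool(1);

    // submit() returns a Future representing the pending result.
    Future<Integer> quick = exec.submit(() -> 6 * 7);
    System.out.println(quick.get()); // get() blocks until the result is ready; prints 42

    // A long-running task can be cancelled; cancel(true) interrupts it.
    Future<?> slow = exec.submit(() -> {
      try {
        Thread.sleep(60_000); // simulated long-running work
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore interrupt status
      }
    });
    slow.cancel(true);
    System.out.println(slow.isCancelled()); // prints true

    exec.shutdown();
  }
}
```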

The choice of newFixedThreadPool is not always appropriate. Refer to the Java API documentation [API 2014] for guidance on choosing among the following methods to meet specific design requirements:

  • newFixedThreadPool()
  • newCachedThreadPool()
  • newSingleThreadExecutor()
  • newScheduledThreadPool()
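For illustration, these factory methods trade off differently between throughput and resource bounds (the pool sizes below are arbitrary):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

public final class ExecutorChoices {
  public static void main(String[] args) {
    // Hard cap of four threads; excess tasks wait in an unbounded queue.
    ExecutorService fixed = Executors.newFixedThreadPool(4);

    // Creates threads on demand and reuses idle ones; the thread count is
    // unbounded, so this alone does not limit traffic bursts.
    ExecutorService cached = Executors.newCachedThreadPool();

    // A single worker thread; tasks execute sequentially, in order.
    ExecutorService single = Executors.newSingleThreadExecutor();

    // Two threads dedicated to delayed or periodic tasks.
    ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2);

    // ... submit tasks appropriate to each pool ...
    fixed.shutdown();
    cached.shutdown();
    single.shutdown();
    scheduled.shutdown();
  }
}
```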

Risk Assessment

Using simplistic concurrency primitives to process an unbounded number of requests could result in severe performance degradation, deadlock, system resource exhaustion, and DoS.

Rule      Severity  Likelihood  Remediation Cost  Priority  Level
TPS00-J   Low       Probable    High              P2        L3

Automated Detection

Sound automated detection is infeasible; heuristic checks could be useful.

Tool            Version  Checker            Description
Parasoft Jtest  2021.1   CERT.TPS00.ISTART  Do not call the 'start()' method directly on Thread class instances

Related Guidelines

MITRE CWE    CWE-405, Asymmetric Resource Consumption (Amplification)
             CWE-410, Insufficient Resource Pool

Bibliography

[API 2014]        Interface Executor
[Goetz 2006a]     Chapter 8, "Applying Thread Pools"
[Java Tutorials]  Thread Pools
[Lea 2000a]       Section 4.1.3, "Thread-Per-Message"
                  Section 4.1.4, "Worker Threads"

