
Many programs must address the problem of handling a series of incoming requests. The Thread-Per-Message design pattern (described in \[[Lea 00|AA. Java References#Lea 00]\]) is the simplest concurrency strategy, wherein a new thread is created for each incoming request. This design pattern is productive only when the benefits of creating a new thread outweigh the corresponding thread-creation overhead. It is generally recommended over sequential execution of time-consuming, I/O-bound, session-based, or isolated tasks.

However, this design pattern also has several pitfalls, including the overhead of thread creation and scheduling, task processing, resource allocation and deallocation, and frequent context switching \[[Lea 00|AA. Java References#Lea 00]\]. Furthermore, an attacker can cause a denial of service by overwhelming the system with too many requests at once. Instead of degrading gracefully, the system becomes unresponsive, resulting in a denial of service. From a safety point of view, one component can potentially exhaust all resources because of an intermittent error, starving all other components.

Thread pools allow a system to service as many requests as it can comfortably sustain, rather than terminating all services when presented with a deluge of requests. Thread pools overcome these issues because the maximum number of worker threads that can be initialized and executed concurrently can be suitably controlled. Every worker object that supports thread pools accepts a Runnable or Callable<T> task and stores it in a temporary channel, such as a buffer or a queue, until resources become available to execute it. Because the threads in a thread pool can be reused and efficiently added to or removed from the pool, thread life-cycle management overhead is minimized.
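
As an illustrative sketch only (the class name, pool size, and queue capacity below are arbitrary choices, not part of this rule's examples), a java.util.concurrent.ThreadPoolExecutor reuses a fixed set of worker threads and holds excess tasks in a bounded queue, the "channel" described above:

Code Block
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolSketch {
  public static void main(String[] args) {
    // Four reusable worker threads; up to 100 queued tasks wait in the channel
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        4, 4,                       // core and maximum pool size
        0L, TimeUnit.MILLISECONDS,  // keep-alive time for excess idle threads
        new ArrayBlockingQueue<Runnable>(100));

    for (int i = 0; i < 10; i++) {
      final int id = i;
      pool.execute(new Runnable() {
        public void run() {
          System.out.println("Handling request " + id);
        }
      });
    }
    pool.shutdown(); // Previously submitted tasks still run to completion
  }
}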

...

This noncompliant code example demonstrates the Thread-Per-Message design pattern, which fails to provide graceful degradation of service. The class RequestHandler provides a public static factory method so that callers can obtain an instance. Subsequently, the handleRequest() method is used to handle each request in its own thread.

Code Block
class Helper {
  public void handle(Socket socket) {
    //...
  }
}

final class RequestHandler {
  private final Helper helper = new Helper();
  private final ServerSocket server;

  private RequestHandler(int port) throws IOException {
    server = new ServerSocket(port);
  }
  
  public static RequestHandler newInstance() throws IOException {
    return new RequestHandler(0); // Selects next available port
  }
  
  public void handleRequest() {
    new Thread(new Runnable() {
      public void run() {
        try {
          helper.handle(server.accept());
        } catch (IOException e) {
          // Forward to handler
        }
      }
    }).start();
  }

  // ... Other methods such as those for shutting down the thread pool and task cancellation ...
}

The Thread-Per-Message strategy fails to provide graceful degradation of service. As the number of concurrent threads increases, processing continues normally until some resource is exhausted. Which resource is exhausted first depends on the tasks being performed; it could be available file descriptors, the number of threads the system can provide, available memory, or any number of other resources. When a critical resource, such as memory, is exhausted, the system fails hard, refusing to service any further requests.

Compliant Solution

This compliant solution uses a _Fixed Thread Pool_ that places an upper bound on the number of concurrently executing threads. Tasks submitted to the pool are stored in an internal queue. This prevents the system from being overwhelmed when trying to respond to all incoming requests and allows it to degrade gracefully by serving a fixed number of clients at a particular time. \[[Tutorials 08|AA. Java References#Tutorials 08]\]
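
As a sketch only (the class name PooledRequestHandler and the pool size NO_OF_THREADS are illustrative assumptions, not the rule's actual code), a fixed thread pool could be applied to the request handler from the noncompliant example roughly as follows:

Code Block
import java.io.IOException;
import java.net.ServerSocket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

final class PooledRequestHandler {
  private final Helper helper = new Helper();
  private final ServerSocket server;
  private static final int NO_OF_THREADS = 200; // Illustrative upper bound on worker threads
  private final ExecutorService exec = Executors.newFixedThreadPool(NO_OF_THREADS);

  private PooledRequestHandler(int port) throws IOException {
    server = new ServerSocket(port);
  }

  public static PooledRequestHandler newInstance() throws IOException {
    return new PooledRequestHandler(0); // Selects next available port
  }

  public void handleRequest() {
    // Tasks beyond the pool's capacity wait in the executor's internal queue
    exec.submit(new Runnable() {
      public void run() {
        try {
          helper.handle(server.accept());
        } catch (IOException e) {
          // Forward to handler
        }
      }
    });
  }

  // ... other methods such as those for shutting down the pool and task cancellation ...
}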

...

\[The Interface {{Executor}} is\] An object that executes submitted {{Runnable}} tasks. This interface provides a way of decoupling task submission from the mechanics of how each task will be run, including details of thread use, scheduling, etc. An {{Executor}} is normally used instead of explicitly creating threads.

The ExecutorService interface used in this compliant solution derives from the java.util.concurrent.Executor interface. The ExecutorService.submit() method allows callers to obtain a Future<?> object. This object encapsulates the as-yet-unknown result of an asynchronous computation and enables callers to perform additional functions such as task cancellation.
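
For illustration (this snippet is not part of the rule's code; the task body is a placeholder), a caller can hold the Future<?> returned by submit() and later cancel the task:

Code Block
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureCancellationSketch {
  public static void main(String[] args) {
    ExecutorService exec = Executors.newFixedThreadPool(2);
    Future<?> future = exec.submit(new Runnable() {
      public void run() {
        // Placeholder for a long-running task
      }
    });

    if (!future.isDone()) {
      future.cancel(true); // Attempt to cancel, interrupting the task if it is already running
    }
    exec.shutdown();
  }
}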

The choice of newFixedThreadPool(), which backs its fixed set of threads with an unbounded work queue, may not always be the best. Refer to the API documentation when choosing among newFixedThreadPool(), newCachedThreadPool(), newSingleThreadExecutor(), and newScheduledThreadPool() to meet specific design requirements.
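
As a small illustrative sketch (the pool sizes are arbitrary), these factory methods can be compared side by side:

Code Block
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

public class ExecutorChoices {
  public static void main(String[] args) {
    ExecutorService fixed = Executors.newFixedThreadPool(10);     // Fixed number of reusable threads, unbounded queue
    ExecutorService cached = Executors.newCachedThreadPool();     // Grows on demand, reuses idle threads
    ExecutorService single = Executors.newSingleThreadExecutor(); // Serial execution of submitted tasks
    ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2); // Delayed or periodic tasks

    fixed.shutdown();
    cached.shutdown();
    single.shutdown();
    scheduled.shutdown();
  }
}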

...